2308.15555
Three-dimensional $\mathcal{P}\mathcal{T}$-symmetric topological phases with Pontryagin index
We report on a certain class of three-dimensional topological insulators and semimetals protected by spinless $\mathcal{P}\mathcal{T}$ symmetry, hosting an integer-valued bulk invariant. We show using homotopy arguments that these phases host multi-gap topology, providing a realization of a single $\mathbb{Z}$ invariant in three spatial dimensions that is distinct from the Hopf index. We identify this invariant with the Pontryagin index, which describes BPST instantons in particle physics contexts and corresponds to a 3-sphere winding number. We study naturally arising multi-gap linked nodal rings, topologically characterized by split-biquaternion charges, which can be removed by non-Abelian braiding of nodal rings, even without closing a gap. We additionally connect the describing winding number in terms of gauge-invariant combinations of non-Abelian Berry connection elements, indicating relations to Pontryagin characteristic class in four dimensions. These topological configurations are furthermore related to fully non-degenerate multi-gap phases that are characterized by a pair of winding numbers relating to two isoclinic rotations in the case of four bands and can be generalized to an arbitrary number of bands. From a physical perspective, we also analyze the edge states corresponding to this Pontryagin index as well as their dissolution subject to the gap-closing disorder. Finally, we elaborate on the realization of these novel non-Abelian phases, their edge states and linked nodal structures in acoustic metamaterials and trapped-ion experiments.
Zory Davoyan, Wojciech J. Jankowski, Adrien Bouhon, Robert-Jan Slager
2023-08-29T18:21:48Z
http://arxiv.org/abs/2308.15555v2
# \(\mathcal{PT}\)-symmetric topological phases with Pontryagin index in three spatial dimensions

###### Abstract

We report on a certain class of three-dimensional topological insulators and semimetals protected by spinless \(\mathcal{PT}\) symmetry, hosting an integer-valued bulk invariant. We show using homotopy arguments that these phases host multi-gap topology, providing a realization of a single \(\mathbb{Z}\) invariant in three spatial dimensions that is distinct from the Hopf index. We identify this invariant with the Pontryagin index, which describes BPST instantons in particle physics contexts and corresponds to a 3-sphere winding number. We study naturally arising multi-gap linked nodal rings, topologically characterized by split-biquaternion charges, which can be removed by non-Abelian braiding of nodal rings, even without closing a gap. We additionally express the describing winding number in terms of gauge-invariant combinations of non-Abelian Berry connection elements, indicating relations to the Pontryagin characteristic class in four dimensions. These topological configurations are furthermore related to fully non-degenerate multi-gap phases that are characterized by a pair of winding numbers relating to two isoclinic rotations in the case of four bands and can be generalized to an arbitrary number of bands. From a physical perspective, we also analyze the edge states corresponding to this Pontryagin index as well as their dissolution subject to gap-closing disorder. Finally, we elaborate on the realization of these novel non-Abelian phases, their edge states and linked nodal structures in acoustic metamaterials and trapped-ion experiments.

## I Introduction

The study of topological insulators and semimetals provides for an active area of current research that connects various theoretical as well as experimental impetuses [1; 2; 3], offering, amongst others, a condensed matter realization of the \(\theta\)-vacuum and the corresponding magnetoelectric polarizability, quantum field-theoretic anomalies and axion electrodynamics [4; 5; 6; 7; 8; 9]. While the inclusion of spatial symmetries, defects and even out-of-equilibrium contexts has provided for an extensive landscape of characterizations [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20], over the past years a rather general viewpoint has emerged. Namely, using readily implementable relations between band representations at high-symmetry momenta, general constraint equations that match equivariant K-theory computations [21] can be derived. The emerging classes in momentum space can subsequently be compared to real-space band representations [22; 23; 24] to discern whether they are compatible with an atomic limit and, accordingly, their topological nature [25; 26]. Although these symmetry-indicated techniques map out a large fraction of topological insulators and semimetals, the past few years have seen the rise of novel types of topologies that depend on multi-gap conditions and a priori cannot be captured by these schemes. These multi-gap topologies are characterized by finer topological structures and homotopy invariants [27] that pertain to band spaces (groups of isolated bands) that in turn depend on multi-gap conditions. A particular example in this regard is the Euler class, the analogue of the Chern number, which arises in systems enjoying \(C_{2}\mathcal{T}\), that is two-fold rotations combined with time-reversal symmetry (TRS), or \(\mathcal{PT}\) symmetry, involving parity and TRS.
In such scenarios band degeneracies residing between different bands carry non-Abelian charges [27; 28; 29; 30; 31], being the band-structure incarnation of a \(\pi\)-disclination in a bi-axial nematic [32; 33; 34], and braiding them around in momentum space can result in two-band subspaces that have band nodes with similar, rather than oppositely valued, charges, whose obstruction to being annihilated is directly proportional to the Euler class characterizing that two-band subspace [30]. While these new insights have in the first stages furnished a deeper understanding of finer topologies and the relation to flag manifolds [35; 36; 27], recent advances promise progress in novel quantum geometric structures [37]. More importantly, these multi-gap topologies are increasingly being related to real physical settings. For example, novel multi-gap out-of-equilibrium phases [38] and in particular quench effects [39] have been seen in trapped-ion insulators [40], while non-Abelian braiding and multi-gap physics for both bulk and boundary properties have been predicted in phonon [41; 42] as well as electronic spectra of real materials subjected to stress/strain or temperature-induced structural phase transitions [43; 30; 44]. Finally, these new multi-gap topologies are particularly appealing in the context of metamaterials, in which an increasing number of theoretical as well as experimental results are being reported [45; 46; 47; 48; 49; 50]. We note that these pursuits fit in a wider research activity that concerns the exploration of phases beyond the ten-fold way [51], such as fragile phases [52] and Hopf insulators [53], where the first refers to phases in which the topology can be undone by closing the gap with trivial bands (as opposed to a K-theoretical invariant that necessitates a gap closing with a band of opposite charge), while the second type of invariant arises by virtue of the target space under the Hamiltonian mapping. That is, the mapping from a three-dimensional Brillouin zone torus to a 2-sphere (\(S^{2}\)) target space allows for an identification with a Hopf invariant. We reiterate however that multi-gap phases in principle do not have to be symmetry indicated (all bands can be in the same irreducible representation) and that, while a band gap closing with a trivial band can undo the topology, a "debraiding" process is needed to accomplish this, signaling a different stability [54; 55] and characterization. Similarly, with regard to the Hopf map, we stress that, although out-of-equilibrium effects of two-dimensional Euler insulators [39] or of three-dimensional \(\mathcal{PT}\)-symmetric systems (with a four-band target space) can be associated with Hopf maps [56], these maps are generalizations of the standard Hopf map and depend on multi-gap conditions, that is the partitioning of the bands. It is in this setting of refined band partitions and homotopy invariants that we find the subsequent results. In particular, we show that for simple four-band systems the classification of three-dimensional real topological phases can be extended with another type of \(\mathbb{Z}\) invariant, which relates to a generalized Pontryagin index representing BPST instantons in non-Abelian gauge field theories such as \(SU(2)\) Yang-Mills theory [57; 58]. We demonstrate that this invariant in some sense relates to the \(\mathbb{Z}\oplus\mathbb{Z}\)-valued Hopf indices characterizing such four-band systems [56], but in fact is a different entity beyond this classification.
Concretely, the index corresponds to the elements of the third homotopy group \(\pi_{3}(S^{3})\cong\mathbb{Z}\), describing higher-dimensional winding numbers on the 3-sphere \(S^{3}\). Upon adding a real positive tuning parameter \(t\) providing an additional dimension in the parameter space, we then establish a link to the so-called characteristic Pontryagin class, a real relative of the second Chern number, accessible in four dimensions. Very interestingly, we also show that this \(\mathbb{Z}\) invariant actually characterizes the topology of fully gapped phases (i.e. with complete flag classifying spaces) of even arbitrarily many isolated bands. We moreover provide systematically generated minimal models exhibiting this type of topology, which offers a very direct route towards experimental simulations in optical lattices and metamaterials. Using these models, we also show an interplay between non-Abelian Wilson loops, non-trivial Zak phases [59; 60; 61; 62] and edge modes induced at the surfaces, and numerically study their robustness to uniform disorder up to the closing of the bulk gap. As a side result, we find that this type of topology enables braiding of non-Abelian nodal rings in 3D, which yields exotic nodal structures in explicit and surprisingly simple Hamiltonians. Accordingly, we propose concrete metamaterial realizations to capture these linking structures as well as the bulk invariant and its bulk-boundary correspondence, thereby impacting active experimental pursuits.

This paper is organized as follows. In Section II, we introduce mathematical definitions and constructs, including classifying spaces, which capture the topology encoded in models with non-trivial Pontryagin index. Section III then presents model realizations of these types of topology, including a minimal one that naturally realizes a non-Abelian linked nodal ring structure. In Section IV, we elaborate on the manifestations of the introduced topology, demonstrating bulk-boundary correspondence, associated topological phase transitions, and possible debraiding mechanisms with and without closing the bulk gap that admit removal of these linked structures, which naturally emerge due to the one-dimensional topology of the flag manifold underlying the algebra of nodal rings. Subsequently, we discuss in Section V the full flag limit and a multi-gap invariant on removing the nodal structures, accessing a nontrivial topological phase with a fully non-degenerate band structure, while in Section VI we analyze the robustness to disorder of the edge modes induced by the introduced bulk invariants. Finally, we comment on connections with experimental realizations in Section VII, before concluding in Section VIII.

## II Non-Abelian Pontryagin topology

We begin with a general introduction to the non-Abelian real topology realized in the three-dimensional four-band models proposed in the subsequent section. After introducing the relevant classifying spaces, we then elaborate on the Pontryagin index and its relation to the Pontryagin class associated with the realized bulk topology.

### Relevant classifying spaces

As alluded to above, the topology of the system is fully set by the target space as induced by the Hamiltonian mapping. This is quantified by the notion of the associated classifying space. Here, we introduce the classifying spaces relevant for the Pontryagin topology.
The classifying space \(\mathcal{G}\) is defined to minimally capture the topology of a particular Hamiltonian, which can be induced by the following mappings: \(T^{d}\to S^{d}\rightarrow\mathcal{G}\), where the Brillouin zone (BZ) is identified with a torus, \(BZ\cong T^{d}\) [14; 37; 27]. In what follows we assume the first mapping to be trivial, thereby neglecting possibly induced weak invariants, while the second map is classified by a homotopy group \(\pi_{d}(\mathcal{G})\) capturing all possible nontrivial winding, or topology, of the Hamiltonian map. We require four Bloch bands, spanning a four-dimensional real vector space as a fiber at each crystal momentum \(\mathbf{k}\) in three spatial dimensions, which relates to the topology of a rank-4 vector bundle over the BZ hypertorus \(BZ\cong T^{3}\) as the base space. We map the BZ to a 3-sphere, \(T^{3}\to S^{3}\), on which the non-trivial winding of the Hamiltonian captured by the Pontryagin index will be induced by the winding of the isolated band corresponding to the normal bundle of the 3-sphere, \(NS^{3}\), with the other three potentially degenerate bands spanning the tangent bundle \(TS^{3}\). The classifying space, and hence topology, is then set by the partitioning of flattened bands. In particular, partitioning the system into a 3-band and a single-band subspace and assuming a real-valued Hamiltonian due to the presence of spinless spatiotemporal inversion \(\mathcal{PT}\), the classifying space becomes [27] \[\mathsf{Gr}_{1,4}(\mathbb{R})=O(4)/(O(1)\times O(3))\cong S^{3}/\mathbb{Z}_{2}\cong\mathbb{RP}^{3}, \tag{1}\] manifestly dividing by the group of gauge transformations corresponding to the specific partitioning of the band subspaces. The above manifold corresponds to a real Grassmannian \(\mathsf{Gr}_{k,N}(\mathbb{R})\), where \[\mathsf{Gr}_{k,N}(\mathbb{R})=O(N)/(O(k)\times O(N-k)). \tag{2}\] We note that the above formulation also directly elucidates the existence of the Hopf insulator [63; 64; 65; 66; 53]. That is, a two-band system is characterized by the complex Grassmannian \(\mathsf{Gr}_{1,2}(\mathbb{C})\), being the Riemann sphere. Considering the third homotopy group then coincides with the Hopf fibration \(S^{3}\to S^{2}\) [53]. As a next step, on fixing the orientation, corresponding to inducing an orientation on the fibers by enforcing the Bloch eigenvectors to span oriented frames [27], the effective target space of the introduced four-band Hamiltonian extends to an oriented real Grassmannian, \[\widetilde{\mathsf{Gr}}_{1,4}(\mathbb{R})=SO(4)/SO(3)\cong S^{3}, \tag{3}\] where we note that a general oriented real Grassmannian is defined as \[\widetilde{\mathsf{Gr}}_{k,N}(\mathbb{R})=SO(N)/(SO(k)\times SO(N-k)). \tag{4}\] Hence, in three dimensions, the Pontryagin index characterising winding on a 3-sphere, as in the high-energy physics of BPST instantons [57; 58], is a natural invariant introduced by the classifying spaces of real four-band Hamiltonians \(H(\mathbf{k})\) in which one partitions the system into a 3-band and a single-band subspace. This is consistent with general classification results on real topology [27], and is equivalent to the elements of the third homotopy groups, independent of the orientability, \[\pi_{3}(\widetilde{\mathsf{Gr}}_{1,4}(\mathbb{R}))\cong\pi_{3}(S^{3})\cong\mathbb{Z}, \tag{5}\] \[\pi_{3}(\mathsf{Gr}_{1,4}(\mathbb{R}))\cong\pi_{3}(\mathbb{RP}^{3})\cong\mathbb{Z}. \tag{6}\]
We remark that such real topology can be viewed as a higher-dimensional analogue of orientable and non-orientable three-band Euler Hamiltonians in two dimensions, which have been realized experimentally in acoustic metamaterials [49; 46]. Such a perspective, as well as the experimental accessibility of three spatial dimensions, offers a platform for realizing the Hamiltonians which we detail in the following.

### Pontryagin index

We first identify the Pontryagin index as a \(\mathbb{Z}\)-valued bulk invariant realized in our settings. The Pontryagin index captures a winding on a 3-sphere \(S^{3}\). This winding can be explicitly imposed on a Bloch eigenvector corresponding to the isolated band subspace, or equivalently to the normal bundle \(NS^{3}\). The fourth Bloch band \(n_{4}(\mathbf{k})\equiv|u_{4}(\mathbf{k})\rangle\) generates the winding of the Hamiltonian, analogously to the third band constituting the frame basis of the 2-sphere normal bundle \(NS^{2}\), \(n_{3}(\mathbf{k})=|u_{3}(\mathbf{k})\rangle=|u_{1}(\mathbf{k})\rangle\times|u_{2}(\mathbf{k})\rangle\), in a two-dimensional Euler insulator with Hamiltonian \(H^{\chi}(\mathbf{k})=2n_{3}(\mathbf{k})\cdot n_{3}^{T}(\mathbf{k})-\mathbb{1}_{3}\) [39; 67; 30]. The associated higher-dimensional invariant, equating to the Pontryagin index, is given by \[Q=\frac{1}{2\pi^{2}}\int_{S^{3}}\mathrm{d}^{3}k\ \epsilon^{ijpq}(\mathbf{n}_{4})_{i}\partial_{k_{x}}(\mathbf{n}_{4})_{j}\partial_{k_{y}}(\mathbf{n}_{4})_{p}\partial_{k_{z}}(\mathbf{n}_{4})_{q}\;, \tag{7}\] where \((\mathbf{n}_{4})_{i}\) labels the components of the winding vector \(\mathbf{n}_{4}\). This formula is a higher-dimensional (\(S^{3}\) instead of \(S^{2}\)) winding analogue of the Euler invariant \(\chi\) in two-dimensional non-Abelian insulators that can be deduced from the skyrmion number formula [39; 30]. That is, \[\chi=\frac{1}{2\pi}\int_{S^{2}}\mathrm{d}^{2}k\ \mathbf{n}_{3}\cdot(\partial_{k_{x}}\mathbf{n}_{3}\times\partial_{k_{y}}\mathbf{n}_{3}). \tag{8}\] Moreover, with any vector \(\mathbf{n}_{4}\), we can associate an \(SU(2)\)-valued quaternion matrix \[U=(\mathbf{n}_{4})_{0}\mathbb{1}_{2}+i(\mathbf{n}_{4})_{j}\sigma_{j}\;, \tag{9}\] where Einstein summations are implied and \(\sigma_{j}\) with \(j=x,y,z\) correspond to the usual Pauli matrices. Such a unitary matrix can be interpreted in terms of a non-Abelian \(\mathfrak{su}(2)\) connection form \(G\), which is _not_ the general non-Abelian Berry connection commonly used in studying band topology [61], but rather appears as the connection on the principal \(G\)-bundle for the \(SU(2)\) instantons [58], and is defined as \[G=U^{-1}\mathrm{d}U, \tag{10}\] with associated non-Abelian curvature \[\mathcal{F}=\mathrm{d}G+G\wedge G. \tag{11}\] In these terms, the Pontryagin index can be written as \[Q=\frac{1}{24\pi^{2}}\int_{S^{3}}\mathrm{Tr}(U^{-1}\mathrm{d}U)^{3}=\frac{1}{24\pi^{2}}\int_{S^{3}}\mathrm{Tr}\,G^{3}. \tag{12}\] Interestingly, one may show that for four-band phases split into one- and 3-band subspaces, the Pontryagin index, that is the bulk invariant corresponding to the winding number of the isolated Bloch vector of the proposed models, can be constructed in terms of the non-Abelian Berry connection. The non-Abelian Berry connection elements are defined as \[A^{a}_{ij}=\left\langle u_{i}\right|\partial_{k_{a}}\left|u_{j}\right\rangle\;, \tag{13}\] with band indices \(i,j=1,2,3,4\) and momentum indices \(a=1,2,3\).
As detailed in Appendix A, one may show that \[Q=\frac{1}{2\pi^{2}}\int_{T^{3}}\mathrm{d}^{3}k\ \mathbf{A}_{41}\cdot(\mathbf{A}_{42}\times\mathbf{A}_{43})\equiv\frac{1}{2\pi^{2}}\int_{T^{3}}\mathrm{d}^{3}k\ \Big{[}\mathbf{A}_{41},\mathbf{A}_{42},\mathbf{A}_{43}\Big{]}, \tag{14}\] where the \(\mathfrak{so}(4)\) connection elements connect the occupied and unoccupied band subspaces, analogously to the other invariants characterizing non-Abelian phases [56; 49]. For example, in two spatial dimensions, the 3-band Euler invariant can be rewritten as \[\chi=\frac{1}{2\pi}\int_{T^{2}}\mathrm{d}^{2}k\ \epsilon_{\alpha\beta}A^{\alpha}_{31}A^{\beta}_{32}\;, \tag{15}\] where \(\epsilon_{\alpha\beta}\) is a \(2\times 2\) real antisymmetric matrix with unit determinant. From the perspective of Eq. (14), the invariant is viewed as an integral of the volume 3-form formed from the connection vectors in the Bloch bundle over the base hypertorus \(T^{3}\).

### Relation to Pontryagin class

As a next step, we elaborate on the connections of the bulk Pontryagin index to the closely related higher-dimensional characteristic class, the Pontryagin class. The Pontryagin class, characterising four-dimensional topological phases with a reality condition, can be written in terms of the \(SO(4)\)-valued non-Abelian Berry curvature as [48] \[P_{1}=\frac{1}{8\pi^{2}}\int_{T^{4}}\mathrm{d}^{4}k\ \epsilon^{ijpq}F^{\alpha\beta}_{ij}F^{\alpha\beta}_{pq}, \tag{16}\] where the integration measure includes all momenta \(k_{i}\) with \(i=1,2,3,4\), or more generally four parameters of a parameter space, e.g. three momenta and an additional parameter \(t\), in the context of this work. Upon relaxing the reality condition, a complexification of the real Bloch bundle allows one to redefine the associated characteristic class as a second Chern number \[C_{2}=\frac{1}{8\pi^{2}}\int_{T^{4}}\mathrm{d}^{4}k\ \epsilon^{ijpq}\tilde{F}^{\alpha\beta}_{ij}\tilde{F}^{\alpha\beta}_{pq}\;, \tag{17}\] where \(\tilde{F}_{ij}\) is the non-Abelian Berry curvature over the complexified bundle, traced over the occupied bands. This is consistent with the relation between characteristic classes [68] \[p_{k}(E)=(-1)^{k}c_{2k}(E\oplus iE)\;, \tag{18}\] where \(E\) denotes the total space of a real Bloch bundle \(\mathcal{B}\) and \(E\oplus iE\) is its complexification, for any arbitrary positive integer \(k\). We stress that the non-triviality of the first Pontryagin class demands _reality_ of the bundle, hence the necessity for enforcing a symmetry such as \(\mathcal{PT}\). Additionally, the Pontryagin class is only defined for vector bundles of dimension \(4k\), as in terms of cohomology rings \(p_{k}(E)\in H^{4k}(S^{4k},\mathbb{Z})\cong\mathbb{Z}\), meaning that the lowest-dimensional Pontryagin insulator requires four dimensions. However, a four-dimensional Pontryagin insulator requires at least six bands for non-triviality of the invariant [48]. Hence, it cannot be dimensionally reduced to our three-dimensional model in the manner in which an axion insulator can be seen as a descendant of a second Chern insulator [4]. We may however construct an artificial setup to relate to the Pontryagin class in four dimensions, without inducing extra bands. For this, we begin by dimensionally extending the eigenvectors to generate a new Hamiltonian \(H(\mathbf{k},t)=2\left|u_{4}(\mathbf{k},t)\right\rangle\left\langle u_{4}(\mathbf{k},t)\right|-\mathbb{1}_{4}\). We demand \(t\) to be a real parametrization in the range \(t=0\to t=\infty\).
Setting \[\left|u_{4}(\mathbf{k},t)\right\rangle=\sqrt{f(t)}\left|u_{4}(\mathbf{k})\right\rangle\;, \tag{19}\] with a smooth function \(f(t)=\frac{t^{2}}{t^{2}+1}\), we may induce an extended non-Abelian connection, which is specifically of \(\mathfrak{su}(2)\) type exactly in the \(t=\infty\) limit, with [58] \[G=f(t)U^{-1}\mathrm{d}U. \tag{20}\] In this picture, physically the state \(|u_{4}(\mathbf{k},t)\rangle\) does not change as soon as \(t\neq 0\). This can be understood by recognizing the transformation as a time-dependent rescaling, which can be removed by normalisation, as long as the vector is non-vanishing (\(t\neq 0\)). We stress that the extended connection, used here only to establish a link to the Pontryagin characteristic class, does not require normalization of the Bloch states \(|u_{4}(\mathbf{k},t)\rangle\); this is contrary to the states \(|u_{4}(\mathbf{k})\rangle\) used for the quaternion construction of the \(SU(2)\) matrices \(U\), which necessarily need to be normalized to ensure the unitarity of \(U\). Mapping to the associated curvature 2-form upon taking an exterior derivative, the connection yields \[Q=\frac{1}{8\pi^{2}}\int_{D_{4}}\mathrm{Tr}\,\mathcal{F}\wedge\star\mathcal{F}=P_{1}, \tag{21}\] defined on an open disk \(D_{4}\), which is bounded by a 3-sphere \(S^{3}\) parametrized with crystal momentum. Here, the fourth coordinate \(t\) can be interpreted as a tuning parameter that allows for the expansion of the sphere [58]. The above thus shows that one may establish a connection between the Pontryagin index invariant present in the proposed three-dimensional models and, on dimensional extension, one of the four characteristic classes used to capture the topology of vector bundles corresponding to band structures.

## III Models

Within this section we formulate concrete models allowing one to induce non-trivial Pontryagin index topology. We begin by formulating a minimal model, after which a systematic generic framework is introduced to generate general models exhibiting arbitrary values of the invariant.

### Flat band limit

An effective approach to formulating a minimal model is to appeal to the flat-band limit to capture the real topology of topological insulators with non-trivial Pontryagin index. We use a minimal construction, which explicitly induces a winding on \(S^{3}\) by the fourth band [39; 30; 67; 48], while keeping the three other bands separated from the fourth by a gap, namely, \[H^{Q}(\mathbf{k})=2n_{4}(\mathbf{k})\cdot n_{4}^{T}(\mathbf{k})-\mathbb{1}_{4}. \tag{22}\] A non-trivial Pontryagin index for the above system (22) is then simply induced by the winding vector \[\mathbf{n}_{4}(\mathbf{k})=\frac{1}{\mathcal{N}}\begin{pmatrix}\sin p_{x}k_{x}\\ \sin p_{y}k_{y}\\ \sin p_{z}k_{z}\\ m-\sum_{i=1}^{3}\cos p_{i}k_{i}\end{pmatrix}. \tag{23}\] In the above, \(\mathcal{N}\) represents a normalization factor and the parameters \(p_{x},p_{y},p_{z}\in\mathbb{Z}\) are introduced to allow for higher Pontryagin indices \(Q=2p_{x}p_{y}p_{z}\) and \(Q=p_{x}p_{y}p_{z}\) on changing \(m\). We reiterate that, viewed as a winding on \(S^{3}\), any Pontryagin index corresponds to a member of the third homotopy group \(\pi_{3}(S^{3})\cong\mathbb{Z}\). We note however that models of the type of Eq. (23) cannot generate an arbitrary, e.g. prime \(Q>2\), index, given the factorization in \(p_{x},p_{y},p_{z}\), and that hence a more general procedure for inducing the invariant is needed. This will be achieved in the next subsection.
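To make the above construction concrete, the following minimal numerical sketch (with illustrative grid sizes, finite differences and function names of our own choosing) builds the winding vector of Eq. (23) and the flat-band Hamiltonian of Eq. (22), and estimates the Pontryagin index by directly discretizing the integral of Eq. (7) over the Brillouin zone:

```python
import numpy as np

def n4_minimal(kx, ky, kz, m=2.0, p=(1, 1, 1)):
    """Normalized winding vector of Eq. (23)."""
    px, py, pz = p
    v = np.array([np.sin(px * kx),
                  np.sin(py * ky),
                  np.sin(pz * kz),
                  m - np.cos(px * kx) - np.cos(py * ky) - np.cos(pz * kz)])
    return v / np.linalg.norm(v)

def h_flat(kx, ky, kz, **kwargs):
    """Flat-band Hamiltonian of Eq. (22): H = 2 n4 n4^T - 1_4."""
    n = n4_minimal(kx, ky, kz, **kwargs)
    return 2.0 * np.outer(n, n) - np.eye(4)

def pontryagin_index(n4, N=40):
    """Discretization of Eq. (7): eps^{ijpq} n_i d_x n_j d_y n_p d_z n_q
    equals the determinant of the 4x4 frame [n, d_x n, d_y n, d_z n]."""
    ks = np.linspace(-np.pi, np.pi, N, endpoint=False)
    dk = 2 * np.pi / N
    Q = 0.0
    for kx in ks:
        for ky in ks:
            for kz in ks:
                n = n4(kx, ky, kz)
                dnx = (n4(kx + dk, ky, kz) - n) / dk  # forward differences
                dny = (n4(kx, ky + dk, kz) - n) / dk
                dnz = (n4(kx, ky, kz + dk) - n) / dk
                Q += np.linalg.det(np.column_stack([n, dnx, dny, dnz]))
    return Q * dk**3 / (2 * np.pi**2)

# Expected magnitudes |Q| = 2, 1, 0 for m = 0, 2, 4 (cf. Fig. 1(a));
# the overall sign depends on orientation conventions.
print(pontryagin_index(lambda kx, ky, kz: n4_minimal(kx, ky, kz, m=2.0)))
```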
Importantly, the possibility of an arbitrarily high associated winding invariant \(Q\), namely the Pontryagin index, definitionally implies the presence of a \(\mathbb{Z}\)-type invariant, as we later verify by bulk-boundary correspondence.

### Plücker embedding

As alluded to above, one may also more generally construct a Hamiltonian of any Pontryagin index \(Q\). The framework is based on the Plücker embedding approach [27; 37; 70], previously used in the context of two-dimensional Euler insulators [27] and second Euler insulators [48], which are close relatives of the truly four-dimensional Pontryagin insulator [48]. In brief, the embedding amounts to equipping elements of the classifying Grassmannians with multi-vectors, which form the basis for the matrix construction of the Hamiltonian [27]. We start with a flattened Hamiltonian and a Bloch band matrix \(R(\mathbf{k})\), \[H^{Q}(\mathbf{k})=R(\mathbf{k})\begin{pmatrix}-1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}R(\mathbf{k})^{T}. \tag{24}\] The matrix \(R(\mathbf{k})\in SO(4)\) is generated via a parametrisation with three angles \((\phi,\psi,\theta)\), which are provided by the maps \[\psi(\mathbf{k})=\pi\,\mathrm{max}\{k_{x},k_{y},k_{z}\}\;, \tag{25}\] \[\theta(\mathbf{k})=\cos^{-1}(k_{z}/\sqrt{k_{x}^{2}+k_{y}^{2}+k_{z}^{2}})\;, \tag{26}\] \[\phi(\mathbf{k})=Q\tan^{-1}(k_{x}/k_{y})\;, \tag{27}\] for any \(k\)-point in the BZ isomorphic to a 3-torus \(T^{3}\). The parametrization is provided by the map \[R(\mathbf{k})=\mathrm{e}^{i\theta\Gamma_{20}}\mathrm{e}^{i\phi\Gamma_{21}}\mathrm{e}^{i\psi\Gamma_{12}}\;, \tag{28}\] with corresponding matrices \(\Gamma_{ij}=\sigma_{i}\otimes\sigma_{j}\). We stress that Fourier transforming the components of \(H^{Q}(\mathbf{k})\) would in principle yield long-range hoppings, which can be truncated to finitely ranged neighbour hoppings without changing the topology, as long as the gap is not closed upon truncation, which we effectively corroborate at each stage of the analysis [27; 37; 61]. The Bloch vectors, which can be identified with the columns of \(R(\mathbf{k})\) as seen through the spectral decomposition, can be projected on \(S^{3}\) as a base space to form a vector bundle, such that the 3-band subspace spans its tangent bundle \(TS^{3}\), while the last band constitutes the normal bundle \(NS^{3}\). The maps introduced above generate the winding in the bundle and the Pontryagin index can be viewed as the winding of the fiber across the bundle.
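As a rough illustration of this embedding, the frame of Eq. (28) and the flattened Hamiltonian of Eq. (24) can be assembled as in the following sketch; the use of `arctan2` for the inverse tangent of Eq. (27), the small regularization at \(\mathbf{k}=0\) in Eq. (26), and the reliance on `scipy` for the matrix exponential are our own choices rather than part of the original construction:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices, with sigma_0 the identity
s = [np.eye(2), np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
Gamma = lambda i, j: np.kron(s[i], s[j])  # Gamma_ij = sigma_i (x) sigma_j

def R_embedding(kx, ky, kz, Q=1):
    """SO(4) frame of Eq. (28) built from the angle maps of Eqs. (25)-(27)."""
    psi = np.pi * max(kx, ky, kz)
    theta = np.arccos(kz / np.sqrt(kx**2 + ky**2 + kz**2 + 1e-12))
    phi = Q * np.arctan2(kx, ky)  # Q multiplies the azimuthal angle
    R = expm(1j * theta * Gamma(2, 0)) @ expm(1j * phi * Gamma(2, 1)) \
        @ expm(1j * psi * Gamma(1, 2))
    # the exponents i*Gamma_20, i*Gamma_21, i*Gamma_12 are real antisymmetric,
    # so R is real orthogonal up to numerical noise
    return R.real

def h_embedding(kx, ky, kz, Q=1):
    """Flattened Hamiltonian of Eq. (24) with spectrum {-1, 1, 1, 1}."""
    R = R_embedding(kx, ky, kz, Q)
    return R @ np.diag([-1.0, 1.0, 1.0, 1.0]) @ R.T
```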
## IV Physical manifestations of topology

Having introduced the mathematical aspects of the Pontryagin topology and, accordingly, model realizations, we proceed to the main results corresponding to the manifestations of non-trivial hyperspherical windings and underlying non-Abelian structures, as captured by the well established \(S^{3}\cong SU(2)\) isomorphism.

### Bulk-boundary correspondence

We begin by investigating the spectrum and bulk-boundary correspondence in the minimal model with \((p_{x},p_{y},p_{z})=(1,1,1)\) set as in Eq. (23), with an occupied 3-band subspace and an unoccupied band inducing the non-trivial Pontryagin index. The band structure of the insulator with \(Q=2\), as well as its projections reflecting the edge states, are shown in Fig. 1. We notice that its one-dimensional projection is similar to that of the two-dimensional Euler Hamiltonians, with the presence of a similar edge mode, which is a manifestation of the fact that the \(k_{z}=0\) and \(k_{z}=\pi\) momentum-space planes host sub-dimensional 2D Euler phases [39] with renormalized mass. We observe that despite the presence of the edge modes at any energy within the gap, these do not need to connect valence and conduction bands. Such polarization modes were also reported in two-dimensional meronic Euler insulators characterized by non-Abelian topology [46]. Upon taking \(m=2\), these edge states can be separated by an energy gap, much like in an axion insulator (Fig. 1). However, contrary to the axion insulator, there is no necessity of \(\mathcal{T}\)-symmetry breaking for the presence of gapped edge-state pockets, and the edge-state branches can be disconnected from the projected bulk. Accordingly, the \(\theta\) angle obtained on integrating the Chern-Simons form vanishes modulo the angle acquired by an \(SO(3)\) gauge transformation for arbitrary \(Q\), while the present \(\mathcal{PT}\) symmetry is spinless. More explicitly, the axion \(\theta\) angle, proportional to the magnetoelectric polarizability, is given by an integral of the Chern-Simons 3-form \[\theta=\frac{1}{4\pi}\int_{T^{3}}{\rm Tr}\left[{\rm d}A\wedge A+\frac{2}{3}A\wedge A\wedge A\right], \tag{29}\] where the trace is evaluated over occupied states. In the \(3\oplus 1\) band-partitioning case, a gauge transformation on the occupied states in the degenerate limit acts as \(g\in SO(3)\), yielding [71] \[\theta\rightarrow\theta+\frac{1}{12\pi}\int_{T^{3}}\mathrm{d}^{3}k\ \epsilon^{pqr}\mathrm{Tr}\big{[}(g^{-1}\partial_{p}g)(g^{-1}\partial_{q}g)(g^{-1}\partial_{r}g)\big{]}. \tag{30}\] The gauge term can be identified with the \(\mathbb{Z}\)-valued generator of the cohomology \(H^{3}(SO(3),\mathbb{Z})\cong\mathbb{Z}\), namely \(\frac{1}{48\pi^{2}}\int_{S^{3}/\mathbb{Z}_{2}}\mathrm{Tr}(g^{-1}\mathrm{d}g)^{3}\) [72], where we made use of the isomorphism \(SO(3)\cong S^{3}/\mathbb{Z}_{2}\). The map to \(T^{3}\), requiring a double cover, halves the prefactor to \(\frac{1}{96\pi^{2}}\) for the \(\mathbb{Z}\)-valued generator, yielding \(\theta\rightarrow\theta+8\pi n\), where \(n\) is an integer, hence \(\theta\) is defined mod \(8\pi\). If we additionally require \(\mathcal{T}^{2}=1\) symmetry, \(\theta=-\theta\), implying that a phase with a non-trivial magnetoelectric effect manifests itself by the values \(\theta=4\pi\) mod \(8\pi\). We find the latter for odd \(Q\), while \(\theta=0\) mod \(8\pi\) for \(Q\) even. We note that the introduced mod \(8\pi\) classification, due to the reality condition enforced by the spinless \(\mathcal{PT}\) symmetry as applied to a 3-band subspace, can be contrasted with the mod \(4\pi\) axion \(\theta\)-angle classification applicable to the three-dimensional _bosonic_ topological insulators [73]. We also address this distinction in Section VIII.

Figure 1: (**a**) Phase diagram of the minimal model with \(p_{x}=p_{y}=p_{z}=1\) supporting non-trivial Pontryagin indices \(Q\). The mass term controls the \(\mathbb{Z}\)-valued invariant, similarly to the two-dimensional models with Euler topology. (**b**) Projection of bulk bands and edge states along a one-dimensional section of a three-dimensional insulator with non-trivial Pontryagin index \(Q=1\) set in the minimal model.

### Topological phase transitions

We can furthermore construct a phase diagram from the minimal model introduced above. When \(p_{x}=p_{y}=p_{z}=1\), we observe two topological phase transitions, at \(m=\pm 1\) and \(m=\pm 3\). Correspondingly, the Pontryagin index changes from \(Q=2\) to \(Q=1\) and to \(Q=0\), as shown in Fig. 1.
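A quick numerical cross-check of this phase diagram, assuming the `n4_minimal` and `pontryagin_index` routines from the sketch above (the chosen values of \(m\) and the grid size are illustrative), reads:

```python
# Scan the mass term m and evaluate Eq. (7) numerically; cf. Fig. 1(a).
for m in (0.0, 2.0, 4.0):
    Q = pontryagin_index(lambda kx, ky, kz: n4_minimal(kx, ky, kz, m=m), N=30)
    print(f"m = {m:+.1f}  ->  Q ~ {Q:+.2f}")  # expected |Q| = 2, 1, 0
```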
We notice that this finding is different from the situation in two-dimensional orientable Euler insulators, where \(\chi=2N\) with \(N\in\mathbb{Z}\), admitting only a direct transition with \(\Delta\chi=\pm 2\), e.g. from \(\chi=2\) to \(\chi=0\) alongside a removal of Euler nodes, on closing the principal gap. Interestingly, as the trivialisation occurs on closing the principal gap in three-dimensional Pontryagin models, the nodal structure also disappears, as we detail in the following. The reduction of the Pontryagin index through topological phase transitions can be viewed in terms of the trivialisation of sub-dimensional Euler invariants over two-dimensional sections of the BZ at the high-symmetry planes \(k_{z}=0\) and \(k_{z}=\pi\). We suggest that the momentum-space construction of the Hamiltonian reflects the construction of a strong three-dimensional, spinful, \(\mathbb{Z}_{2}\) insulator, where the strong invariant is induced on appropriately coupling two-dimensional \(\mathbb{Z}_{2}\) quantum spin Hall insulator models at \(k_{z}=0\) and \(k_{z}=\pi\). Here, the induced strong invariant is the Pontryagin index, which is \(\mathbb{Z}\)-valued as an effect of a similar construction from Euler insulators hosting a \(\mathbb{Z}\) invariant. We note that other, reminiscent subdimensional relations in terms of two-dimensional fragile (though symmetry-indicated, rather than symmetry-indicator free [27]) topology were established in the context of axion insulators with \(\theta=\pi\) [5; 74; 75]. In the next section, after discussing the nodal structures that naturally arise in the 3-band subspaces of our models, we further propose topological phase transitions to more general multi-gap flag limit phases with four isolated bands, classified by an oriented flag variety \[\widetilde{Fl}_{1,1,1,1}(\mathbb{R})=SO(4)\cong(S^{3}\times S^{3})/\mathbb{Z}_{2}\;, \tag{31}\] that thereby serve as an unambiguous reference to further interpret the outlined topological structures. Here we note that a general oriented flag manifold is defined as \[\widetilde{Fl}_{p_{1},\ldots,p_{N}}(\mathbb{R})=SO(N)/(SO(p_{1})\times\cdots\times SO(p_{N})). \tag{32}\] Such multi-gap flag phases, where all bands are fully partitioned, can be accessed on addition of a proper term to the Hamiltonian with Pontryagin index \(Q\), or on a smooth reparametrization of the diagonal matrix in the embedding construction, in both cases necessarily removing any band crossings to enter the full multi-gap regime. We will moreover see that the connection to the flag limit will provide for an unambiguous reference to fully elucidate the above topological features. That is, we will show that such topological transitions can be tracked with the bulk axion angle and the bulk-edge correspondence of the distinct phases, with associated degeneracies of the surface states in different, neighbouring gaps. Before turning to these aspects we however first comment on the naturally emerging linking structures in our models.

### Nodal structures

In this section, we discuss the nodal structures present in the 3-band subspaces within the band structures of the introduced models. Such a characterisation is necessary to further understand the bulk invariant \(Q\) induced by \(\pi_{3}(\mathsf{Gr}_{1,4})\cong\pi_{3}(S^{3})\cong\mathbb{Z}\), central to this work, given that
the nodal topology is supported by the tangent bundle \(TS^{3}\) of the classifying space, which is parallelizable [76], in contrast with the winding of the isolated band in the normal bundle \(NS^{3}\) inducing the invariant. The parallelizability of the tangent bundle of \(S^{3}\) implies that the nodal structure cannot be responsible for the value of \(Q\), a manifestation of which we demonstrate by the explicit debraiding construction outlined in this section.

Figure 2: Nodal structures over the three-dimensional Brillouin zone for the minimal model of the Hamiltonian provided in Eq. (22). (**a**) \(Q=2\), with four linked nodal rings. (**b**) \(Q=1\), with two nodal rings present. (**c**) \(Q=0\), with trivial nodal structure and vanishing linking numbers. We show that the structures are not protected by the Pontryagin index \(Q\) but are easily realized in the context of the presented models.

We stress that these nodal links appear very naturally in our models and can be debraided [31] quite efficiently to further enter the full flag limits introduced in the previous subsection. As a result, we empirically observe that the presented model setting provides for an excellent platform to accomplish braiding in rather simple four-band models that should appeal to metamaterial settings, see also Section VII. Here, we elaborate on the non-Abelian charges carried by the nodal structures present in the models and the debraiding necessary to access the reference flag limits further discussed in the next section. We include explicit parametrizations of the corresponding braiding Hamiltonians in Appendix C. To characterize the nodal topology, we start by noticing that the frames \(\{\ket{u_{1}},\ket{u_{2}},\ket{u_{3}},\ket{u_{4}}\}\), constituting a _vierbein_ at any \(k\)-point, can acquire an accumulated angle on being parallel transported around any node due to a band touching. These are captured by the first homotopy group of an unoriented flag variety \(\mathsf{Fl}_{1,1,1,1}\), with \(O(4)\) representing general rotations of the vierbein and each \(\mathbb{Z}_{2}\) capturing the gauge freedom of a single Bloch vector, \(\ket{u_{i}}\rightarrow-\ket{u_{i}}\), enforced by the real symmetry. The corresponding fundamental group is \[\pi_{1}(\mathsf{Fl}_{1,1,1,1})=\pi_{1}\Big{(}\frac{O(4)}{\mathbb{Z}_{2}^{4}}\Big{)}\cong\bar{P}_{3}, \tag{33}\] where \(\bar{P}_{3}\) is the Salingaros vee group of the Clifford algebra \(Cl_{0,3}\), which is a non-Abelian group of order sixteen, with ten conjugacy classes. In particular, the quaternion group is a subgroup of this group, implying its non-Abelian character. The Salingaros vee group is obtained from \(Cl_{0,3}\) by first defining a basis for the Clifford algebra as \[\mathcal{B}=\{1,e_{1},e_{2},e_{3},e_{1}e_{2},e_{2}e_{3},e_{1}e_{3},e_{1}e_{2}e_{3}\}\equiv\{e_{\mathbf{i}}\}, \tag{34}\] where the set \(\{e_{1},e_{2},e_{3}\}\) generates the algebra - this fact is important when assigning the non-Abelian charges to nodes in the energy spectrum. The vee group is then defined as \[\mathsf{G}=(\{\pm e_{\mathbf{i}}\mid e_{\mathbf{i}}\in\mathcal{B}\},\times), \tag{35}\] where \(\times\) represents the Clifford algebra multiplication. There exists a ring isomorphism between \(Cl_{0,3}\) and the _split-biquaternions_, which are a type of hypercomplex number based on the quaternions. While quaternions are of the form \(w+x\mathbf{i}+y\mathbf{j}+z\mathbf{k}\) and have real coefficients \(\{w,x,y,z\}\), biquaternions have complex coefficients multiplying the imaginary units, lifting the number of real dimensions from \(4\) to \(8\).
The split-biquaternions are then obtained by having the coefficients be _split-complex_ numbers, which are of the form \(z=x+\mathbf{i}y\), with \(\mathbf{i}^{2}=1\) rather than \(\mathbf{i}^{2}=-1\). The split-biquaternions are also isomorphic to \(\mathbb{H}\oplus\mathbb{H}\), where \(\mathbb{H}\) are the quaternions. This shows how the quaternion charges of \(2\oplus 1\) band models can be found as a subgroup of the charges in \(3\oplus 1\) band ones. The ring isomorphism also implies a group isomorphism between the vee group of \(Cl_{0,3}\) and the group of split-biquaternions, and the charges can be assigned as shown in Appendix B. As the nodal topology of four-band non-Abelian insulators with real topology requires non-triviality of the _first_ homotopy group of the corresponding flag variety, a similar classification was also achieved in one-dimensional and two-dimensional phases [49], which, however, cannot realize the nodal ring structures and associated debraiding, requiring three spatial dimensions as discussed in this work. We find that, by construction of the minimal models introduced in Eq. (22), the linking number of _all_ nodal rings corresponding to nodes in different gaps is equal to the Pontryagin index \(Q\) therein. The topological phase transitions introduced in the previous sections cause the disappearance of the nodal structures through an associated debraiding, as shown in Fig. 3. This process subsequently involves: the creation of additional nodal rings on gap closure, debraiding which flips the nodal charges, reconnecting the rings, and finally contracting them. On debraiding, which allows unlinking the structure on flipping the charges, the contraction can remove the nodes, ultimately gapping out the phase. We give an explicit parametrization of this process in Appendix C. However, as we explain, the debraiding of the nodal ring structure does not necessarily require closing the principal gap (the gap between the 3-band and single-band subspace), see Fig. 4, which is supported by the fact that the tangent bundle of the 3-sphere, \(TS^{3}\), hosting the 3-band subspace of our models, is parallelizable. This property admits a removal of singularities in the tangent bundle due to the nodes, without accessing the fourth band from the normal bundle \(NS^{3}\) through principal gap-closing degeneracies. We show by explicit construction that the presence of split-biquaternion nodes is _not_ intrinsically due to the non-triviality of the Pontryagin index as a bulk invariant, which would be equivalent to the necessity of closing a gap for the removal and unbraiding of the nodal structure _only_, see Fig. 3. Namely, we find that the nodal structure can be unbraided _without_ closing a neighbouring gap, bringing the band structure to a state where nodes of opposite split-biquaternion charges can be annihilated with local perturbations. However, we emphasize that such a debraiding process, involving the introduction of additional rings and a refined twisting of the nodal rings (Fig. 3(b-d)) to flip charges on debraiding, enabling further ring annihilation, is highly non-local, effectively providing protection against arbitrary local perturbations. We also provide an explicit parameterization of this process in Appendix C. Interestingly, we find the nodal rings to enclose two-dimensional regions which, when traversed on parallel-transporting the eigenstate frames, induce \(\pi\)-phase shifts associated with the discontinuities in the Berry connection over the BZ.
We refer to these as Dirac sheets, in analogy to the lower-dimensional analogues, namely Dirac strings, corresponding to the gauge-connection discontinuities in two-dimensional Euler phases. In the further parallel-transport study, for each value \(Q\neq 0\) we additionally obtain the Wilson loop spectrum, showing non-trivial winding across the BZ, contrary to \(Q=0\), see Fig. 5. The winding obtained is even, as in Euler phases, contrary to the odd winding found in Stiefel-Whitney insulators [36].

## V Reference Flag Limits

In this section, we study the multi-gap topology of the full flag limit of the Hamiltonian obtained from the models with non-trivial Pontryagin index, when all bands are non-degenerate across the entire BZ, as we annihilate the nodes after unbraiding the nodal structure without closing the principal bulk energy gap. As all nodes are removed, the classifying space is given by \(\mathsf{Fl}_{1,1,1,1}\) and, through homotopy classification, we obtain the homotopy classes of Hamiltonians \(\pi_{3}(\mathsf{Fl}_{1,1,1,1})\cong\pi_{3}(SO(4))\cong\mathbb{Z}\oplus\mathbb{Z}\). In other words, by removing the degeneracies between bands, a phase transition occurs which allows the new system to host two \(\mathbb{Z}\) invariants \((w_{L},w_{R})\) rather than one \((Q)\). We find that these invariants are not independent. These flag limits and limits hosting known topologies serve as an important reference to further elucidate our above findings. As the classifying space has a direct link to \(SO(4)\), it is useful to consider how the \(\mathbb{Z}\oplus\mathbb{Z}\) invariant arises in the case of \(SO(4)\) matrices. In this regard it is useful to call upon the well known isomorphism \[SO(4)\cong\frac{S_{L}^{3}\times S_{R}^{3}}{\mathbb{Z}_{2}}, \tag{36}\] which is a consequence of the fact that \(SO(4)\) rotations can be split into two _isoclinic_ rotations (left and right) acting on the vector of interest from the right and left. An arbitrary \(SO(4)\) rotation leaves two planes invariant in the sense that any vector within these planes stays in its plane during the rotation. The third homotopy group of this space can then be considered to arise from the 3D winding numbers of two copies of the 3-sphere: \[\pi_{3}(S_{L}^{3}\times S_{R}^{3})\cong\pi_{3}(S_{L}^{3})\oplus\pi_{3}(S_{R}^{3})\cong\mathbb{Z}\oplus\mathbb{Z}. \tag{37}\] The \(\mathbb{Z}_{2}\) quotient does not affect \(\pi_{n}\) for \(n\geq 2\) as it is a discrete space. Using this information, a Hamiltonian that hosts this set of invariants may be constructed as detailed in the remainder.

Figure 3: Explicit unlinking of the nodal structure through the creation of adjacent nodal rings on closing a gap, as parametrized in Appendix C. The blue and orange rings show nodal lines between the bands in the occupied 3-band subspace and the green rings show nodal lines between the highest occupied and unoccupied bands. (**a**) initial nodal links, (**b-c**) creation of additional rings, (**d**) connecting rings, (**e-f**) splitting and disentangling green and blue rings, (**g**) removal of green rings, (**h**) contraction and annihilation of blue rings to enter the gapped flag limit. Numerically, the debraiding on closing the principal gap can be achieved by adding a diagonal mass term to the Hamiltonian, similarly to the two-dimensional Euler insulators and semimetals, where an effective mass to debraid the nodes can also be realized by adding onsite disorder [54].
A generic four-band flag Hamiltonian can be factored as \[H(\mathbf{k})=V(\mathbf{k})EV(\mathbf{k})^{T}, \tag{38}\] where \(V\) is an \(SO(4)\) matrix of the normalized eigenvectors and \(E=\text{diag}[-2,-1,1,2]\). Distinct eigenvalues are used to enforce the fact that there are gaps between all bands. \(V\) can then be factored into \(V_{R}V_{L}\), where \(V_{R}\) and \(V_{L}\) are the right and left isoclinic rotations. An explicit form of this factorisation is [77] \[V_{R}=\left(\begin{array}{cccc}r_{0}&-r_{3}&r_{2}&r_{1}\\ r_{3}&r_{0}&-r_{1}&r_{2}\\ -r_{2}&r_{1}&r_{0}&r_{3}\\ -r_{1}&-r_{2}&-r_{3}&r_{0}\end{array}\right), \tag{39}\] \[V_{L}=\left(\begin{array}{cccc}l_{0}&-l_{3}&l_{2}&-l_{1}\\ l_{3}&l_{0}&-l_{1}&-l_{2}\\ -l_{2}&l_{1}&l_{0}&-l_{3}\\ l_{1}&l_{2}&l_{3}&l_{0}\end{array}\right), \tag{40}\] where \(r_{0}^{2}+r_{1}^{2}+r_{2}^{2}+r_{3}^{2}=1\) and \(l_{0}^{2}+l_{1}^{2}+l_{2}^{2}+l_{3}^{2}=1\). These conditions ensure that \(V_{R}\) and \(V_{L}\) are orthogonal matrices and imply that the components \(r_{i}\) and \(l_{i}\) form a pair of four-dimensional vectors that lie on 3-spheres. Considering now the map from the BZ to each of these 3-spheres, we can use the winding vector defined in the previous model, \[\mathbf{r}=(\sin w_{L}k_{x},\sin k_{y},\sin k_{z},2-\cos w_{L}k_{x}-\cos k_{y}-\cos k_{z})^{T}, \tag{41}\] \[\mathbf{l}=(\sin w_{R}k_{x},\sin k_{y},\sin k_{z},2-\cos w_{R}k_{x}-\cos k_{y}-\cos k_{z})^{T}, \tag{42}\] to induce the winding on each of these spheres. However, in this case it is possible to have a different winding number on \(S_{R}^{3}\) and \(S_{L}^{3}\) through the parameters \(w_{L}\) and \(w_{R}\). The Bloch eigenvectors are then obtained by taking the columns (or rows) of \(V_{R}V_{L}\).

Figure 4: Visualization of the process of debraiding of nodal structures without closing the principal gap. The blue and orange rings show nodal lines in the occupied 3-band subspace. (**a**) initial nodal links, (**b**) creation of additional rings, (**c**) connecting rings, (**d-e**) splitting and disentangling rings, (**f-g**) closure and removal of orange rings, (**h**) leftover blue ring, which can be contracted and removed, obtaining the fully gapped flag limit. The debraiding was obtained on adding a parametrised dispersive band-splitting term to the Hamiltonian, contrary to the previously introduced debraiding with a diagonal term closing the principal gap, see also Appendix C.

Although it is not immediately obvious how these two winding numbers can be extracted from the eigenvectors, it is important to note the following. Calculation of the winding number of the Bloch eigenvectors using Eq. (7) gives either \(w_{L}-w_{R}\) or \(w_{L}+w_{R}\), depending on whether we take \(V=V_{R}V_{L}\) or \(V=V_{R}V_{L}^{T}\). The same number is obtained from _all_ eigenvectors, so this alone is not enough to characterize the phase.
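A compact numerical sketch of this construction (function names are ours; following Eqs. (39)-(42), the vector \(\mathbf{r}\), which carries \(w_{L}\), enters \(V_{R}\), and the vector \(\mathbf{l}\), which carries \(w_{R}\), enters \(V_{L}\)) is:

```python
import numpy as np

def v_right(r):
    """Right isoclinic rotation of Eq. (39) built from a unit 4-vector r."""
    r0, r1, r2, r3 = r
    return np.array([[ r0, -r3,  r2,  r1],
                     [ r3,  r0, -r1,  r2],
                     [-r2,  r1,  r0,  r3],
                     [-r1, -r2, -r3,  r0]])

def v_left(l):
    """Left isoclinic rotation of Eq. (40) built from a unit 4-vector l."""
    l0, l1, l2, l3 = l
    return np.array([[ l0, -l3,  l2, -l1],
                     [ l3,  l0, -l1, -l2],
                     [-l2,  l1,  l0, -l3],
                     [ l1,  l2,  l3,  l0]])

def winding_vec(kx, ky, kz, w):
    """Normalized winding vector of Eqs. (41)-(42) with winding w along k_x."""
    v = np.array([np.sin(w * kx), np.sin(ky), np.sin(kz),
                  2 - np.cos(w * kx) - np.cos(ky) - np.cos(kz)])
    return v / np.linalg.norm(v)

def h_flag(kx, ky, kz, wL=1, wR=2):
    """Full-flag Hamiltonian of Eq. (38) with all four bands non-degenerate."""
    V = v_right(winding_vec(kx, ky, kz, wL)) @ v_left(winding_vec(kx, ky, kz, wR))
    return V @ np.diag([-2.0, -1.0, 1.0, 2.0]) @ V.T
```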
We note that recently another phase of a \(\mathcal{PT}\)-symmetric four-band model has been proposed, the so-called real Hopf insulator (RHI) [56]. Contrary to our model, such a phase requires occupied and unoccupied two-band subspaces classified in the degenerate limit. In other words, the partitioning in that case corresponds to the classifying space \[\widetilde{\mathsf{Gr}}_{2,4}(\mathbb{R})=SO(4)/(SO(2)\times SO(2))\cong S^{2}\times S^{2}. \tag{43}\] Correspondingly, the bulk invariant is given by \[\pi_{3}(\widetilde{\mathsf{Gr}}_{2,4}(\mathbb{R}))\cong\pi_{3}(S^{2}\times S^{2})\cong\mathbb{Z}\oplus\mathbb{Z}\;, \tag{44}\] which can be denoted as \((\chi_{w},\chi_{z})\) and referred to as the double Hopf index. The Hopf invariants (indices) are given by [56]: \[\chi_{w/z}=-\frac{1}{4\pi}\int_{T^{3}}a_{w/z}\wedge f_{w/z}\;, \tag{45}\] where \(f_{w/z}=\mathrm{d}a_{w/z}\), while \(a_{w/z}=i\bar{z}_{w/z}\mathrm{d}z_{w/z}\) are connection 1-forms defined in terms of Hopf maps induced by the complex vectors \(z_{w/z}\) [56; 39]. It follows that the real Hopf insulators with \((\chi_{w},\chi_{z})=(0,Q)\) can be obtained on repartitioning the bands of the model with Pontryagin index, or by closing the upper and lower gaps in the flag limit phases. Any attempts to continuously connect the Hamiltonians of these three phases will necessarily fail, as a gap closing or reopening needs to occur, corresponding to a change of the invariants following from the distinct classifying spaces. There is, however, an interesting correspondence between \(w_{L}\) and \(w_{R}\) and the invariants constructed to classify the real Hopf insulator in Eqs. (43)-(45), see Appendix D, \[\begin{split}\chi_{w}&=\frac{1}{2\pi^{2}}\int_{BZ}\mathrm{d}^{3}\mathbf{k}\;\epsilon_{ijkl}r^{i}r^{j}_{k_{x}}r^{k}_{k_{y}}r^{l}_{k_{z}}=w_{R},\\ \chi_{z}&=-\frac{1}{2\pi^{2}}\int_{BZ}\mathrm{d}^{3}\mathbf{k}\;\epsilon_{ijkl}l^{i}l^{j}_{k_{x}}l^{k}_{k_{y}}l^{l}_{k_{z}}=-w_{L}.\end{split} \tag{46}\] We also confirm with further numerical evaluations that \(\chi_{z}\) and \(\chi_{w}\) _correspond exactly_ to the winding numbers \(w_{L}\) and \(w_{R}\) of the isoclinic rotation matrices. This implies that although these invariants were derived assuming a classifying space of \(\widetilde{\mathsf{Gr}}_{2,4}\), the opening of a gap in the top and bottom two-band subspaces does not change the topology. In fact, the only nontrivial change happens on closing two adjacent gaps, as \(\pi_{3}(\mathsf{Fl}_{1,1,1,1})\cong\pi_{3}(\mathsf{Fl}_{1,1,2})\cong\pi_{3}(\mathsf{Gr}_{2,4})\cong\mathbb{Z}\oplus\mathbb{Z}\), but \(\pi_{3}(\mathsf{Gr}_{1,4})\cong\mathbb{Z}\). These results also show that \(\chi_{w}\) and \(\chi_{z}\) as defined above are not necessarily Hopf invariants, as there is no \(S^{2}\) structure in the classifying space of this model (although there is a link between the winding number on the 3-sphere and the Hopf invariant). We thus observe that we obtain an example of true multi-gap topology and there is a sequence of phase transitions that can be demonstrated by closing successive gaps. Starting from a fully gapped model with the classifying space \(\mathsf{Fl}_{1,1,1,1}\), we can specify \(w_{L}\) and \(w_{R}\) to obtain a "real Hopf" phase. We can then close two adjacent gaps, leaving only the highest or lowest energy gap open, to obtain the model characterized by the Pontryagin index. Finally, we can trivialize the model by closing the highest energy gap. It is important to note that the eigenfunctions in the fully gapped model also possess a 3D winding number (Pontryagin index). The winding, however, can be fully determined from the values of \(w_{L}\) and \(w_{R}\) and therefore does not constitute an independent invariant. The crucial change that happens when closing two adjacent gaps is the ability to mix the bottom three bands through gauge transformations. This introduces a gauge dependence to \(w_{L}\) and \(w_{R}\), which can easily be checked numerically.
It does not, however, affect the winding number of the eigenvectors, as captured by the Pontryagin index, which now becomes an independent invariant. We may also consider the stable flag limit in which we extend the band structure to an arbitrary number of isolated bands, enforcing band gaps between them in the flattened Hamiltonian. The classifying space for this new model is \(O(N)/\mathbb{Z}_{2}^{\times N}\), with third homotopy group \(\pi_{3}(O(N)/\mathbb{Z}_{2}^{\times N})\cong\pi_{3}(SO(N))\cong\mathbb{Z}\) for \(N\geq 5\). It is known [58] that every simple compact group \(\tilde{G}\) contains an \(SU(2)\) subgroup and the Pontryagin index as defined in Eq. (12) classifies all the maps in \(\pi_{3}(\tilde{G})\). The invariant can actually be evaluated explicitly from Eq. (12) by inserting the frame of \(N\) eigenvectors in place of the matrix \(U\). Alternatively, the \(\mathbb{Z}\) invariant arising from the homotopy classification can be understood as the number of times the Hamiltonian wraps the \(SU(2)\) subgroup of \(SO(N)\).

## VI Edge states and disorder

In this section we comment on the edge states due to the bulk-boundary correspondence in the reference flag limits, which we contrast with the original models consisting of one principal gap and a 3-band subspace. Furthermore, we show that these edge states are robust up to gap-closing disorder in the \(3\oplus 1\) Pontryagin phases, as well as in the related flag limits.

### Edge states in flag limits

We further elaborate on the bulk-boundary correspondence between the flag invariants \(w_{L},w_{R}\) and the presence and degeneracies of the multi-gap edge states. We find that the top and bottom gaps support the presence of dangling edge states, with degeneracies given by multiples of \(w_{L}-w_{R}\) and \(w_{L}+w_{R}\), see Fig. 6. This should be contrasted with the \(Q\) edge states in the principal gap of the degenerate \(3\oplus 1\) limit with Pontryagin index, as well as with real Hopf insulators, which do not require the presence of two subsidiary band gaps in the occupied and unoccupied two-band subspaces. Additionally, the further connection between edge states in the flag limit and in \(3\oplus 1\) phases with Pontryagin index can be seen on breaking \(\mathcal{T}\), while keeping \(\mathcal{PT}\), in the latter by the addition of a constant matrix term. Namely, this results in additional edge states appearing in the lower nodal part of the 3-band subspace, besides the edge states in the principal gap. As we showed in Section IV, the nodes are not protected by the bulk invariant, hence their removal establishes a link of the edge states in the \(3\oplus 1\) phase to the edge states in the full flag limit.

### Robustness to disorder

In this section, we show the protection of the edge states supported by the non-trivial Pontryagin index \(Q\) of the bulk Hamiltonian, up to gap closing. While the trivial phase \(Q=0\) in the proposed model also has associated edge modes, we show that these are not topologically protected, hence not robust to disorder. We impose Anderson disorder by the following perturbation Hamiltonian \[\Delta H_{\text{disorder}}=\sum_{n,i}\delta\mu_{i,n}c_{i,n}^{\dagger}c_{i,n}, \tag{47}\] where \(n\) is a unit cell label, \(i=1,2,3,4\) is an orbital label, and \(\delta\mu_{i,n}\in\big{[}-W,W\big{]}\) is a local change in the chemical potential on adding disorder with a uniform random distribution and amplitude \(W\).
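A minimal sketch of how such a disorder term can be added to a real-space tight-binding Hamiltonian (function and variable names are ours; the clean Hamiltonian is assumed to be given as a dense matrix with four orbitals per unit cell) is:

```python
import numpy as np

def add_anderson_disorder(h_real_space, W, rng=None):
    """Add the onsite disorder of Eq. (47): an independent chemical-potential
    shift delta_mu drawn uniformly from [-W, W] for every orbital in every
    unit cell of the real-space Hamiltonian matrix."""
    rng = np.random.default_rng() if rng is None else rng
    delta_mu = rng.uniform(-W, W, size=h_real_space.shape[0])
    return h_real_space + np.diag(delta_mu)
```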
We find that the edge states remain exponentially localized on adding weak disorder and dissolve once the disorder strength \(W\) exceeds the size of the bulk gap, see Fig. 7. ## VII Experimental realizations We now further elaborate on experimental realizations which we suggest for studying physical manifestations of the novel type of three-dimensional Pontryagin band topology studied in this work. First, we propose a metamaterial realization of the described minimal phases with \(Q=2\), \(Q=1\), \(Q=0\) (\(p_{x},p_{y},p_{z}=1\)) with the corresponding edge states, hence a way to measure and empirically validate the \(\mathbb{Z}\)-invariant based on the Pontryagin index. The protocol is based on the idea of extending acoustic resonators to a 3D synthetic matter construction. To generate the lowest Pontryagin indices, connecting tubes up to second neighbours is necessary, and we propose that the \(\pi/2\) phase shifts generating imaginary hopping amplitudes can be ensured by proper phase-shifting impedance matching in the materials constituting the connecting tubes. On extending the setup of an analogous experiment used to study non-Abelian band topology [49], the amplitudes of the hopping parameters can be controlled with the diameters of the tubes, with the coupling of any two connected resonators captured by an effective Hamiltonian \[H_{\text{eff}}=\begin{pmatrix}\omega_{1}&e^{i\phi}|\kappa|\\ e^{-i\phi}|\kappa|&\omega_{2}\end{pmatrix}. \tag{48}\] Here, \(|\kappa|\) and \(\phi\) correspond to the amplitude and the phase of the coupling \(\kappa\) representing a particular hopping, and \(\omega_{1},\omega_{2}\) are natural resonator frequencies identified with the onsite energies in the tight-binding model. While phases with higher index \(Q\) can, in principle, be created, that would require additional connections, which might be unfeasible from the technical point of view. We propose that the mass term \(m\) crucial for the topological phase transition can be controlled by changing the thickness of the metamaterial tubes. Such a procedure would realize a three-dimensional extension of protocols implemented in the previous works studying Euler topology in two spatial dimensions [46; 49]. Figure 5: Non-Abelian Wilson loop winding for Pontryagin index \(Q=0\) (**a**), \(Q=1\) (**b**), \(Q=2\) (**c**) and \(Q=3\) (**d**). The loops were evaluated at the high-symmetry planes \(k_{y}=0\) and \(k_{y}=\pi\). For \(Q=0\) we find no winding, while the windings for \(Q=1\), \(Q=2\), \(Q=3\) are even as in two-dimensional Euler insulators, contrary to the Stiefel-Whitney insulators with odd Wilson loop winding. Additionally, we propose that the non-trivial nodal structures can be realized in optical trapped-ion experiments. We would expect the protocol to be analogous to the closely-related, lower-dimensional experiment used for studying the Euler class in the topological Euler insulator [40], with 3-band Euler Hamiltonians realized in hyperfine states of ytterbium \({}^{171}\)Yb\({}^{+}\) ions. The four bands with Pontryagin index represented by the fourth-band winding might be realized in atomic states of four-level systems, such as neodymium Nd\({}^{3+}\), in that case with states labelled by the term symbols \({}^{4}F_{5/2}\), \({}^{4}F_{3/2}\), \({}^{4}I_{9/2}\), \({}^{4}I_{11/2}\). Here, inverting the band structure while not changing the topology, such that three bands are unoccupied, might be simpler to achieve in a real experiment. 
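As a simple consistency check of Eq. (48), the two hybridized resonator modes follow from diagonalizing \(H_{\text{eff}}\); the sketch below is our own illustration (parameter values are assumed) and shows how the coupling amplitude \(|\kappa|\), set by the tube diameter, controls the mode splitting.

```python
# Hybridized mode frequencies of the coupled-resonator Hamiltonian in Eq. (48)
# (our illustration; the parameter values are placeholders).
import numpy as np

def resonator_modes(omega1, omega2, kappa_abs, phi):
    H_eff = np.array([[omega1, kappa_abs * np.exp(1j * phi)],
                      [kappa_abs * np.exp(-1j * phi), omega2]])
    return np.linalg.eigvalsh(H_eff)   # real eigenfrequencies of the Hermitian H_eff

print(resonator_modes(1.0, 1.0, 0.2, np.pi / 2))   # two identical resonators -> [0.8, 1.2]
```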
The explicit parametrization of the Hamiltonians naturally realizing the manipulation of the linked nodal structures by the braiding processes described in Section IV is provided in Appendix C. We note that the debraiding on closing the principal gap (Fig. 3), which is accessible by a simple diagonal term, might be experimentally simpler to realize than the debraiding without closing the gap (Fig. 4). ## VIII Discussion and Conclusion We show that the Pontryagin index naturally induces a \(\mathbb{Z}\)-type invariant in real-valued three-dimensional four-band Hamiltonians, which we further corroborate by topological phase transitions and bulk-boundary correspondence. This can be contrasted with other known topologies in three-dimensional systems. Contrary to the stable \(\mathbb{Z}_{2}\) invariant of a three-dimensional topological insulator (Altland-Zirnbauer class AII) protected by spinful \(\mathcal{T}\) (\(\mathcal{T}^{2}=-1\)) symmetry, the centrosymmetric case thereof, or an axion insulator with broken \(\mathcal{T}\)-symmetry (having a \(\theta\)-angle \(\theta=\pi\)), we find that the Pontryagin models can host a trivial Chern-Simons 3-form, \(\theta=0\) mod \(8\pi\), for even winding number \(Q\), contrary to odd windings that result in a non-trivial angle, \(\theta=4\pi\) mod \(8\pi\). Here, the mod \(8\pi\) classification, rather than mod \(2\pi\), follows from the cohomology of the gauge group \(SO(3)\), introducing its elements in the gauge transformations compliant with the definition of magnetoelectric polarizability. This manifestation of real topology can also be contrasted with the finding of \(\theta=2\pi\) mod \(4\pi\) corresponding to the three-dimensional bosonic topological insulator enjoying higher (e.g. unitary rather than orthogonal) gauge groups with different cohomology generators, under spinless time-reversal symmetry (\(\mathcal{T}^{2}=1\)), which results in yet another quantization of the non-vanishing magnetoelectric polarizability [73]. We note that in the case of the flag limits introduced in the text, the gauge dependence manifests itself also in the physical values that the axion angle can take, lacking modular constraints, contrary to the \(3\oplus 1\) models with Pontryagin index. Figure 7: Edge and bulk states of the Pontryagin \(Q=1\) (**a**-**b**) and associated flag limit \((w_{L},w_{R})=(0,1)\) phases (**c**-**d**) subject to weak disorder (\(W=0.3\)). The one-dimensional wavefunction sections of the clean phases (_black_) were plotted against the same states perturbed with weak Anderson disorder (_red_). In both phases, the edge states remain exponentially localized (**a**,**c**) as long as the disorder strength is not sufficient to close the gap. The bulk states (**b**,**d**) do not show similar robustness, becoming distorted on adding disorder. Finally, it should be noted that for the \(2\oplus 2\) partitioning, as found in the real Hopf insulators, an analogous constraint due to cohomology should also not exist, as a gauge transformation with \(g\in SO(2)\) cannot induce a non-trivial integer-valued shift in \(\theta\), given \(H^{3}(SO(2),\mathbb{Z})\cong 0\). Additionally, we find that the even Wilson loop winding excludes the possibility of 3D Stiefel-Whitney insulators with another \(\mathbb{Z}_{2}\) invariant (the second Stiefel-Whitney class \(w_{2}\)) in the oriented Hamiltonians introduced in this work. 
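The \(\theta\)-angle quantization quoted above can be summarized compactly (our restatement of the values stated in this section, not an additional result): \[\theta \;=\; 4\pi\,Q \;\;\mathrm{mod}\;\;8\pi\,,\] which reproduces \(\theta=0\) mod \(8\pi\) for even \(Q\) and \(\theta=4\pi\) mod \(8\pi\) for odd \(Q\) in the \(3\oplus 1\) Pontryagin phases.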
We contrast these findings with full flag limit models, which also support non-trivial \(\theta\) similarly to the real Hopf insulators hosting associated surface Chern numbers [56; 78]. Hence, by exhaustion of the known three-dimensional topological phases, this suggests that the 3-band subspace obtained on a band-inversion of the flag limit indeed induces a new type of non-Abelian real topology, which to the best of our knowledge has not been reported in previous works. Overall, our findings provide a realization of non-Abelian multi-gap topological insulators with a single \(\mathbb{Z}\)-invariant in three spatial dimensions, as supported by the multitude of unique results and mathematical relations to other types of topologies, as elucidated in this work. We conclude that, contrary to the Abelian topological insulators with a single \(\mathbb{Z}\)-valued Hopf index, the origin of the invariant stems from the non-triviality of the Pontryagin index characterizing the Bloch bundle. We introduced models for an arbitrary index, studied the bulk-boundary correspondence due to the integer invariants, investigated the stability of edge modes against disorder, and also referenced these findings to the full flag limit. The index-changing topological phase transitions, ultimately trivializing the model, were studied, and, by adding Hamiltonian terms that open all gaps and access the full flag limit, we made connections to real Hopf and axion insulators. The nodal structures, with non-trivial linking numbers removable by a highly non-local parallelization of the subbundle corresponding to the 3-band subspace hosting the split-biquaternion nodes, were studied and demonstrated within a class of minimal models. These simple models promise a realization in a wide variety of experimental settings that include metamaterials and quantum simulators. Despite the 'trivializability' of nodal rings discussed in this work, we showed a non-Abelian character of the nodes in a three-dimensional setting, offering a non-trivial platform for fusion and braiding, beyond the quaternion algebra of nodes in Euler semimetals. ## IX Acknowledgements W. J. J. acknowledges funding from the Rod Smallwood Studentship at Trinity College, Cambridge. A. B. has been partly funded by a Marie Sklodowska-Curie fellowship, grant no. 101025315. R.-J. S. acknowledges funding from a New Investigator Award, EPSRC grant EP/W00187X/1, as well as Trinity College, Cambridge.
2303.07223
PromptFusion: Decoupling Stability and Plasticity for Continual Learning
Current research on continual learning mainly focuses on relieving catastrophic forgetting, and much of this success comes at the cost of limiting the performance of newly incoming tasks. Such a trade-off is referred to as the stability-plasticity dilemma and is a more general and challenging problem for continual learning. However, the inherent conflict between these two concepts makes it seemingly impossible to devise a satisfactory solution to both of them simultaneously. Therefore, we ask, "is it possible to divide them into two separate problems to conquer them independently?". To this end, we propose a prompt-tuning-based method termed PromptFusion to enable the decoupling of stability and plasticity. Specifically, PromptFusion consists of a carefully designed Stabilizer module that deals with catastrophic forgetting and a Booster module to learn new knowledge concurrently. Furthermore, to address the computational overhead brought by the additional architecture, we propose PromptFusion-Lite which improves PromptFusion by dynamically determining whether to activate both modules for each input image. Extensive experiments show that both PromptFusion and PromptFusion-Lite achieve promising results on popular continual learning datasets for class-incremental and domain-incremental settings. Especially on Split-Imagenet-R, one of the most challenging datasets for class-incremental learning, our method can exceed state-of-the-art prompt-based methods by more than 5\% in accuracy, with PromptFusion-Lite using 14.8\% less computational resources than PromptFusion.
Haoran Chen, Zuxuan Wu, Xintong Han, Menglin Jia, Yu-Gang Jiang
2023-03-13T15:58:00Z
http://arxiv.org/abs/2303.07223v2
# PromptFusion: Decoupling Stability and Plasticity for Continual Learning ###### Abstract Continual learning refers to the capability of continuously learning from a stream of data. Current research mainly focuses on relieving catastrophic forgetting, and most of their success is at the cost of limiting the performance of newly incoming tasks. Such a trade-off is referred to as the stability-plasticity dilemma and is a more general and challenging problem for continual learning. However, the inherent conflict between these two concepts makes it seemingly impossible to devise a satisfactory solution to both of them simultaneously. Therefore, we ask, "is it possible to divide them into two problems to conquer independently?" To this end, we propose a prompt-tuning-based method termed PromptFusion to enable the decoupling of stability and plasticity. Specifically, PromptFusion consists of a carefully designed Stabilizer module that deals with catastrophic forgetting and a Booster module to learn new knowledge concurrently. During training, PromptFusion first passes an input image to the two modules separately. Then the resulting logits are further fused with a learnable weight parameter. Finally, a weight mask is applied to the derived logits to balance between old and new classes. Extensive experiments show that our method achieves promising results on popular continual learning datasets for both class-incremental and domain-incremental settings. Especially on Split-Imagenet-R, one of the most challenging datasets for class-incremental learning, our method exceeds state-of-the-art prompt-based methods L2P and DualPrompt by more than 10%. ## 1 Introduction Despite great advances in deep neural networks, they are often trained in a static supervised manner where all training data are available at once [19, 13, 34]. Continual learning [7, 12, 26], on the contrary, studies the behavior of neural networks under a more realistic scenario in which data arrive in a continual and dynamic procedure. Ideally, when confronted with new data, state-of-the-art models should be both stable to prevent performance degradation for previous tasks and plastic to learn sufficient information for the new task [43]. However, in practice it is hard or even impossible to maintain such a balance, a phenomenon known as the stability-plasticity dilemma [1, 29]. As a result, most of the current literature mainly focus on one side of the problem, _i.e_., relieving the problem of catastrophic forgetting, but overlooking the unseen sacrifice on the other side. A typical example is regularization-based approaches such as EWC [18]. In EWC, important parameters for previous tasks are kept intact which inevitably limits the capacity of learning new knowledge. Since directly balancing this trade-off is extremely challenging, in this work, we instead seek to address the issue from a different perspective. Specifically, inspired by the Complementary Learning System [28, 20], a biology theory that suggests intelligent agents must possess two learning systems, we hypothesize that the dilemma can be resolved in a similar fashion by leveraging two different architectures to tackle stability and plasticity independently. However, the incorporation of an additional architecture raises computational concerns as it might severely complicate the optimization process. So now the question becomes, is there a computationally efficient way to implement this idea? 
Recently, prompt tuning [24, 21, 42] has become an emerging trend of finetuning models on downstream tasks in a computationally efficient manner. Since prompt tuning only trains an additional small set of parameters and freezes Figure 1: Test accuracy for \(R_{i,i}\) on the Split-Cifar100 dataset. \(R_{T,i}\) is the test accuracy on task \(i\) after learning task \(T\), and \(R_{i,i}\) is a good measurement of plasticity as it reflects the performance on the most recently learned task. PromptFusion clearly yields higher plasticity than L2P and DualPrompt. the pretrained backbone, it gives a potential solution to the above question. In fact, due to its strong transferring ability, prompt tuning has already been explored in continual learning [41, 40, 39]. The first work to do so trains a prompt pool followed by optimizing a prompt selection process for each task and achieves remarkable performances. Consequently, various follow-up approaches are introduced. However, as depicted in Fig 1, their plasticity is severely limited. To address these limitations, we propose PromptFusion, a simple yet effective framework that first decouples stability and plasticity for continual learning using a Stabilizer module and a Booster module, followed by fusing their corresponding outputs for predicting final classes. Specifically, we instantiate the Stabilizer with CoOp [47] and the Booster with VPT [16], two mainstream prompt tuning methods, respectively. Note that while we instantiate PromptFusion with CoOp and VPT, it is a general framework and both modules can be flexibly changed. For the Stabilizer, PromptFusion initializes a new set of prompts for each incoming task and concatenates it to previously learned ones. During training, past tasks' prompts are frozen and only the current task-specific set is trained. As for the Booster, PromptFusion trains the same set of prompts for all tasks. Therefore, since only a newly added portion of the Stabilizer's prompts are involved in training, historical information can be well preserved (_stability_). In contrast, all of the Booster's prompts are continuously updated, thereby allowing full learning of new information (_plasticity_). As a result, our design makes the Stabilizer suitable for stability and the Booster suitable for plasticity, achieving the best of both worlds. Before conducting experiments on how to fuse the two modules, a pilot study is carried out to empirically analyze their continual learning ability and motivate our approach. Surprisingly, we find out that their performance diverges on different datasets. Specifically, the Stabilizer shows stronger ability on complex datasets such as Imagenet-R but is mediocre on simple ones such as Cifar100. The Booster, on the other hand, is proficient on simple datasets but struggles with hard ones. Meanwhile, the Stabilizer is discovered to be much more robust to intra-class variations and is thus more suitable for the domain-incremental setting. In light of this, to make full use of both modules, we train a weight parameter \(\lambda\) for the ensemble of their output logits, adaptively determining the optimal weights conditioned on inputs. Following most continual learning approaches [33, 4], we utilize a rehearsal buffer throughout the training pipeline, which allows us to apply a learnable weight mask to the fused logits for accommodating the imbalanced class distribution. 
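To make the prompt bookkeeping described above concrete, a minimal PyTorch-style sketch is given below. This is our own illustration (class and method names are assumptions, not the authors' released code): the Stabilizer appends a fresh trainable prompt set for every task and freezes all previously learned ones, while the Booster keeps a single shared prompt set that is updated for every task.

```python
# Minimal sketch (our illustration) of per-task prompt management for the Stabilizer
# (concatenate, freeze old) versus the shared prompts of the Booster.
import torch

class StabilizerPrompts:
    def __init__(self, classes_per_task, prompt_len, dim):
        self.shape = (classes_per_task, prompt_len, dim)
        self.prompts = []                                   # one tensor per seen task

    def start_task(self):
        for p in self.prompts:                              # freeze past tasks' prompts
            p.requires_grad_(False)
        self.prompts.append(torch.nn.Parameter(0.02 * torch.randn(self.shape)))

    def all_prompts(self):                                  # concatenation used at alignment time
        return torch.cat([p for p in self.prompts], dim=0)

class BoosterPrompts:
    def __init__(self, prompt_len, dim):
        self.prompts = torch.nn.Parameter(0.02 * torch.randn(prompt_len, dim))  # reused for all tasks
```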
Extensive experiments on both class-incremental and domain-incremental datasets show that our proposed method clearly achieves state-of-the-art results. In particular, on Imagenet-R, one of the most challenging class-incremental benchmarks that are widely used for prompt-based methods, PromptFusion exceeds all alternative methods by more than 10%. In summary, our contributions are three-fold: * We introduce PromptFusion to address the stability-plasticity dilemma in continual learning which takes advantage of two modules, the Stabilizer and the Booster, and decouples stability and plasticity into two independent problems in a parameter-efficient approach. * We conduct a detailed analysis of the Stabilizer and the Booster modules in the continual learning setting and discover that our design of the Stabilizer performs best with complex datasets in domain-incremental learning settings, while the Booster performs best only in class-incremental learning settings with simple datasets. * Our proposed PromptFusion achieves state-of-the-art results on several popular benchmarks for both class-incremental learning and domain-incremental learning. Specifically, on Imagenet-R, PromptFusion achieves the best reported average accuracy of 80.7%. ## 2 Related Work ### Continual Learning In continual learning, the principle is to train a model on a number of tasks sequentially without forgetting knowledge from previously learned tasks, where data for old tasks are not available when training the new tasks. There are three fundamental settings in literature for continual learning, namely task incremental [7, 18, 33], class incremental [9, 45], and domain incremental [10, 37] setups. For task incremental learning, the task identity for the input sample is provided at test time and therefore is regarded as the most relaxed condition. On the contrary, class incremental and domain incremental learning treats all samples the same during inference without prior knowledge about the task identity. The difference between these two is that in class incremental learning, data for each task are generally from the same distribution but belong to different categories, while in domain incremental learning, data for each task belongs to different distributions but have the same class labels. Throughout the years, numerous efforts have been devoted to tackling continual learning, and these approaches can be mainly categorized into three groups: rehearsal-based methods [4, 33, 25, 36, 14] where a memory is utilized to store exemplars from the past, architecture-based methods [9, 45, 15, 35, 38, 30, 27] where the network expands for an incoming task, and regularization based methods [18, 23, 2, 46, 8] where important parameters for previous tasks remain unchanged. However, most of these approaches mainly focus on the problem of catastrophic forgetting without considering the performance of learning the new task. Instead, in this paper, we focus on a more gen eral problem of continual learning with the aim to solve the stability-plasticity dilemma. ### Prompt Learning Recently, researchers in NLP have shown that learned large-scale language models can handle a wide range of downstream tasks with only a few or even no samples by prepending instructions to the input text [22, 24, 21]. Such instruction texts are called prompts. Consequently, prompts can be tuned instead of the weights of the entire network for a more efficient adaptation to downstream tasks. 
The success of prompt learning in NLP has also garnered attention in the vision community that motivated the establishment of many related methods [17, 11, 6, 16, 47]. For example, DAPL [11] applies prompt learning in unsupervised domain adaptation by training a prompt for each source-target domain pair. MPA [6] extends it by adapting to the multi-source scenario through a two-stage alignment process. In the context of continual learning, a series of prompt-based works has achieved tremendous success. In L2P [41] and DualPrompt [40], a prompt pool is trained such that each task samples from it a set of task-specific prompts using a key-value selection process. However, they cannot be trained end-to-end, as the keys are optimized locally. Furthermore, they assume that every data in the mini-batch uses the same set of prompts, which is problematic during inference as data from different tasks might be present for the same mini-batch. Similarly, S-Prompt [39] is proposed in a similar fashion built on top of CoOp. It is, however, specifically designed for domain-incremental learning. ## 3 Preliminary Since we instantiate PromptFusion with CoOp and VPT, two types of prompt learning approaches, we first briefly review the two methods, followed by presenting in details how they are leveraged in the Stabilizer and the Booster modules. **CoOp** CoOp [47] is a large-scale vision-language representation learning model built on top of CLIP [32]. It consists of an image encoder \(f\) and a text encoder \(g\) that aligns input images with text prompts. Unlike CLIP where the prompts are usually in the form of "a photo of a [CLS]", prompts in CoOp are trainable token parameters \(V_{i}\) of length \(M\) in the form of "\([V_{1}]...[V_{M}]\)[CLS]". Given an image \(\mathbf{x}\) with label \(y\) and text prompt \(\mathbf{P}_{k}\) for class \(k\), CoOp first maps them to the same embedding space. Then they are aligned in a contrastive manner such that \[p(y=k|\mathbf{x})=\frac{\text{exp}(<g(\mathbf{P}_{k}),f(\mathbf{x})>/T)}{\sum_{i=1}^{K} \text{exp}(<g(\mathbf{P}_{i}),f(\mathbf{x})>/T)} \tag{1}\] is maximized when the input image \(\mathbf{x}\) belongs to class \(k\). Here, \(K\) is the total number of classes, \(T\) is a temperature parameter, and \(<\cdot,\cdot>\) denotes the cosine similarity. During inference, the predicted label of a test image \(\mathbf{x}\) is: \[\operatorname*{arg\,max}_{k}<g(\mathbf{P}_{k}),f(\mathbf{x})>,k\in\{1,...,K\}. \tag{2}\] **VPT** In contrast to CoOp, where the model seeks alignment from two modalities, VPT only relies on a Vision Transformer (ViT) that leverages prompt learning in a pure vision fashion [16]. In VPT, an input image is first divided into \(m\) fixed-sized patches \(I_{i}\). Together with a class token, the input is first embedded into a latent space with positional embedding. Then, learnable prompt tokens \(U_{i}\) of length \(p\) are attached to the input of the form "[CLS]\([U_{1}]...[U_{p}][I_{1}]...[I_{m}]\)". In shallow VPT, prompts are only inserted to inputs for the first Transformer layer, while for deep VPT, prompts are inserted for every Transformer layer. In this work, shallow VPT is adopted. **Prompt Tuning for Continual Learning** In the present study, we focus on both the class incremental and the domain incremental setting. 
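As a concrete reading of Eqs. (1)-(2) before turning to the continual setting, the CoOp-style prediction reduces to a softmax over temperature-scaled cosine similarities between the encoded image and the encoded per-class prompts; a minimal sketch (our illustration, not CoOp's released code) follows.

```python
# Minimal sketch (our illustration) of the class probabilities in Eq. (1); the argmax of the
# similarities gives the prediction of Eq. (2).
import torch
import torch.nn.functional as F

def coop_probs(image_feat, text_feats, T=0.01):
    """image_feat: (d,) = f(x); text_feats: (K, d) = g(P_k) stacked over the K classes."""
    image_feat = F.normalize(image_feat, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = image_feat @ text_feats.t() / T    # cosine similarities scaled by 1/T
    return logits.softmax(dim=-1)
```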
Formally, given \(N\) tasks \(\mathcal{T}=(T_{1},T_{2},...,T_{N})\) where data for task \(T_{t}\) is denoted as \(\mathcal{D}(\mathcal{X}^{t},\mathcal{Y}^{t})\), in class incremental scenarios, \(\mathcal{Y}^{i}\cap\mathcal{Y}^{j}=\emptyset\) with \(P(\mathcal{X}^{i})=P(\mathcal{X}^{j})\), while for domain incremental scenarios, \(\mathcal{Y}^{i}=\mathcal{Y}^{j}\) with \(P(\mathcal{X}^{i})\neq P(\mathcal{X}^{j})\), \(i,j\in\{1,...,N\}\), and \(i\neq j\). Here \(P(\cdot)\) denotes the probability density function. Since our method incorporates the usage of rehearsal memories, during each training phase, the model has access to both data of the current task and a few stored past exemplars, and the goal is to continuously update the model without forgetting past knowledge (_stability_) while achieving good results for the current one (_plasticity_). We observe that one difference between VPT and CoOp is that prompts in CoOp is class-dependent while VPT has no such restrictions. Therefore, for newly incoming classes, VPT is allowed to reuse the same set of prompts and CoOp, on the other hand, must learn new ones. As a result, for each task \(T_{t}\), a new set of text prompts \(\mathbf{P}_{t}^{stab}\in\mathbb{R}^{\frac{K}{N}\times M\times e^{stab}}\) where \(\frac{K}{N}\) is the number of classes in each task and \(e^{stab}\) is the embedding dimension is initialized for CoOp. If \(t>1\), \(\mathbf{P}_{t}^{stab}\) is concatenated with previously learned prompts \(\mathbf{P}^{stab}=\text{Concat}[\mathbf{P}_{1}^{stab},...,\mathbf{P}_{t}^{stab}]\) for alignment with image features in the embedding space. Note that \(\mathbf{P}_{1}^{stab},...,\mathbf{P}_{t-1}^{stab}\) is kept frozen. As for VPT, another set of prompts \(\mathbf{P}^{boost}\in\mathbb{R}^{p\times e^{boost}}\) is initialized before the training of the first task \(T_{1}\) and constantly updated for each task \(T_{t}\). While we could have trained a prompt for each task using VPT and concatenated it with previously learned ones as well, experiments show that such an approach results in poor performance. Thus, we stick to the above-stated design. ## 4 Pilot Study In this section, we conduct a pilot study exploring whether stability and plasticity can be decoupled by leveraging the two proposed modules, the Stabilizer and the Booster. To this end, we systematically analyze CoOp and VPT's performance under the setting of continual learning. **Decoupling Stability and Plasticity** We begin the analysis by justifying our statement that our design of CoOp is suitable for stability while VPT is suitable for plasticity. In Figure 2, three accuracy curves for task \(T_{2},T_{4}\) and \(T_{5}\) on the Split-Cifar100 dataset are presented. It is obvious that CoOp suffers much less from forgetting with an average performance drop of 9.9%. As a comparison, the average drop for VPT reaches 23.0%, indicating the stability for VPT is limited. Furthermore, the change in feature distribution for task \(T_{1}\) is plotted in Figure 3 with Gaussian Kernel Density Estimation (KDE). KDE is a popular non-parametric method for estimating the probability density function based on kernels as weights. As is shown in the graph, the distribution change for CoOp compared to VPT is much smaller, which coincides with results in Figure 2. Figure 4, on the other hand, depicts the performance for each newly learned task. While accuracy for VPT consistently achieves around 95%, CoOp exhibits a clear pattern of deterioration. 
Their margin in performance reaches up to 24.8% at \(T_{10}\) and could get worse when more tasks follow. While evidence from both figures support our hypothesis, one last piece is still needed to complete the entire picture. That is, another main difference between CoOp and VPT is their backbone network. **Prompt or Backbone?** To test whether patterns in Figure 2 and 4 are indeed from different designs of prompts, an additional experiment is conducted where we apply pretrained weights of CoOp's image encoder to VPT. Results in Figure 5 show that their performance remains approximately the same, and they both demonstrate a preference towards plasticity compared to stability, as the performance on the last task \(T_{10}\) is high. Furthermore, another interesting finding from Figure 5 is that using CoOp's backbone actually results in a worse performance. This is rather surprising as CoOp's backbone, Figure 4: Plasticity comparison between CoOp and VPT. VPT consistently achieves high accuracy on the latest task, while CoOp’s performance degrades dramatically. Figure 3: KDE analysis for distribution of features obtained using VPT and CoOp on task \(T_{1}\). Here we can observe that features for the first task experience a dramatic distributional shift after training on the last task with VPT. On the other hand, such a shift with CoOp is relatively small, indicating that the it is more robust against forgetting. Figure 2: Stability comparison between CoOp and VPT. Accuracy curves for tasks \(T_{2}\), \(T_{4}\), and \(T_{5}\) using the two modules are presented. All three graphs show a similar trend where performance degradation in CoOp is much smaller than that in VPT. This shows that CoOp is much more robust against forgetting than VPT. CLIP [32], is generally considered more powerful than Imagenet pretrained ViT. As a matter of fact, [40] reported a similar trend in performance drop when switching state-of-the-art continual learning methods with a stronger backbone, suggesting that a stronger network does not necessarily lead to a better continual learning ability since backbones are trained in a fully supervised manner. **Divergent Performance** Meanwhile, while testing both modules on different types of datasets, we find out that CoOp is more advanced on complex datasets especially when large intra-class variations exists. VPT, on the other hand, handles simpler ones better. To justify this finding, a t-SNE visualization on Split-Cifar100, Split-Imagenet-R, and Core50 is presented in Figure 6. Here, Split-Cifar100 is a relatively simple dataset, while both Split-Imagenet-R and Core50 incorporate covariate shifts in the data distribution. As is depicted in Figure 6, CoOp exhibits a more clustered feature than VPT on Imagenet-R and Core-50, suggesting that it is more robust to intra-class variations. Alternatively, on Cifar100, VPT's feature is more clustered. Therefore, by fusing the two modules, another potential benefit is that the resulting model can accommodate the varying characteristics of different datasets. ## 5 PromptFusion With results from the pilot study, we confirm that stability and plasticity can be decoupled using the proposed Stabilizer module and the Booster module. Based on this observation, we present our approach. The pseudocode and overview of PromptFusion is given in Algorithm 1 and Figure 7 respectively. Formally, denote the Stabilizer model as \(S\) and the Booster model as \(B\). 
Given an input image \(\mathbf{x_{i}}\) with label \(y_{i}\), \(\mathbf{x_{i}}\) is first passed to the two modules \(S\) and \(B\). Their respective output \(S(\mathbf{x_{i}})\) and \(B(\mathbf{x_{i}})\) are then fused by the trainable parameter \(\lambda\) through a weighted average. The result is further element-wise multiplied by a weight mask \(\mathbf{W}\) to balance the old and new classes. Specifically, we would like old classes to be rectified and new classes to be weakened. Therefore, we divide the learning of \(\mathbf{W}\) into two matrices, \(\alpha\) and \(\beta\) such that: \[\mathbf{W}=\text{Concat}[\frac{1}{\sigma(\beta)},\sigma(\alpha)], \tag{3}\] where \(\sigma(\cdot)=\frac{1}{1+e^{-x}}\) is the sigmoid function. Putting them together, the final output \(\mathbf{z}_{i}\) is derived as: \[\mathbf{z}_{i}=\mathbf{W}\odot[(1-\sigma(\lambda))\cdot S(\mathbf{x}_{i})+\sigma(\lambda )\cdot B(\mathbf{x_{i}})], \tag{4}\] where \(\odot\) indicates the element-wise multiplication operation. Furthermore, inspired by [3] which states that CLIP can be augmented by image prompts, we augment CoOp in a similar Figure 5: Comparison between VPT and VPT with CLIP’s weights. As performance only differs by a small margin, the effect of the backbone network is neglectable. Figure 6: t-SNE visualization of VPT and CoOp on Split-Cifar100, Split-Imagenet-R, and Core50. fashion by inserting prompts \(\tilde{\mathbf{P}}^{stab}\) to the image patches. We empirically set the size of \(\tilde{\mathbf{P}}^{stab}\) to be the same as \(\mathbf{P}^{boost}\). Finally, the learning objective throughout the entire training phase is a straightforward cross-entropy loss: \[\begin{split}\min_{\mathbf{\Theta}}\sum_{i=1}\texttt{CrossEntropy}( \mathbf{z}_{i},y_{i}),\\ \mathbf{\Theta}:=\{\alpha,\beta,\lambda,\mathbf{P}^{stab},\tilde{\mathbf{P}}^ {stab},\mathbf{P}^{boost}\}.\end{split} \tag{5}\] ## 6 Experiments ### Experimental setup **Datasets** Experiments are conducted on three popular benchmark datasets of continual learning, namely CIFAR100, Imagenet-R, and Core50, where CIFAR100 and Imagenet-R are evaluated under the class-incremental setting, and Core50 is evaluated under the domain-incremental setting. Cifar100 is a relatively simple dataset containing 100 classes and is split into 10 tasks with 10 classes each. Imagenet-R is composed of 200 classes that consist of data from different styles such as cartoons, graffiti, and origami and is also split into 10 tasks. It is considered one of the most difficult datasets for class-incremental learning as both semantic and covariate shift occurs. Core50, on the other hand, is a popular dataset for domain-incremental learning that consists of 50 objects from 11 domains. In particular, 8 of them are used for training and the rest 3 for testing. None of the images in the 3 domains is seen during training. For all three datasets, we use class orders the same as [41, 40] for a fair comparison. **Evaluation metrics** Following conventional settings, we report the Average Accuracy after training on all tasks to evaluate a model's continual learning ability. Formally, let \(R_{T,i}\) be the classification accuracy of task \(T_{i}\) after training on task \(T_{T}\), then the Average Accuracy \(A_{T}\) is defined as \[A_{T}=\frac{1}{T}\sum_{i=1}^{T}R_{T,i}, \tag{6}\] **Implementation details** We adopt a ViT-B-16 backbone for both CoOp and VPT's image encoder and leave it frozen during the entire training phase. 
For prompt size, \(M\) in CoOp is set to 4 while \(p\) in VPT is set to 30. All prompts are trained using the mini-batch AdamW optimizer of 0.01 learning rate together with a cosine annealing scheduler for all datasets. Considering their diverse difficulties, Cifar100 is trained for 3 epochs for every task while Imagenet-R and Core50 are trained for 5 epochs. All of them are equipped with a memory buffer of size 2,000. ### Comparison to state-of-the-art We compare against both classical and most recent state-of-the-art continual learning methods: EWC [18], LwF [23], BiC [44], GDumb [31], DER++ [45], Co\({}^{2}\)L [5], L2P [41], S-Prompt [39] and DualPrompt [40]. In particular, DER++ Figure 7: An overview of PromptFusion. PromptFusion consists of two architectures, the Stabilizer and the Booster. For a given image, it will be passed into the two modules by concatenating with corresponding image prompts \(\tilde{\mathbf{P}}^{stab}\) and \(\mathbf{P}^{boost}\). Both image prompts are initialized before training and updated continuously. For the Stabilizer, the resulting image feature will be further aligned in the embedding space with text feature from the Stabilizer’s text prompt. Here each incoming task trains a new task-specific text prompt and is concatenated with previously learned ones. Then, results from the Stabilizer and the Booster will be fused using weight \(\lambda\). The ensembled logits are further balanced by a weight mask \(\mathbf{W}\), producing final output \(\mathbf{z}_{i}\). Note that our proposed PromptFusion is capable of both class incremental and domain incremental settings. is the best-performing non-prompt-based method, while L2P, S-Prompt, and DualPrompt are all prompt-based methods. Since S-Prompt is specifically designed for domain-incremental learning, we only report its performance on the Core50 dataset. The results on Split-Cifar100, Split-Imagenet-R, and Core50 are shown in Table 1, where PromptFusion clearly outperforms all other alternatives. For Split-Cifar100, our approach reaches an Average Acc of 87.4% with a buffer size of less than 2% of the total training data. The performance is 4.1% and 1.7% higher than L2P and DualPrompt respectively. Considering that Cifar100 is relatively simple, we think the improvement is rather significant. For Split-Imagenet-R, our method exceeds all other methods by more than 10%. In particular, when compared against DualPrompt, the best prompt-based approach, our approach outperforms it by 12.2%. A major reason for this performance gain is that the incorporation of CoOp makes our method robust against intra-class variations, as discussed in Section 4. Here, both Split-Cifar100 and Split-Imagenet-R are commonly used datasets in the class-incremental learning scenario, and results show that PromptFusion produces satisfactory performance in this specific setting. We also test PromptFusion under the domain-incremental learning scenario on the popular Core50 dataset. Notably, both L2P and DualPrompt performed inferior to S-Prompt, as their main focus is on class-incremental learning. Nevertheless, PromptFusion still outperforms S-Prompt by 5.9%, indicating that the capability of PromptFusion is not restricted to any specific type of continual learning setting. Considering that S-Prompt is also based on CoOp, this is strong evidence that shows the success of our approach is not simply from this specific module. 
While concerns might be raised for the computational complexity of our method, we show that it is in fact similar compared to other prompt-based methods. This is because for L2P and DualPrompt, an additional raw pre-trained ViT is used to extract its [CLS] feature as the query for selecting prompts from the prompt pool. S-Prompt, on the other hand, needs to perform a K-NN to all training data, and is thus computationally expensive as the dataset gets larger. ### Effect of memory size Since the memory buffer is a major component of our method, we would like to test how PromptFusion behaves with respect to different buffer sizes. We choose to analyze sizes of 1K, 2K, 3K, 4K, and 5K for all three tested datasets and report the results in Figure 8. In general, the effect of buffer size on the overall performance is rather small, compared with state-of-the-art rehearsal-based methods [45, 9]. Specifically, for Split-Cifar100, the performance dropped by \begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Method** & \multicolumn{2}{c}{**Split-CIFAR100**} & \multicolumn{2}{c}{**Split-ImageNet-R**} & \multicolumn{2}{c}{**Split-Core50**} \\ & Buffer size & Average Acc & Buffer size & Average Acc & Buffer size & Average Acc \\ \hline BiC [44] & \multirow{4}{*}{5000} & 81.4 & \multirow{4}{*}{5000} & 64.6 & \multirow{4}{*}{5000} & 79.3 \\ GDumb [31] & & 81.7 & & & 65.9 \\ DER++ [45] & & 83.9 & & 66.7 & & 79.7 \\ Co\({}^{2}\)L [5] & & 82.5 & & 65.9 & & 79.8 \\ \cline{2-6} EWC [18] & \multirow{4}{*}{0} & 47.0 & \multirow{4}{*}{0} & 35.0 & \multirow{4}{*}{0} & 74.8 \\ LwF [23] & & 60.7 & & 38.5 & & 78.3 \\ L2P\({}^{*}\)[41] & & 83.3 & & 62.1 & & 85.1 \\ S-Prompt [39] & & \(N.A.\) & & & 89.1 \\ DualPrompt\({}^{*}\)[40] & & 85.7 & & 68.5 & & 87.2 \\ \cline{2-6} PromptFusion (Ours) & 2000 & **87.4** & 2000 & **80.7** & 2000 & **95.0** \\ \hline \hline \end{tabular} \end{table} Table 1: Results on the Split-Cifar100, Split-Imagenet-R, and Core50 datasets. Split-Cifar100 and Split-Imagenet-R are class-incremental learning datasets, and Core50 is a domain-incremental learning dataset. Compared methods are grouped based on memory size. * denotes that the results are obtained through our PyTorch re-implementation. Figure 8: Average Acc for different memory sizes on Split-Cifar100, Split-Imagenet-R, and Core50. 2.9% when the memory size changed from 5K to 1K. Note that there is a sudden drop when the size changed from 2K to 1K, while the change is marginal when growing from 2K to 5K. Such a trend is also found on the Split-Imagenet-R dataset where performance gain from the additional memory decreases as the buffer size keeps increasing. On Core50, however, the buffer size seems to not affect the performance of PromptFusion at all. This is in fact not surprising as the test data for Core50 is a constant set that is not seen during training. Therefore the only factor influencing the performance is the model's capability of accumulating domain-invariant knowledge from sequential training on the training set, which PromptFusion demonstrates to be excel at. ### Ablation study **Weight \(\lambda\) and mask \(\mathbf{W}\)** We report in Table 2 the ablation study on weight \(\lambda\) and mask \(\mathbf{W}\) and the results show that both pieces are significant to the success of PromptFusion. Specifically, for Mask \(\mathbf{W}\), its effect on Split-Cifar100 is much higher than on Split-Imagenet-R. 
We posit that this is because new and old classes in the Imagenet-R dataset are more diverse as both semantic and covariate shift occurs, resulting in a weaker inter-class interference. As for \(\lambda\), excluding it means a simple summation of the two outcomes is performed and results in Table 2 demonstrate it to be sub-optimal. As discussed in Section 4, we expect \(\lambda\) to be different dependent on the dataset being assessed. Indeed, experiments show that \(\lambda=0.95\) for Split-Cifar100 and \(\lambda=-0.21\) for Split-Imagenet-R. This is also in accordance with our empirical findings in Figure 6, where VPT performs better on Split-Cifar100 and worse on Split-Imagenet-R. **Prompt length** We also examine how prompt length affects the overall performance and the results are reported in Table 3. As is shown, our choice of \(M=4\) and \(p=30\) produces the best Average Acc. This would require a total of 0.33M trainable parameters on Split-Cifar100, which is infinitesimal compared to other approaches. **CoOp augmentation** As introduced in Section 5, CoOp is augmented by incorporating another set of image prompts in addition to the language prompts. We report in Table 4 the effectiveness of such augmentation. As is shown, the performance on Split-Cifar100 increased by 1.2% with only 0.02M extra parameters. **Why instantiate with CoOp and VPT?** In addition to the diverse preference for stability and plasticity between CoOp and VPT, we also empirically find that the two methods are complementary to each other in a more general sense. This is reflected in Figure 9, where the fusion of the two produces a substantial increase in the overall performance. We believe this is a meaningful finding that could be generalized to a much broader range of tasks beyond continual learning. ## 7 Conclusion In this paper, we introduced a dual architecture design PromptFusion for tackling the stability-plasticity dilemma in continual learning. PromptFusion is built on top of a Stabilizer module instantiated with CoOp and a Booster module instantiated with VPT, that decouples stability and plasticity into two independent problems. Specifically, for a given input image, PromptFusion first passes it to the two modules, and the outputs are fused by a learnable weight parameter conditioned on the dataset. The fused outputs are further balanced by a weight mask to accommodate class imbalance brought by the memory buffer. Extensive experiments showed that our method achieved state-of-the-art results in both class-incremental and domain-incremental learning. Hopefully, the idea of decoupling stability and plasticity can motivate future work. \begin{table} \begin{tabular}{c|c} \hline \hline Method & **Split-Cifar100** \\ \hline w/o augmentation & 86.2 \\ with augmentation & **87.4** \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation studies on augmenting CoOp \begin{table} \begin{tabular}{c c c c} \hline \hline Mask \(\mathbf{W}\) & Weight \(\lambda\) & **Split-Cifar100** & **Split-Imagenet-R** \\ \hline \(\mathbf{\times}\) & ✓ & 82.8 & 79.8 \\ ✓ & ✗ & 85.9 & 79.4 \\ ✓ & ✓ & **87.3** & **80.7** \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation studies on trainable weights. 
\begin{table} \begin{tabular}{c c||c c} \hline \hline Text Prompt & **Split-Cifar100** & Image Prompt & **Split-Cifar100** \\ \hline \(M=2\) & 86.8 & \(p=20\) & 86.9 \\ \(M=6\) & 86.5 & \(p=40\) & 87.1 \\ \hline \(M=4\) & **87.4** & \(p=30\) & **87.4** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation studies on prompt length \(M\) and \(p\) Figure 9: Average Acc for CoOp, VPT, and PromptFusion on the Split-Cifar100 dataset.
2303.08110
Toric Geometry in OSCAR
We report on the computer implementation for toric geometry in the computer algebra system $\texttt{OSCAR}$. The main architectural feature of $\texttt{OSCAR}$ is that its four fundamental tools $\texttt{Antic}$ (Hecke, Nemo), $\texttt{GAP}$, $\texttt{Polymake}$ and $\texttt{Singular}$ are $\mathit{integral~components}$, rather than external software. Toric geometry benefits greatly from this architecture. $\texttt{Julia}$ is a high-performance programming language designed for numerical and scientific computing. The growing ecosystem of $\texttt{Julia}$ packages ensures its continued viability for scientific computing and data analysis. Indeed, $\texttt{OSCAR}$ is written in $\texttt{Julia}$. This implies that the performance of $\texttt{OSCAR}$ should be comparable or even better than many other implementations.
Martin Bies, Lars Kastner
2023-03-14T17:45:10Z
http://arxiv.org/abs/2303.08110v1
# Toric Geometry in OSCAR ###### Abstract **Toric geometry - an arena for mathematical theories** Among the fields of algebraic geometry, the field of toric geometry is particularly well understood and algorithmic. Among others, the cohomology ring, the Chow ring, topological intersection numbers as well as cohomologies of coherent sheaves can be obtained with computer algorithms [10]. Therefore, toric varieties provide a useful platform for testing mathematical theories. To put it briefly, toric varieties are characterized by having an algebraic torus \(\left(\mathbb{C}^{*}\right)^{r}\) as a dense and open subset. This is why they are called _toric_. While the realm of toric varieties is more constrained compared to that of general schemes/varieties, the toric universe still provides a significant degree of versatility. As an example, many Calabi-Yau manifolds can be constructed as complete intersections in toric varieties [19, 20]. This includes many K3 surfaces [19]. More recently, starting from such K3 surfaces, researchers have discovered in the framework of F-theory - a non-perturbative regime of string theory - the largest currently-known class of globally consistent Standard Model solutions without chiral exotics and gauge coupling unification [11]. For all these reasons, there is a high demand for computer implementations of toric geometry. Some examples of computer algebra systems that support toric geometry are [17, 25]. **OSCAR - a melting pot** We present a computer implementation for toric varieties in the computer algebra system OSCAR[14, 21]. The funding for OSCAR is provided by the SFB-TRR 195 _Symbolic Tools in Mathematics and their Application_ of the German Research Foundation (DFG). The main architectural feature of OSCAR is that its four fundamental tools Antic (Hecke, Nemo), GAP, Polymake and Singular are _integral components_, rather than external software that can be used. For more information, the interested reader can consult the article "OSCAR: Open Source Computer Algebra Research system" by _Prof. Dr. Max Horn_ (to appear in the _ComputerAlgebraRundbrief_) or the OSCAR homepage: [https://www.oscar-system.org](https://www.oscar-system.org) By leveraging Polymake, we can carry out polyhedral geometry operations, such as handling cones and fans, and utilize cutting-edge algorithms for triangulations [18]. This provides a reliable backbone for toric geometry in OSCAR. The Cox ring as well as the Chow ring of toric varieties are polynomial rings. Closed subvarieties of toric varieties correspond to homogeneous polynomial in the Cox ring [10]. This functionality is provided by the software Singular. Additionally, tools from group and number theory are essential in toric geometry. Such tasks are executed with Antic (Hecke, Nemo) and GAP. To sum up, toric geometry benefits greatly from the combination of Antic (Hecke, Nemo), GAP, Polymake and Singular. **Julia - a modern programming language** Julia[5] is a high-performance programming language designed for numerical and scientific computing. The growing ecosystem of Julia packages ensures its continued viability for scientific computing and data analysis. OSCAR is written in Julia. This implies that the performance of OSCAR should be comparable or even better than many other implementations. ## Overview Our goal with OSCAR is to create a computer algebra system that is both user-friendly and convenient. 
To assist users with toric geometry, we offer a tutorial:1 Footnote 1: Interested readers may also explore the actual OSCAR code on GitHub. [https://www.oscar-system.org/tutorials/](https://www.oscar-system.org/tutorials/). The toric implementation in OSCAR are conceptually based on [10]. This is a fundamental guiding principle within OSCAR: Implementations are conceptually grounded in a few carefully selected publications. The relationship between toric geometry and polyhedral geometry is crucial for any toric geometry implementation. We illustrate this connection for affine toric varieties. In OSCAR, the toric implementations focus on the lattice \(N=\mathbb{Z}^{n}\), where \(n\in\mathbb{Z}_{\geq 0}\) is a suitable integer. Let \(M\) be the dual lattice of \(N\). We then consider a rational polyhedral cone \(\sigma\subseteq N\otimes_{\mathbb{R}}\mathbb{R}\cong\mathbb{R}^{n}\). To this cone, we associate the semigroup \(S_{\sigma}=\sigma^{\vee}\cap M\). The corresponding affine toric variety \(U_{\sigma}\) is given by [10]: \[U_{\sigma}=\operatorname{Spec}\left(\mathbb{C}\left[S_{\sigma}\right]\right) =\operatorname{Spec}\left(\mathbb{C}\left[\sigma^{\vee}\cap M\right]\right)\,. \tag{1}\] As an example, consider \[\sigma=\operatorname{Span}_{\mathbb{Z}_{\geq 0}}\left(\left[\begin{array}{c}1\\ 0\end{array}\right],\left[\begin{array}{c}0\\ 1\end{array}\right]\right)\,. \tag{2}\] We create \(U_{\sigma}\) in OSCAR: ``` o=positive_hull([10;01]) U=affine_normal_toric_variety(o) ``` Many properties of \(U_{\sigma}\) are encoded in \(\sigma\). For instance, \(U_{\sigma}\) is smooth if and only if \(\sigma\) can be generated by a subset of a basis of the lattice \(N\). An interactive check in OSCAR can determine whether \(U_{\sigma}\) is smooth: ``` julia>hilbert_basis(o) 2-elementSubObjectIterator{PointVector{ZZRingElem}}: [1,0] [0,1] [0,1] julia>is_smooth(U) true ``` Similarly, the dimension of \(U_{\sigma}\) matches that of \(\sigma\): ``` julia>dim(o)==dim(U) true ``` Below is an instance of a non-smooth affine toric variety that can be created using OSCAR: ``` o2=positive_hull([-1;01;11]) U2=affine_normal_toric_variety(o2) ``` We verify interactively that \(U_{2}\) is not smooth: ``` julia>hilbert_basis(o2) 3-elementSubObjectIterator{PointVector{ZZRingElem}}: [-1,1] [0,1] [1,1] julia>is_smooth(U2) false ``` Notice the appearance of the generator \([0,1]\) in the Hilbert basis of \(\sigma_{2}\). Its appearance signifies that \(U_{2}\) is not smooth. Alternatively, we can inspect \(U_{2}\) as subvariety of the affine space. To this end, we compute the toric ideal: ``` julia>toric_ideal(U2) ideal(-xl+x2+x3^2) ``` This means, that in the affine space \(\mathbb{A}^{3}\) with coordinates \((x_{1},x_{2},x_{3})\), it holds \(U_{2}\cong V(-x_{1}x_{2}+x_{3}^{2})\). Consequently, \(U_{2}\) is singular. Much more can be said about the interplay between polyhedral and toric geometry. For example, there exists a connection between normal toric varieties and rational polyhedral fans. Indeed, in OSCAR, one can create a general normal toric variety based on a rational polyhedral fan. Moreover, OSCAR offers specialized constructors for several well-known toric varieties, including projective_space, hirzebruch_surface, del_pezzo_surface, and cyclic_quotient_singularity. For further information, interested readers may wish to consult [10]. ## 2 Notable capabilities **Vanishing sets of line bundle cohomology** Support for torus invariant divisors, divisor classes and line bundles is available in OSCAR. 
The cohomCalg algorithm [1, 9] is employed to infer dimensions of line bundle cohomologies on any smooth and complete, as well as any simplicial and projective toric variety \(X_{\Sigma}\). The set \(V^{i}(X_{\Sigma})\) of all line bundles on \(X_{\Sigma}\) with vanishing \(i\)-th sheaf cohomology can be derived [6]: \[V^{i}(X_{\Sigma})=\operatorname{Pic}(X_{\Sigma})-\bigcup_{m=1}^{l}L^{i}_{(m)}\,, \tag{3}\] where \(L^{i}_{(m)}\) is the set of lattice points in a certain polyhedron \(P^{i}_{(m)}\). For \(\mathbb{P}^{1}\times\mathbb{P}^{1}\), it holds \(\operatorname{Pic}(\mathbb{P}^{1}\times\mathbb{P}^{1})=\mathbb{Z}^{2}\) and that the vanishing sets can be represented as follows: (4) Specifically, \[\begin{split} V^{0}(\mathbb{P}^{1}\times\mathbb{P}^{1})& =\mathbb{Z}^{2}-(P^{0}\cap\mathbb{Z}^{2})\,,\\ V^{1}(\mathbb{P}^{1}\times\mathbb{P}^{1})&=\mathbb{Z}^ {2}-(P^{1}_{(1)}\cup P^{1}_{(2)})\cap\mathbb{Z}^{2}\,,\\ V^{2}(\mathbb{P}^{1}\times\mathbb{P}^{1})&=\mathbb{Z}^ {2}-(P^{2}\cap\mathbb{Z}^{2})\,,\\ P^{0}=\left[\begin{array}{c}0\\ 0\end{array}\right]+\text{Span}_{\mathbb{Z}_{\geq 0}}\left(\left[\begin{array}{ c}1\\ 0\end{array}\right],\left[\begin{array}{c}0\\ 1\end{array}\right]\right)\,,\\ P^{1}_{(1)}=-\left[\begin{array}{c}2\\ 0\end{array}\right]+\text{Span}_{\mathbb{Z}_{\geq 0}}\left(\left[\begin{array}{ c}-1\\ 0\end{array}\right],\left[\begin{array}{c}0\\ 1\end{array}\right]\right)\,,\\ P^{1}_{(2)}=-\left[\begin{array}{c}0\\ 2\end{array}\right]+\text{Span}_{\mathbb{Z}_{\geq 0}}\left(\left[\begin{array}{ c}1\\ 0\end{array}\right],\left[\begin{array}{c}0\\ -1\end{array}\right]\right)\,,\\ P^{2}=-\left[\begin{array}{c}2\\ 2\end{array}\right]-\text{Span}_{\mathbb{Z}_{\geq 0}}\left(\left[\begin{array}{ c}1\\ 0\end{array}\right],\left[\begin{array}{c}0\\ 1\end{array}\right]\right)\,.\end{split} \tag{5}\] With the following lines, we can replicate these results in OSCAR: P1 = projective_space( NormalToricVariety, 1) v0, v1, v2 = vanishing_sets(P1*P1) ph0 = polyhedra(v0)[1] ph11, ph12 = polyhedra(v1) ph2 = polyhedra(v2)[1] With OSCAR, we can investigate the polyhedra interactively. For example, we can find inequalities for \(P^{0}\) and \(P^{2}\) as follows: julia> print_constraints(ph0) -x1 <= 0 -x2 <= 0 -x2 <= -2 x1 <= -2 -2 Indeed, from eq. (5) we see that \(L^{0}\), \(L^{2}\) can be expressed as follows: \[L^{0} =\left\{\left(x_{1},x_{2}\right)\in\mathbb{Z}^{2}\right|x_{1},x_{2 }\geq 0\right\}, \tag{6}\] \[L^{2} =\left\{\left(x_{1},x_{2}\right)\in\mathbb{Z}^{2}\right|x_{1},x_{2 }\leq-2\right\}. \tag{7}\] It is not too hard to repeat this exercise for \(L^{1}_{(1)}\), \(L^{1}_{(2)}\). As a more interesting example, consider the del Pezzo surface \(dP_{1}\) with \(\mathbb{Z}^{2}\)-graded Cox ring: \[\begin{split}\hline&\quad x_{1}\quad x_{2}\quad x_{3} \quad e_{1}\\ \hline H&\quad 1\quad 1\quad 1\\ -E_{1}&\quad 1\quad 1\quad\quad\quad-1\end{split} \tag{8}\] For this grading, we visualize the vanishing sets: \[\begin{split}\hline\end{split} \tag{9}\] The interested reader might find it entertaining to "see" Serre duality in eq. (4) and eq. (9). We emphasize that the vanishing sets can be determined algorithmically for any smooth and complete, as well as any simplicial and projective toric variety \(X_{\Sigma}\). However, our ability to visualize the vanishing sets reduces drastically once the polyhedra are of dimension 4 or higher. This happens for instance for \(\mathbb{P}^{1}\times\mathbb{P}^{1}\times\mathrm{dP}_{1}\). 
Still, the vanishing sets can be derived in OSCAR:

```
P1 = projective_space(NormalToricVariety, 1)
dP1 = del_pezzo_surface(1)
v0, v1, v2, v3, v4 = vanishing_sets(P1 * P1 * dP1)
```

**Intersection theory**

Loosely speaking, intersection theory provides an answer to the question "At how many points do two algebraic cycles intersect?". A caveat arises whenever the algebraic cycles in question are "similar/the same". This leads to the notion of _rational equivalence_ and the observation that sometimes the number of intersection points can be negative. To demonstrate this somewhat exotic idea in a concrete setting, we focus on the del Pezzo surface \(dP_{1}\) with \(\mathbb{Z}^{2}\)-graded Cox ring as in eq. (8). Next, consider the following algebraic cycles:

\[H=V(x_{1})+V(e_{1})\,,\qquad E_{1}=V(e_{1})\,. \tag{10}\]

Strictly speaking, we want to consider the rational equivalence classes of these algebraic cycles. For ease of notation, we do not introduce new symbols. The intersection numbers among \(H\) and \(E_{1}\) are as follows:

\[H^{2}=1\,,\qquad H\cdot E_{1}=0\,,\qquad E_{1}\cdot E_{1}=-1\,. \tag{11}\]

The following code computes this result in OSCAR:

```
julia> dP1 = del_pezzo_surface(1);

julia> intersection_form(dP1)
Dict{MPolyRingElem, QQFieldElem} with 10 entries:
  x1*x3 => 1
  e1^2  => -1
  x2*x3 => 1
  x3^2  => 1
  x1*x2 => 0
  x3*e1 => 0
  x2^2  => 0
  x1*e1 => 1
  x2*e1 => 1
  x1^2  => 0
```

Certainly, we can create the rational equivalence classes of \(H\) and \(E_{1}\) in OSCAR:

```
x1, x2, x3, e1 = gens(chow_ring(dP1))
E1 = rational_equivalence_class(dP1, e1)
H = E1 + rational_equivalence_class(dP1, x1)
```

With this, we can explicitly and interactively verify in OSCAR how these algebraic cycles intersect:

```
julia> H*H
Rational equivalence class on a normal toric variety represented by V(x2,x3)

julia> H*E1
Trivial rational equivalence class on a normal toric variety

julia> E1*E1
Rational equivalence class on a normal toric variety represented by -1V(x2,x3)
```

In the last computation, notice the appearance of \(-1\). To understand its meaning, we must understand how the intersection points of \(E_{1}\) with itself are computed. The theory tells us that we should use different, yet rationally equivalent, algebraic cycles which intersect "nicely". The technical term for this is to move the algebraic cycles into _general position_ [15]. In toric varieties, rational equivalences are captured by the ideal of linear relations:

```
julia> ideal_of_linear_relations(dP1)
ideal(x1 - x3 + e1, x2 - x3 + e1)
```

Let \(\sim\) denote rational equivalence. Then it holds:

\[V(x_{1})-V(x_{3})+V(e_{1})\sim 0\,, \tag{12}\]
\[V(x_{2})-V(x_{3})+V(e_{1})\sim 0\,. \tag{13}\]

Hence \(V(e_{1})\sim V(x_{3})-V(x_{1})\) and it follows that

\[E_{1}\cdot E_{1}\sim V(e_{1})\cdot\left[V(x_{3})-V(x_{1})\right] \tag{14}\]
\[=V(e_{1},x_{3})-V(e_{1},x_{1})\,. \tag{15}\]

Next, let us look at the Stanley-Reisner ideal of \(\mathrm{dP}_{1}\):

```
julia> stanley_reisner_ideal(dP1)
ideal(x1*x2, x3*e1)
```

From this ideal we learn that

\[\{\,p\in\mathrm{dP}_{1}\,|\,x_{3}=e_{1}=0\,\}=\emptyset\,. \tag{16}\]

Consequently, we find

\[E_{1}\cdot E_{1}\sim-V(e_{1},x_{1})\,. \tag{17}\]

It is not too hard to verify that \(V(e_{1},x_{1})\sim V(x_{2},x_{3})\). This finally aligns our investigations with the result computed by OSCAR. We do hope that this example illustrates the origin of negative intersection numbers. The collection of rational equivalence classes of algebraic cycles enjoys a ring structure where the multiplication corresponds to the intersection of the algebraic cycles.
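The final step above, namely \(V(e_{1},x_{1})\sim V(x_{2},x_{3})\), can likewise be checked in OSCAR. A minimal sketch, reusing the Chow-ring generators obtained earlier (iszero is plain Julia; everything else already appears in the code above):

```
# Sketch: check V(e1, x1) ~ V(x2, x3) directly in the Chow ring of dP1.
# The generators were obtained above via  x1, x2, x3, e1 = gens(chow_ring(dP1)).
iszero(x1 * e1 - x2 * x3)   # expected: true
```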
This ring is known as the _Chow ring_ and can be computed for any complete, simplicial toric variety [10]. For example, the Chow ring of \(dP_{1}\) can be computed in OSCAR as follows:

```
julia> chow_ring(dP1)
Quotient of Multivariate Polynomial Ring in x1, x2, x3, e1 over Rational Field
by ideal(x1 - x3 + e1, x2 - x3 + e1, x1*x2, x3*e1)
```

It has been noted more recently that the completeness assumption can be dropped [23].2 Indeed, OSCAR is capable of computing the Chow ring for simplicial toric varieties that are not complete. As an example, we create a non-complete, yet simplicial toric variety v from its rays r and (maximal) cones c:

```
r = [[1, 0], [0, 1], [-1, -1]]
c = [[1], [2], [3]]
v = normal_toric_variety(r, c)
```

Footnote 2: See also [16] for the significance of this observation for the interplay between matroids and toric varieties.

We verify that \(v\) is not complete but simplicial:

```
julia> is_complete(v)
false

julia> is_simplicial(v)
true
```

We can also compute the Chow ring interactively:

```
julia> chow_ring(v)
Quotient of Multivariate Polynomial Ring in x1, x2, x3 over Rational Field
by ideal(x1 - x3, x2 - x3, x1*x2, x1*x3, x2*x3)
```

**Triangulations**

As explained in [19, 20], reflexive polytopes \(\Delta^{\circ}\) (and their polar duals) can be used to classify Calabi-Yau hypersurfaces in toric spaces. The ambient toric spaces can be found from fine regular star triangulations (FRST) of \(\Delta^{\circ}\) (see e.g. [12] for background). For an example, consider the square with vertices at \((\pm 1,\pm 1)\). The informed reader will notice immediately that this configuration has a unique FRST corresponding to \(\mathbb{P}^{1}\times\mathbb{P}^{1}\).

```
P = convex_hull([1 1; -1 1; 1 -1; -1 -1])
X = NormalToricVarietiesFromStarTriangulations(P)
```

Certainly, we can verify that \(X\) consists only of a single variety. Furthermore, we compute the Stanley-Reisner and the irrelevant ideal of this toric variety, to provide evidence that this variety is indeed just \(\mathbb{P}^{1}\times\mathbb{P}^{1}\):

```
julia> length(X)
1

julia> irrelevant_ideal(X[1])
ideal(x3*x4, x2*x4, x1*x3, x1*x2)

julia> stanley_reisner_ideal(X[1])
ideal(x1*x4, x2*x3)
```

A much more involved example is included in the tutorial ([https://www.oscar-system.org/tutorials/](https://www.oscar-system.org/tutorials/)). This example is computationally demanding and its code was optimized for performance. We propose to use this example for benchmarking purposes. Note also that this code was used in a recent string theory application [7].

## Outlook

OSCAR is a relatively new software system and still under heavy development. This is an opportunity for young developers - we truly appreciate contributions. While this means that things are changing within OSCAR, the interface for toric varieties is already rather mature and has remained stable in recent times. For users of the toric functionality this is great news, as they need not fear interface changes that might break their workflow. We strongly encourage users to try out and enjoy the existing toric functionality. There are plans to significantly extend the toric functionality for coherent sheaves. In the realm of smooth and complete toric varieties, coherent sheaves are equivalent to certain classes of finitely presented graded modules [10]. This equivalence can be utilized to compute sheaf cohomologies of coherent sheaves. In fact, a relevant algorithm for this purpose was proposed in [6].3 There are plans to incorporate this functionality into OSCAR in the future.
It would also be advantageous to explore specialized algorithms, e.g. based on [22, 2, 4, 2], for cohomologies of vector bundles.

Footnote 3: This algorithm is available at [https://github.com/homalg-project/ToricVarieties_project](https://github.com/homalg-project/ToricVarieties_project).

In view of applications in the field of F-theory - a specialized domain of string theory - initial discussions have taken place to assess the possibility of incorporating FTheoryTools [8] into OSCAR. FTheoryTools is primarily focused on computing resolutions of singular elliptic fibrations. Such computations pose a significant arithmetic challenge in F-theory. The goal is to make this task as convenient as possible for researchers in this area. There are overlaps with some of the scheme technology that is currently being actively developed in OSCAR. For instance, OSCAR has basic support for toric schemes in experimental stages. A more specialized task in F-theory involves constructing solutions that replicate the particle physics observed in modern accelerator experiments. Recently, numerous promising solutions known as the _Quadrillion F-theory Standard Models_ (F-theory QSMs) were identified in [11]. These solutions are based on the geometry of toric K3 surfaces via [19]. Consequently, toric technology is critical to constructing and exploring these solutions in the future. In fact, many F-theory constructions are based on toric geometry (see [26] and references therein). It would be interesting to provide user-friendly and convenient tools in OSCAR for toric F-theory constructions.

Cosmological investigations within string theory led to the development of the software _CYTools_ [13]. This software focuses on high-performance triangulations of the 4-dimensional reflexive polytopes in [20]. Such triangulation tasks matter in many explicit realizations of Calabi-Yau manifolds. For these reasons, [13] is a very interesting software package. We expect that its capabilities can be boosted by using mptopcom [18] or the latest version of TOPCOM [24]. This task is reserved for future work.

## Acknowledgement

M. B. and L. K. express their gratitude and appreciation for the support provided by the OSCAR team, led by Claus Fieker, Max Horn, Michael Joswig, and Wolfram Decker. M. B. acknowledges financial support from the Forschungsinitiative des Landes Rheinland-Pfalz through the project _SymbTools - Symbolic Tools in Mathematics and their Application_. L. K. is thankful for the funding received from _MaRDI - Mathematical Research Data Initiative_ of the German Research Foundation (DFG). This work was supported by the SFB-TRR 195 _Symbolic Tools in Mathematics and their Application_ of the German Research Foundation (DFG).
2307.13979
AstroSat view of the neutron star low-mass X-ray binary GX 340+0
Understanding the spectral evolution along the `Z'-shaped track in the hardness-intensity diagram of Z-sources, which are a class of luminous neutron star low-mass X-ray binaries, is crucial to probe accretion processes close to the neutron star. Here, we study the horizontal branch (HB) and the normal branch (NB) of the Z source GX 340+0 using $AstroSat$ data. We find that the HB and the NB appear as two different types of X-ray intensity dips, which can appear in any sequence and with various depths. Our $0.8-25$ ~keV spectra of dips and the hard apex can be modeled by the emissions from an accretion disk, a Comptonizing corona covering the inner disk, and the neutron star surface. We find, as the source moves onto the HB the corona is replenished and energized by the disk and a reduced amount of disk matter reaches the neutron star surface. We also conclude that quasi-periodic oscillations during HB/NB are strongly associated with the corona, and explain the evolution of strength and hard-lag of this timing feature using the estimated coronal optical depth evolution.
Yash Bhargava, Sudip Bhattacharyya, Jeroen Homan, Mayukh Pahari
2023-07-26T06:35:58Z
http://arxiv.org/abs/2307.13979v2
# _AstroSat_ view of the neutron star low-mass X-ray binary GX 340+0 ###### Abstract Understanding the spectral evolution along the 'Z'-shaped track in the hardness-intensity diagram of Z-sources, which are a class of luminous neutron star low-mass X-ray binaries, is crucial to probe accretion processes close to the neutron star. Here, we study the horizontal branch (HB) and the normal branch (NB) of the Z source GX 340+0 using _AstroSat_ data. We find that the HB and the NB appear as two different types of X-ray intensity dips, which can appear in any sequence and with various depths. Our \(0.8-25\) keV spectra of dips and the hard apex can be modeled by the emissions from an accretion disk, a Comptonizing corona covering the inner disk, and the neutron star surface. We find, as the source moves onto the HB the corona is replenished and energized by the disk and a reduced amount of disk matter reaches the neutron star surface. We also conclude that quasi-periodic oscillations during HB/NB are strongly associated with the corona, and explain the evolution of strength and hard-lag of this timing feature using the estimated coronal optical depth evolution. X-rays: binaries -- stars: individual (GX 340+0) -- accretion, accretion discs 0000-0002-4880-880X]Yash Bhargava 0000-0002-4888-788X]Sudip Bhattacharyya 0000-0002-4888-788X]Jeroen Homan 0000-0002-4888-788X]Mayukh Pahari ## 1 Introduction A neutron star (NS) low-mass X-ray binary (LMXB), viz., an NS accreting matter from a low-mass donor star, is a natural laboratory to study accretion processes in extreme conditions. These binaries can be classified into 'Z' sources and 'atoll' sources based on the evolution of spectral and temporal properties, as well as luminosity (Hasinger & van der Klis, 1989). Hardness-intensity diagrams (HIDs) and color-color diagrams (CCDs) of these sources provide a simple model-independent way to probe the spectral evolution. Z-sources, which emit close to the Eddington luminosity, show _Z_-shaped tracks in HIDs and CCDs. Such tracks can drift secularly over long duration (Hasinger & van der Klis, 1989; van der Klis, 2004). Z-sources can be further divided into two subclasses 'Cyg-like' sources (Cyg X-2, GX 340+0, GX 5-1) or 'Sco-like' sources (Sco X-1, GX 17+2, and GX 349+2) depending
2305.05962
A Comprehensive Picture of Factors Affecting User Willingness to Use Mobile Health Applications
Mobile health (mHealth) applications have become increasingly valuable in preventive healthcare and in reducing the burden on healthcare organizations. The aim of this paper is to investigate the factors that influence user acceptance of mHealth apps and identify the underlying structure that shapes users' behavioral intention. An online study that employed factorial survey design with vignettes was conducted, and a total of 1,669 participants from eight countries across four continents were included in the study. Structural equation modeling was employed to quantitatively assess how various factors collectively contribute to users' willingness to use mHealth apps. The results indicate that users' digital literacy has the strongest impact on their willingness to use them, followed by their online habit of sharing personal information. Users' concerns about personal privacy only had a weak impact. Furthermore, users' demographic background, such as their country of residence, age, ethnicity, and education, has a significant moderating effect. Our findings have implications for app designers, healthcare practitioners, and policymakers. Efforts are needed to regulate data collection and sharing and promote digital literacy among the general population to facilitate the widespread adoption of mHealth apps.
Shaojing Fan, Ramesh C. Jain, Mohan S. Kankanhalli
2023-05-10T08:11:21Z
http://arxiv.org/abs/2305.05962v1
# A Comprehensive Picture of Factors Affecting User Willingness to Use Mobile Health Applications ###### Abstract. Mobile health (mHealth) applications have become increasingly valuable in preventive healthcare and in reducing the burden on healthcare organizations. The aim of this paper is to investigate the factors that influence user acceptance of mHealth apps and identify the underlying structure that shapes users' behavioral intention. An online study that employed factorial survey design with vignettes was conducted, and a total of 1,669 participants from eight countries across four continents were included in the study. Structural equation modeling was employed to quantitatively assess how various factors collectively contribute to users' willingness to use mHealth apps. The results indicate that users' digital literacy has the strongest impact on their willingness to use them, followed by their online habit of sharing personal information. Users' concerns about personal privacy only had a weak impact. Furthermore, users' demographic background, such as their country of residence, age, ethnicity, and education, has a significant moderating effect. Our findings have implications for app designers, healthcare practitioners, and policymakers. Efforts are needed to regulate data collection and sharing and promote digital literacy among the general population to facilitate the widespread adoption of mHealth apps. Mobile health applications, user willingness, structural equation modeling + Footnote †: 2023 Association for Computing Machinery. + Footnote †: 2023 Association for Computing Machinery. age and the number of follow-up visits were negatively correlated with patients' willingness to utilize mHealth applications (Han et al., 2019). This study aims to investigate the willingness of the general population to use mHealth apps, as well as the factors that influence such willingness. To achieve this, we conducted online surveys between December 2021 and March 2022 in eight countries across four continents: the United States (US), United Kingdom (UK), Germany, India, China, Singapore, New Zealand, and Australia. Our research was based on a vignette design of hypothetical mHealth apps, inspired by the authors' previously developed app (Bang et al., 2021; Wang et al., 2021), as well as other existing apps from around the world. Specifically, we presented each participant with eight versions of an mHealth app, each with varying features in seven areas: app functionality, type of rewards for using the app, type of data collected by the app, where the collected data is stored, with whom the collected data is shared, the app's privacy protection measures, and user control of app data management. Participants indicated the likelihood of using each version of the app on their smartphone, and provided reasons for their decision. At the end of the experiment, participants provided information about their daily phone usage, online behavior, concern about privacy, and prior experience with mHealth apps (see Appendix C for the complete questionnaire). Through a factorial survey design with vignettes, we were able to simultaneously test the effect of several app characteristics on users' acceptance. ## 2. Related Work In this section, we provide an overview of the benefits and challenges of mHealth apps in the market, as well as a review of prior research on the factors that influence user willingness to use mHealth apps. 
### Benefits and challenges of mHealth apps mHealth apps are being used in a variety of ways, such as disease prevention, health promotion, patient education, self-management, and remote monitoring (Shi et al., 2019). Numerous studies have reported various benefits of mHealth apps, such as increased access to healthcare (Shi et al., 2019; Wang et al., 2021), improved patient engagement (Wang et al., 2021), and more efficient healthcare delivery (Shi et al., 2019; Wang et al., 2021). Additionally, mHealth apps allow individuals to monitor their own health, track their progress, and receive personalized feedback and recommendations (Bang et al., 2021). While mHealth apps offer many potential benefits, they also face several challenges. Studies by Baig et al. (Baig et al., 2021) and Newaz et al. (Newaz et al., 2020) have identified security and privacy issues as major challenges facing mHealth apps. Jaime and colleagues (Jaime and colleagues, 2020) conducted a privacy assessment of mHealth apps based on 24 selected articles published from 2014 onwards. They found that despite great progress made in raising awareness of privacy and security in mHealth apps, there is still much to be done. Two major barriers identified were the diversity of mHealth apps and the lack of standard evaluation criteria. In addition, Tangari and colleagues (Tangari and colleagues, 2020) analyzed 20,991 mHealth apps on the Google Play Store and found serious problems with privacy and inconsistent privacy practices in mHealth apps. They urged clinicians to be aware of these issues and to articulate them to patients when determining the benefits and risks of using these apps. ### Factors Influencing User Acceptance of mHealth apps While mHealth apps have great potential, some people may be hesitant to use them due to various reasons. These include concerns about the privacy and security of personal information (Bang et al., 2021; Wang et al., 2021), a lack of trust in the accuracy of the information provided by the app, and concerns about the cost of the app (Tangari and colleagues, 2020). Previous studies have indicated that the willingness to use mHealth apps can be affected by various factors, such as age, gender, health status, and technological ability (Wang et al., 2021; Wang et al., 2021). However, the results from these studies are not entirely conclusive. One study by Andreia et al. (Andreia et al., 2020) found that older users exhibited a higher level of conscientiousness and behavioral intention to use mHealth apps. On the other hand, Virella et al. (Virella et al., 2020) reported that younger individuals were more likely to use mHealth apps than older adults. Gender also plays a role in the relationship between personality traits and the behavioral intention to use mHealth apps, according to Nunes et al. (Nunes et al., 2018). While Zhang et al. (Zhang et al., 2019) found that males had a higher level of intention to adopt mHealth compared to females, a later study by Bol et al. (Bol et al., 2018) reported no gender effect on aggregated mHealth app use. Furthermore, Ernsting et al. (Ernsting et al., 2019) found that individuals with chronic conditions were more likely to use mHealth apps. However, the study by Robbins et al. (Robbins et al., 2019) suggested that individuals with mHealth apps were not more likely to have chronic health conditions compared to those without. The previous studies have generated inconsistent results. 
In our research, we expand on the previous research by examining a more extensive range of factors that affect the acceptance of mHealth apps, including those that have not been well researched, such as users' cultural and educational backgrounds, as well as the data processing and storage of mHealth apps. Furthermore, we examine how users' prior experience with Covid-19 affects their willingness to use mHealth apps. ## 3. Methodology ### Research Question and Hypotheses In this subsection, we describe our research questions, and propose hypotheses under each research question. **RQ1:** What are the factors that influence users' willingness to use mHealth apps? In this study, our goal is to present a comprehensive and unified perspective on the potential factors that impact user willingness to use mHealth apps. We propose that four broad categories of variables will influence this willingness. The first category is demographic background, including age, gender, and educational background. Secondly, user intention is influenced by perceived benefits and interest in the mHealth app, while perceived costs decrease willingness. Additionally, since mHealth apps handle users' health data, privacy concerns and the risk of disclosure become more prominent. Therefore, trust in the data collection and sharing process and privacy concerns will be of significant importance to users. Finally, the willingness to use mHealth apps will be impacted by factors related to users' technological background. These factors include user experience and comfort with mobile technology and feelings of control over the data management. We describe four hypotheses based on these factors below. Previous studies have shown inconsistent results regarding the influence of age, gender, and health status on user willingness to use mHealth apps. While some studies have reported significant effects of these factors (Ernsting et al., 2019; Nunes et al., 2018; Zhang et al., 2019; Zhang et al., 2019), a few others have indicated limited effects (Bol et al., 2018; Robbins et al., 2019) (refer to Sec 2.2 for a more detailed discussion). Recently, Utz and colleagues (Utz and colleagues (Utz and colleagues, 2019) reported that countries of residence also impact user preferences for the collection of personalized data or anonymity, with Chinese participants preferring the former and German and US participants favoring the latter. The variation in the impact of demographic background across studies may be partly due to different experimental settings and contexts. In our study, we hypothesize that if a demographic factor influences user views and preferences for mHealth apps, it is likely to impact user willingness to use mHealth apps. Based on this, we propose our first hypothesis, **H1:** The willingness of users to use mHealth apps is influenced by their demographic background, including factors such as age, gender, educational background, and health status. Previous research has shown the importance of perceived benefits and interest in user willingness of using mHealth apps (Ernsting et al., 2019; Zhang et al., 2019). Therefore, our second hypothesis is, **H2:** Users are more willing to use an mHealth app if they find it beneficial to their health, or if they can get financial rewards for using the app. Privacy and security are paramount for any mobile app, particularly mHealth apps that deal with sensitive and personal information such as users' health history, medication usage, and location data [63]. 
Prioritizing privacy and security in an mHealth app can increase user willingness to use it. A study by Zhou and colleagues [73] revealed that most study participants had concerns about their privacy when using mHealth apps and expressed their preferences for security features, such as regular password updates, remote wipe, user consent, and access control. Based on these findings, we propose the third hypothesis in this study: **H3:** Users' willingness to use an mHealth app is positively correlated with their trust in the app's privacy and security measures. The technological background of a user can refer to their experience with technology and their level of comfort in using it. Previous studies have suggested a positive correlation between users' technological background and their willingness to use mHealth apps [39]. For instance, a study by Jaana and colleagues [28] revealed that older adults were less likely to use mobile health apps due to a lack of familiarity with technology. Thus, we propose the fourth hypothesis: **H4:** Greater familiarity with technology is positively associated with user willingness to use mHealth apps. **RQ2:** How do various factors jointly contribute to users' intention to use mHealth apps? Previous research suggests that multiple factors interact with each other in influencing users' intention to use mHealth apps. For instance, gender has been found to moderate the effects of personality and mobile technology preferences, which, in turn, affect users' willingness to use mHealth apps [48]. In our study, we will utilize structural equation modeling to quantitatively measure how various factors collectively contribute to users' willingness to use mHealth apps, and draw a comprehensive picture of the underlying structure. ### Approval and Consent The experiment was approved by the Institutional Review Board (IRB) of National University of Singapore. A participation information sheet was shown at the first page of the experiment. Participants provided consent before starting the experiment. ### Participants Recruitment We recruited a total of 1,669 participants (1,021 male, mean age \(39.64\pm 7.83\)) from eight countries (US, UK, Germany, India, China, Singapore, New Zealand, and Australia) across four continents. Among them, 436 participants were from the online crowd-sourcing platform Amazon Mechanical Turk (MTurk) [40], and 1,233 participants were from Toluna Global Panel, another crowd-sourcing platform managed by the Internet survey company Toluna (www.toluna-group.com). We used two platforms for two reasons: first, MTurk has few participants from Oceania and Asia (except India), so we used Toluna to obtain more participants from eight different countries; second, MTurk has few participants over the age of 60, but we aimed to recruit a significant number of seniors as they are a specific target audience of mHealth apps. Toluna enabled us to recruit seniors as well as participants in other age groups. For each country, we attempted to achieve a uniform distribution of participants' ages. Overall, our participants had diverse demographic backgrounds in terms of their residential countries, gender, and age (see Table 1 for details). ### Vignette Design Our human experiment is based on the vignette design, which combines the advantages of survey research and multidimensional experimental design. 
In vignette experiments, short, systematically varied descriptions of situations or subjects (called vignettes) are used to elicit the beliefs, attitudes, or behaviors of respondents regarding presented scenarios. Participants evaluate hypothetical situations or subjects described in vignettes that vary in the level of characteristics (dimensions) of the described situations or subjects [33, 55]. The advantage of vignette experiments is that they allow researchers to study complex social phenomena in a controlled and systematic way. By using carefully constructed scenarios, researchers can manipulate different variables to examine their effects on people's attitudes and behaviors. This can help isolate and identify specific factors that influence how people think and act in different social situations [24]. #### 3.4.1. Vignette dimensions The vignettes used in our study are composed of various dimensions of mHealth apps, inspired by our hypotheses H2 and H3 (see Section 3.1). Each dimension is assigned one of multiple levels, which we determined by examining existing mHealth apps and prior research. We selected a final set of seven factors and factor levels, which are reported in Table 2. #### 3.4.2. Vignette composition In our study, we used vignettes to describe hypothetical mHealth apps, as illustrated in Fig. 1. Each vignette consisted of an unchanging text template (the non-highlighted black text) with placeholders for six factors (colored boxes), each assigned one of several levels (the text in the colored boxes) to create a unique scenario. We systematically varied the factor levels across various vignettes to evaluate how different levels \begin{table} \begin{tabular}{l l r r r r r r r r} \hline \hline & & US & IN & CN & SG & UK & DE & AU & NZ \\ \hline \multirow{4}{*}{**c**} & \multicolumn{2}{c}{Number of participants} & 204 & 206 & 315 & 159 & 169 & 111 & 343 & 186 \\ \hline \multirow{4}{*}{**c**} & Male & 60.78\% & 73.30\% & 53.97\% & 62.26\% & 57.40\% & 95.41\% & 50.73\% & 54.84\% \\ & Female & 39.22\% & 26.70\% & 46.03\% & 37.74\% & 42.60\% & 4.59\% & 49.27\% & 45.16\% \\ \hline \multirow{4}{*}{**c**} & Below 19 & 0.49\% & 1.46\% & 0.32\% & 3.77\% & 0.59\% & 0.00\% & 1.75\% & 1.61\% \\ & 19-24 & 7.84\% & 3.88\% & 4.76\% & 8.81\% & 8.28\% & 32.43\% & 9.62\% & 11.29\% \\ & 25-34 & 45.10\% & 69.42\% & 30.16\% & 29.56\% & 26.63\% & 37.84\% & 24.78\% & 20.97\% \\ & 35-44 & 25.00\% & 20.87\% & 24.13\% & 32.08\% & 17.16\% & 5.41\% & 25.36\% & 21.51\% \\ & 45-59 & 16.67\% & 3.40\% & 30.48\% & 20.75\% & 27.81\% & 1.80\% & 16.62\% & 26.34\% \\ & Above 60 & 4.90\% & 0.97\% & 10.16\% & 5.03\% & 19.53\% & 29.73\% & 21.87\% & 18.28\% \\ \hline \multirow{4}{*}{**c**} & American Indian & 4.90\% & 3.22\% & 0.00\% & 1.26\% & 0.00\% & 0.00\% & 2.33\% & 1.61\% \\ & Asian (Indian)/Pacific islander & 8.82\% & 91.49\% & 95.87\% & 88.05\% & 11.24\% & 38.53\% & 16.62\% & 22.04\% \\ & Black & 4.41\% & 0.69\% & 0.00\% & 1.26\% & 4.14\% & 1.83\% & 1.17\% & 2.69\% \\ & Hispanic/Latino & 3.43\% & 0.46\% & 0.00\% & 0.63\% & 0.00\% & 0.00\% & 1.75\% & 1.08\% \\ & White & 75.00\% & 1.84\% & 0.63\% & 4.40\% & 84.02\% & 58.72\% & 74.34\% & 63.44\% \\ & Other & 3.43\% & 2.30\% & 3.49\% & 4.40\% & 0.59\% & 0.92\% & 3.79\% & 9.14\% \\ \hline \multirow{4}{*}{**c**} & Below high school & 2.45\% & 0.97\% & 5.08\% & 3.14\% & 2.37\% & 0.00\% & 4.08\% & 3.23\% \\ & Vocational training & 9.31\% & 1.94\% & 9.21\% & 8.18\% & 29.59\% & 1.83\% & 12.83\% & 20.43\% \\ \cline{1-1} & High school graduate & 5.88\% & 2.43\% & 
7.94\% & 18.24\% & 19.53\% & 0.92\% & 28.57\% & 23.12\% \\ \cline{1-1} & Bachelor’s degree & 59.31\% & 82.04\% & 66.03\% & 59.12\% & 27.81\% & 87.16\% & 34.11\% & 36.56\% \\ \cline{1-1} & Graduate degree & 23.04\% & 12.62\% & 11.75\% & 11.32\% & 20.71\% & 10.09\% & 20.41\% & 16.67\% \\ \hline \hline \end{tabular}, Vol. 1, No. 1, Article. Publication date: May 2023. \end{table} Table 1. Participant Demographics. Distribution for gender, age, and education. influenced participants' perception of the hypothetical mHealth apps. In total, we generated 1,944 vignettes by combining all possible factor levels, resulting in a diverse set of scenarios to explore participants' attitudes and behaviors towards mHealth apps. ### Experiment Procedure Our human study was performed on two crowd-sourcing platforms - Amazon Mechanical Turk (MTurk) and Toluna. We recruited participants from the US and India through MTurk and participants from other countries through Toluna. The vignettes and surveys were translated to Chinese for participants from China, while participants from other countries responded in English. Each participant was presented with eight vignettes, which described a hypothetical mHealth app. Participants rated the likelihood of using the app on a 7-point scale, where 1 indicated "Very unlikely" and 7 indicated "Very likely". They were also asked to provide a reason for their response, rate the perceived usefulness of the app on a scale of 1 to 7, and indicate their major goal if they decided to use the app. Additionally, participants were asked about what they liked most and least about the app and their demographic information, such as gender, age, ethnicity, educational background, and country of residence. Questions were also included about participant information, such as digital literacy and online behavior. All measures used in the study are available in Appendix B. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Dimension** & **No. 
of levels** & **Vignette text** \\ \hline \multirow{4}{*}{App functionality} & \multirow{4}{*}{4 levels} & provides general lifestyle related suggestions for maintaining a healthy life \\ \cline{3-3} & & provides personalized feedback and suggestions about your personal health \\ \cline{3-3} & & monitors your health and provides personalized feedback \\ \cline{3-3} & & allows your family members or friends to monitor your health remotely and receive real-time alert during medical emergencies \\ \hline \multirow{4}{*}{App rewards} & \multirow{4}{*}{3 levels} & discounts/coupons for health check-ups, specialist/doctor consultancy and medical tests \\ \cline{3-3} & & discounts/coupons for health/fitness products such as smart-watches, exercise equipment and gym memberships \\ \cline{3-3} & & to know and interact with people who have similar health conditions \\ \hline \multirow{4}{*}{Data collection} & \multirow{4}{*}{3 levels} & your daily statistics such as step count, resting heart rate, geolocation, physical movements \\ \cline{3-3} & & your health records such as medication and clinic consultation records, daily diet \\ \cline{3-3} & & your daily usage on mobile phone such as number of messages and phone calls, browser history, app usage, and interaction with other devices \\ \hline \multirow{4}{*}{Data storage} & \multirow{4}{*}{3 levels} & not be stored either on phone or on the cloud \\ \cline{3-3} & & be stored on your phone only \\ \cline{3-3} & & be stored at the app developer side \\ \hline \multirow{4}{*}{Data sharing} & \multirow{4}{*}{3 levels} & not to be shared \\ \cline{3-3} & & be shared with the app developer \\ \cline{3-3} & & be shared with the hospitals and clinics \\ \hline \multirow{4}{*}{Data protection} & \multirow{4}{*}{2 levels} & protected by the latest encryption and authentication techniques \\ \cline{3-3} & & No mentioning of the privacy \& security mechanisms \\ \hline \multirow{4}{*}{User’s level of control} & \multirow{4}{*}{3 levels} & user can stop data collection and sharing by changing the settings in the app \\ \cline{3-3} & & user can set in the app on which data to be collected or shared \\ \cline{3-3} & & No mentioning of level of control \\ \hline \end{tabular} \end{table} Table 2. Vignette dimensions, number of levels, and vignette text used in our human study. ### Analytical Approach We conducted statistical analyses using several methods to examine the impact of each independent variable on the dependent variables. Firstly, we conducted ANOVA (analysis of variance) to check for any overall effects. If ANOVA results were significant, we performed post-hoc Tukey tests to identify specific manipulation effects. For analyses involving multiple factors, we used both logistic regression and linear regression. To control for multiple comparisons, we applied the Bonferroni correction. For more information about these inferential statistics, please refer to Bailey's (2008) guide (Bailley, 2008). We used structural equation modeling to gain a comprehensive understanding of how multiple factors collectively influence users' behavioral intention. In particular, we first conducted an exploratory factor analysis (EFA) to identify the number of latent variables (factors) present in the human data and extract a concise set of attributes underlying these variables. Next, we conducted a confirmatory factor analysis (CFA) to assess the reliability and validity of the measures and test the relations between latent and observed variables. 
Factor analysis (FA) and principal component analysis (PCA) are variable reduction techniques that extract a reduced set of variables from a larger set of variables, but in PCA, the components are orthogonal linear combinations that maximize the total variance, while in FA, the factors are linear combinations that maximize the shared portion of the variance underlying "latent constructs." Attributes with poor loadings or fits were removed. We developed the final model through path analysis, which calculated standardized regression weights (\(y\)) and correlations (\(\phi\)) among latent variables. We conducted these analyses using statistical toolboxes in Matlab R2016b, IBM Amos 21, and IBM SPSS 20. ### Data Availability Readers can view the vignettes and try our human study online at [https://ncript.comp.nus.edu.sg/site/experiment2/start?taskid=4391](https://ncript.comp.nus.edu.sg/site/experiment2/start?taskid=4391). Figure 1. Examples of vignette that combines seven factor levels into a specific scenario of mHealth app. ## 4. Results In this section, we present the results of our human study. Specifically, we explored how different factors impact two important ratings: i) participants' willingness to use the app and ii) their perception of its usefulness. Our investigation sought to shed light on the underlying structure that shapes users' behavioral intention to use mHealth apps. ### Influence of App Related Properties We conducted ANOVAs for the seven factors listed in Table 2 to examine whether each factor influences participants' willingness and perceived app usefulness. Results indicated that three factors (rewards for using the app, the types of data collected by the app, and how data is shared by the app) significantly impacted participants' willingness, with \(F\)s \(\geq\) 5.13 and \(p\)s \(\leq\).0061. Financial rewards (e.g., discount/coupons for health service and products) had greater effect on promoting user willingness than knowing and interacting with people with similar health conditions, with \(F(2,14518)=5.13\) and \(p=.006\). The factor of how data is stored had a marginal impact on participants' willingness, with \(F(2,14518)=5.01\) and \(p=.007\). App functionality, how the app data is protected, and users' level of control did not have significant impact. Only one factor, the types of data collected by the app, had a significant impact on perceived app usefulness, with \(F(2,14518)=26.13\) and \(p\leq.0001\). Detailed post-hoc Tukey test results for the impact of separate factor levels on participants' willingness are provided in Table 3. Footnote 1: ANOVA results are presented as \({}^{*}F(\text{df}condition,\text{df}error)=F\) value, \(p=p\) value’. If a \(p\) value is less than the conventional significance level threshold of.05 or.01 (with Bonferroni correction), we reject the null hypothesis of no difference among the means. The ANOVA revealed interaction effects of country of residence with app rewards and data collection on users' willingness, with \(F\)s \(\geq\) 2.16 and \(p\)s \(\leq\).007. Follow-up analyses showed that the effect of app rewards was primarily driven by German participants (\(F(2,1004)=5.23\), \(p=.006\)) and New Zealand participants (\(F(2,1467)=5.17\), \(p=.006\)), while the effect of data collection was significant among most countries except India, China, and the United Kingdom, with \(F\)s \(\geq\) 6.78 and \(p\)s \(\leq\).001. No interaction effect of data collection and country of residence was found for perceived app usefulness. 
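As a purely illustrative aside, the regression step described in the Analytical Approach above (and reported in Table 4 below) can be sketched in code. The authors' analyses were carried out in Matlab, IBM Amos, and IBM SPSS, so the following Julia snippet (using DataFrames.jl and GLM.jl) is not the study's actual pipeline; the column names and simulated data are hypothetical and serve only to show the shape of such an analysis.

```
# Illustrative sketch only: multiple linear regression of willingness on a few
# individual-difference measures, with a Bonferroni-adjusted threshold.
using DataFrames, GLM

df = DataFrame(                             # hypothetical data, one row per participant
    willingness       = rand(100),          # mean willingness across the eight vignettes
    smartphone_skills = rand(1:7, 100),     # self-rated smartphone skills
    privacy_concern   = rand(1:7, 100),     # overall concern about personal privacy
    share_health_info = rand(1:7, 100),     # frequency of sharing health info online
)

model = lm(@formula(willingness ~ smartphone_skills + privacy_concern + share_health_info), df)
coeftable(model)     # beta coefficients and p-values, analogous in spirit to Table 4
alpha = 0.05 / 3     # Bonferroni correction for the three predictors in this toy model
```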
\begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Type of factor** & **Factor level** & **Willingness** \\ \hline \multirow{4}{*}{App rewards} & **discounts/coupons for health check-ups, specialist/doctor consultancy and medical tests** & **0.69** \\ \cline{2-3} & **discounts/coupons for health/fitness products such as smart-watches, exercise equipment and gym memberships** & **0.69** \\ \cline{2-3} & to know and interact with people who have similar health conditions & 0.68 \\ \hline \multirow{4}{*}{Data collection} & **your daily statistics such as step count, resting heart rate, geolocation, physical movements** & **0.70** \\ \cline{2-3} & **your health records such as medication and clinic consultation records, daily diet** & **0.70** \\ \cline{2-3} & **your daily usage on mobile phone such as number of messages and phone calls, browser history, app usage, and interaction with other devices** & 0.66 \\ \hline \multirow{4}{*}{Data sharing} & **not to be shared** & **0.70** \\ \cline{2-3} & be shared with the app developer & 0.68 \\ \cline{1-1} \cline{2-3} & be shared with the hospitals and clinics & 0.68 \\ \hline \end{tabular} \end{table} Table 3. The participants’ willingness to use the mHealth app varied depending on different factor levels. For each type of factor, the willingness under the level with a bold font was statistically higher (\(p\)s \(\leq\).01). ### Influence of Users' Demographic Background In this section, we will explore the impact of participants' demographic backgrounds on their willingness to use the app and their perceived usefulness of it. We will be focusing on five key demographic factors across all app scenarios: country of residence, gender, age, ethnicity, and educational background. The ANOVA indicated a significant effect of country of residence on both user willingness to use mHealth apps (\(F(7,14513)=116.08\), \(p\leq.0001\)) and perceived app usefulness (\(F(7,14513)=147.29\), \(p\leq.0001\)). Indian participants had the highest willingness to use the apps, followed by Chinese and American participants. Participants from Germany and New Zealand showed the lowest intention to use mHealth apps (\(p\leq.05\)). Participants' perceived usefulness of the app followed a similar pattern (see Fig. 2). The age of the participants had a significant impact on both their willingness to use the mHealth apps (\(F(7,14515)=387.64\), \(p\leq.0001\)) and their perceived usefulness of the app (\(F(7,14515)=132.63\), \(p\leq.0001\)), as illustrated in Figure 3(a). The distribution of age over user willingness formed a bell curve, with participants in the age group of 25-34 exhibiting the highest willingness to use the app and perceiving it to be the most useful. The second-highest willingness and perceived usefulness were demonstrated by the age group of 35-44. Conversely, participants aged below 19 and above 60 showed the lowest willingness to use the app and perceived its usefulness to be the least (\(p\)s \(\leq.05\)). Notably, participants in the age group of 45-59 reported a similar perceived usefulness of the app as those in the age group of 19-24, but the former group demonstrated a lower willingness to use the app compared to the latter. Additionally, participants aged 35-44 rated the app's usefulness higher than those in the age group of 19-24, yet both groups exhibited statistically similar intention to use the mHealth app. These findings suggest that other Figure 2. 
Participant ratings (\(\pm\)_SE_) on willingness to use the mHealth app and perceived app usefulness, grouped by country of residences. factors, such as technical skills and related experience, may influence users' behavioral intention. Therefore, a more comprehensive analysis is necessary, as discussed later in Section 4.4. Participants' educational background and ethnicity also had significant impacts on their willingness to use the app and perceived app usefulness (\(F_{8}\geq 87.49\), \(p\leq.0001\) for ethnicity, and \(\geq 182.49\), \(p\leq.0001\) for education). Those with higher levels of education were more willing to use the app and perceived it to be more useful (see Fig. 3(b)). Asians (including Asian Indians) and American Indians showed greater willingness to use the app than Black people, while White and Hispanic participants were the least willing. Gender did not significantly influence users' willingness to use the app (\(t(14519)=1.80\), \(p=.071\)), but male participants perceived the app as more useful than female participants (Male:.65, Female:.63, \(t(14519)=4.90\), \(p<.0001\)). * ### Influence of Other User Background We conducted exploratory analyses on individual difference measures to determine which factors predicted users' willingness to use the mHealth apps. To do this, we calculated the average willingness of each participant to use the eight different versions of the mHealth apps described in the vignettes. We then performed a multiple linear regression analysis using the participants' willingness as the dependent variable and individual difference measures as explanatory variables. Table 4 shows that variables related to privacy and security concerns, online behaviors, prior experience of related apps and the Covid-19 pandemic, and technology skills robustly impacted users' willingness to use mHealth apps. The strongest predictor was users' smartphone skills, followed by users' positive attitude towards wearable devices, and users' frequency of sharing health information online. Surprisingly, whether a user had a chronic disease or whether they had used mHealth apps before did not have a significant impact. ### Joint Influence of Multiple Impacting Factors In this section, we utilized structural equation modeling (SEM) to gain a comprehensive understanding of how users' behavioral intention towards our mHealth apps is influenced by various factors. While our previous Figure 3. Participant ratings (\(\pm\)_SE_) on willingness to use the mHealth app and perceived app usefulness, grouped by ages (a) and educational background (b). analysis using multiple linear regression provided insight into the individual impact of variables, SEM allows us to explore the intercorrelations among these variables and how they jointly shape user willingness. To begin, we conducted exploratory factor analysis (EFA) followed by confirmatory factor analysis (CFA) [35] to measure the relationships between observed variables and latent factors (higher-level perceptions and reactions). We hypothesized that the reasons for user willingness are multidimensional and inter-correlated, so we used maximum likelihood with oblique transformation in our EFA [34]. Finally, we employed path analysis to inform the model structure. The resulting model, shown in Fig. 4, is divided into three layers. 
The individual difference measures obtained from our questionnaire are at the bottom layer (in grey rectangles) and load onto different latent variables \begin{table} \begin{tabular}{l c c} \hline **Explanatory variables** & \(\beta\) & \(p\) \\ \hline Concer about personal privacy and security & & \\ \hline Concern about privacy breach on mobile apps & **-.076** & **.000** \\ Knowledge of online info misuse & **-.076** & **.000** \\ Overall concern about personal privacy & **-.044** & **.005** \\ Worry about companies having access to my profile & **-.072** & **.000** \\ Worry of mobile info leakage & -.030 &.037 \\ Worry of info leakage on mHealth apps & **-.107** & **.000** \\ Concern about sharing health info online & **-.107** & **.000** \\ Privacy issues and my mobile data activities are not a concern &.016 &.222 \\ Making transactions on my mobile phone is not a concern & **.042** & **.000** \\ IUIPC Privacy Concern measurement* & **-.084** & **.000** \\ \hline Online behaviors & & \\ \hline Daily time of using cell phone & -.024 &.017 \\ Frequency of sharing personal info online &.030 &.140 \\ Frequency of sharing health info online & **.133** & **.000** \\ \hline Prior experience \& opinions of related apps & \\ \hline Have used mHealth apps before &.023 &.127 \\ Have used wearable devices before & **-.037** & **.003** \\ Will recommend mHealth apps & **.044** & **.001** \\ Will recommend wearable Devices & **.139** & **.000** \\ Acceptance level of tracing app & **.045** & **.000** \\ Acceptance level of sharing data with public authorities for Covid-19 measures & **.041** & **.000** \\ \hline Related experience about Covid-19 and chronic diseases & & \\ \hline Concern about Covid-19 & **-.055** & **.000** \\ Have chronic disease & -.003 &.573 \\ Have been quarantined for Covid-19 before & **.038** & **.000** \\ Have infected with Covid-19 before & -.021 &.016 \\ Have friends infected with Covid-19 before & **.054** & **.000** \\ \hline Technology skills & & \\ \hline Smartphone skills & **.244** & **.000** \\ Tech savvy & **.083** & **.000** \\ Technophobia & **.103** & **.000** \\ \hline \end{tabular} * Internet Users’ Information Privacy Concerns (IUIPC) privacy concern measurement scale [42]. We report the mean results here. \end{table} Table 4: Regression \(\beta\) coefficients for individual difference variables predicting users’ willingness of using our mHealth apps. Results of variables with significant impact are highlighted in bold. Bonferroni correction was applied with a reduced \(\alpha\) of.005. represented by blue eclipses in the mid-layer. The top layer contains the latent variable "behavioral intention to use mHealth apps", represented by an orange eclipse. We created the final model through path analysis, predicting the top layer latent construct from the lower-level perception latent constructs. We evaluated the model's fitness using two metrics: the Comparative Fit Index (CFI) and Root Mean Square Error of Approximation (RMSEA). Our model had acceptable fit, with \(CFI=.983\) and \(RMSEA=.045\). CFI compares the target model's chi-square to an independent model, while RMSEA estimates the error of approximation per model degree of freedom and considers the sample size. Higher CFI and lower RMSEA values indicate better model fit. In our SEM analysis, as depicted in Fig. 4, five latent factors (represented by blue eclipses) were found to jointly contribute to users' behavioral intention to use mHealth apps. 
Among these, "digital literacy" had the strongest weight on users' behavioral intention (\(y=.44\)), followed by "Online behavior of sharing personal information" (\(y=.25\)), "Indifference to personal privacy" (\(y=.18\)), and "Concern about personal privacy" (\(y=-.14\)). The factor "Covid-19 related experience" had the lowest weight on users' behavioral intention (\(y=.07\)). The individual difference measures in our study were moderated by users' demographic background, including age, gender, ethnicity, education, and country of residence. As discussed in Section 4.1, we found significant interaction effects of country of residence with app rewards and data collection. Gender also interacted with several measures on users' willingness, such as "having good cell phone skills" (gender: \(F(6,14507)=18.61\), \(p\leq.0001\)) and "concern about personal info leakage via mHealth apps" (gender: \(F(4,14511)=7.28\), \(p\leq.0001\)). Specifically, the impact of "having good cell phone skills" on users' behavioral intention had a stronger effect on female participants (\(\eta_{p}=.21\)) than male participants (\(\eta_{p}=.11\)). In contrast, the influence of "concern about personal info leakage via mHealth apps" was carried by male participants (\(F(4,8786)=18.61\), \(p\leq.0001\)), with no significance found among female participants (\(F(4,5725)=1.23\), \(p=.296\)). Figure 4: A Structural equation model (SEM) analysis of multiple latent factors (in blue eclipses) contributing to users’ behavioral intention to use mHealth apps (in orange eclipse). Individual difference measures describing each latent factor are in grey rectangles. Standardized regression weights (\(\gamma\)) are in bold italic font. \(CFI=.983\), \(RMSEA=.045\). Furthermore, we observed a significant interaction effect between education and "being tech-savvy" on users' willingness (\(F(16,14496)=13.79\), \(p\leq.0001\)). For example, the impact of "being tech-savvy" had a stronger effect size for participants with education below high school (\(\eta_{p}=.64\)) than for those with a graduate degree (\(\eta_{p}=.21\)). ### Users' Purposes and Preferences In this section, we aim to investigate participants' usage purposes and preferences for mHealth apps to gain a better understanding of users' behavioral intention. The majority of participants selected "personal health monitoring" as their primary goal for using the proposed mHealth apps, followed by "getting personalized feedback from the app" and "receiving advice from health professionals". Conversely, the least chosen option was "getting user rewards of using the app" (see Fig. 5). This trend was consistent across all eight countries. To gain further insight into participants' opinions, we asked them to identify what they liked most and least about the proposed app separately. As depicted in Fig. 6, participants appreciated the function of "personal health monitoring & disease prevention" the most and expressed their dislike for data sharing with other parties. Participants showed relative indifference to "users' level of control over the app" and "user rewards for using the app" in both questions. This trend was consistent across all countries, with the exception of Chinese participants, who indicated that their least favorite feature was privacy protection instead of data sharing with other parties. 
Participants also provided reasons for their motivation to use or not to use the app, which were consistent with their preferred and least preferred features of the proposed app. For further details, readers can refer to Appendix A. ## 5. Discussion This section presents a discussion of our findings in relation to previous literature. We also provide a summary of the implications of our results. ### A General Picture of Users' Behavioral Intention Our findings suggest that the development of users' behavioral intention to use mHealth apps has a complex and hierarchical underlying structure. Multiple factors, such as users' demographic background and technology skills, and the design of the mHealth app, jointly influence users' willingness. Based on our SEM, users' digital literacy had the strongest impact, suggesting the critical role of technical competence in promoting mHealth apps. Prior research has suggested the importance of digital technology in digital health (Han et al., 2016; Wang et al., 2017). Our study provides nuanced information that supports prior literature, demonstrating the incomparable importance of digital literacy over other factors in influencing users' behavioral intentions to use Figure 5. The percentage of options for which participants selected as the major goal of using the mHealth app. mHealth apps. Users' online behavior of sharing personal information online is the second important factor. This is reminiscent of prior research showing the close relation between social network and mobile health [21, 60]. The existing literature has emphasized data privacy and security as a major hindrance to user acceptance of mHealth apps [2, 73]. Nevertheless, our study reveals that users' privacy concern had only a moderate impact, which was outweighed by users' digital literacy. Notably, we did not observe a significant correlation between users' privacy concerns and their online behavior of sharing personal information (\(p=.999\)). This lack of correlation may be attributed to the different underlying factors driving these two latent variables. Users' concern about privacy may arise from a range of reasons, including previous experiences of data breaches or awareness of the potential risks of personal information leakage [27]. Conversely, users may share their information online due to various reasons, such as convenience, trust in the website or app they are using, or a lack of understanding of the potential consequences of divulging personal information [3]. As a result, users may express concerns about privacy but still choose to share their information if they perceive the benefits to outweigh the risks or if they do not fully comprehend the risks involved [8]. This observation of a disconnect between users' expressed privacy concerns and their online behavior aligns with the privacy paradox, which refers to the discrepancy between users' intention to protect their privacy and their actual behavior in the online environment [19]. This explanation also partly accounts for the relatively low weight of users' privacy concerns in their behavioral intention to use mHealth apps. Ernsting and colleagues [18] discovered that individuals with chronic conditions were more likely to use mHealth apps. However, our study showed that whether participants had a chronic disease had no significant impact on their willingness to use mHealth apps. Similarly, our SEM revealed that users' Covid-19 related experience had a low weight on their behavioral intention. 
The low correlations observed in our study could be attributed to several factors. Firstly, people's decisions to use mobile health apps are influenced by various factors that may not be directly related to their health status [6, 70]. Secondly, our SEM indicated that users' digital literacy was a strong influencing factor. However, senior participants, who are more likely to have chronic diseases, are typically less digitally literate, which hinders them from using mHealth apps [64]. Additionally, the functionality of mHealth apps can be a crucial factor in users' decisions, but the design of the app's features in our vignettes was not tailored to chronic diseases. Previous studies have highlighted the challenges of aging in the context of apps for chronic diseases [64]. Our findings support the need for promoting digital literacy among patients with chronic diseases, as emphasized in prior literature.

Figure 6: The percentage of participants selecting each option as their favorite and least favorite aspects of the mHealth app.

### The Importance of mHealth App Design

Our analyses showed a significant impact of three app-related factors: the rewards of using the app, the types of data collected by the app, and with whom the data is shared. Our participants cared more about financial rewards (_e.g._, discounts/coupons for health services and products) than about knowing and interacting with people with similar health conditions. The effect of financial incentives is consistent with previous studies [57]. However, our research only analyzed short-term acceptance. In terms of continued usage of mHealth apps, other factors may come into play, such as performance expectancy and price value [67].

Our study found that participants were more willing to allow mHealth apps to collect their health records, such as medication and clinic consultation records, than their daily usage data on mobile phones, such as the number of messages and phone calls, browser histories, and app usage. This preference may be due to the clear linkage between health records and mHealth apps, while the connection between the app and daily usage on mobile phones is unclear. However, previous studies have shown that daily phone usage can be informative of a person's mental health [29, 30, 43]. We anticipate challenges in raising user awareness and acceptance of mHealth apps that are related to mental health.

Furthermore, our study found that participants' intention to use mHealth apps was negatively influenced by sharing app data with third parties such as app developers and hospitals. Previous research has identified multiple factors that influence users' willingness to share self-collected health data, including the source and type of data, user benefits of data sharing (such as personalized feedback), and privacy and security concerns [22, 38, 65]. To encourage the sharing of mobile health data for the benefit of individuals and the wider community, greater efforts are needed to address these concerns and promote the benefits of data sharing.

Our analysis did not find a significant influence of app functionality, but this does not necessarily imply that functionalities lack significance. It is possible that the four levels on our functionality dimension are not sufficient to represent the range of functionality offered by mHealth apps in general. Further research should consider more detailed and specialized functionality features that cater to specific groups, such as pregnancy apps designed for women.
Such specialized functionality may have a greater impact on users' behavioral intentions to use mHealth apps than general functionality measures. In addition, past studies have indicated that there is a significant correlation between higher levels of perceived ease of use and the intention to use a product or service [61]. Our forthcoming research will also explore the significance of user-friendly attributes as a crucial factor in this regard. ### The Role of Demographic Background Our analysis revealed that users' willingness to use mHealth apps was significantly influenced by their demographic features, such as country of residence, age, and education. Participants from India, China, and the United States exhibited the highest intention to use the apps, with Singapore following closely behind. In contrast, participants from Europe (United Kingdom and Germany) and Oceania (Australia and New Zealand) showed relatively lower intention compared to those from Asia and the United States. Moreover, the intention of participants from Germany and New Zealand was influenced by app rewards, while those from other countries were not. The type of data collected by the app also significantly influenced the intention of participants from most countries, except India, China, and the United Kingdom. Several factors could account for these differences. Cultural attitudes towards healthcare and data sharing, for instance, may vary across regions, with some cultures emphasizing self-care and being more open to data collection and sharing. For instance, research by Utz and colleagues [58] found that Chinese participants preferred the collection of personalized data, while German and US participants favored anonymity. In some regions, such as India and China, limited access to healthcare services in certain regions could make mHealth apps a more attractive option for individuals seeking to manage their health [50, 68]. Differences in technology adoption and digital literacy could also play a role, with some populations being more comfortable with using mobile devices and apps in their daily lives [44]. Finally, varying regulatory frameworks and policies related to mobile health apps could also affect acceptance levels [9; 71]. Different from previous research [48; 62], our study identified a bell curve pattern that linked user age and their willingness to use mHealth apps. We found that participants aged between 25 and 34 expressed the strongest intention to use these apps, followed by the age group of 35-44. Participants over the age of 45 or under the age of 25 demonstrated a lower behavioral intention. In particular, participants above the age of 60 reported the lowest willingness to use the apps. Education enhances users' willingness to use mHealth apps. Our SEM analyses suggest that one possible explanation for this trend is that individuals with higher levels of education may possess greater digital literacy. Our study provides strong evidence that several demographic factors, such as country of residence, age, gender, and education, play a significant role in moderating mHealth adoption. While previous literature has highlighted age and gender as commonly recognized moderators [26; 47], our research offers more nuanced insights by revealing additional moderating factors, including education and country of residence. ### Implications of Findings Our research may have relevance across various disciplines. First and foremost, our findings can aid mHealth developers in creating more user-welcomed apps. 
We recommend that designers consider the factors that have a significant impact on users' willingness to use the app, such as the type of data collected by the app and with whom it is shared. Users would like to be informed about the rewards for using the app, but they are less concerned about the app's data protection and storage or the level of control they have over it. The most significant finding of our SEM analyses is the critical role of users' digital literacy. As the reliance on digital tools increases, it has the potential to exacerbate existing health disparities by widening the gap between those who possess digital skills and access to digital tools and those who do not. In conjunction with the importance of data collection and sharing highlighted in the previous paragraph, our research provides support for policymakers to regulate the data management of mHealth apps and promote digital literacy among users. Our study provides valuable insights for both healthcare professionals and marketers interested in promoting and implementing mHealth interventions. By demonstrating the significant moderating effects of demographics, our research highlights the importance of tailoring digital health solutions to different populations by considering a wider range of demographic factors. These findings have important implications for improving the effectiveness and accessibility of mHealth interventions, ultimately helping to improve healthcare outcomes for diverse populations. Finally, our study also presents an invitation for researchers to delve deeper into the underlying structure that shapes users' behavioral intention to use mHealth apps. Through this exploration, we hope to gain a better understanding of how different factors, such as users' demographics and online behavior, impact user willingness to utilize mHealth apps. This deeper understanding will allow for more comprehensive explanations of user behavior and pave the way for improved mHealth app development and adoption. ## 6. Limitation and Future Work Although our research findings are insightful, there are some limitations to our study. Prior research has emphasized the significance of app cost and perceived ease of use [36; 41], particularly in developing countries and among individuals with low digital literacy [37]. Unfortunately, we did not measure these two factors in our study, mainly due to two reasons. Firstly, previous research has already produced consistent and robust findings regarding these factors. Secondly, they are primarily dependent on the app developers' design and marketing strategy. Additionally, the app developer's brand image and trustworthiness can also influence users' behavioral intention, which is an essential factor (Kumar et al., 2018). However, to simplify our experiment's design, we chose to control for it. As with many studies, our data were collected based on experimentally controlled stimuli and participant self-reports, rather than real-life decisions to use mHealth apps. Although our careful manipulation of variables enhanced the internal validity of our findings, it may limit the generalizability of the results to natural behavior. It is important to acknowledge this limitation and consider the potential implications for real-life scenarios. Nonetheless, our findings provide valuable insights into the underlying mechanisms that influence individuals' decisions to use mHealth apps. 
Lastly, it is important to acknowledge that our study's participants were recruited solely from two online platforms (MTurk and Toluna), which may have limited the diversity of our user population. For instance, our participants were all familiar with online crowd-sourcing tasks, indicating that they may possess higher levels of digital literacy than the general population. We tried to include countries that are representative of each continent. However, due to budget constraints and limited participant availability on MTurk and Toluna, our coverage could not be more comprehensive. Moreover, we conducted surveys in English for all countries except China, resulting in a predominantly English-speaking participant pool. Previous studies have suggested that language preferences can significantly impact user perceptions of mHealth apps (Kumar et al., 2018), which means that our findings may not be readily generalizable to non-English speaking users. Moving forward, we intend to expand our participant pool by recruiting individuals from local communities and non-English speaking countries. By doing so, we aim to enhance the generalizability of our study and obtain a more diverse sample of the population. Additionally, we plan to explore the use of a real app setting in future studies. Participants will be asked to install and utilize the app for a set period, and provide feedback on their perceptions and experiences at various stages throughout the trial period. By utilizing a real app setting, we hope to better capture the nuances of user behavior and generate more comprehensive insights into mHealth app usage. ## 7. Conclusion Mobile health (mHealth) apps have been recognized as a promising solution to improve healthcare outcomes, increase access to care, and reduce healthcare costs (Kumar et al., 2018). However, their widespread adoption is hindered by barriers such as data privacy and security concerns (Kumar et al., 2018). It is crucial to address these barriers to promote the use of mHealth apps. This study offers a comprehensive understanding of the factors that influence users' intention to use mHealth apps and measures their contribution to user willingness. It highlights the critical role of user digital literacy and emphasizes the importance of tailoring solutions to different populations based on a wider range of demographic factors. The findings of this study have implications for various stakeholders, including app designers, healthcare practitioners, and policymakers. Future research could combine the results of this study with media and sociology research to enhance user willingness to use mHealth apps. In addition, promoting digital literacy and regulating data collection and sharing are crucial for increasing user trust in mHealth apps. Overall, this study provides valuable insights for improving the design and implementation of mHealth apps and promoting their use. #### Acknowledgments This research is supported by the National Research Foundation, Singapore under its Strategic Capability Research Centres Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.
2302.10310
Doppler Constraints on Planetary Companions to Nearby Sun-like Stars: An Archival Radial Velocity Survey of Southern Targets for Proposed NASA Direct Imaging Missions
Directly imaging temperate rocky planets orbiting nearby, Sun-like stars with a 6-m-class IR/O/UV space telescope, recently dubbed the Habitable Worlds Observatory, is a high priority goal of the Astro2020 Decadal Survey. To prepare for future direct imaging surveys, the list of potential targets should be thoroughly vetted to maximize efficiency and scientific yield. We present an analysis of archival radial velocity data for southern stars from the NASA/NSF Extreme Precision Radial Velocity Working Group's list of high priority target stars for future direct imaging missions (drawn from the HabEx, LUVOIR, and Starshade studies). For each star, we constrain the region of companion mass and period parameter space we are already sensitive to based on the observational baseline, sampling, and precision of the archival RV data. Additionally, for some of the targets we report new estimates of magnetic activity cycle periods, rotation periods, improved orbital parameters for previously known exoplanets, and new candidate planet signals that require further vetting or observations to confirm. Our results show that for many of these stars we are not yet sensitive to even Saturn-mass planets in the habitable zone, let alone smaller planets, highlighting the need for future EPRV vetting efforts before the launch of a direct imaging mission. We present evidence that the candidate temperate super-Earth exoplanet HD 85512 b is most likely due to the star's rotation, and report an RV acceleration for delta Pav which supports the existence of a distant giant planet previously inferred from astrometry.
Katherine Laliotis, Jennifer A. Burt, Eric E. Mamajek, Zhexing Li, Volker Perdelwitz, Jinglin Zhao, R. Paul Butler, Bradford Holden, Lee Rosenthal, B. J. Fulton, Fabo Feng, Stephen R. Kane, Jeremy Bailey, Brad Carter, Jeffrey D. Crane, Elise Furlan, Crystal L. Gnilka, Steve B. Howell, Gregory Laughlin, Stephen A. Shectman, Johanna K. Teske, C. G. Tinney, Steven S. Vogt, Sharon Xuesong Wang, Robert A. Wittenmyer
2023-02-20T20:53:52Z
http://arxiv.org/abs/2302.10310v2
# Doppler Constraints on Planetary Companions to Nearby Sun-like Stars: An Archival Radial Velocity Survey of Southern Targets for Proposed NASA Direct Imaging Missions

###### Abstract

Directly imaging temperate rocky planets orbiting nearby, Sun-like stars with a 6-m-class IR/O/UV space telescope, recently dubbed the _Habitable Worlds Observatory_, is a high priority goal of the Astro2020 Decadal Survey. To prepare for future direct imaging surveys, the list of potential targets should be thoroughly vetted to maximize efficiency and scientific yield. We present an analysis of archival radial velocity data for southern stars from the NASA/NSF Extreme Precision Radial Velocity Working Group's list of high priority target stars for future direct imaging missions (drawn from the _HabEx_, _LUVOIR_, and _Starshade Rendezvous_ studies). For each star, we constrain the region of companion mass and period parameter space we are already sensitive to based on the observational baseline, sampling, and precision of the archival RV data. Additionally, for some of the targets we report new estimates of magnetic activity cycle periods, rotation periods, improved orbital parameters for previously known exoplanets, and new candidate planet signals that require further vetting or observations to confirm. Our results show that for many of these stars we are not yet sensitive to even Saturn-mass planets in the habitable zone, let alone smaller planets, highlighting the need for future EPRV vetting efforts before the launch of a direct imaging mission. We present evidence that the candidate temperate super-Earth exoplanet HD 85512 b is most likely due to the star's rotation, and report an RV acceleration for \(\delta\) Pav which supports the existence of a distant giant planet previously inferred from astrometry.

Exoplanet astronomy (486), Exoplanet systems (484), Radial velocity (1332)

## 1 Introduction

In order to further push the boundaries of the search for life elsewhere in the universe, astronomers must advance our capabilities to detect temperate, terrestrial planets orbiting Sun-like stars and to characterize their atmospheres. To detect and spectrally characterize many such planets in reflected light, it is expected that future space-based direct imaging (DI) missions will employ starlight suppression technologies such as coronagraphs or starshades, and survey of order \(\sim\)100 of the nearest Sun-like stars (National Academies of Sciences, Engineering, and Medicine, 2018, 2021). The Habitable Exoplanet Observatory (_HabEx_) and the Large Ultraviolet Optical Infrared Surveyor (_LUVOIR_), two proposed space-based DI mission concepts considered by the _2020 Decadal Survey on Astronomy and Astrophysics_ (Astro2020\({}^{1}\)), were designed to obtain direct atmospheric spectra and enable atmospheric characterization of small, temperate planets (Gaudi et al., 2020; The LUVOIR Team, 2019). _Pathways to Habitable Worlds_ was a priority science theme in the Astro2020 Decadal Survey\({}^{2}\), which included a recommendation\({}^{3}\) for NASA to work\({}^{4}\) towards launching a 6-m class UV/Visible/IR space observatory in the early 2040s to spectrally search for biosignatures in the atmospheres of directly-imaged temperate rocky planets orbiting nearby stars\({}^{5}\). The scale of the proposed observatory is intermediate between the _HabEx_ and _LUVOIR-B_ concepts, and will require further technology and science maturation and trade studies to converge on an architecture before project implementation later in the 2020s.
To inform trade studies and simulate mission yields, and ultimately to fulfill the Astro2020 Decadal goal of searching for biosignatures in the atmospheres of \(\sim\)25 imaged exo-Earths, one requires a prioritized and carefully vetted target list. Footnote 1: [https://www.nationalacademies.org/our-work/decadal-survey-on-astronomy-and-astrophysics-2020-astro2020](https://www.nationalacademies.org/our-work/decadal-survey-on-astronomy-and-astrophysics-2020-astro2020) Footnote 2: Released November 2021, near the completion of this study Footnote 3: _“Recommendation: After a successful mission and technology maturation program, NASA should embark on a program to realize a mission to search for biosignatures from a robust number of about \(\sim\)25 habitable zone planets and to be a transformative facility for general astrophysics. If mission and technology maturation are successful, as determined by an independent review, implementation should start in the latter part of the decade, with a target launch in the first half of the 2040s.”_ Footnote 4: NASA Administator Bill Nelson recently announced plans in December 2022 to the National Academies on the 50th anniversary of the Apollo 17 mission to proceed with the Decadal mission concept for a large UV/Vis/IR space telescope with the name _Habitable Worlds Observatory_. Footnote 5: We note that an alternative approach to discovering and characterizing rocky temperate exoplanets via space-based mid-infrared nulling interferometry is being pursued for the Large Interferometer for Exoplanets (LIFE) concept for ESA’s Voyage 2050 program (Quanz et al., 2022; Dannert et al., 2022; Konrad et al., 2022; Hansen et al., 2022). The stars that are chosen for a future DI mission must meet criteria related to their \(T_{\rm eff}\), brightness (e.g. \(V_{\rm mag}\)), luminosity, multiplicity, and distance from Earth. Of primary importance is the maximum separation that a potentially habitable planet can achieve in its orbit around its star, as seen from Earth. The habitable zone annuli scale as the square root of the luminosity, such that one can define the Earth Equivalent Insolation Distance (EEID) as \(\sqrt{L/L_{\odot}}\) au, or in angular separation \(\theta_{EEID}=\sqrt{L/L_{\odot}}\)\(D_{pc}^{-1}=\sqrt{L/L_{\odot}}\)\(\varpi\), for a star at distance \(D_{pc}\) (parsecs) (=1/\(\varpi\)) and parallax \(\varpi\) (arcseconds). For a planet to be visible, at least part of its orbit must be outside the inner working angles (IWA) for the observatory's means of starlight suppression - usually either a coronagraph or starshade6 - so that the incoming starlight does not overwhelm the planet's signal. Instrumentation limits constrain the primary candidate stars' locations to be within a range where a \(\sim\)1 AU orbit would have a minimum on-sky separation of \(\sim\)40 mas for _LUVOIR_(The LUVOIR Team, 2019) and larger separations are required for _HabEx_ and _Starshade Rendezvous_. Given the limited number of Sun-like stars in the solar neighborhood, and the sizes of their habitable zones, this places tight limits on the distance ranges and sample sizes of targets well suited to Earth-analog searches with a DI mission. An ideal observation candi date must both exhibit Sun-like characteristics and be close enough that its habitable zone would be accessible to a direct imaging mission's instruments. 
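The EEID scaling above maps directly onto the angular-separation cut that drives target selection. As a minimal illustration (the function names and example star below are hypothetical, not taken from the mission target lists), the habitable-zone angular scale can be computed and compared against a coronagraph or starshade inner working angle:

```python
import math

def eeid_au(lum_solar):
    """Earth Equivalent Insolation Distance in au: sqrt(L/L_sun)."""
    return math.sqrt(lum_solar)

def eeid_mas(lum_solar, dist_pc):
    """On-sky angular separation of the EEID in milliarcseconds:
    sqrt(L/L_sun) / D_pc, i.e. sqrt(L/L_sun) times the parallax in arcsec."""
    return 1.0e3 * math.sqrt(lum_solar) / dist_pc

# A solar-luminosity star at 10 pc has its EEID at 1 au, or 100 mas on the
# sky -- comfortably outside a ~40 mas coronagraphic inner working angle.
print(eeid_au(1.0), eeid_mas(1.0, 10.0))
```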
The accuracy to which the properties of the exoplanets' atmosphere can be determined will rely on precise measurements of the planets' surface gravities, which in turn require precise planet mass measurements (Batalha et al., 2019). There are two practical methods for the foreseeable future for measuring the masses of temperate rocky exoplanets orbiting Sun-like stars - radial velocity and astrometry - both of which would require considerable advancement to achieve this goal (Lovis and Fischer, 2010; Quirrenbach, 2010; National Academies of Sciences, Engineering, and Medicine, 2018). It is generally acknowledged, however, that Extreme Precision Radial Velocity (EPRV) measurements obtained via spectrographs with single measurement precisions \(<10\,\mathrm{cm\,s^{-1}}\) are likely the most direct route to detecting Earth analogs around Sun-like stars (National Academies of Sciences, Engineering, and Medicine, 2018). The current generation of RV instruments are beginning to demonstrate single measurement precisions at the 30-50 cm s\({}^{-1}\) level (see, e.g., Pepe et al., 2021; Brewer et al., 2020; Trifonov et al., 2021; Seifahrt et al., 2018). Further improvements to reach the 10 cm s\({}^{-1}\) level will require advances in sustained instrument stability, wavelength calibration, and data extraction and analysis techniques with a focus on methods for mitigating stellar variability. Exploring solutions to these challenges that prevent mass measurements for Earth analogs was the goal of the Extreme Precision Radial Velocity (EPRV) Working Group chartered by NASA and the NSF7. The EPRV Working Group was charged with devising a path towards developing methods and facilities that will be capable of accurately measuring the masses of temperate terrestrial exoplanets orbiting Sun-like stars8. The Working Group's final report (Crass et al., 2021) includes recommendations for advancements in stellar activity and telluric mitigation, instrument efficiency and accuracy, and research and analysis techniques. Footnote 7: [https://exoplanets.nasa.gov/exep/NNExplore/EPRV/](https://exoplanets.nasa.gov/exep/NNExplore/EPRV/). Footnote 8: The final NASA concept study reports for the _HabEx_, _LUVOIR_, and _Starshade Rendezvous_ reports for the Astro 2020 Decadal Survey are posted at: [https://science.nasa.gov/astrophysics/2020-decadal-survey-planning](https://science.nasa.gov/astrophysics/2020-decadal-survey-planning) In parallel to these advancements in RV instrumentation and analysis, there must also be a concerted effort to better understand the potential DI target stars. One way to do this is by studying archival RV data sets taken using previous, less precise (\(\sigma_{\mathrm{RV}}<5\,\mathrm{m\,s^{-1}}\)), generations of RV spectrographs. The EPRV Working Group curated a list of \(\sim\)100 nearby, Sun-like (F9-K7) stars that are promising candidates for a future DI mission, many of which have already been included in multiple RV surveys over the past three decades as the community's interests have often been tied to bright G and K dwarfs. In this study, we analyze archival radial velocity and stellar activity data from the _HARPS_, _HIRES_, _UCLES_, _APF_, and _PFS_ instruments for 49 southern hemisphere stars identified as promising future DI mission targets. 
We perform planet injection and recovery tests to assess the completeness of the existing RV data as a function of planet mass and orbital period, so as to identify regions of mass/period parameter space in which planet signals might still be hiding. Any mass/period gaps identified in this work can then be filled by directed future observations, contributing to the completeness of the target star list data. In preparing the RV data sets for the injection/recovery analysis we first identify and remove significant signals from the RV time series. This results in the detection of numerous, previously confirmed exoplanets; a number of new planet candidates; and rotation and magnetic activity cycles within the data. The structure of this paper is as follows. In SS2, we discuss the stars chosen for this project and the types and sources of the data analyzed. In SS3 we detail the sources and treatments of the data sets used in this work. In SS4, we present the different methods of analysis used to characterize the stars' existing RV sensitivity. In SS5-7 we explain the results of our analysis. Each star on the target list has a subsection including updates to parameters of any known planets, evidence of strong stellar activity cycles, and any new signals recovered. We address those targets which lacked any significant signals in SS7. SS8 contains a general discussion of our results, including highlights of the analysis carried out in this work and exploration of major gaps we have identified in the archival RV data. Finally, in SS9 we cover the conclusions drawn from this work, and identify future work necessary before any target list is finalized for a direct imaging mission concept. The full set of figures for each target, including radial velocity, S-index, completeness contour, and, if relevant, H\(\alpha\) activity and speckle imaging plots can be found in the online journal. ## 2 Stellar Target List Our list of target stars is drawn from the EPRV Working Group, and a full description of the selection process and criteria considered can be found in its final presenta tion9 and report (Crass et al., 2021). In brief, the Working Group cross-matched target lists provided by the _HabEx_, _LUVOIR-A_, _LUVOIR-B_, and _Starshade Rendezvous_ teams (Gaudi et al., 2020; The LUVOIR Team, 2019; Seager et al., 2018) to assemble a combined list of potential target stars. It then compiled information on the stars' effective temperatures, apparent magnitudes, rotational velocities, metallicities, and surface gravities, among other traits. From this catalog, they culled those stars with spectral types from F7-K9, projected rotational velocities \(v\)sin\(i\) \(<\) 5 km s\({}^{-1}\), and that appear on at least two of the mission concept target lists. Stars were not eliminated based on knowledge of their stellar activity levels as the characterization and mitigation of stellar variability is an active field and we may yet overcome the obstacles it presents. The resulting list includes 101 stars (Figure 1). Footnote 9: [https://exoplanets.nasa.gov/internal_resources/1556/](https://exoplanets.nasa.gov/internal_resources/1556/) For this work, we have chosen to focus primarily on the 53 stars from this list located in the southern hemisphere10 (Figure 2) due to the recent publication of archival HARPS RV data that is now available on the RVBank website (Trifonov et al., 2020). 
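Injecting a synthetic planet of a given mass and orbital period into an RV time series, as in the injection/recovery tests described above, requires its Doppler signature. The sketch below uses the standard Keplerian semi-amplitude relation, which is textbook material rather than an expression defined in this paper; the constants are rounded and the function interface is illustrative.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg
DAY = 86400.0        # s

def rv_semi_amplitude(m_planet_earth, period_days, m_star_solar, ecc=0.0, sin_i=1.0):
    """Stellar RV semi-amplitude K (m/s) induced by a planet on a Keplerian orbit."""
    mp = m_planet_earth * M_EARTH
    ms = m_star_solar * M_SUN
    p = period_days * DAY
    return ((2.0 * math.pi * G / p) ** (1.0 / 3.0)
            * mp * sin_i / (ms + mp) ** (2.0 / 3.0)
            / math.sqrt(1.0 - ecc ** 2))

# An Earth analog around a solar-mass star produces K of roughly 0.09 m/s,
# well below the ~30-50 cm/s single-measurement precision quoted above.
print(rv_semi_amplitude(1.0, 365.25, 1.0))
```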
Footnote 10: It should be noted that this list was assembled before the Astro2020 Decadal Survey was released, which recommended a 6-m-class space telescope. From subsequent analysis and literature survey (Mamajek & Stapelfeldt, in prep.), ten of the sample stars in Table 1 may be undesirable targets for a survey for potentially habitable exoplanets with a 6-m class space telescope: four systems have habitable zones prohibitively close to their stars (e.g., HD 85512).

## 3 Data

### Radial Velocities

In this work, we include data sets consisting of unbinned radial velocity measurements from five different instruments: the High Accuracy Radial Velocity Planet Searcher (HARPS, on the ESO 3.6m telescope, Mayor et al., 2003), the HIgh REsolution Spectrometer (HIRES, on the 10m Keck I telescope, Vogt et al., 1994), the Levy spectrometer (on the 2.4m Automated Planet Finder (APF) telescope, Vogt et al., 2014), the Planet Finder Spectrometer (PFS, on the 6.5m Magellan Clay telescope, Crane et al., 2006, 2008, 2010), and the University College London Echelle Spectrograph (UCLES, on the 3.9m Anglo-Australian Telescope, Diego et al., 1990). Table 2 lists the number of archival RV epochs (binned at a 12 hour cadence) acquired by each facility. All radial velocity and activity indicator measurements for each star will be provided in a machine readable table alongside this paper in their original, unbinned form (Table 3).

\begin{table}
\begin{tabular}{c c c c c c c c c c}
\hline \hline
HD & GJ & \(T_{\rm eff}\) & \(\log g\) & [Fe/H] & Ref. & Spec. Type & Ref. & \(\log L/L_{\odot}\) & Ref.(L) \\
\hline
207129 & 838 & 5937 & 4.49 & 0.00 & 4 & G0V Fe+0.4 & 2 & 0.082 & 3 \\
209100 & 845 & 4649 & 4.63 & \(-0.19\) & 9 & K4V(k) & 2 & \(-0.654\) & 3 \\
216803 & 879 & 4647 & 4.88 & 0.07 & 24 & K4+Vk & 2 & \(-0.707\) & 3 \\
\hline
\end{tabular}
Note. – References: (1) Takeda et al. (2005), (2) Gray et al. (2006), (3) Stassun et al. (2019), (4) Sousa et al. (2008), (5) Jofré et al. (2014), (6) Gray et al. (2003), (7) Ramirez et al. (2014), (8) Keenan & McNeil (1989), (9) Ramirez et al. (2013), (10) Santos et al. (2004), (11) Valenti & Fischer (2005), (12) Adibekyan et al. (2016), (13) Tsantaki et al. (2013), (14) Gonzalez et al. (2010), (15) Maldonado & Villaver (2016), (16) Brewer et al. (2016), (17) Montes et al. (2018), (18) Spina et al. (2016), (19) Mahdi et al. (2016), (20) Gray et al. (2001), (21) Sousa et al. (2018), (22) Luck (2017), (23) Schofield et al. (2019), (24) Santos et al. (2001), (25) Luminosity for this star was calculated using the Virtual Observatory SED Analyzer version 7.0 (VOSA; Bayo et al., 2008) assuming zero extinction, \(\log(g)=4.0\), and BT-Settl-AGSS2009 model spectra. This produced a best-fit \(\log(L/L_{\odot})\) value of \(+0.5407\pm 0.0065\) for HD 2151.
\end{table} Table 1: (continued) Stellar properties of the target stars.
The _HARPS_ instrument uses multiple observing fibers; one directed at the stellar target, and one directed instead at a Th-Ar calibration lamp. The calibration lamp serves as a wavelength reference for the stellar spectra. _HARPS_ has a resolving power of \(\sim\)115,000 and a spectral grasp of 3800-6900 A (Pepe et al., 2002; Cosentino et al., 2012). All _HARPS_ RV data used in this work were downloaded from the _HARPS_ RVBank archive\({}^{11}\) (Trifonov et al., 2020). These RVBank velocities were generated using the SERVAL pipeline (Zechmeister et al., 2020), which uses a template matching approach (Zechmeister et al., 2018). For each star, SERVAL creates a high S/N template spectrum by shifting and co-adding all individual spectra of that star. The template is then used to derive RVs from the same observed spectra by using a \(\chi^{2}\)-minimization approach. The final velocities were checked for any nightly systematic errors that can be corrected in order to increase the precision of the RV data set.

Footnote 11: [https://www2.mpia-hd.mpg.de/homes/trifonov/HARPS_RVBank.html](https://www2.mpia-hd.mpg.de/homes/trifonov/HARPS_RVBank.html)

_HIRES_, _APF_, _UCLES_, and _PFS_ are iodine-based instruments, meaning that they each include a cell of gaseous I\({}_{2}\) within the converging beam of their respective telescopes. Incoming stellar spectra are imprinted with a high-density forest of I\({}_{2}\) lines in the 5000-6200 A band pass. These lines act both as a calibrator for the wavelengths of the stellar spectra and as a representative of the point-spread function (PSF) of each instrument. After extraction of the iodine region of the spectrum, the stellar spectrum must be deconvolved from the I\({}_{2}\) absorption lines such that the wavelengths, instrument PSFs, and Doppler shifts may be extracted. This is accomplished by splitting each iodine region into 2 A chunks, and then analyzing them via the spectral synthesis technique outlined in Butler et al. (1996). A weighted mean of all the Doppler velocities of the individual chunks is taken, and serves as the final Doppler velocity for each individual observation. The standard deviation of all the 2 A chunks (\(\sim\)800 for _PFS_ and \(\sim\)700 for the _APF_ and _HIRES_) constitutes the total internal uncertainty for each velocity measurement. The timestamps for each iodine-based RV are converted from their pipeline-produced MJD values to BJD\({}_{\rm TDB}\) timestamps using the Pexo modeling package (Feng et al., 2019).

We note that three of the above spectrographs have undergone instrumental upgrades since their deployment. The _HIRES_ detector was replaced with a new mosaic CCD in August 2004, _HARPS_ moved to the use of an octagonal science fiber in 2015, and in 2018 the _PFS_ detector was replaced with a smaller pixel 10k\(\times\)10k detector and the slit used for I\({}_{2}\) observations was changed from 0.5" to 0.3".
In all three cases, we treat the data taken before and after the upgrade as coming from two separate instruments, identified in our RV data sets and figures as -Pre and -Post velocities. The instruments cover spectral ranges of 3700-8000 A for _HIRES_, 3700-9000 A for _APF_, 3900-6700 A for _PFS_, and 4800-8400 A for _UCLES_; however, the radial velocity measurements are made using only the 5000-6200 A wavelength region. The typical spectral resolutions for each instrument are: \(R\simeq\) 90,000 for the _APF_, 60,000 for _HIRES_, 45,000 for _UCLES_, and 80,000/130,000 for _PFS_ pre-/post-upgrade, respectively.

Figure 1: Stars identified by the EPRV Working Group as potential targets for future direct imaging missions such as HabEx and LUVOIR that would aim to detect and characterize Earth analog exoplanets. This work focuses primarily on the stars located in the southern hemisphere.

Figure 2: HR diagram containing the target stars in this study. The dashed blue lines are MIST V1.2 evolutionary tracks from Choi et al. (2016) covering masses 0.7-1.4 M\({}_{\odot}\) over the age range 100 Myr-14 Gyr. The tracks adopt a protosolar mix (initial \(Y=0.2703\), \(Z=0.0142857\), [\(\alpha\)/Fe] = 0, \(v/v_{crit}=0\)). The Sun is depicted using the \(\odot\) symbol.

The _HIRES_ data was obtained from the public Earth Bound Planet Search archive\({}^{12}\), which provides updates to the Butler et al. (2017) _HIRES_ data catalog. The final HIRES data point included in this analysis was taken on 26 December 2017. The _APF_, _PFS_, and _UCLES_ data were provided by the corresponding instrument teams.

Footnote 12: [https://ebps.carnegiescience.edu/data/hireskeck-data](https://ebps.carnegiescience.edu/data/hireskeck-data)

For each instrument's data set for a given star, we apply a robust sigma clipping where any points further than \(5\sigma\) from the mean are discarded as outliers. We visually inspect the points identified as outliers in each case, and find that in practice this analysis flags 1-3 data points per instrument per star, which is generally a small percentage of the overall data. Once each instrument's outliers are removed, we combine the cleaned data sets from each instrument into a single list. We include a column of data tracking which instrument was used to generate each measurement, so that later analysis can determine offsets between instruments.

There are three stars that, despite having a significant number of _HARPS_ observations, cover time baselines incompatible with our science case. HD 203608 and HD 165341 were both observed over a single week, and HD 147584 was observed over a single night. None of these stars have been targeted by the other instruments in our study, and so we remove them from further consideration.

### S-index Measurements

A major challenge when classifying periodic signals seen in Doppler velocity data is determining whether those signals are due to planetary companions or the star itself. Stellar variability is produced by a variety of surface phenomena that occur and evolve across a range of time scales, but they can be grouped into four broad categories. Acoustic waves within the star cause patches of the surface to rise and fall periodically, creating RV oscillations at the few m s\({}^{-1}\) level over timescales of minutes (Bouchy & Carrier, 2001; Nordlund et al., 2009). Granulation is due to motion within stellar convective cells as hot plasma wells up to the surface before radiatively cooling and sinking back down via intergranular lanes.
This process takes anywhere from 20 minutes to 1 day depending on the granule size and again results in RV shifts at the few m s\({}^{-1}\) level (Cegla et al., 2019; Meunier et al., 2015; Meunier & Lagrange, 2019). Active regions are areas of increased magnetic flux on the stellar surface such as star spots, plages, and faculae that transit across the visible hemisphere of the star as it rotates. They generally persist for multiple rotation periods and induce cyclic RV variations at the 1-10 m s\({}^{-1}\) level each time they pass over the star's face (Saar & Donahue, 1997; Lockwood et al., 2007; Haywood et al., 2016). Finally, stellar magnetic cycles are driven by stellar dynamos, which are maintained through differential rotation at the tachocline - the interface between a star's radiative and convective layers. These magnetic cycles vary slowly, generally exhibiting periods of 5-15 years for Sun-like stars, and can induce RV variations up to 20 m s\({}^{-1}\) over that time span (Meunier et al., 2010; Makarov, 2010; Dumusque et al., 2011). In our case of working primarily with radial velocity data sets taken at relatively low cadence (e.g., once per week or month) the latter two varieties, active regions and magnetic cycles, produce the highest rate of false positive signals. A well-established method for tracing a star's variability level is the use of stellar activity indicators, which compare the amount of flux inside activity sensitive lines to the flux in nearby continuum regions. The most common stellar activity indicators for Sun-like stars are derived from measurement of the emission reversal at the cores of the Fraunhofer H and K lines of Ca II located at at 3968 A and 3934 A, respectively, which trace chromospheric activity. As the Ca II line core emission is generated in regions of concentrated magnetic fields, these lines serve as a proxy for the number of spots on the star, and often show variations with the star's rotational period. Because stars in the active phases of their magnetic cycles tend to produce more sunspots (Schwabe, 1843), activity indicators based on the Ca II lines can also act as a tracer of long-term magnetic cycles. For Sun-like stars, the best known Ca II activity indicator is the S-index which compares the flux in the cores of the H & K lines to two nearby continuum regions denoted as the R and V filters (Wilson, 1968; Duncan et al., 1991). The S-index generally takes the form: \[S_{\rm index}=\frac{H+K}{R+V} \tag{1}\] and is often calibrated to the original Mt. Wilson S-index survey which ran from 1966-1983 (Duncan et al., 1991) to allow for comparisons between facilities. Over the Mt. Wilson survey's two-decade span, 111 F2-M2 stars were monitored continuously from Mt. Wilson and 60% were seen to exhibit magnetic cycles on a 5-15 year time scale (Baliunas et al., 1995). These time series make clear that the range of variability exhibited by the continuously monitored Mt. Wilson stars is much more diverse than what we observe in the Sun. 
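For concreteness, Eq. (1), together with the photon-noise error propagation applied to the iodine-based instruments (Eq. 2 below), amounts to only a few lines of code; the inputs here are assumed to be fluxes already summed over the H, K, R, and V bandpasses.

```python
import numpy as np

def s_index(H, K, R, V):
    """Ca II S-index (Eq. 1): H and K line-core fluxes over the R and V continuum bands."""
    return (H + K) / (R + V)

def s_index_error(H, K, R, V, sH, sK, sR, sV):
    """Photon-noise propagation of Eq. (1), written out explicitly in Eq. (2) below."""
    s = s_index(H, K, R, V)
    return s * np.sqrt((sH**2 + sK**2) / (H + K)**2 + (sR**2 + sV**2) / (R + V)**2)
```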
To search for evidence of these long term magnetic cycles, in addition to shorter term rotational periods, in \begin{table} \begin{tabular}{l c c c c c c||l c c c c c} \hline \hline \multicolumn{1}{c}{H} & \multicolumn{1}{c}{\({}^{a}\)HARPS} & \multicolumn{1}{c}{HIRES} & \multicolumn{1}{c}{UCLES} & \multicolumn{1}{c}{\({}^{a}\)PFS} & \multicolumn{1}{c}{APF} & \multicolumn{1}{c}{HD} & \multicolumn{1}{c}{\({}^{a}\)HARPS} & \multicolumn{1}{c}{HIRES} & \multicolumn{1}{c}{UCLES} & \multicolumn{1}{c}{\({}^{a}\)PFS} & \multicolumn{1}{c}{APF} \\ \hline HD693 & 16 [16/0] & 0 [0/0] & 0 & 0 [0/0] & 0 & HD85512 & 580 [517/63] & 7 [0/7] & 31 & 44 [38/6] & 0 \\ HD1581 & 329 [262/67] & 0 [0/0] & 119 & 0 [0/0] & 0 & HD100623 & 4 [4/0] & 64 [16/48] & 104 & 40 [34/6] & 0 \\ HD2151 & 34 [34/0] & 0 [0/0] & 163 & 0 [0/0] & 0 & HD102365 & 82 [78/4] & 13 [0/13] & 187 & 33 [22/11] & 0 \\ HD4628 & 42 [37/5] & 117 [0/117] & 0 & 0 [0/0] & 71 & HD102870 & 8 [8/0] & 0 [0/0] & 0 & 0 [0/0] & 59 \\ HD7570 & 19 [19/0] & 0 [0/0] & 60 & 0 [0/0] & 0 & HD104304 & 26 [0/26] & 42 [0/42] & 0 & 0 [0/0] & 14 \\ HD13445 & 11 [0/11] & 0 [0/0] & 74 & 0 [0/0] & 0 & HD114613 & 20 [13/7] & 45 [0/45] & 244 & 39 [27/12] & 0 \\ HD14412 & 26 [0/26] & 139 [24/115] & 28 & 12 [11/1] & 0 & HD115617 & 229 [224/5] & 157 [0/157] & 169 & 31 [28/3] & 0 \\ HD16160 & 45 [45/0] & 76 [0/76] & 0 & 0 [0/0] & 83 & HD125072 & 74 [55/19] & 0 [0/0] & 86 & 0 [0/0] & 0 \\ HD20766 & 26 [26/0] & 0 [0/0] & 58 & 0 [0/0] & 0 & HD131977 & 22 [22/0] & 0 [0/0] & 0 & 0 [0/0] & 0 \\ HD20794 & 260 [187/73] & 0 [0/0] & 147 & 21 [18/3] & 0 & HD136352 & 249 [242/7] & 28 [0/28] & 169 & 24 [21/3] & 0 \\ HD20807 & 99 [76/23] & 0 [0/0] & 99 & 16 [13/3] & 0 & HD140901 & 27 [27/0] & 0 [0/0] & 117 & 27 [23/4] & 0 \\ HD22049 & 28 [24/4] & 89 [0/89] & 0 & 0 [0/0] & 0 & HD146233 & 177 [119/58] & 112 [28/84] & 81 & 15 [15/0] & 0 \\ HD22484 & 26 [0/26] & 8 [0/8] & 0 & 0 [0/0] & 71 & HD147584\({}^{b}\) & 1 [1/0] & 0 & 0 & 0 [0/0] & 0 \\ HD23249 & 116 [76/40] & 55 [20/35] & 95 & 0 [0/0] & 29 & HD149661 & 12 [12/0] & 43 [31/12] & 14 & 0 [0/0] & 0 \\ HD23356 & 14 [14/0] & 71 [12/59] & 0 & 0 [0/0] & 0 & HD156026 & 0 [0/0] & 27 [18/9] & 11 & 54 [35/19] & 59 \\ HD26965 & 103 [82/21] & 163 [7/156] & 112 & 24 [20/4] & 13 & HD160346 & 34 [34/0] & 0 [0/0] & 0 & 0 [0/0] & 0 \\ HD30495 & 44 [35/9] & 6 [0/6] & 0 & 0 [0/0] & 0 & HD160691 & 163 [161/2] & 0 [0/0] & 178 & 14 [12/2] & 0 \\ HD32147 & 41 [37/4] & 157 [0/157] & 0 & 27 [22/5] & 65 & HD15341\({}^{b}\) & 7 [7/0] & 0 & 0 & 0 [0/0] & 0 \\ HD38858 & 103 [91/56] & 0 & 0 [0/0] & 0 & HD188512 & 17 [17/0] & 57 [7/0] & 0 & 0 & 0 [0/0] & 0 \\ HD39091 & 49 [42/7] & 0 [0/0] & 77 & 37 [0/37] & 0 & HD190248 & 391 [279/112] & 0 [0/0] & 236 & 0 [0/0] & 0 \\ HD43834 & 26 [0/26] & 0 [0/0] & 140 & 24 [21/3] & 0 & HD192310 & 432 [348/84] & 137 [0/137] & 171 & 19 [19/0] & 0 \\ HD50281 & 12 [12/0] & 52 [29/23] & 0 & 0 [0/0] & 33 & HD196761 & 37 [27/10] & 63 [30/33] & 49 & 29 [21/8] & 0 \\ HD69830 & 273 [265/8] & 154 [0/154] & 24 & 29 [29/20] & 87 & HD203608\({}^{b}\) & 7 [7/0] & 0 & 0 & 0 [0/0] & 0 \\ HD72673 & 158 [115/43] & 77 [21/56] & 63 & 15 [15/0] & 0 & HD207129 & 111 [98/13] & 0 [0/0] & 123 & 22 [15/7] & 0 \\ HD75732 & 2 [2/0] & 220 [23/197] & 0 & 0 [0/0] & 25 & HD209100 & 137 [100/37] & 0 [0/0] & 0 & 0 [0/0] & 0 \\ HD76151 & 7 [7/0] & 0 & 0 [0/0] & 0 & 0 [0/0] & 0 & HD216803 & 11 [11/0] & 16 [6/10] & 15 & 0 [0/0] & 0 \\ \hline \hline \end{tabular} * \({}^{a}\)The total number of _HARPS_ _HIRES_ and _PFS_ RV epochs are followed by a break down of how many 
data points were taken before and after the instruments’ upgrades (see Section 3) as the pre- and post- upgrade time series are treated as coming from two different instruments. \({}^{b}\)These stars have thousands of individual observations all taken over observational baselines covering less than 1 week of time, making them incompatible with exoplanet search and injection/recovery analyses. We therefore remove them from further consideration in this paper. \end{table} Table 2: Number of Archival RV Epochs Analyzed for Each Target Star \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline \multicolumn{1}{c}{Starname} & \multicolumn{1}{c}{BJD\({}_{TBD}\)} & \multicolumn{1}{c}{RV [ms\({}^{-1}\)]} & \multicolumn{1}{c}{RV\({}_{err}\) [m s\({}^{-1}\)]} & \multicolumn{1}{c}{Instrument} & \multicolumn{1}{c}{S-index\({}_{err}\)} & \multicolumn{1}{c}{H\(\alpha\)} & \multicolumn{1}{c}{H\(\alpha\)} & \multicolumn{1}{c}{File Name} \\ \hline HD115617 & 2453026.86393 & -3.12 & 0.95 & HARPS-Pre & 0.1404 & 0.0026 & -1.0 & -1.0 & 2004-01-22708:41:20 \\ HD115617 our RV data sets we first derive an S-index value from each RV spectrum. For the _HIRES_ _APF_ and _PFS_ data sets, these S-index values are generated automatically as part of the data reduction pipelines and further details can be found in Butler et al. (2017) and Burt et al. (2021). We determine errors for each instrument's S-index values taking into account photon noise. The resulting uncertainty, by error propagation, is: \[\sigma_{\rm S}={\rm S}\cdot\sqrt{\frac{\sigma_{\rm H}^{2}+\sigma_{\rm K}^{2}} {({\rm H}+{\rm K})^{2}}+\frac{\sigma_{\rm R}^{2}+\sigma_{\rm V}^{2}}{({\rm R} +{\rm V})^{2}}} \tag{2}\] In each case we have used a set of overlapping target stars to calibrate the instrument's S-index measurements to the Mt. Wilson survey so that they can be considered together without concerns for large scaling offsets. In some cases, however, the Mt. Wilson calibration is based on a small number of stars and may introduce non-astrophysical offsets between the instruments. To account for this, our analysis allows us to fit for offsets between the S-index data sets, as described more thoroughly in Section 4.1. The _HARPS_ RVBank data does not yet provide S-index measurements, and so we instead make use of the methodology described in Perdelwitz et al. (2021). Specifically, we use a set of narrow bands close to the Ca II line cores, along with PHOENIX synthetic spectra (Husser et al., 2013), to derive \(R^{\prime}_{\rm HK}\). We then convert these into S-Indices using the prescription given by Gomes da Silva et al. (2021) and calibrate the results to the Mount Wilson scale by cross matching with the mean S-indices derived by Duncan et al. (1991). The S-index errors are calculated via a Monte Carlo approach where, in each trial, the flux in each bin of the measured spectrum is randomly displaced within a Gaussian distribution with width of the flux error. The S-index is then evaluated for each trial, and the error is taken to be the standard deviation of the resulting set. This approach yields S-indices for \(\sim 93.5\%\) of the _HARPS_ spectra present in the RVBank archives. Attempts to use these Monte Carlo S-index errors for the HARPS data alongside the photon-noise limited errors derived from the HIRES, PFS, and APF data results in an uneven weighting in favor of the iodine-based instruments. 
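A minimal sketch of the Monte Carlo uncertainty estimate described above is given below; the mask-based bandpass interface and the number of trials are illustrative assumptions rather than the actual pipeline's implementation.

```python
import numpy as np

def s_index_mc_error(flux, flux_err, bands, n_trials=1000, seed=0):
    """Monte Carlo S-index error: perturb each spectral bin within its Gaussian
    flux uncertainty, recompute the S-index, and take the standard deviation of
    the trial values. `bands` is a dict of boolean masks selecting the H, K, R,
    and V bandpasses on the wavelength grid."""
    rng = np.random.default_rng(seed)
    trials = np.empty(n_trials)
    for i in range(n_trials):
        f = flux + rng.normal(0.0, flux_err)
        trials[i] = (f[bands["H"]].sum() + f[bands["K"]].sum()) / (
            f[bands["R"]].sum() + f[bands["V"]].sum()
        )
    return trials.std()
```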
While we report all of these errors in the data tables that accompany this publication for reference, in practice we adopt a third, alternative method for determining S-index errors so that all four instruments' data sets are treated in the same way. We begin by selecting five stars (HD 69830, HD 196761, HD 114613, HD 4628, and HD 39091) all of which have at least two dozen observations from at least two of the instruments. We assign all of the S-index data for each star the same error bar of \(\sigma_{S}=0.01\) and carry out an initial uninformed fit with RVSearch. We then combine all of the residual values from each instrument across all stars, measure the standard deviation, and assign that as the global error for that instrument. Those values are: HARPS-Pre : 0.010; HARPS-Post : 0.006; HIRES-Pre : 0.010; HIRES-Post : 0.014; PFS-Pre : 0.005; PFS-Post : 0.009, APF : 0.007. As we expect the S-index measurement to be systematics dominated (e.g. from the deblazing and continuum normalization) rather than photon-noise dominated, this empirical approach to measuring the uncertainty provides a more homogeneous error estimate. When summarizing the properties of our target stars below, we reference \(\log R^{\prime}_{HK}\) values for stars where it has been reported in the literature. This metric is also derived using the Ca II H&K absorption lines, but \(\log R^{\prime}_{HK}\) removes the basal (rotation independent) photospheric flux (Noyes et al., 1984; Schrijver, 1987; Mittag et al., 2013). This photospheric flux, which can contaminate the S-index filters, introduces a dependency on stellar effective temperature. By removing it, the \(\log R^{\prime}_{HK}\) metric produces a measure of activity that can be compared across spectral types. ### H\(\alpha\) Equivalent Width Measurements The _UCLES_ spectrograph cannot simultaneously cover the Iodine region necessary for precise wavelength calibration of the stellar spectra and the Ca II H & K region necessary for extracting the S-index measurements. To provide a stellar activity check on the RVs derived from the AAT dataset, we instead make use of the H\(\alpha\) absorption line, using measurements of the line's equivalent width (EW) to detect variations related to the long-term stellar magnetic activity cycle. This EW\({}_{H\alpha}\) analysis follows the methodology of Wittenmyer et al. (2017) which is similar to that presented by Robertson et al. (2014), except for the addition of an automated algorithm for continuum normalization and telluric contamination identification near the H\(\alpha\) line. A visual comparison of the resulting EW\({}_{H\alpha}\) time series reveals the presence of very similar structured variations among each of the resulting data sets (Figure 3). Given the shared trends between the stars, the source is likely either instrumental or environmental in nature. One potential cause is variations in the water content of the atmosphere. As our EW\({}_{H\alpha}\) calculation algorithm does not actively correct for the telluric lines, it is reasonable to assume the EW\({}_{H\alpha}\) measurements are subject to effects from atmospheric water content. However, when compared to the historic precipitable water vapor mea surements from Siding Spring (e.g. those in Haslebacher et al. (2022)), no strong correlation is evident. 
While the exact cause of the variations is not clear, we note that the stacked periodograms of a dozen stars (each with dozens of _UCLES_ H\(\alpha\) data points but no significant detections) all linearly interpolated onto the same period grid shows prominent peaks at \(\sim\)1 year, \(\sim\)3000 day (approximately half the _UCLES_ observation time span), and \(\sim\)6000 day (approximately equal to the _UCLES_ observation time span) periods. We therefore advise caution when interpreting the results of the H\(\alpha\) RVSearchanalysis, especially in the case of long period signals. Shorter period detections, those in the 10 - 100 day range where we generally look for evidence of stellar rotation for these F - K dwarfs stars, seem to be unaffected by these long period variations. ### Speckle Imaging For a handful of the stars studied in this work, high resolution speckle imaging observations to search for and/or rule out nearby stellar companions were obtained. The speckle imaging observations were carried out using the 'Alopeke and Zorro instruments at Gemini-North and Gemini-South, respectively (Scott et al., 2021). These instruments observe simultaneously in two bands, (832\(\pm\)40 nm and 562\(\pm\)54 nm) obtaining diffraction limited images with inner working angles of 0\({}^{\prime\prime}\).026 and 0\({}^{\prime\prime}\).017, respectively. All targets were observed using a sequence of short 60 ms exposures. These images were combined using Fourier analysis techniques, examined for stellar companions, and used to produce reconstructed speckle images (see Howell et al. (2011) and Horch et al. (2021)). We summarize the observations and the resulting sensitivity to companions in Table 8. and Figure Set 8 (available in the online journal). ## 4 Analysis ### Overview of RVSearch RVSearch(Rosenthal et al., 2021) is a recently released Python package based on RadVel(Fulton et al., 2018), built specifically to perform uninformed searches for Keplerian signals in RV data and to perform injection and recovery analysis of RV time series data. RVSearch's uninformed search function is used to identify candidate signals in our compiled radial velocity, S-index, and H\(\alpha\) data sets. For each data set we bin the input velocities / activity measurements to nightly data points to decrease the computational requirements and set a minimum search period of 2 days and a maximum search period of three times the total observational baseline days. In addition to any Keplerian signals, RVSearch also fits for a constant offset between each instrument's data set and for the 'jitter' of each instrument. This jitter term is used to address the unmodeled instrumental effects or stellar variability that induce additional scatter in the RV time series and encompasses uncorrelated signals that occur on timescales shorter than the observational baseline. RVSearch implements an iterative fitting approach when searching for periodic signals in a time series. It first tests for the presence of a linear or quadratic slope in the data, before beginning the Keplerian fitting process by generating a single planet with undefined orbital parameters to become the initial likelihood model. With the initial model in hand, RVSearch defines a set of periods to test13 and computes a \(\Delta\)BIC goodness-of-fit pe Figure 3: Top: EW\({}_{H\alpha}\) time series for three stars observed with the _UCLES_ spectrograph, offset vertically from one another for ease of viewing. 
Similar long term behavior is present in each of these three data sets, suggesting that the cause is not stellar but rather instrumental or environmental. Bottom: Stacked periodograms from 6 stars with at least 100 _UCLES_ H\(\alpha\) measurements spread over the majority of the instrument’s \(\sim\)6000 day observational baseline, where no significant signals were detected by RVSearch. The long term structure seen in the top panel emerges as two humps in this composite periodogram, one at _UCLES_’ observing baseline (6000 days) and another at half that (3000 days). The impact of the monthly lunar cycle is also evident via a narrow peak at 29.5 days. We therefore treat H\(\alpha\) periods detected around any of these three periods with some degree of caution. riodogram by fitting a sinusoid to the data at each fixed period. The Bayesian Information Criterion, or BIC, is used for model selection when considering a finite set of models and is calculated as: \[\text{BIC}=-2\,\ln\,\mathcal{L}_{\text{max}}\,+\,n\,\ln\,N \tag{3}\] where \(\mathcal{L}_{\text{max}}\) is the maximum likelihood, \(n\) is the number of model free parameters, and \(N\) is the number of data points (see, e.g., Kass & Raftery, 1995, for details). Models with lower BIC values are generally preferred. The \(\Delta\)BIC value at each period is the difference between the best-fit, \(n\)+1-planet model with the given fixed period, and the \(n\)-planet fit to the data. Once the \(\Delta\)BIC periodogram has been calculated, a linear fit is applied the data, and a histogram of periodogram power values is plotted on a log scale. A detection threshold is then constructed such that only 0.1% of periodogram peaks are expected to be high enough powered to exceed it. This threshold is the empirical False Alarm Probability (FAP) of 0.1% (Rosenthal et al., 2021). Any signal above a 0.1% empirical False Alarm Probability (FAP) is considered significant. For our S-index search, we enforce an additional requirement that the \(\Delta\)BIC value be at least 10 for a signal to be added to the system's model, even if that corresponds to a FAP value \(<\)1%. This prevents the inclusion of a nonphysical number of short period signals in sparse data sets, while still being a generous inclusion criteria as the field standard for considering a signal worthy of consideration for publication is more often \(\Delta\)BIC \(>\) 25. If a significant detection is made, RVSearch refines the fit of the signal's Keplerian orbit by performing a maximum a posteriori (MAP) fit with all model parameters free, including eccentricity, and records the BIC of that best-fit model. The search algorithm then adds an additional planet to the model and repeats the fitting and evaluation process. In the n+1 planet fit, the signals are treated simultaneously, so that the change in the BIC can again be evaluated to compare the \(n\)-planet fit to the \(n\)+1-planet fit. We note here that when analyzing our S-index and H\(\alpha\) data sets, the 'planet' detections instead refer to activity-driven periodicities in the data sets. If the new planet is supported by the data, the search continues. The uninformed search continues to iterate on the time series until no additional significant signals are present in the periodogram. 
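The model comparison machinery described above reduces to a few lines. The sketch below assumes a Gaussian likelihood with the per-instrument 'jitter' added in quadrature to the formal errors (a common RV convention, not a verbatim transcription of RVSearch internals) and evaluates Eq. (3) together with the \(\Delta\)BIC between an \(n\)-planet and an \((n+1)\)-planet fit; the sign convention is chosen here so that positive values favor adding the new signal.

```python
import numpy as np

def log_likelihood(residuals, rv_err, jitter):
    """Gaussian log-likelihood with an instrument jitter term added in quadrature."""
    var = rv_err**2 + jitter**2
    return -0.5 * np.sum(residuals**2 / var + np.log(2.0 * np.pi * var))

def bic(lnl_max, n_free_params, n_data):
    """Bayesian Information Criterion (Eq. 3); lower values indicate preferred models."""
    return -2.0 * lnl_max + n_free_params * np.log(n_data)

def delta_bic(lnl_n, k_n, lnl_np1, k_np1, n_data):
    """Delta-BIC between the n-planet and (n+1)-planet fits at a fixed trial period."""
    return bic(lnl_n, k_n, n_data) - bic(lnl_np1, k_np1, n_data)
```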
Once the search terminates, RVSearch returns the maximum-likelihood estimates of the orbital model parameters for the dataset, and the model posteriors are sampled via the affine-invariant sampler implemented in RadVel using the emcee package (Foreman-Mackey et al., 2013). The resulting parameter estimates and uncertainties, reported as the median and \(\pm 1\sigma\) intervals, are visible on the summary figures produced by RVSearch and in our summary tables.

One complication encountered in the fitting process, across both the RV and S-index applications, is the treatment of signals with periods on the order of, or greater than, the total observational baseline. While the \(\Delta\)BIC periodogram approach used in the first phase of RVSearch's process can only fully resolve periods shorter than the observational baseline, the posterior sampling is not subject to similar constraints. Thus in cases where the periodogram has a prominent peak close to or beyond the observational baseline, the MCMC will sometimes suggest that the true period is 2x the periodogram peak, or in some cases many times larger. In these instances the traditional MCMC method fails to return a well-sampled model posterior and the resulting period uncertainty is as large as, if not many times larger than, the period itself. We note these types of detections as long period signals ('LPS') in our summary tables and report just the initial \(\Delta\)BIC periodogram peak instead of the final MCMC fit and its corresponding uncertainties, as they are non-physical. For these signals, we note that additional data are required to fully reveal and constrain the underlying signal.

### Identification of Candidate Signals in the Radial Velocity Data

An example RVSearch fit to the radial velocity data for HD 115617 is shown in Figure 4. The HD 115617 planetary system was first published in Vogt et al. (2010) using data from the _HIRES_ and _UCLES_ spectrographs. Three planets were discovered with periods and RV semi-amplitudes of \(4.215\pm 0.0006\,\text{d}\) and \(2.12\pm 0.23\,\text{m}\,\text{s}^{-1}\), \(38.021\pm 0.034\,\text{d}\) and \(3.62\pm 0.23\,\text{m}\,\text{s}^{-1}\), and \(123.01\pm 0.55\,\text{d}\) and \(3.25\pm 0.39\,\text{m}\,\text{s}^{-1}\) for planets b, c, and d, respectively. Revisiting the system with the available archival data, we supplement the published data with an additional 275 _HIRES_ points, 159 _UCLES_ points, 1248 _HARPS_ points, and 11 _PFS_ points taken between 2004 and 2020.

**Fig. Set 4. Radial Velocity Analysis Summary Plots**

**Fig. Set 5. S-Index Activity Summary Plots**

Incorporating this additional RV data produces a fit consistent with the Vogt et al. (2010) results. All three previously published planets are again detected at statistically significant levels and at very similar period and semi-amplitude values. The uncertainties on those values, however, are notably improved in the updated fit; the RV semi-amplitude uncertainty decreases by a factor of two, thereby doubling the detection significance, and the period uncertainty decreases by factors of two to four across the three planets.

Figure 4: RVSearch results for HD 115617. Panel (a) shows the initial radial velocity time series with the best-fit model plotted in blue and panel (b) shows the RV residuals. Panels (c), (e), (g) show phased RV curves for the three known planets in the system, and report the best-fit parameters for each orbit. Panels (d), (f), and (h) show the periodograms associated with each planet detection. The yellow horizontal dotted line marks the minimum \(\Delta\)BIC for a 1% FAP, while the vertical dotted lines show monthly and yearly aliases. Panels (i) and (j) show the periodogram and best fit curve to a fourth, much longer and highly eccentric signal that is likely driven by stellar variability. Panel (k) shows the RV significance of each signal relative to the number of observations considered and is calculated using the best-fit orbits shown in the left side panels above it, and panel (l) shows the residual periodogram, indicating that no further planets are found in the data set. The complete set of Radial Velocity summary plots (46 figures) can be found in the online journal.

Figure 5: RVSearch results for the relative S-index measurements of HD 115617, following the same plot image structure as in Figure 4. Panel (a) shows the S-index time series with the best-fit model plotted behind them, while panel (b) shows the S-index residuals. Panel (c) would present the phase folded curves for any signals identified by RVSearch, but as seen in panel (d) no signals in the periodogram rise above the \(\Delta\)BIC \(>10\) requirement imposed on the S-index search (yellow horizontal dotted line). The red and green vertical dotted lines show the one month and one year aliases of the tallest peak in the periodogram. The complete set of S-Index summary plots (46 figures) can be found in the online journal.

RVSearch also identifies a fourth, much longer period signal that rises above the detection threshold, with \(P=20565\pm 21000\) days (Figure 4, panel i). The uncertainty on the best-fit orbital period is of order the period itself and it overlaps broadly with a long period signal in the star's activity data (see Section 4.3). Additionally, the strength of this fourth RV signal varies quite noticeably as the number of data points increases. This suggests that only specific clumps of data are providing additional power in the periodogram, as compared to the roughly monotonic increase that is expected for a Keplerian signal (similar to the Mortier & Collier Cameron (2017) stacked periodogram technique). Finally, we note that the period of the peak which is actually being fit by this signal is of approximately the same length as the observation baseline for this target. As discussed in Section 4.1, this results in unphysical MCMC fitting to the signal. These concerns, combined with the fact that the best-fit model's high eccentricity would produce a semi-minor axis of \(\sim\)0.5 AU that would disturb the three shorter period planets that have been robustly vetted, lead us to conclude that the final signal detected by RVSearch is not planetary in nature. This signal is classified as LPS in Table 4. For all previously discovered planetary systems, HD 115617 included, we report our best-fit results as updates to the published orbital parameters in Section 5.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline \multicolumn{1}{c}{ ID} & Period [days] & K [m s\({}^{-1}\)] & Ecc.
& \multicolumn{1}{c}{Msini [M\({}_{\oplus}\)]} & FAP & Interp \\ \hline HD1581 I & 635.0\(\pm\)4.4 & 0.89\(\pm\)0.14 & 0.55\(\pm\)0.13 & 10.08\({}^{+1.22}_{-1.17}\) & 7.24e-09 & SRC \\ HD1581 II & 15.653\(\pm\)0.005 & 0.662\(\pm\)0.096 & 0.106\(\pm\)0.097 & 2.56\({}^{+0.37}_{-0.38}\) & 1.85e-05 & ACT-R \\ HD1581 III & 29.4661\(\pm\)0.0041 & 1.6\(\pm\)1.1 & 0.89\(\pm\)0.12 & 3.53\({}^{+1.15}_{-0.85}\) & 8.26e-04 & ACT \\ HD2151 I & 5365\(\pm\)1400 & 3.21\(\pm\)0.58 & 0.54\(\pm\)0.15 & 81.41\({}^{+12.95}_{-13.22}\) & 8.90e-07 & ACT \\ HD13445 I & 88080\(\pm\)46000 & 3117\(\pm\)750 & 0.68\(\pm\)0.12 & 201858.32\({}^{+79293.94}_{-71306.73}\) & 2.28e-16 & Binary \\ HD13445 b & 15.764862\(\pm\)4.3e-05 & 377.58\(\pm\)0.77 & 0.0485\(\pm\)0.0018 & 1271.19\({}^{+25.51}_{-25.72}\) & 3.56e-83 & KP \\ HD16160 I & 22999\(\pm\)1200 & 702.5\(\pm\)2.9 & 0.6075\(\pm\)0.0092 & 20304.22\({}^{+434.82}_{-436.28}\) & 1.23e-26 & Binary \\ HD20766 I & 5643.5 & – & – & – & – & LPS \\ HD20794 b & 18.305\(\pm\)0.0052 & 0.807\(\pm\)0.089 & 0.17\(\pm\)0.11 & 2.83\(\pm\)0.31 & 2.20e-11 & KP \\ HD20794 d & 89.766\(\pm\)0.085 & 0.86\(\pm\)0.12 & 0.27\(\pm\)0.11 & 5.02\({}^{+0.66}_{-0.64}\) & 7.38e-11 & KP \\ HD20807 I & 3180\(\pm\)130 & 2.9\(\pm\)0.4 & 0.23\(\pm\)0.11 & 62.48\({}^{+8.81}_{-8.74}\) & 2.73e-07 & SRC \\ HD22049 b & 2832\(\pm\)120 & 11.1\(\pm\)1.2 & 0.09\(\pm\)0.08 & 211.16\({}^{+23.57}_{-24.34}\) & 8.55e-11 & KP \\ HD23249 I & 596.6\(\pm\)2.6 & 3.0\(\pm\)1.1 & 0.65\(\pm\)0.14 & 33.33\({}^{+7.89}_{-5.6}\) & 8.18e-08 & SRC \\ HD26965 I & 42.303\(\pm\)0.025 & 1.4\(\pm\)0.22 & 0.37\(\pm\)0.17 & 5.94\(\pm\)0.79 & 1.48e-08 & ACT* \\ HD26965 II & 37.33\(\pm\)0.02 & 1.17\(\pm\)0.19 & 0.14\(\pm\)0.12 & 5.14\({}^{+0.84}_{-0.86}\) & 7.45e-05 & Alias \\ HD26965 III & 367.9\(\pm\)3.1 & 1.63\(\pm\)0.88 & 0.46\(\pm\)0.27 & 13.9\({}^{+5.13}_{-2.95}\) & 1.37e-05 & FP \\ HD32147 I & 2866\(\pm\)140 & 1.8\(\pm\)0.21 & 0.34\(\pm\)0.13 & 32.02\({}^{+3.54}_{-3.49}\) & 3.94e-12 & SRC \\ HD38858 I & 2893\(\pm\)150 & 2.8\(\pm\)0.3 & 0.19\(\pm\)0.12 & 58.15\({}^{+6.19}_{-6.01}\) & 1.41e-13 & ACT-M \\ HD39091 b & 2089.05\(\pm\)0.46 & 196.5\(\pm\)0.6 & 0.6428\(\pm\)0.0017 & 3225.56\({}^{+58.05}_{-59.18}\) & 1.10e-20 & KP \\ HD39091 d & 125.58\(\pm\)0.27 & 2.16\(\pm\)0.42 & 0.16\(\pm\)0.15 & 17.56\({}^{+3.49}_{-3.31}\) & 4.26e-04 & KP \\ HD69830 b & 8.66897\(\pm\)0.00028 & 3.4\(\pm\)0.1 & 0.128\(\pm\)0.028 & 10.1\({}^{+0.38}_{-0.37}\) & 2.15e-62 & KP \\ HD69830 c & 31.6158\(\pm\)0.0051 & 2.6\(\pm\)0.1 & 0.03\(\pm\)0.027 & 12.09\({}^{+0.55}_{-0.54}\) & 1.47e-84 & KP \\ HD69830 d & 201.4\(\pm\)0.4 & 1.5\(\pm\)0.1 & 0.08\(\pm\)0.071 & 12.26\({}^{+0.80}_{-0.88}\) & 1.89e-36 & KP \\ HD75732 b & 14.65157\(\pm\)0.00015 & 70.39\(\pm\)0.37 & 0.0069\(\pm\)0.0047 & 254.81\({}^{+4.79}_{-4.81}\) & 4.13e-97 & KP \\ HD75732 d & 14951\(\pm\)5100 & 54\(\pm\)5 & 0.515\(\pm\)0.086 & 1686.0\({}^{+229.11}_{-244.51}\) & 9.03e-20 & KP \\ \hline \end{tabular} \end{table} Table 4: Keplerian RV Signals Identified by RVSearch (Updated for Resubmission) ### Identification of Candidate Activity Signals in the S-index Data \begin{table} \begin{tabular}{c c c c c c c} \hline \hline ID & Period [days] & K [m s\({}^{-1}\)] & Ecc. 
& Msini [M\({}_{\oplus}\)] & FAP & Interp \\ \hline HD75732 c & 44.39\(\pm\)0.01 & 9.95\(\pm\)0.37 & 0.22\(\pm\)0.041 & 50.78\({}^{+2.05}_{-2.0}\) & 2.37e-34 & KP \\ HD75732 e & 0.736546\(\pm\)5e-06 & 6.26\(\pm\)0.34 & 0.039\(\pm\)0.035 & 8.35\({}^{+0.48}_{-0.47}\) & 6.05e-28 & KP \\ HD75732 f & 260.88\(\pm\)0.36 & 5.68\(\pm\)0.48 & 0.585\(\pm\)0.057 & 43.4\({}^{+3.61}_{-3.5}\) & 3.51e-14 & KP \\ HD85512 I & 3891 & – & – & – & 1.90e-16 & LPS \\ HD85512 II & 51.195\(\pm\)0.073 & 0.438\(\pm\)0.079 & 0.3\(\pm\)0.19 & 1.86\({}^{+0.31}_{-0.3}\) & 6.98e-06 & ACT-R* \\ HD102365 b & 121.3\(\pm\)0.25 & 1.38\(\pm\)0.23 & 0.28\(\pm\)0.15 & 9.34\({}^{+1.52}_{-1.5}\) & 1.58e-04 & KP \\ HD114613 I & 6622\(\pm\)270 & 7.29\(\pm\)0.45 & 0.291\(\pm\)0.061 & 239.94\({}^{+13.54}_{-13.5}\) & 2.87e-34 & ACT-M* \\ HD114613 II & 73.141\(\pm\)0.056 & 2.54\(\pm\)0.45 & 0.51\(\pm\)0.14 & 16.72\({}^{+2.38}_{-2.42}\) & 4.31e-04 & SRC \\ HD114613 III & 1954\(\pm\)39 & 2.98\(\pm\)0.52 & 0.6\(\pm\)0.11 & 54.01\({}^{+7.59}_{-7.5}\) & 2.09e-04 & SRC \\ HD115617 b & 4.21498\(\pm\)0.0014 & 2.47\(\pm\)0.11 & 0.033\(\pm\)0.029 & 5.98\({}^{+0.3}_{-0.29}\) & 5.01e-61 & KP \\ HD115617 c & 38.079\(\pm\)0.008 & 3.56\(\pm\)0.12 & 0.026\(\pm\)0.023 & 17.94\({}^{+0.73}_{-0.7}\) & 2.93e-46 & KP \\ HD115617 d & 123.2\(\pm\)0.2 & 1.47\(\pm\)0.17 & 0.15\(\pm\)0.11 & 10.82\({}^{+1.33}_{-1.03}\) & 5.63e-22 & KP \\ HD115617 I & 5910.9 & – & – & – & 1.22e-10 & LPS \\ HD136352 b & 11.5767\(\pm\)0.0015 & 1.65\(\pm\)0.11 & 0.05\(\pm\)0.045 & 5.5\(\pm\)0.38 & 3.67e-38 & KP \\ HD136352 c & 27.5845\(\pm\)0.0064 & 2.49\(\pm\)0.12 & 0.041\(\pm\)0.036 & 11.12\(\pm\)0.57 & 3.04e-24 & KP \\ HD136352 d & 107.5\(\pm\)0.14 & 1.44\(\pm\)0.12 & 0.072\(\pm\)0.061 & 10.08\({}^{+0.87}_{-0.85}\) & 2.06e-23 & KP \\ HD136352 I & 121.66\(\pm\)0.26 & 0.68\(\pm\)0.13 & 0.22\(\pm\)0.19 & 4.69\({}^{+0.87}_{-0.86}\) & 9.76e-04 & ACT \\ HD140901 I & 5084\(\pm\)1200 & 11.6\(\pm\)2.4 & 0.44\(\pm\)0.25 & 269.81\({}^{+43.83}_{-42.02}\) & 7.32e-04 & SRC \\ HD146233 I & 2374\(\pm\)47 & 5.47\(\pm\)0.33 & 0.21\(\pm\)0.07 & 111.72\({}^{+6.67}_{-5.99}\) & 1.31e-25 & ACT-M* \\ HD146233 II & 6256\(\pm\)370 & 4.96\(\pm\)0.57 & 0.59\(\pm\)0.06 & 114.86\({}^{+12.0}_{-11.43}\) & 7.39e-14 & ACT-M \\ HD146233 III & 19.8777\(\pm\)0.0062 & 1.73\(\pm\)0.26 & 0.38\(\pm\)0.16 & 6.77\(\pm\)0.86 & 7.23e-09 & Candidate \\ HD160346 I & 83.7286\(\pm\)0.0005 & 5690.3\(\pm\)2.3 & 0.2048\(\pm\)0.0003 & 35280.0\({}^{+706.83}_{-716.46}\) & 1.42e-15 & Binary \\ HD160691 b & 644.93\(\pm\)0.28 & 35.7\(\pm\)0.2 & 0.0499\(\pm\)0.0082 & 528.58\({}^{+11.05}_{-11.13}\) & 2.16e-46 & KP \\ HD160691 c & 9.6394\(\pm\)0.0008 & 2.8\(\pm\)0.2 & 0.132\(\pm\)0.069 & 10.22\(\pm\)0.73 & 5.38e-98 & KP \\ HD160691 d & 308.4\(\pm\)0.23 & 12.7\(\pm\)0.3 & 0.074\(\pm\)0.016 & 147.23\({}^{+4.63}_{-4.56}\) & 8.24e-131 & KP \\ HD160691 e & 4035\(\pm\)21 & 22.25\(\pm\)0.24 & 0.026\(\pm\)0.013 & 607.79\({}^{+14.0}_{-13.99}\) & 2.84e-32 & KP \\ HD190248 I & 360.8\(\pm\)1.9 & 1.21\(\pm\)0.43 & 0.29\(\pm\)0.15 & 12.96\({}^{+5.08}_{-3.76}\) & 5.14e-04 & FP \\ HD192310 b & 74.278\(\pm\)0.035 & 2.484\(\pm\)0.098 & 0.032\(\pm\)0.027 & 14.28\({}^{+0.64}_{-0.63}\) & 8.11e-50 & KP \\ HD192310 c & 549.1\(\pm\)4.5 & 1.3\(\pm\)0.1 & 0.078\(\pm\)0.073 & 14.96\({}^{+1.21}_{-1.18}\) & 3.64e-27 & KP \\ HD192310 I & 3836\(\pm\)240 & 1.48\(\pm\)0.11 & 0.34\(\pm\)0.15 & 29.3\({}^{+3.33}_{-3.07}\) & 1.64e-49 & ACT-M \\ HD192310 II & 43.614\(\pm\)0.023 & 0.93\(\pm\)0.13 & 0.5\(\pm\)0.1 & 3.83\(\pm\)0.44 & 2.41e-13 & ACT-R \\ HD192310 III & 
39.509\(\pm\)0.059 & 1.0\(\pm\)0.1 & 0.22\(\pm\)0.11 & 4.48\(\pm\)0.46 & 1.43e-09 & ACT-R \\ HD192310 IV & 24.559\(\pm\)0.016 & 0.6\(\pm\)0.1 & 0.16\(\pm\)0.12 & 2.46\({}^{+0.39}_{-0.4}\) & 7.74e-06 & Candidate \\ HD207129 I & 1964\(\pm\)49 & 4.02\(\pm\)0.61 & 0.44\(\pm\)0.16 & 72.95\({}^{+8.37}_{-8.07}\) & 3.88e-11 & ACT-M \\ HD209100 I & 1313

We also use RVSearch to carry out an uninformed search on each star's combined S-index measurements, similar to the RV fitting described above. By providing RVSearch with data sets composed of the observation time stamp, S-index, and the empirical instrument-by-instrument S-index errors described in Section 3.2 for each observation, we are able to determine whether the S-index data contain significant periodic signals. Further, if the empirical errors are underestimated, RVSearch's use of a jitter term will adjust them to more accurately capture the scatter on a star-by-star basis. A list of all of the detected S-index signals is provided in Table 6.

We then carry out a side-by-side comparison of the signals found in the activity search to the signals found in the radial velocity search. In instances of overlapping periods between the two search results, we assert that the signal in the radial velocities is likely caused by stellar activity, rather than by the gravitational effects of an orbiting exoplanet. New RV signals that show evidence of this period overlap are reported in Table 4 as 'Activity'. Instances where there is no overlap between the significant periods detected in the RV and S-index data sets are treated on a more individual basis. If the peak detected in the radial velocity periodogram is well defined, the strength of the RV signal increases roughly monotonically with the number of observations, and we do not find a correlated activity signal, then we mark the signal as a 'Candidate' in Table 4. For less obvious cases, where the RV signal is one of a set of numerous peaks clustered in a narrow period range but we find no corresponding signal in the activity data, we then consider the star's spectral type and known activity history to decide whether the signal is likely to be due to activity. These cases are discussed in detail in each star's subsection. We adopt the classification Source Requiring Confirmation ('SRC') for signals that do not yet have enough evidence to be classified as either Candidate or Activity. Signals marked as SRC in Table 4 require further follow-up analysis to determine their nature.

Results for HD 115617's RVSearch analysis of the S-index data are shown in Figure 5. No significant detections are made, although we note that a strong signal is present at \(P=3995.5\,\mathrm{days}\) in the periodogram. The strength of this signal is limited by the time span of observations for this target; we expect that given several more years' worth of data, we would be able to state conclusively whether this signal is physical or not. When compared with the radial velocity analysis in Figure 4, we see no overlapping periods between the three previously published planet signals and the S-index signal of growing strength in the periodogram. We thus affirm that the planets HD 115617 b, c, and d are not false signals caused by activity, but rather true exoplanets. We include summary figures like Figure 5 from each star's S-index data in a figure set in the online journal.
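The period cross-matching itself reduces to a simple comparison. The fragment below is a schematic of that bookkeeping only; the fractional tolerance and list format are assumptions for illustration, not quantities defined in this work, and the borderline and clustered-peak cases are still handled star by star as described above.

```python
def classify_rv_signals(rv_periods, activity_periods, frac_tol=0.05):
    """Flag an RV period as 'Activity' when it lies within an assumed 5%
    fractional tolerance of any period found in the S-index (or EW_Halpha)
    search; otherwise keep it as a 'Candidate' pending further vetting."""
    labels = []
    for p_rv in rv_periods:
        overlap = any(abs(p_rv - p_act) / p_act < frac_tol
                      for p_act in activity_periods)
        labels.append("Activity" if overlap else "Candidate")
    return labels

# e.g. an RV signal near 42.3 d compared against an activity signal near 43.5 d
print(classify_rv_signals([42.303], [43.5]))   # -> ['Activity']
```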
As referenced in Section 3.3, the _UCLES_ spectrograph does not cover a wide enough wavelength range for the Ca II H & K lines to be observed simultaneously with the iodine region. For targets whose RV signals are largely driven by _UCLES_ data, this causes the corresponding S-index activity analysis to be less definitive as a planet vetting step. Instead, we use EW\({}_{H\alpha}\) measurements as an activity indicator for those stars.

### Identification of Candidate Activity Signals in UCLES EW\({}_{H\alpha}\) Data

Stellar variability in the _UCLES_ time series, which does not provide coverage of the Ca II H & K lines, was instead assessed using measurements of the equivalent width of the H\(\alpha\) absorption line (EW\({}_{H\alpha}\)). Although the H\(\alpha\) line is usually more informative for cooler, M dwarf stars (see, e.g., Robertson et al., 2013), it is still sensitive to some activity variations in hotter, Sun-like stars. We subjected the EW\({}_{H\alpha}\) time series to the same uninformed RVSearch search process described for the S-index data above, again beginning with the removal of any \(5\sigma\) or greater outliers within each star's data set. We run the cleaned time series through RVSearch and record any significant periodicities so that they can be compared with the periods (if any) detected in that star's RV data. We find that the long term structure present in EW\({}_{H\alpha}\) is not sufficiently periodic to show up in the RVSearch \(\Delta\)BIC periodograms, but caution that it may still obscure some lower amplitude activity signals within the data. A list of all of the detected EW\({}_{H\alpha}\) signals is provided in Table 7.

**Fig. Set 6. H\(\alpha\) Activity Analysis Summary Plots**

Figure 6 shows an example summary figure from the H\(\alpha\) analysis of HD 115617. Our analysis returns three detections, only one of which we believe to be of astrophysical

\begin{table} \begin{tabular}{l l l} \hline \hline HD & Alias & RV Trend \\ \hline 100623 & 20 Crt & 0.00514923 m s\({}^{-1}\) day\({}^{-1}\) \\ 131977 & GJ 570A & -0.0116872 m s\({}^{-1}\) day\({}^{-1}\) \\ 188512 & \(\beta\) Aql & 0.00262165 m s\({}^{-1}\) day\({}^{-1}\) \\ 190248 & \(\delta\) Pav & -0.00055 m s\({}^{-1}\) day\({}^{-1}\) \\ \hline \hline \end{tabular} Stars from our sample for which the preferred RVSearch model included a linear and/or quadratic trend. All appear to be due to known stellar companions except for \(\delta\) Pav. \end{table} Table 5: Linear/Quadratic RV Trends

Figure 6: RVSearch results for the relative EW\({}_{H\alpha}\) measurements of HD 115617. Panel (a) shows the EW\({}_{H\alpha}\) time series with the best-fit model plotted behind them, while panel (b) shows the EW\({}_{H\alpha}\) residuals. Panels (c), (e), and (g) show phase folded curves for the signals identified by RVSearch, and panels (d), (f), and (h) show the periodograms associated with each signal, with the yellow horizontal dotted line marking the minimum \(\Delta\)BIC for a 1% FAP significance and the red and green vertical dotted lines showing the one month and one year aliases of the tallest peak. Panel (i) shows the effective strength of each signal as a function of the number of observations, and panel (j) shows the residuals periodogram, indicating that no further signals are found in the data set. The complete set of \(H\alpha\) activity summary plots (31 figures) can be found in the online journal.

\begin{table} \begin{tabular}{l l l||l l l} \hline \hline ID & Period [days] & S-index Amp.
& ID & Period [days] & S-index Amp. \\ \hline HD4628 I & 3699\(\pm\)310 & 0.0161\(\pm\)0.0016 & HD85512 VIII & 51.74\(\pm\)0.06 & 0.0152\(\pm\)0.0023 \\ HD14412 I & 2312\(\pm\)73 & 0.013\(\pm\)0.0034 & HD100623 I & 3741\(\pm\)93 & 0.0228\(\pm\)0.0025 \\ HD14412 II & 5686\(\pm\)1600 & 0.0191\(\pm\)0.0079 & HD114613 I * & 6722.80 & – \\ HD16160 I & 4232\(\pm\)310 & 0.0417\(\pm\)0.0073 & HD125072 I & 2989\(\pm\)100 & 0.098\(\pm\)0.011 \\ HD16160 II & 3204\(\pm\)110 & 0.0253\(\pm\)0.0061 & HD125072 II & 40.49\(\pm\)0.04 & 0.0336\(\pm\)0.0072 \\ HD20766 I * & 1553.62 & – & HD131977 I & 22.77\(\pm\)0.0 & 0.29\(\pm\)0.17 \\ HD22049 I & 1086.7\(\pm\)7.1 & 0.0496\(\pm\)0.0048 & HD131977 II & 3.88\(\pm\)0.0 & 0.192\(\pm\)0.065 \\ HD26965 I & 3177\(\pm\)84 & 0.0206\(\pm\)0.0018 & HD131977 III & 2.09\(\pm\)0.0 & 0.067\(\pm\)0.014 \\ HD30495 I & 71.46\(\pm\)0.11 & 0.0303\(\pm\)0.0046 & HD146233 I & 2812\(\pm\)290 & 0.0094\(\pm\)0.0032 \\ HD32147 I & 3774\(\pm\)250 & 0.063\(\pm\)0.016 & HD146233 II & 5272\(\pm\)1500 & 0.0116\(\pm\)0.0043 \\ HD32147 II & 3204\(\pm\)310 & 0.043\(\pm\)0.016 & HD149661 I & 1649\(\pm\)55 & 0.0423\(\pm\)0.0065 \\ HD32147 III & 381.7\(\pm\)2.4 & 0.0093\(\pm\)0.0019 & HD149661 II & 3874\(\pm\)1200 & 0.068\(\pm\)0.095 \\ HD32147 IV & 343.2\(\pm\)2.7 & 0.0088\(\pm\)0.0018 & HD156026 I & 378.9\(\pm\)2.2 & 0.05\(\pm\)0.01 \\ HD32147 V & 95.6\(\pm\)0.24 & 0.005\(\pm\)0.0016 & HD160346 I & 2975\(\pm\)600 & 0.0883\(\pm\)0.0094 \\ HD50281 I & 2264\(\pm\)11 & 0.0748\(\pm\)0.0042 & HD160346 II & 392.6\(\pm\)3.2 & 0.05\(\pm\)0.013 \\ HD50281 II & 2102\(\pm\)12 & 0.065\(\pm\)0.005 & HD160346 III & 7.96\(\pm\)0.01 & 0.0313\(\pm\)0.0093 \\ HD50281 III & 139.42\(\pm\)0.05 & 0.0345\(\pm\)0.0039 & HD160346 IV & 2.54\(\pm\)0.0 & 0.0177\(\pm\)0.0081 \\ HD50281 IV & 12.48\(\pm\)0.0 & 0.0266\(\pm\)0.0039 & HD190248 I * & 6810.18 & – \\ HD50281 V & 16.5\(\pm\)0.0 & 0.022\(\pm\)0.0036 & HD192310 I & 3817\(\pm\)60 & 0.0409\(\pm\)0.0013 \\ HD50281 VI & 5.39\(\pm\)0.0 & 0.083\(\pm\)0.084 & HD192310 II & 345.34\(\pm\)0.48 & 0.0093\(\pm\)0.0058 \\ HD50281 VII & 2.7\(\pm\)0.0 & 0.0169\(\pm\)0.0047 & HD192310 III & 44.01\(\pm\)0.11 & 0.0044\(\pm\)0.0011 \\ HD69830 I & 3989\(\pm\)190 & 0.0146\(\pm\)0.0017 & HD192310 IV & 432.6\(\pm\)3.4 & 0.015\(\pm\)0.0031 \\ HD69830 II & 731\(\pm\)31 & 0.0038\(\pm\)0.0018 & HD192310 V & 40.8\(\pm\)0.1 & 0.00383\(\pm\)0.00088 \\ HD69830 III & 2530\(\pm\)180 & 0.008\(\pm\)0.002 & HD192310 VI & 34.6\(\pm\)0.03 & 0.00521\(\pm\)0.00082 \\ HD72673 I & 3217\(\pm\)200 & 0.0097\(\pm\)0.0016 & HD192310 VII & 133.38\(\pm\)0.43 & 0.0069\(\pm\)0.0015 \\ HD75732 I & 3801\(\pm\)130 & 0.0263\(\pm\)0.0015 & HD192310 VIII & 33.73\(\pm\)0.05 & 0.00563\(\pm\)0.00097 \\ HD85512 I & 4245\(\pm\)52 & 0.2106\(\pm\)0.0029 & HD207129 I * & 1897.99 & – \\ HD85512 II & 1294\(\pm\)14 & 0.0443\(\pm\)0.0035 & HD209100 I & 2063\(\pm\)160 & 0.0588\(\pm\)0.0046 \\ HD85512 III & 478.1\(\pm\)2.2 & 0.0324\(\pm\)0.0027 & HD209100 II & 32.87\(\pm\)0.07 & 0.045\(\pm\)0.036 \\ HD85512 IV & 322.05\(\pm\)0.85 & 0.0351\(\pm\)0.0032 & HD216803 I & 3.89\(\pm\)0.0 & 0.066\(\pm\)0.008 \\ HD85512 V & 45.52\(\pm\)0.04 & 0.0187\(\pm\)0.0021 & HD216803 II & 4.08\(\pm\)0.0 & 0.051\(\pm\)0.016 \\ HD85512 VI & 44.18\(\pm\)0.03 & 0.0188\(\pm\)0.0016 & HD216803 III & 2.8\(\pm\)0.3 & 0.019\(\pm\)0.013 \\ HD85512 VII & 104.3\(\pm\)0.15 & 0.022\(\pm\)0.0034 & & & \\ \hline \end{tabular} This table contains all significant S-index signals identified by RVSearch. We report the period and S-Index semi-amplitude (in units of the Mt. 
Wilson S-index) of each signal. For signal interpretations, see each star’s individual discussion section. Signals with a * designation failed to return well constrained MCMC results, and so we instead report the MAP fits for their orbital periods. \end{table} Table 6: S-index Signals Identified by RVSearch causes. The first signal, with \(P=346.3\pm 1.9\) d, is extremely close to one year. Just as with _HARPS_ data, we expect to see yearly systematics within the H\(\alpha\) data caused by the observing cadence. This signal is therefore attributed to systematics. H\(\alpha\) signal II is close to the rotation period predicted for this star. Because we have a longer observation baseline and more precise measurements than were used for previous rotation period estimates in the literature, we report H\(\alpha\) signal II as an update to the stellar rotation period. H\(\alpha\) signal III is most likely too long-period to be caused by differential rotation, though we discuss the possibility in detail in SS5.14. It also does not correspond to any peaks in RV or S-index data. We leave it to future, more in-depth studies of stellar activity to characterize the cause of this detection. The resulting summary figures for all 31 stars in this study observed by _UCLES_ are available in a figure set in the online journal. ### Injection and Recovery Analysis After conducting and analyzing the uninformed radial velocity and stellar activity indicator searches, we use RVSearch to execute an injection/recovery (I/R) analysis for each star's residual RV data set. I/R analyses characterize the completeness of each star's RV time series, quantifying what combinations of companion orbital periods and minimum masses we are currently sensitive to. The results of these I/R efforts make clear what types of planets we would expect to be able to detect in each star's Habitable Zone given the current data sets, and can help to prioritize future RV surveys that aim to push sensitivity limits to lower mass temperate planets. While we are primarily interested in the current RV sensitivity within each star's Habitable Zone, we use this exercise as an opportunity to quantify our planet sensitivity across the entirety of the orbital period space covered by the combined RV data sets. To accomplish this, 5000 synthetic planet signals are injected into the RV residuals of each star's uninformed search results. These "planets" are assigned orbits and \(m\sin i\) values drawn from log-uniform distributions. The corresponding periods and RV semi-amplitudes span 2 to 10,000 days and 0.1 to 1,000 m s\({}^{-1}\), respectively. Key properties of the data set such as observation baseline, measurement values, and uncertainties are preserved. Following the results of Kipping (2013), which examined the eccentricities of the population of RV detected exoplanets, the synthetic planets have eccentricities drawn from a \(\beta\) distribution. The same planet search algorithm used in the uninformed search is then run on these modified data sets to determine whether the injected signals can be recovered. This quantifies the planet sensitivity of the existing data, calculating the probability that a planet of a given \(m\sin i\) and orbital period would be detected within the data. A completeness contour plot is generated, demonstrating what regions of \(m\sin i\) and orbital period space we are already sensitive to with the existing data. Figure 7 shows an example of a completeness contour plot for HD 115617. 
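To make the injection step concrete, the sketch below draws synthetic planets following the distributions quoted above and adds them to the residuals. It is a simplified stand-in for the RVSearch machinery, not the code used in this work: the injected waveform is circular for brevity, the recovery step (which reruns the full search) is not shown, and the beta-distribution shape parameters (a \(\approx\) 0.867, b \(\approx\) 3.03) are the values reported by Kipping (2013) rather than values restated in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_synthetic_planets(n=5000):
    """Log-uniform periods (2-10,000 d) and semi-amplitudes (0.1-1,000 m/s),
    with eccentricities drawn from the Kipping (2013) beta distribution."""
    periods = 10.0 ** rng.uniform(np.log10(2.0), np.log10(1.0e4), n)
    semi_amps = 10.0 ** rng.uniform(np.log10(0.1), np.log10(1.0e3), n)
    eccs = rng.beta(0.867, 3.03, n)           # assumed Kipping (2013) shapes
    phases = rng.uniform(0.0, 2.0 * np.pi, n)
    return periods, semi_amps, eccs, phases

def inject(times, residuals, period, semi_amp, phase):
    """Add a synthetic signal to the post-search residuals, preserving the
    real time sampling (circular orbit shown here for brevity; the full
    analysis injects eccentric Keplerians)."""
    return residuals + semi_amp * np.sin(2.0 * np.pi * times / period + phase)

def completeness_map(a_inj, msini_inj, recovered, bins=20):
    """Fraction of recovered injections per bin of semi-major axis and msini
    (recovered is a boolean mask), i.e. the detection-probability surface
    drawn in the completeness contour plots."""
    h_all, xe, ye = np.histogram2d(np.log10(a_inj), np.log10(msini_inj), bins=bins)
    h_rec, _, _ = np.histogram2d(np.log10(a_inj[recovered]),
                                 np.log10(msini_inj[recovered]), bins=[xe, ye])
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(h_all > 0, h_rec / h_all, np.nan), xe, ye
```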
The three planets published in Vogt et al. (2010) and detected in the uninformed search stage of our analysis are depicted as black circles, and lie within the region of \(m\sin i\) and semi-major axis space where we expect to be sensitive to Keplerian signals. The three regions of red points above the detected signals are remnants of the way the injection and recovery analysis works. Once an injected planet is recovered, it is compared with the already-fit model to see whether it would be in a reasonably stable orbit location compared with what has already been detected. If the injected planet has the same orbit as an already-fit signal, RVSearch would not include the planet in the model due to orbital stability constraints, and so the injected signal is "not recovered." This results in columns of non-recovered injected planets that align with previously detected/removed Keplerian signals. The black circle corresponding to the final candidate detected in the uninformed search, the long period signal that overlaps with a period in the activity search, is located in a regime with a much lower probability of detection. While the probability of detecting a synthetic signal at this period and semi-amplitude is \(<\)10% according to the figure, the False Alarm Probability (FAP) of this long period signal detected in the RV time series is 1.22e-10, making it very unlikely to be a result of random fluctuations in the data.

**Fig. Set 7.** **Injection and Recovery Analysis Contour Plots**

Based on Figure 7, we would expect that for planets on a 0.1 AU orbit the existing RV data would be sensitive down to planet masses of \(m\sin i\,=\,\)2.6 \(M_{\oplus}\), a mass that could include terrestrial planets. But when considering planets on more temperate orbits near 0.914 AU (this star's EEID), the existing RV data is only sensitive to planets with masses of \(\geq\)15 \(M_{\oplus}\). Thus efforts to detect an Earth analog around HD 115617 would not succeed with the existing data set. However, we can say with some confidence that if a 15 \(M_{\oplus}\,\) or larger planet were orbiting the star within its habitable zone, which would preclude the existence of an Earth-analog on a similar orbit, we would already be able to detect it. Thus, at the moment, there is nothing to eliminate the possibility that HD 115617 could host an Earth-analog.

Table 12 describes the results of these injection and recovery tests. For each star in the set, we report what regions of \(m\sin i\,\) and period space we are already sensitive to using the compiled archival data. We include minimum mass values for several semi-major axes, marking the limits of sensitivity for each star's archival data set.

### Speckle Imaging Analysis

In an attempt to identify any stellar companions from the speckle imaging observations obtained for some of the target stars, reconstructed images derived from the image reduction process were used (Howell et al., 2011; Horch et al., 2011).
Distribution of all local maxima and minima in the background of the images as a function \begin{table} \begin{tabular}{l l l||l l l} \hline \hline \multicolumn{1}{c}{ ID} & \multicolumn{1}{c}{Period [days]} & \multicolumn{1}{c}{EW\({}_{H\alpha}\) Amp.} & \multicolumn{1}{c}{ID} & \multicolumn{1}{c}{Period [days]} & \multicolumn{1}{c}{EW\({}_{H\alpha}\) Amp.} \\ \hline HD20794 I & 2204\(\pm\)16 & 0.0057\(\pm\)0.0041 & HD115617 III & 44.93\(\pm\)0.07 & 0.00151\(\pm\)0.00026 \\ HD20794 II & 1753\(\pm\)46 & 0.00283\(\pm\)0.00095 & HD125072 I * & 7137.76 & 0.00630685 \\ HD20807 I * & 2859.91 & 0.00367565 & HD136352 I * & 376.779 & 0.00913149 \\ HD23249 I & 49.57\(\pm\)0.1 & 0.00241\(\pm\)0.00049 & HD136352 II * & 7207.58 & 0.00537 \\ HD26965 I & 43.5\(\pm\)0.07 & 0.00316\(\pm\)0.00054 & HD140901 I & 7161\(\pm\)2100 & 0.01205\(\pm\)0.00082 \\ HD72673 I & 341.2\(\pm\)3.6 & 0.00508\(\pm\)0.00076 & HD140901 II & 19.99\(\pm\)0.02 & 0.0036\(\pm\)0.0007 \\ HD100623 I & 3205\(\pm\)130 & 0.01136\(\pm\)0.00043 & HD160691 I * & 30888.2 [peak=5293.7] & 0.00379764 \\ HD102365 I * & 369.144 & 0.00752353 & HD160691 II * & 362.611 & 0.00247693 \\ HD102365 II * & 18549.80 [peak=7273.6] & 0.00322183 & HD190248 I & 352.9\(\pm\)1.5 & 0.00261\(\pm\)0.00032 \\ HD102365 III * & 49.68 & 0.00130454 & HD190248 II & 1171\(\pm\)36 & 0.0021\(\pm\)0.00033 \\ HD114613 I * & 27460.5 [peak=7652.9] & 0.00360532 & HD192310 I * & 13621.7 & 0.00676791 \\ HD114613 II * & 365.429 & 0.00211492 & HD192310 II * & 363.678 & 0.00499708 \\ HD115617 I & 346.3\(\pm\)1.9 & 0.00376\(\pm\)0.00027 & HD207129 I & 5455\(\pm\)1900 & 0.0036\(\pm\)0.00036 \\ HD115617 II & 24.63\(\pm\)0.02 & 0.00152\(\pm\)0.00025 & HD207129 II & 1726\(\pm\)71 & 0.00309\(\pm\)0.00061 \\ \hline \end{tabular} This table contains all significant EW\({}_{H\alpha}\) signals identified by RVSearch. We report the period and EW\({}_{H\alpha}\) semi-amplitude of each signal. For signal interpretations, see each star’s individual discussion section. Systems with a * designation failed to return well constrained MCMC results, generally producing period uncertainties larger than the median period value, and so we instead report their MAP orbital solutions. In three cases, even the MAP period for the long period signal is more than 3x larger than the significant peak in the \(\Delta\)BIC periodogram and so we note the period of the original peak in brackets next to the MAP result. \end{table} Table 7: H \(\alpha\) Signals Identified by RVSearch Figure 7: The RVSearch completeness contour plot for HD 115617. The large black dots indicate the periodic signals identified by RVSearch in the archival RV data (see figure 4). The colored points depict the synthetic planets that were injected into the RV residuals – blue points represent planets that were successfully recovered, while red points were not recovered. The red contours display the probability of detection averaged over small regions of semi-major axis and \(m\sin i\,\) space. The black line is the 50% detection probability contour. The complete set of Injection and Recovery Analysis plots (46 figures) can be found in the online journal. of separation were examined by drawing five concentric annuli each with width of 0.2\({}^{\prime\prime}\) centered at radii of 0\({}^{\prime\prime}\).2, 0\({}^{\prime\prime}\).4, 0\({}^{\prime\prime}\).6, 0\({}^{\prime\prime}\).8, and 1\({}^{\prime\prime}\).0 from the primary star. 
Standard deviations of these extrema from the mean background in each annulus were computed by averaging the values obtained from both maxima and minima. A 5\(\sigma\) detection limit, which is five times brighter than the mean background within each annulus, was then estimated. Any peak in the image that was above the 5\(\sigma\) limit at a specific angular separation was considered a companion candidate for further study. For the 6 targets that had speckle imaging observations, no such peaks were found and therefore no stellar companions were identified. For these non-detections, 5\(\sigma\) limits derived for each annulus in terms of instrumental magnitude difference (\(\Delta\)m\({}_{i}\), where \(i\) is filter type) were used as a conservative upper limit above which stars should be detected, thus providing a constraint of the possible undetected low mass companions nearby. Since \(\Delta\)m\({}_{i}\) varies as a function of separation from the primary where at smaller separations \(\Delta\)m\({}_{i}\) is slightly smaller, we reported the estimated \(\Delta\)m\({}_{i}\) at both 0\({}^{\prime\prime}\).1 and 1\({}^{\prime\prime}\).1 from the primary in Table 8. Figure 8 summarizes these results for HD 1581. **Fig. Set 8. Speckle Imaging Analysis Plots** ## 5 Systems with updated parameters In this section, we present results from targets for which we recover previously published planetary systems, stellar companions, and activity cycles. For the cases where we have additional data or increased precision, we cite the current accepted values and report updates to these systems' parameters. Table 9 contains former and new (this work) parameters for previously reported exoplanet systems. ### HD 13445 (GJ 86 A) HD 13445 (GJ 86 A, HR 637, HIP 10138) is a nearby K1V star (Gray et al., 2006) at \(d\) = 10.76 pc (\(\varpi\) = 92.9251 \(\pm\) 0.0461 mas; Gaia Collaboration et al., 2020). GJ 86A has both a known stellar companion (GJ 86B, WD 0208-510) and an exoplanet (GJ 86Ab, HD 13445b). Farihi et al. (2013) characterize the white dwarf companion GJ 86B and constrain its orbit - estimating a spectral type of DQ6, mass = \(0.59\pm 0.01\)\(M_{\odot}\), orbital period of \(P=120-481\) yr, and adopted system age of 2.5 Gyr. We detect two signals in the RVs. One is the known exoplanet, and the other may be caused by the binary companion, but is too poorly constrained to say for certain. Butler et al. (2001) published the planet with 15.76 day period. We derive orbital parameters for GJ 86Ab of \(P_{b}=15.764862\pm 0.000043\) d, \(K_{b}=377.58\pm 0.77\) m s\({}^{-1}\), \(e_{b}=0.0485\pm 0.0018\). The second significant detection has a peak in the periodogram with \(P=20504\) days. Because this is much longer than the observation baseline for the target, the MCMC fit for the signal is not physical. We categorize this signal as LPS and note that more data would be required to constrain this signal further. Further discussion of the LPS category of detection can be found in Section 4.1. Finally, we note that H\(\alpha\) analysis shows strength in the periodogram at \(P=2001.7\) days, though the detection does not cross the False Alarm Probability threshold. ### HD 16160 (GJ 105 A) HD 16160 (GJ 105 A, HR 753, HIP 12114) is a nearby K3V spectral standard star (Keenan and McNeil, 1989) in a triple system, located at \(d=7.23\) pc (\(\varpi=138.2084\pm 0.1436\) mas; Gaia Collaboration et al., 2018). 
The star is a \(5.1\pm 1.1\) Gyr-old thin disk (Ramirez et al., 2012), low-activity (\(\log R^{\prime}_{HK}\)= -4.87; Gomes da Silva et al., 2021) star with a magnetic activity cycle of \(P_{\rm cyc}\ \simeq 12.18\) yr (Willamo et al., 2020) or \(12.7\pm 0.11\) yr (Boro Saikia et al., 2018). From analysis of the Mt. Wilson survey data, Donahue et al. (1996) report an average rotation period of \(P_{\rm rot}=48.0\) d over 5 seasons, with individual seasonal rotation periods ranging from 42.2 d to 51.5 d (i.e., pronounced differential rotation).

Figure 8: Speckle Imaging Analysis plot for HD 1581. Top right: the reconstructed Speckle images for HD 1581 for each wavelength. Larger plot: 5\(\sigma\) flux detection limit relative to image backgrounds, measured in concentric circular annuli from the center of the image. The complete set of Speckle Imaging Analysis plots (6 figures) can be found in the online journal.

GJ 105 A has a faint M7V companion GJ 105 C (HD 16160B, WDS J02361+0653B) observed at separations between 1\(\arcsec\).7 and 3\(\arcsec\).3 (Golimowski et al., 1995, 2000; Mason et al., 2001)14, and the M4.0V star GJ 105 B on a very wide orbit at 164\(\arcsec\) separation (Mann et al., 2015; van Maanen, 1938). Astrometric perturbations attributed to a low-mass stellar companion to GJ 105 A (BD+6\({}^{\circ}\) 398) were first reported by Lippincott (1973), who measured photographic plate positions from the Sproul astrometric program taken between 1937 and 1968 to estimate the orbit to have \(P=50\) yr and \(e=0.6\), and predicted the companion to be a \(0.10\,M_{\odot}\) star of type M6.

Footnote 14: References to the Washington Double Star (WDS) catalog (Mason et al., 2001) are actually referring to the regularly updated WDS table at Vizier ([https://cdsarc.unistra.fr/viz-bin/cat/B/wds](https://cdsarc.unistra.fr/viz-bin/cat/B/wds)).

Golimowski et al. (2000) concluded that the companion C first observed in 1993 with the Palomar 60" Adaptive Optics Coronagraph (Golimowski et al., 1995) was consistent with (1) the astrometric perturbations with \(\sim\)50-60 yr periodicities reported by Lippincott (1973) (and later refined by Ianna (1992) and Heintz & Cantor (1994)), and (2) the 11 m s\({}^{-1}\) yr\({}^{-1}\) radial velocity trend observed during the 1990s by Cumming et al. (1999). Ianna (1992) analyzed photographic plate positions for GJ 105 A between 1915 and 1992 and estimated \(P=59.5\) yr, astrometric amplitude \(\alpha=0\arcsec.293\), \(e=0.35\), and companion mass \(M_{C}=0.13\,M_{\odot}\). Our analysis of the archival RV data with RVSearch yields a long-period signal with \(P=22999\pm 1200\) d (\(63.0\pm 3.3\) yr), \(K=702.5\pm 2.9\) m s\({}^{-1}\), and \(e=0.6075\pm 0.0092\), which is reasonable for the orbit of GJ 105 C. These values are not far from the most recent astrometric-only orbital analysis from Heintz & Cantor (1994), who estimated \(P=61\) yr, \(e=0.67\), \(i=49^{\circ}\) (consistent with companion mass \(M_{C}=0.10\,M_{\odot}\)). They also align well with the values reported in Rosenthal et al. (2021), who found \(a=16.37\pm 0.28\) AU and \(e=0.6427^{+0.0038}_{-0.0039}\). Using the stellar mass adopted in this paper, we calculate the period to be approximately \(P=77\) years. This also aligns with the signal we recover, though our detection is significantly less well constrained than in other works.
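As a quick sanity check on the periods quoted above, Kepler's third law with the Rosenthal et al. (2021) semi-major axis gives a comparable value. The masses in the sketch below are illustrative assumptions for this check only (the primary mass is not the value tabulated elsewhere in this paper; the companion mass is the Lippincott 1973 estimate quoted above).

```python
# Kepler's third law in solar units: P[yr]^2 = a[AU]^3 / (M_A + M_C)[M_sun]
a_au = 16.37        # GJ 105 AC semi-major axis from Rosenthal et al. (2021)
m_primary = 0.74    # assumed, illustrative mass for GJ 105 A [M_sun]
m_companion = 0.10  # companion mass estimate from Lippincott (1973) [M_sun]

p_yr = (a_au**3 / (m_primary + m_companion)) ** 0.5
print(f"P ~ {p_yr:.0f} yr")  # ~72 yr, bracketed by the 59.5-77 yr estimates above
```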
A joint analysis of the radial velocity, astrometric, and imaging data over the past century could yield stronger orbital constraints and a more accurate dynamical mass estimate, but is beyond the scope of this study. Analysis of S-indices using RVSearch returns two significant activity detections, with parameters: \(P_{I}=4232\pm 310\) d, \(e_{I}=0.17\pm 0.055\) and \(P_{II}=3204\pm 110\) d, \(e_{II}=0.413\pm 0.092\). The first of these aligns fairly well with the magnetic activity cycle reported by Willamo et al. (2020). We recommend a more in-depth study of this star's activity to fully characterize the sources of these periodic signals in the S-indices. ### Hd 20794 (82 Eri) 82 Eri (GJ 139, HD 20794, HR 1008, HIP 15510) is a G8V star (Gray et al., 2006) at \(d=6.00\) pc (\(\varpi=166.5242\pm 0.0784\) mas; Gaia Collaboration et al., 2020). The star is somewhat cooler than the Sun (\(T_{\rm eff}=5398\) K), metal poor ([Fe/H] \(=-0.41\)) (Tsantaki et al., 2013), and very inactive (\(\log R^{\prime}_{HK}=\) -5.025) (Lovis et al., 2011). Lovis et al. (2011) reported a magnetic activity cycle of \(P_{\rm cyc}=751^{+290}_{-25}\) d (\(2.06^{+0.79}_{-0.07}\) yr) based on 197 \(\log R^{\prime}_{HK}\) measurements over a span of 2694 d. The star was reported to host three planets by Pepe et al. (2011), with orbital periods of \(P_{b}=18.1\) d, \(P_{c}=40.1\) d, and \(P_{d}=90.3\) d, based upon their analysis of 173 _HARPS_ RV data points taken between 2003 and 2011. A reanalysis of the system was published in 2017, which made use of an updated _HARPS_ data set containing \begin{table} \begin{tabular}{c|c|c|c|c|c|c} HD & Instrument & Date (UT) & \(\Delta\)m\({}_{562}\) (0.1\(\arcsec\)) & \(\Delta\)m\({}_{562}\) (1.1\(\arcsec\)) & \(\Delta\)m\({}_{332}\) (0.1\(\arcsec\)) & \(\Delta\)m\({}_{832}\) (1.1\(\arcsec\)) & EW\({}_{H\alpha}\) Correlation \\ \hline \hline 1581 & Zorro & 2020 Oct 29 & 4.31 & 7.50 & 4.43 & 7.95 & N/A \\ \hline 20766 & Zorro & 2020 Oct 23 & 5.02 & 7.36 & 4.82 & 8.00 & -0.48 \\ \hline 20807 & Zorro & 2020 Oct 23 & 4.80 & 6.29 & 4.70 & 8.22 & -0.17 \\ \hline 140901 & Zorro & 2021 Jul 22 & N/A & N/A & 4.44 & 7.79 & -0.68 \\ \hline 146233 & ‘Alopeke & 2021 Jun 27 & 4.66 & 6.56 & 4.74 & 9.33 & -0.06 \\ \hline 196761 & ‘Alopeke & 2021 Jun 27 & 4.97 & 6.02 & 5.05 & 8.22 & N/A \\ \hline \hline \end{tabular} Note. – All columns without N/A values presents speckle imaging details, except for the last column that includes Pearson coefficient values of correlation between _UCLES_ RVs and H\(\alpha\) EWs. Some _UCLES_ RVs returned no significant signals when analyzed alone and so no correlation is calculated, thus N/A. For HD 140901, no data was acquired for the blue channel due to alignment issue. \end{table} Table 8: Speckle Imaging Results 713 RV epochs obtained between 2003 and 2013 (Feng et al., 2017). The Feng et al. (2017) results confirm the Keplerian nature of the 18 and 90 day signals put forth in Pepe et al. (2011) and identify two additional planet candidates with orbital periods of 147 and 330 days. They find only weak evidence of the \(\sim\)40 day signal reported by Pepe et al. (2011), however, and assert that more data are necessary to determine the nature of this signal. Our data set for HD 20794 contains 763 _HARPS_ epochs, spanning 2003 - 2016, along with 549 _UCLES_ points, and 77 _PFS_ points. 
Running this combined RV data set through RVSearch, we confirm HD 20794 b (\(P_{b}=18.305\pm 0.0052\) d, \(K_{b}=0.807\pm 0.089\) m s\({}^{-1}\)\(e_{b}=0.17\pm 0.11\)) and HD 20794 d (\(P_{d}=89.766\pm 0.085\) d, \(K_{d}=0.86\pm 0.12\) m s\({}^{-1}\)\(e_{d}=0.27\pm 0.11\)) in Tables 9 and 4. Similarly to Feng et al. (2017), we do not register a detection of the \(\sim\)40 day signal attributed to HD 20794 c, though we note that the residuals periodogram shows significant power for a signal at 40.2 days, which likely corresponds to the reported 40.1-day period of HD 20794 c in Pepe et al. (2011). If the signal were Keplerian in nature, however, we would expect its statistical significance to increase as more RV data points are added to the analysis. This is especially true for a star that exhibits the low levels of RV scatter we see in the HD 20794 RVs, where RMS = 1.99 m s\({}^{-1}\) and 1.00 m s\({}^{-1}\) for _HARPS_ and _PFS_, respectively. Another possibility is that the 40 day signal is tied to stellar variability. The star's low chromospheric activity (\(\log R^{\prime}_{HK}\)= -5.03) is consistent with a rotation period of \(P_{\rm rot}\) \(>\) 34 days for a star of its color (using activity-rotation relations of Mamajek & Hillenbrand, 2008), and so rotational modulation at a period of roughly 40 days would not be surprising. Yet applying RVSearch to the star's assembled S-index data does not reveal significant power at or near a 40 day period. We also do not significantly detect planet candidates e, f, or g, as reported by Feng et al. (2017). There is another peak in the RV residuals periodogram close to 330 days, the orbital period of candidate f, but similarly to the 40 day signal it does not cross the threshold of being detected by RVSearch. Given the more sophisticated treatment of stellar variability and correlated noise in the that paper, however, we do not view our non-detections as a refutation of these candidates. S-index analysis returns no significant signals, but analysis of H\(\alpha\) in the _UCLES_ data yields two significant detections: \(P_{I}=2204\pm 16\) d, \(e_{I}=0.886\pm 0.049\) and \(P_{II}=1753\pm 46\) d, \(e_{II}=0.68\pm 0.15\). The high eccentricities fit to these signals are cause for skepticism regarding their exactness, but we regard them as good evidence for the existence of an approximately 7-8 year activity cycle for this star. ### HD 22049 (\(\epsilon\) Eri) \(\epsilon\) Eri (GJ 144, HD 22049, HR 1084, HIP 16537, Ran) is a young, active K2V spectral standard star (Keenan & McNeil, 1989) at \(d=3.22\) pc (\(\varpi=310.5773\pm 0.1355\) mas; Gaia Collaboration et al., 2020) with a candidate planet and debris disks (Mawet et al., 2019). Analysis of the Mt. Wilson Ca II H & K data of \(\epsilon\) Eri by Donahue et al. (1996) found strong evidence for differential rotation, with season-averaged rotation periods ranging from \(P_{\rm rot}\) = 11.04 to 12.18 d over 9 seasons, with average \(P_{\rm rot}\) = 11.68 d. An archival analysis of 45 years of chromospheric activity data by Metcalfe et al. (2013) identified two prominent activity cycles for \(\epsilon\) Eri at \(P_{cyc1}=2.95\pm 0.03\) yr and \(P_{cyc2}=12.7\pm 0.3\) yr, at approximately 0.68\(\times\) and 2.94\(\times\) planet orbital period reported by Mawet et al. (2019). 
We recover the one confirmed planet (\(P=2690\pm 30\) d = \(7.365\pm 0.082\) yr; Mawet et al., 2019), but with a less certain period of \(P_{b}=2832\pm 120\) d (\(7.76\pm 0.33\) yr), and semiamplitude and eccentricity \(K_{b}=11.1\pm 1.2\) m s\({}^{-1}\), \(e_{b}=0.09\pm 0.08\). The periodogram residuals show a signal at 12.4 days, which agrees well with rotation periods reported by Donahue et al. (1996). We detect one S-index activity signal with parameters \(P_{I}=1086.7\pm 7.1\) d (2.98\(\pm\)0.02 years) and \(e_{I}=0.268\pm 0.081\). This agrees with the 2.95 year activity cycle reported by Mawet et al. (2019). We note that we do not detect the false positive \(P=773.4^{+4.7}_{-4.8}\)d signal reported by Rosenthal et al. (2021). ### HD 26965 (40 Eri A) 40 Eri A (\(o^{2}\) Eri A, GJ 166 A, HD 26965, HR 1325, Keid) is a famous nearby (\(d=4.98\) pc) (\(\varpi=200.62\pm 0.23\) mas; van Leeuwen, 2007) K0.5V standard star (Keenan & McNeil, 1989) in a triple system with a white dwarf (B) and M dwarf (C) component. From time series analysis of chromospheric activity data from the Mt. Wilson survey, the rotation period of the star has been previously measured to be 43 d (Baliunas et al., 1996) and 42 d (Frick et al., 2004), and _predicted_ rotation periods (based on \(\log R^{\prime}_{HK}\) values and correlations with rotation for other cool dwarfs) have been reported to be 37.1 d (Saar & Osten, 1997), \(42.2\pm 4.4\) d (Lovis et al., 2011), and 43 d (Isaacson & Fischer, 2010). Long term monitoring of Ca H & K emission from 40 Eri A has revealed a magnetic activity period with measured period \(P_{cyc}=10.1\pm 0.1\) yr (Baliunas et al., 1995), 10.4 yr (3800 d; Frick et al., 2004), 9.18\({}^{+2.20}_{-1.48}\) yr (Lovis et al., 2011), and \(10(9.57-10.5)\) yr (Olah et al., 2016), or \(10.23\pm 0.07\) yr (Boro Saikia et al., 2018). Diaz et al. (2018) presented an extensive analysis of \(\sim\)1100 spectra taken using _HIRES_, _PFS_, CHIRON, and _HARPS_, and reported a strong signal at \(P=42.364\pm 0.015\) d, \(K=1.59\pm 0.15\) m s\({}^{-1}\), \(e=0.017\pm 0.046\), but found it challenging to distinguish this signal from the star's rotation. Shortly after, Ma et al. (2018) conducted a reanalysis of the Diaz et al. (2018) data combined with 133 new spectroscopic observations taken with the TOU instrument. Ma et al. (2018) found that while there were signals in the star's activity indices at \(41.2\pm 0.9\) d and \(39.2\pm 0.7\) d likely corresponding to (differential) stellar rotation, the well-defined \(P=42.38\) d signal persisted over the seasons and between activity states - concluding that the signal was most likely due to a planet. Rosenthal et al. (2021) reported a signal at \(P=42.305^{+0.015}_{-0.019}\) d (\(K=1.82^{+0.43}_{-0.31}\) m s\({}^{-1}\)) and considered it a false positive attributed to the star's rotation, and another longer signal at \(P=3560^{+200}_{-580}\) d (\(K=1.89^{+0.37}_{-0.32}\) m s\({}^{-1}\)) attributed to long-period magnetic activity cycle. We detect a strong significant RV signal with \(P_{I}=42.303\pm 0.025\) days, \(K_{I}=1.40\pm 0.22\) m s\({}^{-1}\), \(e_{I}=0.37\pm 0.17\), very similar to that reported previously by Diaz et al. (2018) and Ma et al. (2018). Analysis of H\(\alpha\) data for this target returns a well-correlated detection with \(P=43.504\pm 0.066\) d, \(e=0.37\pm 0.18\). The extreme proximity of these two detections leads us to classify this RV detection conclusively as activity. 
Additionally, we detect RV signals with \(P_{II}=37.33\pm 0.02\) d, \(K_{II}=1.17\pm 0.19\) m s\({}^{-1}\), \(e_{II}=0.14\pm 0.12\), and \(P_{II}=367.9\pm 3.1\) d, \(K_{III}=1.63\pm 0.88\) m s\({}^{-1}\), \(e_{III}=0.46\pm 0.27\). Looking closely at the periodogram, we note that the 37-day period signal is extremely close to the yearly alias of the 42-day signal, and report it as such. The 365-day signal is likely driven by the window function of this star as the phase folded fit makes clear that a significant (\(>\)25%) portion of the orbital phase space is unpopulated due to seasonal observing constraints. We therefore classify this as a false positive signal. In the S-index activity analysis, we find a signal with \(P_{I}=3177\pm 84\) d (\(8.70\pm 0.23\) yr) and \(e_{I}=0.059\pm 0.051\), which agrees well with Rosenthal et al. (2021) and which we report as an update to the 10-year magnetic cycles previously published. ### HD 39091 (\(\pi\) Men) \(\pi\) Men (GJ 9189, HD 39091, HR 2022, HIP 26394) is a G0V star (Gray et al., 2006) at \(d=18.28\) pc (\(\varpi=54.6825\pm 0.0354\) mas; Gaia Collaboration et al., 2020). \(\pi\) Men has three published planets. \(\pi\) Men b was first published in Jones et al. (2002), and was discovered using radial velocity data from the _UCLES_ instrument. We recover \(\pi\) Men b in our radial velocity data with \(P_{b}=2089.05\pm 0.46\) d, \(K_{b}=196.5\pm 0.6\) m s\({}^{-1}\) and \(e_{b}=0.6428\pm 0.0017\). These parameters are comparable to recent estimates by Huang et al. (2018), Gandolfi et al. (2018), and Xuan and Wyatt (2020). Our analysis includes newly released _PFS_ data, building upon the _HARPS_ + _UCLES_ orbital fits performed in the previous works, and so we report our detection as an update to the orbital parameters of \(\pi\) Men b. \(\pi\) Men c was the first new transiting planet discovered by NASA's Transiting Exoplanet Survey Satellite (TESS; Huang et al., 2018). The planet was not robustly detected by RVSearch's uninformed search of the RVs, although there is a well defined peak in the residuals periodogram at the expected period of \(P_{c}=6.2\) d. \(\pi\) Men d is a recently detected, sub-Neptune mass planet candidate reported to have \(P_{d}=124.64^{+0.48}_{-0.52}\) days, \(K_{d}=1.68\pm 0.17\) m s\({}^{-1}\), and \(e_{d}=0.22\pm 0.079\)(Hatzes et al., 2022). These parameters are driven largely by observations taken as part of intensive _HARPS_ and _ESPRESSO_ observing campaigns, the data for which is not included in this analysis. We detect a similar signal, consistent to within 1.5\(\sigma\) on all parameters, albeit with larger uncertainties on the planet's RV semi-amplitude. Our best fit results for this third signal are \(P_{d}=125.58\pm 0.27\) d, \(K_{d}=2.16\pm 0.42\) m s\({}^{-1}\), and \(e_{d}=0.16\pm 0.15\). Activity analysis of both S-index and H\(\alpha\) data for this target recovers no significant signals. ### HD 69830 (GJ 302) HD 69830 (GJ 302, HR 3259, HIP 40693) is a well-studied star of type G8+V (Gray et al., 2006) at distance \(d=12.58\) pc (\(\varpi=79.4953\pm 0.0400\) mas; Gaia Collaboration et al., 2020), famous for hosting a planetary system of three Neptunes (Lovis et al., 2006) and a dusty debris disk (Beichman et al., 2005). The stellar rotation period has been estimated by Isaacson and Fischer (2010) to be 42 days, while Simpson et al. (2010) report \(35.1\pm 0.8\) d. With 1515 additional RV measurements since Lovis et al. 
(2006), we recover all three of the same planets with slightly different periods and amplitudes (see Table 9 for a full comparison). For HD 69830 b, we report \(P_{b}=8.66897\pm 0.00028\) d, \(K_{b}=3.4\pm 0.1\) m s\({}^{-1}\), and \(e=0.128\pm 0.028\). For HD 69830 c: \(P_{c}=31.6158\pm 0.0051\) d, \(K_{c}=2.6\pm 0.1\) m s\({}^{-1}\), and \(e_{c}=0.030\pm 0.027\), and for HD 69830 d: \(P_{d}=201.4\pm 0.4\) d, \(K_{d}=1.5\pm 0.1\) m s\({}^{-1}\), and \(e_{d}=0.080\pm 0.071\). The uncertainties on our derived orbital periods are slightly smaller than those recently reported by Rosenthal et al. (2021), which used data from _HIRES_ and the _APF_ but not the other instruments included here, and appear to be the most precise yet reported. Rosenthal et al. (2021) find two false positives in their analysis with periods of 201 and 382 days, which they attribute to systematic errors. However, the \(P=201\) d signal is in fact a detection of Lovis et al. (2006)'s planet d, and its inclusion in Rosenthal et al. (2021)'s false positive table (Table 7) is a typo. We do not recover the 382 day false positive reported by Rosenthal et al. (2021), but our inclusion of multiple instruments' data which are not included in Rosenthal et al. (2021) may dilute individual facilities' systematics. Additionally, we recover three significant signals in the S-index activity analysis. S-index signal I has \(P_{I}=3989\pm 190\) d (\(10.93\pm 0.52\) yr), which is similar to the Sun's own 11-year activity cycle (e.g. Hathaway, 2015). Lovis et al. (2011) reported a poorly constrained activity cycle period of \(P_{\rm cyc}=5865^{+\infty}_{-1235}\) d, which is \(1.46\sigma\) longer than our measured activity signal I, but they are likely detections of the same long-term magnetic activity cycle. We report S-index activity signal I as a magnetic activity cycle. S-index activity signal II has \(P_{II}=731\pm 31\) d (\(2.00\pm 0.08\) yr) which we note is almost twice the expected _HARPS_ yearly systematic. Attribution of this signal to a _HARPS_ systematic is further supported by the complete lack of corresponding signal in the _UCLES_ H\(\alpha\) analysis for this target, which not only returns no significant detections but shows almost no strength in the periodogram at this period. Finally, S-index activity signal III has \(P_{III}=2530\pm 180\) d (\(6.93\pm 0.49\) yr). This may be another magnetic activity cycle, though we note that there appears to be a minimum in the H\(\alpha\) periodogram at this period. We recommend further investigation by future work to understand this signal. Comparison of the star's rotation period and cycle periods with other nearby Sun-like stars in Fig. 9 of Boro Saikia et al. (2018) indicate that HD 69830 may be a rare case of a slow-rotating star with two detected activity cycles (\(10.93\pm 0.52\) yr and \(6.93\pm 0.49\) yr, both of which are near the "inactive branch" locus in \(P_{rot}\) vs. \(P_{cyc}\) space).15 Footnote 15: A similar example from Boro Saikia et al. (2018) is the K2V star HD 149661, with activity cycles of \(15.3\pm 0.4\) and \(7.7\pm 0.12\) yr (see also Saar & Brandenburg, 1999). ### HD 75732 (55 Cnc) 55 Cnc (\(\rho^{1}\) Cnc, GJ 324 A, HD 75732, HR 3522, HIP 43587, Copernicus) is a famous, K0IV-V (Gray et al., 2003) exoplanet host star at \(d=12.58\) pc (\(\varpi=79.4482\pm 0.0429\) mas; Gaia Collaboration et al., 2020). 55 Cnc also has a wide separation (\(85^{\prime\prime}\), \(\sim\)1060 AU ) low-mass stellar companion 55 Cnc B. Bourrier et al. 
(2018) presents an extensive review of the 55 Cancri system and its five exoplanets (see also e.g., Fischer et al., 2008; Endl et al., 2012; Fischer, 2018). Bourrier et al. (2018)'s analysis contains 1552 RV measurements from a combination of both first generation and more modern precise RV spectrographs spanning 25 years. In comparison, this work includes only modern precise RV data sets and contains 837 RV measurements taken over 18 years. Our analysis does, however, have longer _HIRES_ and _APF_ baselines than present in Bourrier et al. (2018). We recover signals corresponding to all five reported planets around 55 Cnc and report updates to the parameters of planets b, c, e, and f in Table 4. Our detection of the long period planet 55 Cnc d suffers from our more limited observational baseline and the months of time between the _HIRES_ -Pre and -Post data sets. The \(\Delta\)BIC periodogram peak suggests a 4421 day period, which is notably shorter than the P\({}_{d}=5574.2^{+93.8}_{-88.6}\) result from Bourrier et al. (2018). After fitting the full system, RVSearch arrives at a best-fit model of \(P_{d}=14951\pm 5100\) d, \(K_{d}=54\pm 5\) m s\({}^{-1}\), and \(e_{d}=0.515\pm 0.086\) for this long period signal. This overly long, poorly constrained result exhibits similar behavior to the other 'LPS' signals in this work due to the lack of full orbital phase coverage. Due to the period of the initial periodogram peak we attribute this signal to 55 Cnc d, but we do not report this as an updated orbital parameter result. Baluev (2015) report an activity cycle for 55 Cnc of period \(P_{cyc}=12.6^{+2.5}_{-1.0}\) yr, with a prediction that an activity minimum would occur around 2014-2015. This correlates with our S-index detection at \(P=3801\pm 130\) d (\(10.4\pm 0.36\) yr), so we report this signal as an update to the previously published activity cycle. The activity cycle prediction from Baluev (2015) also does not align with the signal reported in Rosenthal et al. (2021). ### HD 85512 (GJ 370) HD 85512 (GJ 370, HIP 48331) is a nearby, somewhat metal poor ([Fe/H] = -0.26) (Tsantaki et al., 2013), inactive (log \(R^{\prime}_{HK}\)= -4.976; Costes et al., 2021) K6V(k) star (Gray et al., 2006) at \(d=11.28\) pc (\(\varpi\) = 461.446 mas; Gaia Collaboration et al., 2020). The star has one previously reported planet at \(P_{b}=58.43\pm 0.13\) days (Pepe et al., 2011), recovered through radial velocity analysis of 185 _HARPS_ data points. Our RVSearch analysis, run on an additional 1,127 data points, does _not_ detect a 58-day signal, but rather a shorter period signal with parameters: \(P_{b}=51.195\pm 0.073\) d, \(K_{b}=0.438\pm 0.079\) m s\({}^{-1}\), and \(e_{b}=0.3\pm 0.19\). This change, from \(P_{b}=58.43\pm 0.13\) days in Pepe et al. (2011) to \(P_{b}=51.195\pm 0.073\) days in this study amounts to a \(48\sigma\) difference, well beyond any expected planetary orbit refinement. The reported amplitudes are also somewhat inconsistent, with Pepe et al. (2011) reporting \(K_{b}=0.769\pm 0.090\) m s\({}^{-1}\) a \(2.8\sigma\) difference from our RVSearch result. We note that there is a suggestive similarity between the reported Doppler periods from both Pepe et al. (2011) and our work and the predicted rotation period for the star. Based on HD 85512's chromospheric activity, Pepe et al. (2011) predicted \(P_{\rm rot}=47.13\pm 6.98\) d, and Lovis et al. (2011) predicted \(P_{\rm rot}=50.9\pm 7.0\) d - i.e. 
within \(0.58\sigma\) and \(0.04\sigma\) of the RV signal we measure (\(P_{b}=51.195\pm 0.073\) d). Pepe et al. (2011) searched for power in the \(\log R^{\prime}_{HK}\) activity indicator and the CCF line bisector (BIS) but did not detect any excess power consistent with stellar rotation between 50 and 100 days. Our analysis of the S-index measurements detects significant periods at 44, 45, and 51 days, leading us to suspect rotation as the origin of this signal. To investigate the consistency of these two periods over time, we generated a moving Bayes Factor Periodogram (BFP) using the AGATHA software suite (Feng et al., 2017). Since RV data are typically not measured in a uniform way, especially when combining results from different surveys, the consistency of a true Keplerian signal may depend on the sampling cadence even if the power is normalized. Moving periodograms can help to identify false positives if a signal is found to be inconsistent even during spans where data were taken at a high cadence and over a number of nights comparable with or longer than the signal period. The moving periodogram results for HD 85512 (Figure 9) show a prominent peak in the 58 day region when looking at the first half of the RV time series. This signal, however, bifurcates in roughly 2017 (JD \(\simeq 2458000\)) and splits into two weaker periodicities of 59 and 57 days. At approximately the same time a more prominent peak appears at the \(P\simeq 51\) day period identified by RVSearch, and becomes the most significant period for the duration of our observational baseline. As low-mass stars can manifest differential rotation (e.g. Donahue et al., 1996), the reported 58 d and 51 d periods could be due to active regions rotating at different latitudes. The trend in differential rotation among G/K dwarfs from the Mt. Wilson survey shown by Donahue et al. (1996) (their Fig. 3) shows that K dwarfs with \(P_{\rm rot}\simeq 50\) d could exhibit differential rotation at the \(\Delta P\simeq 9\) d level. Indeed the previously discussed K3V star HD 16160 (see Sec. 5.2) was the Mt. Wilson survey poster child for such extreme differential rotation, exhibiting seasonal mean rotation periods ranging from 42.2 to 51.5 d over five seasons. So it is certainly reasonable for a slow-rotating mid-K dwarf like HD 85512 to manifest differential rotation over \(P_{\rm rot}\simeq\) 44-58 d. Given the lack of periodic consistency for both signals across the RV time series, we assert that the reported companion HD 85512 b from Pepe et al. (2011) is not caused by a physical planet, but rather that the signal is due to the star's rotation. We adopt the notation of calling this HD 85512 RV Signal II (rather than HD 85512 b) in Table 4, since it is the second RV signal fit by the RVSearch algorithm.

Figure 9: Moving periodogram (MP) for the combined HD 85512 radial velocity data sets. The colors encode the scaled MP power, which is truncated to optimize the visualization of signals. The previously reported 58.43 day planet is visible as a dark red horizontal band on the left side of the MP, denoting its significance in the first half of the RV time series, but bifurcates and then disappears in later years. A second signal at \(P\simeq\) 51 d takes over in the latter half of the time series, but does not exhibit a tight enough period range to be seriously considered as a planet candidate.

RVSearch finds one additional significant signal in the RV data, with an initial \(\Delta\)BIC periodogram peak of 3891 days.
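Figure 9 itself was generated with AGATHA's Bayes Factor Periodogram; as a rough illustration of the same sliding-window idea, the sketch below uses a plain generalized Lomb-Scargle from astropy instead. The window length, step size, minimum points per window, and the `moving_periodogram` helper itself are illustrative assumptions, not part of RVSearch or AGATHA, and not the settings used for Figure 9.

```python
import numpy as np
from astropy.timeseries import LombScargle

def moving_periodogram(t, rv, rv_err, periods, window=2000.0, step=100.0, min_pts=20):
    """Lomb-Scargle power evaluated on a sliding window of an RV time series.

    t, rv, rv_err : numpy arrays (days, m/s, m/s); periods : trial periods in days.
    Returns (window centers, power array of shape [n_windows, n_periods]).
    """
    freq = 1.0 / np.asarray(periods, dtype=float)
    centers, power = [], []
    start = t.min()
    while start + window <= t.max():
        in_win = (t >= start) & (t < start + window)
        if in_win.sum() >= min_pts:  # skip sparsely sampled windows
            ls = LombScargle(t[in_win], rv[in_win], rv_err[in_win])
            power.append(ls.power(freq))
            centers.append(start + 0.5 * window)
        start += step
    return np.array(centers), np.array(power)

# Imaging the returned power over (window center, period) gives a map analogous to
# Figure 9: a genuine Keplerian should stay put across well-sampled windows, whereas
# rotation-driven power can wander (e.g. from ~58 d to ~51 d) as active regions evolve.
```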
The best fit period for RV Signal I after running the RV data through RVSearch's MCMC analysis is \(P_{I}=9646\pm 5500\) days, which we categorize as an 'LPS' due to the large error bars and period that stretches beyond the baseline of the combined RV data. Pepe et al. (2011) presented time series \(\log R^{\prime}_{HK}\) data for the star over a span of 2745 d, and although sinusoidal-looking activity variability was observed, they did not estimate a magnetic cycle period. By eye, interpreting the Pepe et al. (2011) \(\log R^{\prime}_{HK}\) data near JD 2453000 as a minimum and JD 2454500 as a maximum, one can infer \(P_{\rm cyc}\simeq 3000\) d. And indeed, when analyzing the majority of the Pepe et al. (2011) data set (175 of the 185 observations), Lovis et al. (2011) estimated an activity cycle of \(P_{\rm cyc}\,=3793^{+806}_{-566}\) d (\(10.38^{+2.21}_{-1.55}\) yr). Here we compile in total 1312 S-index measurements taken over a baseline of \(\sim\)7000 days. With this longer baseline, we see a subsequent activity minimum around JD 2456600 and a maximum around JD 2458800, and RVSearch identifies a significant S-index signal with \(P=4245\pm 52\) d (\(11.62\pm 0.14\) yr). This falls well within the 1\(\sigma\) uncertainties of the Lovis et al. (2011) activity cycle, and better constrains the period by a factor of 10. It is also consistent with the \(\Delta\)BIC periodogram peak in the RV data, and so we note that it is likely that the LPS in the RVs is caused by a magnetic activity cycle; however, more data are needed to definitively characterize the nature of this signal. Our S-index analysis returns a multitude of additional signals falling between the rotation period and magnetic activity cycle of HD 85512. To fully characterize the significance of all these signals, a much more in-depth study of the system is required than is covered within the scope of this work. The full list of signals can be found in the activity summary plot contained in the figure set in the online journal. We leave further analysis of the activity results from this target to future work.

### HD 102365 (GJ 442 A)

HD 102365 (GJ 442 A, HR 4523, HIP 57443) is a G2V star (Gray et al., 2006) at \(d=9.32\) pc (\(\varpi=107.3024\pm 0.0873\) mas; Gaia Collaboration et al., 2020). The star is just slightly cooler (\(T_{\rm eff}=5618\pm 14\) K) and of similar chromospheric activity to the Sun (\(\log R^{\prime}_{HK}=-4.94\); Meunier et al., 2022); however, the star is substantially more metal poor ([Fe/H]\(=-0.31\pm 0.02\)) compared to the Sun (Soubiran et al., 2022). Recent age estimates for the star make it ancient: \(11.3\pm 0.9\) Gyr (Nissen et al., 2020), \(12.46^{+1.04}_{-1.42}\) Gyr (Gaia Collaboration et al., 2021), \(13.1\pm 1.5\) Gyr (Casali et al., 2020). The star has a low-mass stellar companion of type M4V (Henry et al., 2002) at projected separation 22''.72 or 211 AU (Tian et al., 2020). The masses of the stars A and B are \(0.88^{+0.02}_{-0.03}\) \(M_{\odot}\) (Aguilera-Gomez et al., 2018) and 0.192 \(M_{\odot}\) (Mugrauer, 2019), respectively. Tinney et al. (2011) reported an exoplanet with orbital period \(P_{\rm orb}\)\(=122.1\pm 0.3\) d, \(K=2.40\pm 0.35\) m s\({}^{-1}\), and eccentricity \(e=0.34\pm 0.14\), corresponding to a Neptune-like predicted mass of \(16.0\pm 2.6\) \(M_{\oplus}\). No subsequent orbital solution has been reported over the past decade. We recover this same planet as Tinney et al. (2011), with updated orbital parameters thanks to our additional RV observations.
We report parameters for HD 102365 b of \(P=121.3\pm 0.25\) d, \(K=1.38\pm 0.23\) m s\({}^{-1}\), and \(e=0.28\pm 0.15\). Note that our RV amplitude is only 58% of that reported by Tinney et al. (2011), resulting in a significantly lower \(m\sin i\) of \(9.34^{+1.52}_{-1.50}\) \(M_{\oplus}\) (Table 4). We find no significant signals in analysis of the S-index data, but the H\(\alpha\) data analysis yields three periodic signals. The first has a period of approximately one year, so we assert that it is most likely caused by the seasonal availability of the star. The second H\(\alpha\) signal is fit to a periodogram peak of 7273.6 days, longer than the baseline of the _UCLES_ data. This peak is most likely the same long-period trend present in most of the _UCLES_ H\(\alpha\) data as described in Section 3.3, and so we disregard from consideration as astrophysical. The third H\(\alpha\) peak has period \(P_{III}=49.68\) d and \(e_{III}=0.08\). While the peak is sharp and well-defined, and could reasonably correspond to the stellar rotation period for this ancient star, it does not show any corroborating signal in the S-indices. We note that the MCMC fit to the EW\({}_{H\alpha}\) data settles on a solution for the longest signal that is nonsensical (\(P=36122\pm 64000\) days) and so we report the MAP best fit solution in the summary table. Even the MAP result suffers from the similarity between the long period signal and the total observational baseline of the _UCLES_ data, however, and the resulting fit lands on an orbital period of P=18549 days; clearly well beyond the scope of our data set. ### Hd 114613 (Gj 9432) HD 114613 (GJ 9432, GJ 501.2, HR 4979, HIP 64408) is a G4IV star (Gray et al., 2006) at \(d=20.46\) pc (\(\varpi=48.8691\pm 0.1058\) mas; Gaia Collaboration et al., 2020). Brewer et al. (2016) finds that the star is more massive (\(1.24\pm 0.17\) \(M_{\odot}\)) and metal-rich ([Fe/H] \(=0.17\)) than the Sun, and also cooler (\(T_{\rm eff}\,=5641\) K) and larger (\(2.14\pm 0.06\) \(R_{\odot}\)) with lower surface gravity (\(\log g=\)3.87). Lovis et al. (2011) report the star to have very low chromospheric activity (\(\log R^{\prime}_{HK}\) = -5.509) with a magnetic activity cycle of \(P_{\rm cyc}=897^{+61}_{-53}\) d (\(2.46^{+0.15}_{-0.17}\) yr). Wittenmyer et al. (2014) reported a planet with \(P_{\rm b}=3827\pm 105\) days and a semi-amplitude of \(K_{\rm b}=5.4\pm 0.4\) m s\({}^{-1}\). Our search of the updated RV data set, which contains 980 additional velocities from _HARPS_, _HIRES_, and _PFS_, does not recover this planet but rather reveals a much longer period signal at \(P_{I}=6622\pm 270\) d (\(18.13\pm 0.74\) yr) with a semi-amplitude of \(K_{I}=7.29\pm 0.44\) m s\({}^{-1}\), suggestive of a planet candidate with \(m\sin i\) \(\simeq\) 0.74 \(M_{\rm J}\). However, _both_ the S-index data and the EW\({}_{H\alpha}\) periodograms identify significant signals at similar periods to the RV periodogram, with the S-index peak appearing at 7563 days and the EW\({}_{H\alpha}\) peak at 7653 days. Both of these signals struggle with similarity in duration to the baselines of their respective data sets, however, as the lack of a single activity indicator that covers the full 8500 day span of radial velocities prevents a clean resolution of the activity signals. Instead, both the S-index signal and the EW\({}_{H\alpha}\) signal are pushed to much larger and nonsensical values: 81942\(\pm\)190000 days for the S-index data, and 29213\(\pm\)41000 days for the EW\({}_{H\alpha}\) data. 
Because of this, we opt to disregard the MCMC analyses and proceed with the signals evident in the initial \(\Delta\)BIC periodograms. We take the agreement of these two activity indicator periodogram peaks and their overlap with the RV signal (which is well resolved) to be sufficient evidence that all three signals are manifestations of the same underlying magnetic cycle. Given this overlap, and the lack of a re-detection of the original 3827 day signal in the radial velocities, we assert that the previously claimed HD 114613 b from Wittenmyer et al. (2014) is not in fact a planet, but may be attributed to a long period magnetic cycle. RVSearch detects two additional RV signals with period and semi-amplitude pairings of \(P_{II}=73.14\pm 0.06\) d, \(K_{II}=2.54\pm 0.48\) m s\({}^{-1}\) and \(P_{III}=1954\pm 39\) d, \(K_{III}=2.98\pm 0.52\) m s\({}^{-1}\). There are no corresponding signals in the S-index or EW\({}_{H\alpha}\) periodograms, and so we label each of these signals as SRC. As a subgiant star, we would expect HD 114613 to exhibit higher levels of RV jitter (Luhn et al., 2020), which may drive some of the scatter seen in the RV summary figure. ### Hd 115617 (61 Vir) 61 Vir (GJ 506, HD 115617, HR 5019, HIP 64924) is a G7V star (Gray et al., 2006) at \(d=8.53\) pc (\(\varpi=117.1726\pm 0.1456\) mas; Gaia Collaboration et al., 2020). The star is slightly less active than the Sun, with reported \(\log R^{\prime}_{HK}\) values between -4.93 and -5.03 (e.g. Baliunas et al., 1996; Hall et al., 2007; Isaacson & Fischer, 2010; Vogt et al., 2010; Wittenmyer et al., 2006; Lovis et al., 2011; Brewer et al., 2016; Meunier et al., 2017). Baliunas et al. (1996) report an average rotation period of \(P_{\rm rot}\) = 29 d based on analysis of the Mt. Wilson survey data, while Lovis et al. (2011) reported a chromospheric activity cycle of \(P_{\rm cyc}\) = \(1548^{+266}_{-811}\) d and a predicted rotation period of \(P_{\rm rot}\) = \(33.9\pm 3.6\) d. According to Vogt et al. (2010), 61 Vir is a three-planet system with \(P_{b}\) = 4.21 days, \(P_{c}=38.021\pm 0.034\) days, and \(P_{d}=123.01\pm 0.55\) days. We recover the same three planets as Vogt et al. (2010) with slightly updated parameters: \(P_{b}=4.21498\pm 0.00014\) d, \(K_{b}=2.47\pm 0.11\) m s\({}^{-1}\), \(e_{b}=0.033\pm 0.029\); \(P_{c}=38.079\pm 0.008\) d, \(K_{c}=3.56\pm 0.12\) m s\({}^{-1}\), \(e_{c}=0.026\pm 0.023\); \(P_{d}=123.2\pm 0.2\) d, \(K_{d}=1.47\pm 0.17\) m s\({}^{-1}\), \(e_{d}=0.15\pm 0.11\). We report these parameters as an improvement to the previously reported ones, due to a additional 2473 RV observations since Vogt et al. (2010). After analysis of residual signals, Rosenthal et al. (2021) report the 123-day period signal as a yearly alias. We do not see similar evidence in our data as they report, after examining an additional ten years of data. We therefore report this signal as an update to the currently confirmed planet d. We recover one additional RV signal, RV signal I, with parameters \(P=20565\pm 21000\) d, \(K=2.23\pm 0.46\) m s\({}^{-1}\), and \(e=0.97\pm 0.024\). The original periodogram peak being fit by this Keplerian is at 5910.9 days, which is close to the observation baseline, so the fit is poorly constrained. The presence of the peak is evidence of a long-period trend which is not yet well defined, so we classify this signal as LPS. Analysis of the S-index activity does not yield any detections, but we note that there is significant strength in the S-index periodogram at 3995 days. 
This may be indicative of an approximately 11 year magnetic activity cycle, but there is insufficient data to push this signal past the false alarm probability threshold. Continued study of this target would allow for further understanding of potential origins of this signal. Analysis of H\(\alpha\) data returns three detections. The first of these has \(P_{I}=346.3\pm 1.9\) d, which we attribute to observation cadence effects. The second and third signals have \(P_{II}=24.63\pm 0.02\) d and \(P_{III}=44.932\pm 0.069\) d. The observed H\(\alpha\) periodicities II and III are bit shorter and longer, respectively, than the predicted rotation period (\(P_{\rm rot}\) = \(33.9\pm 3.6\) d) from Lovis et al. (2011). H\(\alpha\) signal II is close to the rotation period reported by Baliunas et al. (1996), though it is shorter by several days. Because the Lovis et al. (2011) rotation period is predicted, and the Baliunas et al. (1996) \(P_{\rm rot}\) is 26 years old, we assert that it is possible that H\(\alpha\) signal II is due to stellar rotation. H\(\alpha\) signal III does not correlate with any periodicity in Ca H & K data nor RV data, and is unlikely to be caused by differential rotation, as its period is \(\sim\)20 d greater than signal II. However, we note that it is not impossible that H\(\alpha\) signal III is caused by differential rotation. Quantifying the differential rotation in terms of \(\alpha=|P_{2}-P_{1}|/P_{max}=0.45\), this would suggest surface shear approximately twice that of the Sun (\(\alpha_{\odot}=0.2\); Reinhold et al., 2013). The observed differential rotation trend for nearby solar-type stars from Donahue et al. (1996) predicts that for a mean rotation period of \(\sim\)35 days, one would predict observing \(\Delta P\simeq 7.6\) d. However, the data from Donahue et al. (1996) also show that there are cases for rotators with \(\sim\)month-long periods of having \(\Delta P\) as high as \(\sim\)18 d! Because of this, we believe it is somewhat plausible that H\(\alpha\) signals II and III could be hinting at strong differential rotation, but further observations would be needed to test this idea further. ### HD 136352 (\(\nu^{2}\) Lup) \(\nu^{2}\) Lup (HD 136352, GJ 582, HR 5699, HIP 75181) is a nearby G2-V (Gray et al., 2006) star at \(d=14.74\) pc (\(\varpi=67.8467\pm 0.0601\) mas; Gaia Collaboration et al., 2020). The star is similar to the Sun in temperature and gravity, but considerably more metal poor: \(T_{\rm eff}=5664\pm 14\) K, \(\log g=4.39\pm 0.02\), \([Fe/H]=-0.34\pm 0.01\)(Sousa et al., 2008). Given its high velocity, low metallicity, and \(\alpha\)-element enhancement (\([\alpha/{\rm Fe}]\simeq 0.17\)) (Soubiran and Girard, 2005), the star is widely classified as a thick disk star (e.g. Ibukiyama and Arimoto, 2002; Soubiran and Girard, 2005; Adibekyan et al., 2012; Hinkel et al., 2017; Kane et al., 2020)16. Lovis et al. (2011) report a magnetic activity cycle of \(P_{\rm cyc}=104^{+81}_{-97}\) d (\(2.85^{+1.59}_{-0.27}\) yr), with a predicted rotation period of \(P_{\rm rot}=25.0\pm 3.1\) d based on the mean activity level (\(\log R^{\prime}_{HK}=\) -4.986)17. Udry et al. (2019) reported three planet signals in _HARPS_ radial velocity observations for HD 136352. Kane et al. (2020) present a detailed study of \(\nu^{2}\) Lupi, reporting _TESS_ observations that planets \(b\) and \(c\) were observed to be transiting, with their derived radii and densities consistent with being on either side of the planet radius gap. 
Footnote 16: The only known star brighter in \(V\) and \(G\) band than HD 136352 with transiting exoplanets in the NASA Exoplanet Archive is HD 219134, which is an \(\alpha\)-poor thin disk star of approximately solar metallicity (Mishenina et al., 2004; Ramirez et al., 2012). The only known star brighter in \(K_{s}\) band than HD 136352 is 55 Cnc, which is a metal-rich thin disk star (Mishenina et al., 2004; Ramirez et al., 2012). Hence, HD 136352 (\(\nu^{2}\) Lup) appears to be the brightness - either in visible or near-IR bands – thick disk star known to have transiting exoplanets. Footnote 17: Independently, Isaacson and Fischer (2010) predicts the rotation period of HD 136352 to be 23 d based on \(\log R^{\prime}_{HK}\). \(\nu^{2}\) Lup b is a \(4.62^{+0.45}_{-0.44}\,M_{\oplus}\)\(1.482^{+0.058}_{-0.056}\,R_{\oplus}\) planet with period \(P=11.57779^{+0.00091}_{-0.0011}\) d - likely the stripped core of a sub-Neptune (now a "super-Earth"), and c is \(11.29^{+0.73}_{-0.69}\,M_{\oplus}\), \(2.608^{+0.078}_{-0.077}\,R_{\oplus}\) exoplanet with period \(P=27.5909^{+0.0028}_{-0.0031}\) d - a "sub-Neptune" (Kane et al., 2020). \(\nu^{2}\) Lup d is a planet with radius \(2.56\pm 0.09\,R_{\oplus}\) and mass \(8.82\pm 0.94\,M_{\oplus}\) with orbital period \(P=107.245\) d (Delrez et al., 2021). We recover all three of these planets, but defer to Kane et al. (2020) and Delrez et al. (2021) for the most accurate parameters. We recover one additional RV signal with \(P=121.66\pm 0.26\) d, \(K=0.68\pm 0.13\), \(e=0.22\pm 0.19\). Udry et al. (2019) recovered a similar signal in their RV analysis, with a period of 123 d, and discarded it as a three-planet fit was favored over four in their analysis. Our search results find that the signal just crosses the threshold for being considered a valid additional signal, however the significance of the signal in the running periodogram wanes notably as more and more RV data points are added, which suggests a non-Keplerian origin. Additionally, a 121-day planet would be dynamically inconsistent with the confirmed 107-day planet, ruling this out as a planetary signal, and leaving only the possibility of an activity signal. Because of this we classify the fourth signal as being due to stellar activity. We note, also, that Rosenthal et al. (2021) detect a false positive signal with a period of 244 days, which is quite nearly double that of the fourth signal detected here and in Udry et al. (2019). The RVSearchanalysis of the S-index data returns no significant detections. Despite our large increase in observation timeline since Lovis et al. (2011), from 2543 days to 5771 days, we find no evidence for the \(P_{\rm cyc}\) 2.85 yr activity cycle reported in that work. Analysis of the EW\({}_{H\alpha}\) data returns two significant periodicities. Signal I has period \(P=364.7\) days in the periodogram, which is clearly close to one year and we suspect is due to sampling effects. The signal also has high eccentricity, but there is a significant gap in the orbital phase coverage, which RVSearch addresses by using a high eccentricity solution to try and fit a Keplerian curve to this jump. Signal II is detected based on a periodogram peak close to 6000 days, which is approximately the observation baseline for the _UCLES_ data. The MCMC Keplerian fit is poorly constrained because of this fact, so we attribute this signal to the long-period _UCLES_ trend and disregard it as significant. 
When run through RVSearch/'s MCMC analysis, the long period signal produces nonsensical error bars of \(P=19924\pm 33000\). We therefore choose to report the best fit MAP orbital solution in the EW\({}_{H\alpha}\) summary table. ### HD 160346 (GJ 688) HD 160346 (GJ 688, HIP 86400) is a nearby K2.5V (Gray et al., 2003) at \(d\) = 11.00 pc (\(\varpi\) = 90.91 \(\pm\) 0.67 mas; van Leeuwen, 2007). The star has published chromospheric activity estimates ranging from \(\log R^{\prime}_{HK}\) = -4.766 (Meunier et al., 2017) to -4.85 (Gondoin, 2020) - comparable to the active Sun. Analysis of the Mt. Wilson Ca II H & K survey data by Donahue et al. (1996) detected seasonal rotation periods ranging from \(P_{\rm rot}\) = 35.4 to 37.8 d, with an average over 5 seasons of \(P_{\rm rot}\) = 36.4 d. Boro Saikia et al. (2018) report a Ca II H & K activity cycle of \(P_{\rm cyc}\) = 7.19 \(\pm\) 0.04 yr. GJ 688 is a SB1 with three published orbits listed in the SB9 catalog (Pourbaix et al., 2004), with orbits by Tokovinin (1991), Katoh et al. (2013), and Halbwachs et al. (2018). The latter provides: \(P=83.7140\pm 0.0120\) d, \(e=0.2100\pm 0.0120\), \(K=5644\pm 57\) m s\({}^{-1}\). Our updated RV analysis produces best-fit orbital parameters of \(P=83.7286\pm 0.0005\) d, \(e=0.2048\pm 0.00033\), and \(K=5690.3\pm 2.3\) m s\({}^{-1}\), shrinking the error bars on all three parameters by over an order of magnitude. The S-index analysis detects a number of significant signals, the first two of which have periods of \(P=2975\pm 600\) d and \(392.6\pm 3.2\) d. Comparison of their RVSearch periodograms, however, suggests that one of these signals is likely an alias of the other. The longer period signal seems more likely to be due to a decade-long magnetic cycle much like the Sun's, however we caution that our analysis does not include the detailed phase analysis necessary to identify which of these signals is the true manifestation of the star's activity. The additional two S-index signals detected by RVSearch have much shorter periods of \(P=7.9567\pm 0.0055\) d and \(P=2.54223\pm 0.00068\) d. While these are both too short to be due to HD 160346's rotation, there is a possibility that one of the signals could be due to flux contributions from a fast rotating, low mass companion. But given the relative sparsity of the data, we do not have sufficient evidence to say anything truly definitive about their origins. ### HD 160691 (\(\mu\) Ara) \(\mu\) Ara (HD 160691, GJ 691, HR 6585, HIP 86796, Cervantes) is a G3IV-V (Gray et al., 2006) star at \(d=15.6\) pc (\(\varpi=64.0853\pm 0.0904\) mas; Gaia Collaboration et al., 2020), with 4 previously reported planets (Pepe et al., 2007). The star is metal-rich ([Fe/H] = \(0.27\pm 0.05\)) and magnetically inactive (\(\log R^{\prime}_{HK}\) = -5.11; Gomes da Silva et al., 2021), with slightly lower surface gravity than typical G dwarfs (\(\log g=4.20\pm 0.02\); Ramirez et al., 2013). Combining asteroseismic observations with evolutionary models, Soriano & Vauclair (2010) find that \(\mu\) Ara is most likely near the beginning of its subgiant branch phase, with a mass of \(1.10\pm 0.02\,M_{\odot}\) and age of \(6.34\pm 0.80\) Gyr. The most recent parameters for this system are from Benedict et al. (2022). 
We recover the same four signals with minor revisions but generally good agreement on the best fit values, and significant improvements to all but one of the parameters' uncertainties: \(P_{b}=644.93\pm 0.28\) d, \(K_{b}=35.7\pm 0.2\) m s\({}^{-1}\), \(e_{b}=0.0499\pm 0.0082\); \(P_{c}=9.6394\pm 0.0008\) d, \(K_{c}=2.8\pm 0.2\) m s\({}^{-1}\), \(e_{c}=0.132\pm 0.069\); \(P_{d}=308.4\pm 0.23\) d, \(K_{d}=12.7\pm 0.3\) m s\({}^{-1}\), \(e_{d}=0.074\pm 0.016\); \(P_{e}=4035\pm 21\) d, \(K_{e}=22.25\pm 0.24\) m s\({}^{-1}\), \(e_{e}=0.026\pm 0.013\). We detect no significant S-index activity signals, but do find two signals in the H\(\alpha\) data. The first of the H\(\alpha\) signals is fit from a \(\Delta\)BIC periodogram peak at 5293.7 d, which is close to the _UCLES_ observation baseline, so we attribute this to the long-period _UCLES_ systematic present in all the H\(\alpha\) data (see Section 3.3 for further discussion). H\(\alpha\) Signal II has a period of \(P=362.4\pm 1.6\) d, close to one year. This signal is likely caused by the star's seasonal availability and the observing cadence, so we disregard it as a significant detection for this system. ### HD 192310 (GJ 785) HD 192310 (HR 7722, GJ 785, HIP 99825) is a K2+V star (Gray et al., 2006) of roughly solar metallicity ([Fe/H] = \(-0.03\pm 0.04\)) (Tsantaki et al., 2013) at \(d=8.81\) pc (\(\varpi=113.4872\pm 0.0516\) mas; Gaia Collaboration et al., 2020), and with two previously reported planets (Pepe et al., 2011). Lovis et al. (2011) detect a magnetic activity cycle of \(P_{\rm cyc}\) = \(3792^{+806}_{-566}\) d and predict the rotation period to be \(P_{\rm rot}\) = \(43.7\pm 4.9\) d based on the their estimate of the star's mean chromospheric activity (\(\log R^{\prime}_{HK}\) = -4.996). Combining the star's mean chromospheric activity levels (\(\log R^{\prime}_{HK}\) = -4.993) recently reported by Meunier et al. (2017), with its color (\(B-V\) = 0.884; Mermilliod, 2006) and the rotation-activity relations from Mamajek & Hillenbrand (2008), one predicts the star's rotation to be approximately \(P_{\rm rot}\) \(\simeq\) 48 d. We detect two RV signals that appear to correspond to the previously reported planets b and c, with \(P_{b}=74.278\pm 0.035\) d, \(K_{b}=2.484\pm 0.098\) m s\({}^{-1}\), \(e_{b}=0.032\pm 0.027\), and \(P_{c}=549.1\pm 4.5\) d, \(K_{c}=1.3\pm 0.1\) m s\({}^{-1}\), \(e_{c}=0.077\pm 0.073\). These appear to be the most accurate periods yet derived, with Rosenthal et al. (2021) recently reporting \(P_{b}=74.062\pm 0.085\) d, and Pepe et al. (2011) reporting \(P_{c}=525.8\pm 9.2\) d. And our derived ampli tude for \(b\) is three times more precise than that derived by Rosenthal et al. (2021) (\(2.49^{+0.35}_{-0.33}\) m s\({}^{-1}\)), thanks to the addition of the _HARPS_ and _UCLES_ data. We note that our amplitude for planet \(c\) is less than half that reported by Pepe et al. (2011) (\(2.27\pm 0.28\) m s\({}^{-1}\)). Given the increase in observational baseline and the number of instruments contributing data, and the additional signals resolved in the RV data, some shifting in the semi-amplitude is expected. However, as this is an almost \(3.5\sigma\) offset from the Pepe et al. (2011) value, a more thorough analysis that treats the RV and activity indicator data simultaneously is well warranted. RVSearch also identifies four additional RV signals, which we designate as Signals I, II, III, and IV. Signal I has \(P=3836\pm 240\) d, \(K=1.48\pm 0.11\) m s\({}^{-1}\), and \(e=0.34\pm 0.15\). 
We suspect that Signal I is caused by activity due to a corresponding peak in the S-index data at \(P=3817\pm 60\) d (\(10.45\pm 0.16\) yr), which matches the magnetic activity cycle period (\(P_{\rm cyc}=3792^{+806}_{-566}\) d \(=10.38^{+2.21}_{-1.55}\) yr) reported by Lovis et al. (2011). We therefore attribute it to a magnetic activity cycle. RV signals II and III have similar periods: \(P=43.614\pm 0.023\) d for signal II and \(P=39.509\pm 0.059\) d for signal III. Analysis of S-index data using RVSearch returns various signals with periods between 35 and 50 days. Recalling that Lovis et al. (2011) predicted the star's rotation period to be \(P_{\rm rot}\) = \(43.7\pm 4.9\) d, we attribute these signals II and III to differential rotation of the star, as the appearance of active regions at various latitudes over the course of the star's magnetic activity cycle could lead to a wide range of measured periods. Finally, RV signal IV has parameters \(P=24.559\pm 0.016\) d, \(K=0.6\pm 0.1\) m s\({}^{-1}\), and \(e=0.16\pm 0.12\). The periodogram peak for this signal is sharp and well defined, and the RV fit is very well constrained. The period is sufficiently distinct from the rotation period that it is unlikely to be a signature of differential rotation. We therefore report RV Signal IV as a Candidate in Table 4, and recommend further investigation of this signal to determine whether it is planetary in origin. In addition to the differential rotation signals, the RVSearch S-index analysis returns three well-defined signals with \(P=345.34\pm 0.048\) d, \(P=432.6\pm 3.4\) d, and \(P=133.38\pm 0.043\) d. As these are logarithmically almost half way between the star's rotation period and its magnetic activity cycle, no obvious activity-based explanation exists for these signals. We note that the eccentricities of these signals (\(e=0.918\), \(0.7\), and \(0.78\), respectively) are significantly higher than those of the other S-index detections (\(e=0.26\) for the magnetic activity cycle, and \(0.14\) on average for the four rotation-associated periods). This could suggest that these intermediate periods are being driven by small amounts of outlier points that were not far enough removed from the mean to be rejected by our \(5\sigma\) outlier clipping. Rosenthal et al. (2021) detect a false positive signal at \(P=1630^{+51}_{-53}\) d and \(K=1.95^{+0.49}_{-0.36}\) m s\({}^{-1}\), which they attribute to a long-period magnetic activity cycle. We do not recover this same signal. The EW\({}_{H\alpha}\) data for this star produce two significant signals when analyzed with RVSearch. The first is a very long period signal, with an initial periodogram peak of 30,840 days (well beyond the _UCLES_ baseline) and a best-fit period of \(P=62641\pm 110000\) days. Given the extreme nature of both the signal duration and the corresponding uncertainty on the period, we report the best fit MAP solution in place of the MCMC solution. That gives \(P=13627.7\) d for the longer period signal, and \(P=363.678\) d for the second signal, which seems to be driven by seasonal observing impacts that leave \(\sim\)1/3 of the orbital phase with little to no data, and therefore make it easier for high eccentricity signals to be fit to the data.

### HD 209100 (\(\epsilon\) Ind A)

\(\epsilon\) Indi A (HD 209100, GJ 845, HR 8387, HIP 108870) is a well-studied K4V(k) star (Gray et al., 2006) at \(d\) = 3.64 pc (\(\varpi\) = 274.8431 \(\pm\) 0.0956 mas; Gaia Collaboration et al., 2020). Lovis et al.
(2011) report a magnetic activity cycle of \(P_{\rm cyc}=1719^{+217}_{-315}\) d (\(4.71^{+0.59}_{-0.86}\)), and predicted the star's rotation period to be \(P_{\rm rot}\) = \(37.6\pm 6.2\) d based on the star's chromospheric activity (\(\log R^{\prime}_{HK}\) = -4.806). It has one reported planet, a cold Jupiter-mass planet with a period of \(P_{b}\) = 45.2 yr (Feng et al., 2019). We detect only a very poorly constrained, long-period signal in our RV data, with a \(\Delta\)BIC periodogram peak of 13138.7 days (35.97 yr), and a best fit MCMC solution of \(P=13626\pm 110000\) d (\(37.3\pm 30.1\) yr). Due to the signal clearly stretching beyond the bounds of our RV baseline, we note this as an 'LPS' in our detections table. Feng et al. (2019), however, analyses a longer RV baseline than used here, as it includes data from previous generation RV instruments such as the Coude Echelle Spectrograph Long Camera and Very Long Camera (see Zechmeister et al., 2013), and the Ultraviolet and Visible Spectrometer (UVES, Dekker et al., 2000) and utilizes a combination of RVs and Hipparcos and Gaia astrometry to constrain the planet's orbit. It is therefore not surprising that we do not resolve the Feng et al. (2019) planet in our RV time series, and we defer to their publication for orbital parameters on \(\epsilon\) Indi A b. In our S-index activity analysis for HD 209100, we detect a signal with period \(P=2063\pm 160\) d (\(5.65\pm 0.44\) yr) which matches the Lovis et al. (2011) magnetic activ ity cycle to within \(1.5\sigma\). We report this as an updated fit to the previously published activity cycle. We detect an additional, much shorter, S-index signal with \(P=32.87\pm 0.067\) d which matches the Lovis et al. (2011) predicted rotation period to within \(1\sigma\). We therefore take this to be an updated, and better constrained, measurement of the star's rotation period. ## 6 Targets with new signals Here we present results from targets whose analyses returned signals that have not previously been published. Rather than the lettering system which is used to identify planets and companions, we refer to our new detections with Roman numerals for discussion purposes. Signals are interpreted and reported in Table 4. ### _Hd 1581 (\(\zeta\) Tuc)_ \(\zeta\) Tuc (HD 1581, GJ 17, HR 77, HIP 1599) is an F9.5V type star (Gray et al., 2006) at distance \(d=8.59\pm 0.05\) pc (\(\varpi=116.46\pm 0.16\) mas; van Leeuwen, 2007). The star is slightly hotter (\(T_{\rm eff}\) = \(5932\pm 12\) K), and more metal poor ([Fe/H]\(=-0.21\pm 0.01\)) than the Sun, but with similar gravity (\(\log g\) = \(4.43\pm 0.03\)) (Soubiran et al., 2022). Lovis et al. (2011) reports a magnetic activity cycle of \(P_{\rm cyc}\) = \(1018^{+51}_{-47}\) d (\(2.79^{+0.14}_{-0.13}\)) based on \(127\log R^{\prime}_{HK}\) measurements over a 2625 d span, and using the mean activity level (\(\log R^{\prime}_{HK}\) = \(-4.954\)) they predict the rotation period to be \(P_{\rm rot}\) = \(16.7\pm 2.6\) d. There are no confirmed exoplanets for this system. We recover three RV signals with RVSearch. RV signal I has parameters \(P=635.0\pm 4.4\) d, \(K=0.89\pm 0.14\), and \(e=0.55\pm 0.13\). This signal may correspond to magnetic activity, as the period is long, and the peak in the periodogram is somewhat broad and accompanied closely by several other peaks. The next strongest peak, located at \(P\simeq 860\) days sits directly on top of the yearly alias for the detected 635 day signal, denoted by the red dashed line in the RVSearch summary plot. 
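That the \(\sim\)860 day peak is indeed the yearly alias of the 635 day signal follows from the usual alias relation \(f_{\rm alias}=|f\pm 1/365.25~{\rm d^{-1}}|\); a minimal sketch (plain Python, not part of the RVSearch pipeline, with `yearly_alias_periods` a hypothetical helper):

```python
def yearly_alias_periods(period_days, f_year=1.0 / 365.25):
    """First-order yearly aliases (in days) of a signal with the given period."""
    f = 1.0 / period_days
    return sorted(1.0 / abs(f + sign * f_year) for sign in (+1.0, -1.0))

print(yearly_alias_periods(635.0))  # -> [~232 d, ~860 d]; the ~860 d peak is the yearly alias
```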
Examination of the window function for this data set reveals a dramatic yearly period in the periodogram, further supporting the concept that one of these signals is indeed the yearly alias of the other. Given the similarity of their periodogram peaks, however, identifying which signal is the true Keplerian would require a full analysis of the phases of the peaks in the window function as seen in Dawson & Fabrycky (2010). This is beyond the scope of our current effort, but we encourage future investigation into the true nature of these two signals. For the time being, as there are no correlated peaks in the activity periodogram for the RVSearch-identified \(P=635.0\) day peak, report only this signal in our summary table and classify it as an SRC. The two remaining signals (II and III) are more well-defined periodogram peaks. Signal II has \(P=15.653\pm 0.005\) d, \(K=0.662\pm 0.096\), and \(e=0.106\pm 0.097\). This aligns with the rotation period predicted by Lovis et al. (2011). With our increased observation baseline of 3600 RV measurements, we have the ability to measure the rotation period for this star much more accurately than previous works. We report RV signal II as a measurement of the rotation period for HD 1581. Signal III has parameters \(P=29.4661\pm 0.0041\) d, \(K=1.6\pm 1.1\) m s\({}^{-1}\), and \(e=0.89\pm 0.12\). The period of this detection is almost exactly twice the rotation period and we therefore suspect it has stellar and not planetary origins. A periodogram analysis of 456 measurements (spanning 6157 days) of the H\(\alpha\) equivalent width measurements taken with _UCLES_ yielded a signal just below the detection threshold of False Alarm Probability 0.001 at \(P=29.7\) days, which further points to the 29-day RV signal being caused by activity. We therefore classify RV Signal III as Activity in Table 4. Analysis of the S-index activity data returns no significant detections. ### _Hd 2151 (\(\beta\) Hyi)_ \(\beta\) Hyi (GJ 19, HD 2151, HR 98, HIP 2021) is a bright (\(V=2.82\); ESA 1997) G0V star (Gray et al., 2006) at distance \(d=7.46\) pc (\(\varpi=134.07\pm 0.11\) van Leeuwen, 2007). Asteroseismic analysis of \(\beta\) Hyi by Brandao et al. (2011) yields a mass of \(1.08\pm 0.03\) \(M_{\odot}\) and age of \(6.40\pm 0.56\) Gyr. We recover one RV signal with \(P=5365\pm 1400\) d, \(K=3.21\pm 0.58\) m s\({}^{-1}\), and \(e=0.54\pm 0.16\). We suspect that the signal is activity-induced due to the lack of consistent growth in the running periodogram which quantifies the signal's power as a function of number of RV data points included. Analysis of H\(\alpha\) measurements from _UCLES_ shows a long-period trend just under the detection threshold at \(P=4957.8\) d. We regard this signal with some skepticism as it is close to the observation baseline for the spectrograph (see discussion in Section 3.3) but note that it supports a conclusion that this signal may be caused by activity. S-index data is too sparse to make any significant detections, so we cannot completely corroborate that suspicion and report the signal as SRC rather than activity. A future, more in-depth study of stellar activity is recommended to completely characterize this signal. Finally, we note that the RV residuals periodogram contains a well-defined peak at 73.3 days, which may correspond to rotation, as this is a slightly evolved star. 
### HD 20766 (\(\zeta^{1}\) Ret)
\(\zeta^{1}\) Ret (HD 20766, GJ 136, HR 1006, HIP 15330) was classified as G2.5V H\(\delta\)1 by Keenan and McNeil (1989) and lies at \(d=12.04\) pc (\(\varpi=83.0240\pm 0.0438\) mas; Gaia Collaboration et al., 2020). \(\zeta^{1}\) Ret is the secondary of a wide binary18 (\(309\arcsec\)) with \(\zeta^{2}\) Ret (HD 20807; see Sec. 6.4).). A few conflicting estimates of the rotation period have been reported for this star: \(P_{\rm rot}\!<\!12.1\) d (Cincunegui et al., 2007), \(P_{\rm rot}\!=14.81\) d (Oelkers et al., 2018), and \(P_{\rm rot}/{\rm sin}i=15.9\) d (Ammler-von Eiff and Reiners, 2012). Recently, Flores et al. (2021) found evidence for an activity cycle of \(P_{\rm cyc}\!=\!1527\pm 43\) d (\(4.18\pm 0.12\) yr). Footnote 18: From the Gaia DR3 astrometry, Kervella et al. (2022) report that \(\zeta^{2}\) and \(\zeta^{1}\) Ret have projected separation \(309\arcsec\).11 (\(3720\) au) and \(V_{tan}\) that agree within \(0.40\pm 0.01\) km s\({}^{-1}\), with predicted escape velocity \(v_{esc}=0.91\) km s\({}^{-1}\). Using Gaia DR3, we estimate that the stars are co-distant to \(\Delta d\,=1095\pm 2240\) au. The difference in the mean radial velocities reported by Soubiran et al. (2018) (\(11.953\pm 0.0031\) km s\({}^{-1}\)for \(\zeta^{2}\) Ret and \(12.488\pm 0.0019\) km s\({}^{-1}\)for \(\zeta^{1}\) Ret) is \(\Delta v_{R}\!=\!0.535\pm 0.004\) km s\({}^{-1}\). Ignoring possible differences due to gravitational redshift and convective blueshift as negligible, since the stars are nearly twins, we interpret the velocity offset as true orbital motion. The total relative orbital motion between \(\zeta^{2}\) and \(\zeta^{1}\) is then only \(v_{orb}=662\pm 9\) m s\({}^{-1}\), and with current 3D separation of \(s=4100^{+1000}_{-3500}\) (\(68\%\) CL). The system is consistent with being a bound binary with a \(\simeq 4500\) au and \(P\simeq 220\) kyr, although further analysis would be needed to constrain the orbit further. We report one significant RV detection. The periodogram peak occurs at \(P=5643.5\) d, which is fairly close to the observation baseline of approximately 6000 days for this target. As discussed in Section 4.1, the RVSearch MCMC fitting for this signal yields nonphysical results (\(P=10218\pm 10000\) days, \(K=12\pm 2\) m s\({}^{-1}\), \(e=0.82\pm 0.11\)), so we record the periodogram peak as the best estimate of this signal and classify it as LPS. The turnaround we see in the center of the RV time series was also reported by Zechmeister et al. (2013) based on the _HARPS_ data. We do not recover this signal in our S-index analysis, but the non-detection is unsurprising given that the RV signal is driven by _UCLES_ data, which lack S-indices. We report one S-index activity detection, which encounters a similar period-to-baseline fitting issue as the detection in the RVs; our observation baseline is just over 1200 days, while the detection peaks at \(P=1406\) d (\(3.85\) yr) in the \(\Delta\)BIC periodogram. This appears to correspond to the \(4.18\pm 0.12\) yr activity cycle reported by Flores et al. (2021). In this case the MCMC fit cannot even reach a final fit solution, and so instead we report just the MAP period fit in the S-index table while noting the signal's LPS-esque behavior. This signal overlaps with one just barely below the detection threshold in the EW\({}_{H\alpha}\) data, with \(P=1059\) d (\(2.90\) yr). 
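The bound-binary check summarized in footnote 18 amounts to Kepler's third law plus an escape-velocity comparison. A minimal sketch, assuming roughly solar masses for the two components (an illustrative assumption; the masses adopted by Kervella et al. (2022) may differ, which is why the numbers below only approximately reproduce the quoted values):

```python
import numpy as np

AU_YR_TO_KMS = 4.74047  # 1 au/yr in km/s

def orbital_period_yr(a_au, m_tot_msun):
    """Kepler's third law with a in au, total mass in solar masses, P in years."""
    return np.sqrt(a_au**3 / m_tot_msun)

def escape_velocity_kms(sep_au, m_tot_msun):
    """v_esc = sqrt(2 G M / r), using G = 4 pi^2 au^3 Msun^-1 yr^-2."""
    return AU_YR_TO_KMS * 2.0 * np.pi * np.sqrt(2.0 * m_tot_msun / sep_au)

m_tot = 1.0 + 0.97  # assumed component masses in Msun (illustrative only)
print(orbital_period_yr(4500.0, m_tot))    # ~2.2e5 yr, cf. P ~ 220 kyr quoted above
print(escape_velocity_kms(3720.0, m_tot))  # ~1 km/s, cf. v_esc = 0.91 km/s in footnote 18
# The measured relative motion of ~0.66 km/s lies below the escape velocity,
# consistent with the pair being gravitationally bound.
```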
### HD 20807 (\(\zeta^{2}\) Ret) \(\zeta^{2}\) Ret (HD 20807, GJ 138, HR 1010, HIP 15371) is a slightly metal poor ([Fe/H]\(=-0.215\pm 0.010\)) (Adibekyan et al., 2016) G1V standard star (Keenan & McNeil, 1989), and fairly nearby at distance 12.04 pc (\(\varpi=83.0606\pm 0.0608\) mas; Gaia Collaboration et al., 2020). \(\zeta^{2}\) Ret is the primary of a wide binary (\(309^{\prime\prime}\)) with \(\zeta^{1}\) Ret (HD 20766; see Sec. 6.3). Lovis et al. (2011) reported a magnetic activity cycle of \(P_{\rm cyc}\) = \(1133^{+1090}_{-65}\) d (\(3.10^{+2.98}_{-0.18}\) yr) based on only 38 log \(R^{\prime}_{HK}\) measurements over a span of 2309 d. Flores et al. (2021) present an analysis of the time series chromospheric activity data for \(\zeta^{2}\) Ret, finding an activity cycle of \(P_{\rm cyc}\)= 7.9\(\pm\)0.38 yr (\(\sim\)2885 \(\pm\) 139,), and predicting a rotation period of \(P_{\rm rot}\)= \(16.5\pm 1.8\) d based on \(\log R^{\prime}_{HK}\). Zechmeister et al. (2013) also reported correlations between the RVs and \(\log R^{\prime}_{HK}\) FWHM, and BIS based on their limited HARPS data that spanned \(\sim\)1500 days. The star has been claimed to have far-IR excess (70, 100 \(\mu\)m) from a dusty debris disk (Trilling et al., 2008; Eiroa et al., 2013; Gaspar et al., 2013; Sierchio et al., 2014), however recent ALMA observations have shown that the mm emission in the vicinity of \(\zeta^{2}\) Ret is likely to be attributable to background sources (Faramaz et al., 2018). We detect one significant RV signal with \(P=3180\pm 130\) d, \(K=2.9\pm\)0.4 m s\({}^{-1}\), and \(e=0.23\pm 0.11\). This signal, corresponding to a period of 8.7 years, is just beyond the 1\(\sigma\) error overlap with the activity cycle reported by Flores et al. (2021), prompting suspicion about its nature. Our own S-index analysis does not yield any significant detections, and indeed the star appears to be very inactive. Our EW\({}_{H\alpha}\) analysis detects one significant signal with a periodogram peak at \(P=2897\) d (7.9 yr) which aligns well with the activity cycles reported in the literature. We note, however, that this is also approximately half the _UCLES_ observation baseline and a period where the stacked periodogram of EW\({}_{H\alpha}\) non-detections exhibits significant power (Figure 3). We therefore regard this activity detection with some uncertainty as discussed in Section 3.3. Because of this uncertainty and the lack of our own S-index detection, we report our RV Signal I as SRC rather than activity, and recommend a more in depth study of activity indicators for this target to confirm the nature of this signal. ### HD 23249 (& Eri) \(\delta\) Eri (HD 23249, GJ 150, HR 1136, Rana) is a K0+IV spectral standard star (Keenan & McNeil, 1989) at \(d=9.09\) pc (\(\varpi=110.0254\pm 0.1944\) mas; Gaia Collaboration et al., 2020). The star is a slightly metal-rich, evolved star (\(T_{\rm eff}\) = 5045 K, \(\log g\) = 3.77 \(\pm\) 0.02, [Fe/H] = 0.06 \(\pm\) 0.01; Jofre et al., 2014), slow-rotating (\(P_{\rm rot}\) = 71 d, \(v\)sin\(i\)\(=1.54\pm 0.23\) km s\({}^{-1}\); Baliunas et al., 1996; Jofre et al., 2015), and magnetically very inactive - both chromospherically (\(\log R^{\prime}_{HK}\) = -5.184; Baliunas et al., 1996) and coronally (\(\log(L_{X}/L_{\rm bol})=-7.14\pm\)0.18; Morel et al., 2004). 
Despite the very low activity, the star is oddly classified in the General Catalog of Variable Stars (Samus' et al., 2017) as an RS CVn variable (chromospherically active binary) - which typically implies a very magnetically active detached stellar binary with orbital period between \(\sim\)1 and \(\sim\)14 days (Hall, 1976). This RS CVn classification appears to be erroneous and can be traced to time series photometric observations which used a fast-rotating spotted star as a photometric standard. Fisher et al. (1983) reported \(\delta\) Eri to be a suspected RS CVn variable based on detection of \(\sim\)0.02 mag amplitude variability with period \(\sim\)10 days. Unfortunately the observations used \(\epsilon\) Eri as a photometric standard, itself a spotted variable star with \(P_{\rm rot}\)\(\simeq\) 10-12 d and variability at the \(\sim\)0.01-0.03 mag level (Frey et al., 1991; Croll et al., 2006). Subsequent VLTI/VINCI interferometry measurements by Thevenin et al. (2005) ruled out the existence of any stellar companion down to about \(\sim\)2% the luminosity of \(\delta\) Eri. We concur with findings of Eaton & Poe (1985), Frey et al. (1991), and Thevenin et al. (2005) that \(\delta\) Eri is unlikely to be a RS CVn variable, and suggest that this four-decade-old misclassification for this bright nearby star be dropped from the GCVS and SIMBAD. RVSearch recovers one significant RV signal, with parameters \(P=596.6\pm 2.6\) d, \(K=3.0\pm 1.1\) m s\({}^{-1}\) and \(e=0.65\pm 0.14\). Though the peak is well-defined, as expected for a planet, the eccentricity is a bit high and is being pulled quite strongly by a few _UCLES_ points. We thus classify this signal as SRC, and suggest future work investigate this signal more thoroughly. No signals are detected in the S-index activity data. H\(\alpha\) activity analysis returns one significant signal just over the detection threshold, with \(P=49.568\pm 0.097\) d, \(e=0.21\pm 0.18\). This is substantially shorter than the reported rotation period from Baliunas et al. (1996) (71 d). It seems possible that we could be seeing differential rotation (\(\alpha=|P_{2}-P_{1}|/P_{max}=0.43\)) with surface shear approximately twice that of the Sun (\(\alpha_{\odot}=0.2\); Reinhold et al., 2013). There is a general trend that slower rotating stars exhibit enhanced differential rotation (e.g. Donahue et al., 1996), however the behavior is not well-constrained observationally for periods longer than a \(\sim\)month, or for subgiants (e.g., Reinhold et al., 2013). ### Hd 32147 (Gj 183) HD 32147 (GJ 183, HR 1614, HIP 23311) is a metal-rich ([Fe/H] = \(+0.29\pm 0.02\); Maldonado et al., 2012) K3+V star (Gray et al., 2003) at \(d=8.84\) pc (\(\varpi=113.0715\pm 0.0222\) mas; Gaia Collaboration et al., 2020). Although Baliunas et al. (1996) report the star to have low chromospheric activity (\(\log R^{\prime}_{HK}\) = -4.948) and slow rotation (\(P_{\rm rot}\) = 47 d), it is classified as a BY Dra variable with amplitude 0.03 mag in the General Catalog of Variable Stars (Samus' et al., 2017). More recently, Willamo et al. (2020) report \(\log R^{\prime}_{HK}\) = -4.939 and rotation period \(P_{\rm rot}\) = 33.7 d, and activity cycle period \(P_{\rm cyc}\) = 10.40 yr (\(\sim\)3800 d). Boro Saikia et al. (2018) estimate the activity period to be \(P_{\rm cyc}\) = 10.84 \(\pm\) 0.15 yr, whereas analysis of Mt. Wilson survey between 1967 and 2002 by Garg et al. (2019) reported two activity cycles of 9.33 yr (\(\sim\)3408 d) and 12.42 yr (\(\sim\)4536 d). 
We report one radial velocity signal with \(P=2866\pm 140\) d, \(K=1.8\pm 0.21\) m s\({}^{-1}\), and \(e=0.34\pm 0.13\). Rosenthal et al. (2021) find a similar signal with \(P=3444.0^{+91.0}_{-81.0}\) d, which they classify as a false positive due to activity as well. Our signal is not quite close enough for us to consider it to be from the same source, and so we classify our RV Signal I as SRC rather than activity. Analysis of S-index activity data returns a multitude of signals. The first two of these signals have periods of \(P=3774\pm 250\) d and \(3204\pm 310\) d, which appear to be the same periodogram peak being fit multiple times. This signal correlates well with the 9.33 year (3405.5 days) signal from Garg et al. (2019) and the false positive from Rosenthal et al. (2021), so we report it as that same cycle but make no update to the period as our detection is clearly not well constrained. We recover a second set of similar signals, with periods \(P=381.7\pm 2.4\) d and \(P=343.2\pm 2.7\) d. The periodograms clearly show that these are aliases of the respective first two signals, so we disregard these detections as having any significance. We recover one further activity signal with parameters \(P=95.6\pm 0.24\) d, \(e=0.39\pm 0.22\). This signal does not appear in the radial velocity data, though we note that it is approximately twice the rotation period of 47 days reported by Baliunas et al. (1996). Additionally, the RV residual periodogram shows a strong peak at 44.4 days, which may correspond to this rotation period. Additionally, Rosenthal et al. (2021) report a false positive at \(P=51.997^{+0.078}_{-0.039}\) d, which they attribute to an annual or instrumental systematic. Our data include instruments not included in their study, such as _HARPS_ and _PFS_, so we expect not to detect this systematic from _HIRES_ as strongly.

### HD 38858 (GJ 1085)

HD 38858 (GJ 1085, HR 2007, HIP 27435) is a nearby star at distance 15.21 pc (\(\varpi=65.7446\pm 0.0307\) mas; Gaia Collaboration et al., 2020) classified as G2V (Gray et al., 2003), with slightly higher gravity (\(\log g=4.51\pm 0.01\)) and lower metallicity ([Fe/H]\(=-0.22\pm 0.01\)) than the Sun (Sousa et al., 2008). Isaacson and Fischer (2010) and Lovis et al. (2011) predict rotation periods of 24 d and 23.6\(\pm\)3.1 d based on mean chromospheric activity levels. The star's mean chromospheric activity level (\(\log R^{\prime}_{HK}\) = -4.948; Lovis et al., 2011) is similar to that of the Sun. We detect one RV signal, with \(P_{I}=2893\pm 150\) d, \(K_{I}=2.8\pm 0.3\) m s\({}^{-1}\), and \(e_{I}=0.19\pm 0.12\). The long period and broad shape of this peak in the periodogram lead us to suspect that this signal is caused by magnetic activity. Though S-index analysis returns no significant detections, we note the presence of a growing signal in the periodogram at \(P=2615.0\) d. This signal does not meet our detection threshold of \(\Delta BIC=10\), but is strong evidence supporting our classification of RV Signal I as a magnetic activity cycle. Rosenthal et al. (2021) report a similar signal with \(P=3113^{+82.0}_{-79.0}\) d, \(K=4.43^{+0.73}_{-0.64}\) m s\({}^{-1}\), which they attribute to an activity cycle as well.
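For context, the \(\Delta\)BIC \(=10\) threshold referenced above is a penalized likelihood comparison between the fits with and without the candidate Keplerian. A minimal sketch is given below, with the sign convention chosen so that positive values favor adding the signal; RVSearch's internal bookkeeping may differ in detail, and the function names here are illustrative only.

```python
import numpy as np

def bic(lnL, n_params, n_data):
    """Bayesian Information Criterion: lower values indicate a preferred model."""
    return n_params * np.log(n_data) - 2.0 * lnL

def delta_bic(lnL_with, k_with, lnL_without, k_without, n_data):
    """BIC(model without signal) - BIC(model with signal).

    Values above the adopted threshold (10 in this work) count as a detection:
    the added Keplerian must improve the likelihood enough to offset its extra
    free parameters (P, K, e, omega, t_peri).
    """
    return bic(lnL_without, k_without, n_data) - bic(lnL_with, k_with, n_data)
```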
### Hd 100623 (20 Crt) 20 Crt (HD 100623, GJ 432 A, HR 4458, HIP 56452) is a K0-V star (Gray et al., 2006) at distance \(d=9.55\) pc (\(\varpi=104.6570\pm 0.0267\) mas; Gaia Collaboration et al., 2020). 20 Crt is cooler (\(T_{\rm eff}\) = 5189 K) and metal poor ([Fe/H]\(=-0.37\)) (Valenti and Fischer, 2005). It has a wide separation (15''.3, projected separation 146 AU ; Tian et al., 2020) white dwarf companion 20 Crt B (GJ 432B, HD 100623B, VB 4) of type DC10 (Holberg et al., 2016). Kervella et al. (2019) analysis of _Hipparcos_, and _Gaia_ astrometry finds 20 Crt A to have a tangential velocity anomaly of \(41.26\pm 5.38\) m s\({}^{-1}\) with a position angle of velocity anomaly vector of PA= \(131^{\circ}.24\pm 5^{\circ}.18\), which is remarkably close to the observed PA to component B (PA = 129\({}^{\circ}\)) (Mason et al., 2001). Adopting fiducial masses of \(M_{A}=0.78\) \(M_{\odot}\)(Aguilera-Gomez et al., 2018) and \(M_{B}=0.66\) \(M_{\odot}\)(Gentile Fusillo et al., 2019), and assuming the projected separation is representative of the semi-major axis, one would estimate a system mass of \(\sim\)1.44 \(M_{\odot}\), orbital period of \(\sim\)1470 yr, and approximate orbital velocities of \(\sim\)1.4 and \(\sim\)1.6 km s\({}^{-1}\) for A and B, respectively. Analysis using RVSearch fits the RV data using a linear trend rather than a Keplerian orbit. The signal is very evident in the radial velocity time series and we recover a best-fit trend of \(0.00482\pm 0.00022\) m s\({}^{-1}\) d\({}^{-1}\) for HD 100623. We assert that this signal is due to the companion, but our observation baseline is obviously not long enough to constrain its orbit well. Rosenthal et al. (2021) also report a long term linear trend of \(\dot{\gamma}=0.00475\pm 0.00028\) m s\({}^{-1}\) day\({}^{-1}\) (\(1.73\pm 0.10\) m s\({}^{-1}\) yr\({}^{-1}\)), which is consistent with our result, suggesting that our two signal detections are likely being caused by the same source. Additionally, we report one significant signal in the S-index activity analysis, with parameters \(P=3729\pm 89\) d and \(e=0.288\pm 0.073\). The peak is fairly well-defined, and the long period makes this detection a plausible new magnetic activity cycle. ### Hd 131977 (Gj 570a) HD 131977 (GJ 570 A, HR 5568, HIP 73184, KX Lib, Lalande 27173) is the primary in a complicated multiple star system with at least two other stellar companions situated 24'' away (HD 131976, resolved into the M dwarf pair GJ 570B and C; Forveille et al. 1999), and a distant substellar companion, GJ 570D, 274'' away (Burgasser et al., 2000). HD 131977 is 5.89 pc away (\(\varpi=169.8843\pm 0.0653\) mas; Gaia Collaboration et al. 2020) and classified K4V (Keenan & McNeil, 1989). There are two published rotation periods, \(P_{\rm rot}\,=44.6\) d Cincunegui et al. (2007) and 39.993 d (Fuhrmeister et al., 2022). There is a surprisingly wide range of quoted metallicities for HD 131977, ranging from [Fe/H] \(=-0.24\pm 0.05\)(Mishenina et al., 2012) to \(0.12\pm 0.03\)(Valenti & Fischer, 2005). Through analysis with RVSearch we recover only a linear trend for this system, likely attributable to one of the (sub-)stellar companions. There are only 55 data points for this target, all from _HARPS_, spanning \(\sim\)6 years. Because of these constraints, it is unsurprising that we do not recover full stellar companion orbits for this system and we recommend further observations to better constrain the parameters of the system. Our S-index analysis returns three significant detections. 
The first detection has \(P=22.7657\pm 0.0049\)d, which we note is half the rotation period published by Cincunegui et al. (2007). Detection of a P\({}_{\rm rot}/2\) signal can be caused by stellar spots on different hemispheres of the star being observed over multiple observing seasons, and so we attribute this signal to stellar rotation. The other two signals are extremely short-period (\(P=3.87799\pm 0.00054\)d and \(P=2.08913\pm 0.00044\)d) and we suspect that the relatively small amount of data for this target stretched over 6 years allows for Keplerian signals to fit multiple short-period cycles to the sparse sampling. We disregard these signals from being astrophysically significant at this point in time, and recommend more observations of this target to better characterize the star's activity. ### Hd 140901 (Gj 599 A) HD 140901 (GJ 599 A, HR 5864, HIP 77358) is a G7IV-V type star (Gray et al., 2006) with a high proper motion. It is located at \(d=15.25\) pc (\(\varpi=65.5889\pm 0.0342\) mas; Gaia Collaboration et al. 2020), and has a 14''.6 separation white dwarf companion HD 140901B (GJ 599 B). It is slightly cooler than the Sun (\(T_{\rm eff}\)\(=5602\)\(\pm\) 14 K), and slightly more metal-rich at [Fe/H]\(=0.10\pm 0.02\)(Soubiran et al., 2022). There are no confirmed planets or published rotation periods for this star. Using the average \(\log R^{\prime}_{HK}\) value from Gomes da Silva et al. (2014), color from _Hipparcos_ (\(B-V=0.715\)), and the activity-rotation calibration from Mamajek & Hillenbrand (2008), we predict that the rotation period of the star would be \(P_{\rm rot}\)\(\simeq 21.5\) d. Our radial velocity analysis in RVSearch recovers one signal, with \(P=5084\pm 1200\) d, \(K=11.6\pm 2.4\) m s\({}^{-1}\), and \(e=0.44\pm 0.25\). S-index analysis does not return any significant detections. The majority of our radial velocity data comes from _UCLES_, and because we do not have S-index activity data from this instrument, it makes sense that we do not see this same RV signal within the S-index data. H\(\alpha\) data analysis recovers two significant detections. The first of these signals is too long period to be well constrained by the Keplerian fit because its duration is on par with the _UCLES_ data observation baseline, so we defer to the original periodogram peak as the best estimator of this signal: \(P_{I}=5431.8\) d. This period agrees well with our RV detection, but because it also aligns with the long-period trend present in all the _UCLES_ data, we refrain from concluding definitively that RV Signal I is caused by magnetic activity. We classify it instead as SRC and recommend further study of this target to confirm the source of this signal. The second H\(\alpha\) signal has parameters \(P_{II}=19.986\pm 0.019\) d and \(e_{II}=0.27\pm 0.19\). This is in good agreement with our prediction of a 21.5 day rotation period. We report H\(\alpha\) activity signal II as a measurement of this star's rotation period. ### Hd 146233 (18 Sco) 18 Sco (HD 146233, GJ 616, HR 6060, HIP 79672) is a well-characterized solar twin and G2Va spectral standard star (Keenan & McNeil, 1989) at \(d=14.13\) pc (\(\varpi=70.7371\pm 0.0631\) mas; Gaia Collaboration et al. 2020). Spina et al. (2018) reports stellar parameters extremely similar to those of the Sun: \(T_{\rm eff}=5808\pm 3\) K, \(\log g=4.440\pm 0.009\), [Fe/H] \(=+0.041\pm 0.003\), \(\tau=4.0\pm 0.4\) Gyr, \(M=1.022\pm 0.004\,M_{\odot}\). 
The star's rotation period (\(P_{\rm rot}\) = 22.9 d; Vidotto et al., 2014) and chromospheric activity level (\(\log R^{\prime}_{HK}\) = -4.919; Meunier et al., 2017) are also very similar to the Sun's. Lovis et al. (2011) report a magnetic activity cycle of \(P_{\rm cyc}=2803^{+2663}_{-392}\) d, with predicted rotation period \(P_{\rm rot}\) = 23.8\(\pm\) 3.2 d based on the mean activity level (\(\log R^{\prime}_{HK}\) = -4.923). Boro Saikia et al. (2018) report a period of \(P_{\rm rot}\) = 22.7 d and activity cycle of \(P_{\rm cyc}=11.36\pm 1.23\) yr. It has no reported exoplanets. We find three radial velocity signals within this system: \(P_{I}=2374\pm 47\) d, \(K_{I}=5.47\pm 0.33\) m s\({}^{-1}\), \(e_{I}=0.21\pm 0.07\); \(P_{II}=6256\pm 370\) d (\(17.1\pm 1.01\) yr), \(K_{II}=4.96\pm 0.57\) m s\({}^{-1}\), \(e_{II}=0.59\pm 0.06\); \(P_{III}=19.8777\pm 0.0062\) d, \(K_{III}=1.73\pm 0.26\) m s\({}^{-1}\), \(e_{III}=0.38\pm 0.16\). Additionally, there is one signal in the residuals periodogram at \(P=10.5\) d that falls just below the detection threshold. Butler et al. (2017) reported a planet candidate at roughly the same period as our Signal I (P\({}_{Butler}\) = 2528.8\(\pm\) 105.5 days) and an S-index periodicity of 4190 days in their _HIRES_ data. Our detection of \(P_{I}=2374\pm 47\) d corresponds directly to a signal recovered in the S-index activity data (\(P=2812\pm 290\) d), so we report this signal as an update to the magnetic activity cycle in Table 9. We also note the existence of a broad peak in the H\(\alpha\) periodogram at around 2000 days, which further supports our conclusion that this signal is caused by magnetic activity rather than a planet as proposed by Butler et al. (2017). The discrepancy between our analysis and that of Butler et al. (2017) comes from their work including only data from _HIRES_, while ours incorporates _HARPS_, _HIRES_, _PFS_, and _UCLES_. Our signal is mainly driven by _HARPS_, and comparatively, the error bars on the _HIRES_ measurements are significantly larger. It makes sense that we recover the activity detection while the _HIRES_-only work did not. This is confirmed by Rosenthal et al. (2021), who report a similar signal with \(P=2426.0^{+60.0}_{-42.0}\) d as an activity cycle as well. We believe the 6256-day signal to be activity as well, due to its long period and periodogram peak shape. The S-index activity data also yields a significant detection at \(P=5272\pm 1500\) d, which corresponds well to this long-period signal. They are not exact matches, but the presence of a 5000-day signal in both data sets further supports the conclusion that this signal is caused by magnetic activity. RV Signal III has parameters \(P_{III}=19.8777\pm 0.0062\) d, \(K_{III}=1.73\pm 0.26\) m s\({}^{-1}\), and \(e=0.38\pm 0.16\). Rotation periods reported by Lovis et al. (2011) and Vidotto et al. (2014) are both \(>20\) d, and this signal is fit to extremely high precision at 19.8777 days. A signal caused by rotation should also appear in the Ca H&K data, but there is no strength in the S-index periodogram around 19 days. Additionally, the periodogram peak is very sharp and well-defined, which would be highly unusual if the signal were caused by rotation: rotation signals in RV come from observing stellar spots as the star rotates, and spots migrate and change slightly over time, so we expect to see some level of imprecision or variation in these RV measurements. 
The definition in this periodogram peak suggests no variation in the period over our approximately 20 year observation baseline, which is highly unusual. We therefore classify this signal as a Candidate, and recommend further study of this signal to confirm whether it is planetary in origin. Though it is not fit by RVSearch, we note that the 10.5 d peak in the residual periodogram is also very well-defined and extremely close to the false alarm probability line. A future, more in-depth study of this target could investigate this signal further to address the cause of this significant period. Analysis of H\(\alpha\) activity from the _UCLES_ instrument returns no significant detections. ### Hd 188512 (\(\beta\) Aql) \(\beta\) Aql (HD 188512, GJ 771 A, HR 7602, HIP 98036, Alshain) is a high proper motion star at \(d=13.69\) pc (\(\varpi\) = 73.00 \(\pm\) 0.20 mas; van Leeuwen, 2007), and is the primary spectral standard for type G8IV (Johnson & Morgan, 1953; Keenan & McNeil, 1989). The star is the most luminous star in our sample, and is somewhat cooler than the Sun (\(T_{\rm eff}\) = \(5117\pm 10\) K), less metal-rich ([Fe/H] = \(-0.19\pm 0.01\)), and somewhat evolved with a lower surface gravity (\(\log g=3.64\pm 0.03\)) (Maldonado & Villaver, 2016). Butkovskaya et al. (2017) report a magnetic activity cycle of \(P_{\rm cyc}\)= \(969\pm 27\) d (\(2.653\pm 0.074\) yr) and a surprisingly short rotation period of \(P_{\rm rot}\)= \(5.08697\pm 0.00031\) days. Corsaro et al. (2012) report asteroseismic analysis of radial velocity data for \(\beta\) Aql showing intra-night oscillations at the \(\sim\)5-10 m s\({}^{-1}\) amplitude level. The star appears to be an evolved star somewhat more massive than the Sun (\(1.36\pm 0.17\,M_{\odot}\), \(1.337\pm 0.021\,M_{\odot}\); Corsaro et al., 2012; Gomes da Silva et al., 2021), hence the relatively fast rotation for a subgiant likely reflects that the star spent its main sequence life blueward of the Kraft break. The star appears to be consistent with being an intermediate-mass star, with isochronal age estimates consistently slightly younger than the Sun (\(\sim\)3-4 Gyr; Maldonado et al., 2013; Jofre et al., 2015; da Silva et al., 2015; Brewer et al., 2016; Gomes da Silva et al., 2021), and chromospheric age estimates which had assumed that the star was a typical solar-type dwarf (9.6, 11.4 Gyr; Mamajek and Hillenbrand, 2008) are likely to be significantly overestimated. After finding and subtracting a linear trend (0.00225 m s\({}^{-1}\) day\({}^{-1}\)) in the California Planet Search data set for \(\beta\) Aql A, Luhn et al. (2020) reports a Doppler signal "\(b\)" with \(P=10524.603\) d and velocity amplitude \(K=5.43\) m s\({}^{-1}\), which would correspond to a \(m\)sin\(i\) = 0.167 \(M_{\rm J}\) companion at \(a=10.18\) au. \(\beta\) Aql is in the Washington Double Star Catalog (Mason et al., 2001) with components A, B, and C (WDS J19553+0624 = STT 532), although component C at separation 214'' (TYC 493-72-1) is reported to be an unrelated interloper (Kiyaeva et al., 2008). \(\beta\) Aql B is a M2.5V star (Montes et al., 2018) at projected separation 13''.27 (182 au), and clearly a physical companion sharing similar proper motion and parallax (\(\varpi=73.3889\pm 0.0215\) mas; Gaia Collaboration et al., 2020). Kervella et al. (2022) reports that the inferred tangential velocity calculated from Gaia EDR3 astrometry differs from that of \(\beta\) Aql A by only 1.60 km s\({}^{-1}\). 
However, the astrometric perturbation on \(\beta\) Aql A, in the form of the tangential velocity anomaly as estimated through comparing _Hipparcos_ and _Gaia_ astrometry, appears to be negligible (\(5.74\pm 10.65\) m s\({}^{-1}\); Kervella et al., 2022). Analysis of two decades' worth of RV data with RVSearch returns a linear trend rather than a full Keplerian signal. We find a best-fit RV trend of 0.00262 m s\({}^{-1}\) d\({}^{-1}\), in good agreement with the Luhn et al. (2020) result. The long-period trend is undoubtedly associated with the perturbation induced by the M dwarf companion B at separation at \(\sim\)180 au. As the position angle of the \(\beta\) Aql binary has changed by only 23deg between 1838 and 2016, this suggests the AB orbital period to be of order few thousand years. Analysis of the S-index data for this target returns no significant detections. The star does not have any _UCLES_ observations, so there are no EW\({}_{H\alpha}\) measurements to study. ### Hd 190248 (\(\delta\) Pav) \(\delta\) Pav (HD 190248, GJ 780, HR 7665, HIP 99240) is a G8IV (Gray et al., 2006) star at \(d=6.10\) pc (\(\varpi=163.9544\pm 0.1222\) mas; Gaia Collaboration et al., 2020) and chromospherically quite inactive (\(\log R^{\prime}_{HK}\) = -5.10) (Gomes da Silva et al., 2014). Ramirez et al. (2013) report the star to have \(T_{\rm eff}=5517\pm 60\) K, \(\log g=4.28\pm 0.03\), and to be fairly metal-rich ([Fe/H] = \(0.33\pm 0.07\)). The star's rotation period has been estimated to be \(P_{\rm rot}=21.4\pm 9.3\) d (Hojjatpanah et al., 2020). RVSearch identifies one Keplerian signal in the combined RV data for this star, with \(P_{I}=360.8\pm 1.9\) days and \(K=1.21\pm 0.43\) m s\({}^{-1}\). The _HARPS_ and _UCLES_ data exhibit significant disagreements with one another in the phase folded plot, however, and the signal seems to be driven strongly by the seasonality of the _HARPS_ data as evidenced by the sudden increase in the strength of the signal as a function of observation (see HD 190248's RVSearch final summary in the accompanying figure set). We therefore suspect that this signal is due to observational sampling effects and not a planet. RVSearch also detects a linear trend in the data, with \(dv_{r}/dt=-0.00055\pm 0.00009\) m s\({}^{-1}\) day\({}^{-1}\) (\(-0.201\pm 0.033\) m s\({}^{-1}\) yr\({}^{-1}\)). Such trends are often suggestive of long period sub-stellar or giant planet companions. We can compute initial estimates of the minimum mass and semi-major axis for this companion by considering the linear trend to fall in the non-quadrature portion of an RV phase curve. In this case, we assume that the period of such a companion must be at least twice our observational baseline, as otherwise we would have expected to see some level of quadratic or sinusoidal curvature by now, and that its RV semi-amplitude must be at least half of the total RV span covered by the linear trend in the data set. That sets P\({}_{\rm min}\) = 37 years and K\({}_{\rm min}\) = 1.85 m s\({}^{-1}\). Folding in our knowledge of the host star's stellar mass, M\({}_{\star}\) = 1.001 M\({}_{\odot}\), we find that the planet must be at least 69 M\({}_{\oplus}\) (0.22 \(M_{Jup}\)) and on an orbit with a minimum semi-major axis \(a_{min}=11.1\) au. 
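The back-of-the-envelope arithmetic behind these limits can be reproduced in a few lines of code. The sketch below is our own illustration (it is not part of the RVSearch analysis); it assumes a circular orbit and uses the standard RV semi-amplitude scaling together with Kepler's third law, with the minimum period and semi-amplitude set as described above.

```python
# Minimum-companion estimate for delta Pav implied by the linear RV trend.
# Our own illustrative sketch (not RVSearch output); assumes a circular orbit
# with P at least twice the baseline and K at least half the observed RV span.
P_min_yr = 37.0     # minimum orbital period [yr], ~2x the observing baseline
K_min = 1.85        # minimum RV semi-amplitude [m/s], ~half the RV span
M_star = 1.001      # stellar mass [M_sun]

# Circular-orbit semi-amplitude relation (m_p << M_star):
#   K [m/s] = 28.4329 (m sin i / M_Jup) (M_star/M_sun)^(-2/3) (P/yr)^(-1/3)
msini_jup = K_min / 28.4329 * M_star**(2 / 3) * P_min_yr**(1 / 3)
msini_earth = msini_jup * 317.8      # 1 M_Jup ~= 317.8 M_Earth

# Kepler's third law in solar units: a^3 [au^3] = P^2 [yr^2] * M_star [M_sun]
a_min_au = (P_min_yr**2 * M_star) ** (1 / 3)

print(f"m sin i >= {msini_earth:.0f} M_Earth ({msini_jup:.2f} M_Jup)")
print(f"a >= {a_min_au:.1f} au")
# -> ~69 M_Earth (0.22 M_Jup) at a >= 11.1 au, matching the values quoted above.
```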
Comparing with the RVSearch injection/recovery summary plot, this combination of planet mass and orbital distance falls into a region that is not reliably recovered and so it is not surprising that the potential companion inducing this signal is not yet detectable with our current RV data set. Makarov et al. (2021) recently reported the detection of an astrometric perturbation for \(\delta\) Pav which they interpret as being likely due to a long-period giant planet. They compare the short-baseline Gaia EDR3 proper motions (Gaia Collaboration et al., 2020) for \(\delta\) Pav with long baseline astrometric parameters (\(\sim\)22-26 yr) combining _Hipparcos_ with ground-based astrometry USNO Robotic Astronomic Telescope (URAT; Zacharias et al., 2015). Combining the Gaia EDR3, _Hipparcos_ and URAT data, Makarov et al. (2021) estimate the perturbation of the tangential velocity for \(\delta\) Pav to be (17.4, -13.2) m s\({}^{-1}\) in \(\alpha\) and \(\delta\), respectively (0.995 and 0.958 confidence levels). Removing the ground-based data, and using only _Hipparcos_ and Gaia EDR3, Makarov et al. (2021) find the signal to be small but still significant: (7.7, -6.2) \(\rm m\,s^{-1}\) in \(\alpha\) and \(\delta\), respectively (at combined confidence level 0.999). Simply subtracting the proper motions from _Hipparcos_ (epoch 1991.5) from Gaia EDR3 (epoch 2016.0) yields \(\Delta\mu_{\alpha}\), \(\Delta\mu_{\delta}\) = 0.731\(\pm\)0.149, \(-0.187\pm 0.167\)\(\rm mas\,yr^{-1}\), which at the distance of \(d\) = 6.099 pc (1/\(\varpi\) from Gaia EDR3) yields differences in the tangential motions of \(21.1\pm 4.3\), \(-5.4\pm 4.8\) m s\({}^{-1}\) in \(\alpha\) and \(\delta\), respectively. Over the 24.5-yr baseline between the mean epochs for _Hipparcos_ and Gaia EDR3, the averaged tangential accelerations are then \(a_{\alpha},a_{\delta}=0.861\pm 0.176\), \(-0.220\pm 0.196\)\(\rm m\,s^{-1}\,yr^{-1}\), or total tangential acceleration \(a_{tan}=0.889\pm 0.263\)\(\rm m\,s^{-1}\,yr^{-1}\). Combining the measured radial acceleration (\(a_{rad}=-0.201\pm 0.033\) m s\({}^{-1}\) yr\({}^{-1}\)) with the tangential acceleration (\(a_{tan}\)) yields a total inferred acceleration on \(\delta\) Pav of \(a_{tot}=0.911\pm 0.265\)\(\rm m\,s^{-1}\,yr^{-1}\) (\(2.89\pm 0.84\)\(\times 10^{-8}\)\(\rm m\,s^{-2}\)). Analysis of S-index data returns one significant period, with an initial \(\Delta\)BIC periodogram peak at 6375 days and an initial MAP fit of 6810.18 days. This is suggestive of a \(\sim\)17 year magnetic cycle, but attempts to fully characterize the signal via RVSearch's MCMC analysis fail - likely due to insufficient sampling of the full orbital phase space. We therefore note the signal as an 'LPS' in the S-index detections table and report just the MAP period, but encourage further monitoring of this star in the coming years to help fully resolve the star's long term magnetic activity. The star's EW\({}_{H\alpha}\) data contains two significant signals according to RVSearch, one with a period P = 352.9\(\pm\)1.5 days, and the other with P = 1171\(\pm\)36 days. The first signal suffers from the star's seasonal availability, leaving \(\sim 1/3\) of its orbital phase curve much less populated than the rest, and we suspect it is due to observational cadence constraints. The longer period signal is well defined in the \(\Delta\)BIC periodogram but falls logarithmically between the periods expected for the star's rotation period and its potential magnetic cycle. 
As HD 190248 is a very inactive star, much like the sun at solar minimum, this \(\sim\)1200 day signal prompts a question of whether we are seeing less obvious activity phenomena (e.g., meridional flows Meunier & Lagrange, 2020) that operate on intermediate time scales. ### HD 207129 (GJ 838) HD 207129 (GJ 838, HR 8323, HIP 107649) is a nearby star at distance \(d\) = 15.56 pc (\(\varpi\) = 64.2717\(\pm\)0.0430 mas; Gaia Collaboration et al., 2020) classified as G0V Fe+0.4 (Gray et al., 2006), and famous for having a resolved dusty debris disk (Jourdain de Muizon et al., 1999; Krist et al., 2010). The star is a dwarf (\(\log g=4.49\pm 0.02\)) of solar metallicity ([Fe/H] = \(0.00\pm 0.01\)), just slightly hotter than the Sun (\(T_{\rm eff}=5937\pm 13\) K) (Sousa et al., 2008). Marshall et al. (2011) estimate the rotation period of the star to be \(P_{\rm rot}\)\(\simeq\) 12.6 d based on the star's \(v\)sin\(i\). Watson et al. (2011) and Lovis et al. (2011) predict the rotation period to be \(P_{\rm rot}\)\(=17.13\pm 1.61\) d and 17.6\(\pm\)2.8 based on the star's chromospheric activity. Lovis et al. (2011) report a magnetic activity cycle with period \(P_{\rm cyc}\)\(=1520^{+171}_{-139}\) d using 79 observations of \(\log R^{\prime}_{HK}\) measured over an 1876 d span. We recover one significant RV signal, with parameters \(P=1964\pm 49\) d (\(5.38\pm 0.134\) yr), \(K=4.02\pm 0.61\) m s\({}^{-1}\), \(e=0.44\pm 0.16\). We find a single significant signal in the S-index analysis, with an initial periodogram peak of \(P_{I}\)=1886 days, and a MAP fit of 1898 days. This signal does not converge when subjected to RVSearch's affine-invariant sampling, and so we interpret it as an LPS. Despite this, the MAP period of the S-index is within 2\(\sigma\) of the signal detected in the RVs, and so we report RV Signal I as a magnetic activity cycle. Our estimate of the activity cycle period is marginally consistent with that reported by Lovis et al. (2011) (2.2\(\sigma\) difference). Our signal has a longer period than the baseline of the Lovis et al. (2011) study, and so this difference between our best-fit models does not raise significant concerns. The EW\({}_{H\alpha}\) data for this star produces two significant detections, the first at \(P_{I}\)=5455\(\pm\)1900 days and the second at \(P_{II}\)=1726\(\pm\)71 days. The longer signal is close to the _UCLES_ observational baseline extent and has a large uncertainty, so we interpret it as an LPS and do not assume that it is astrophysical in nature. The second signal, however, is well defined in period and similar in duration to both the Lovis et al. (2011) \(\log R^{\prime}_{HK}\) detection and our own S-index detections. We therefore consider it to be additional evidence for a long period magnetic cycle in the star. Given these S-index and H\(\alpha\) detections, we report RV Signal I as an update to the previous, magnetic cycle driven, detection. ## 7 Targets Lacking RV Signals For the remaining 16 stars included in this study, RVSearch did not recover any significant signals in the radial velocities. We further subdivide these targets into Section 7.1, stars which returned only significant activity signals, and Section 7.2, targets which failed to return any significant signals in either the RVs or the activity. Many of these had a very limited number of RV measurements. Future radial velocity surveys should focus primarily on these targets in order to build knowledge of their exoplanetary parameter space. 
The stars with no significant RV signals but a nonzero number of activity detections are listed in Table 10. Stars with no detections at all are listed in Table 11. The number of measurements analyzed for each of these stars can be found in Table 2. ### Targets with Activity Detections Only #### 7.1.1 HD 4628 (GJ 33) HD 4628 (GJ 33, HR 222, HIP 3765, Lalande 1299, Wolf 25) is a metal-poor ([Fe/H] = \(-0.24\pm 0.03\); Takeda et al., 2005) K2V star (Gray et al., 2003) at only \(d=7.43\) pc (\(\varpi=134.4948\pm 0.0578\) mas; Gaia Collaboration et al., 2020). The star is a fairly slow rotator, with differential rotation observed (seasonal periods ranging from 37.2 to 41.4 d) and mean \(P_{\rm rot}\,\simeq 38.5\) d (Donahue et al., 1996). Analysis of the Mt. Wilson survey data by Donahue (1996) yielded a mean cycle period of \(P_{\rm cyc}\,=8.6\) yr (\(\sim\)1966-1995), and subsequent analysis of a longer baseline by Garg et al. (2019) yielded cycle periods of \(P_{\rm cyc}\,=8.67\), 8.08, and 9.98 yr (mean \(P_{\rm cyc}=8.91\) yr). Boro Saikia et al. (2018) estimate the chromospheric activity cycle to be \(P_{\rm cyc}\,=8.47\pm 0.05\) yr. We recover one significant detection in the S-index data and none in the radial velocities. The fitted signal has \(P=3699\pm 310\) d and eccentricity \(e=0.33\pm 0.12\). This appears to correspond to the activity cycle for the star (\(P_{\rm cyc}\,=10.90\pm 0.41\) yr), although somewhat longer than the cycle periods reported by the longer baseline Mt. Wilson survey data (Donahue, 1996; Garg et al., 2019). #### 7.1.2 HD 14412 (GJ 95) HD 14412 (GJ 95, HR 683, HIP 10798) is a G8V type star (Gray et al., 2006) at \(d=12.83\) pc (\(\varpi=77.9140\pm 0.0295\) mas; Gaia Collaboration et al., 2020). Rotation period \(P_{\rm rot}\) estimates for HD 14412 range from 13.0\(\pm\)0.3 d (Hojjatpanah et al., 2020) to 29 d (from \(\log R^{\prime}_{HK}\); Isaacson and Fischer, 2010); however, the 13-day estimate seems surprisingly fast given the star's low chromospheric activity (\(\log R^{\prime}_{HK}\)= -4.839; Isaacson and Fischer, 2010). We recover two significant S-index activity signals for this star: \(P_{I}=2312\pm 734\) d, \(e_{I}=0.091\pm 0.098\) and \(P_{II}=5686\pm 1600\) d, \(e_{II}=0.5\pm 0.16\). The RV periodogram returns no significant detections but does contain one strong peak just under the detection threshold, with \(P=2074.5\) d. Howard and Fulton (2016) presented an S-value periodogram for HD 14412, showing a pronounced peak at 5.7 yr (2082 d). We report our Activity Signal I as a magnetic activity cycle of \(P_{\rm cyc}\,=2312\pm 73\) d (\(6.33\pm 0.2\) yr), fairly consistent with that reported by Howard and Fulton (2016). We suspect S-index activity signal II is caused by a magnetic activity cycle as well, due to the long period and broad shape of the peak. #### 7.1.3 HD 30495 (58 Eri) 58 Eri (HD 30495, GJ 177, HR 1532, HIP 22263, IX Eri) is a nearby star at distance 13.24 pc (\(\varpi=75.5289\pm 0.0539\) mas; Gaia Collaboration et al., 2020) classified as G1.5V CH-0.5 (Gray et al., 2006). The star is a young (\(\sim\)1 Gyr) solar analog, with a rotation period \(P_{\rm rot}\)= \(11.36\pm 0.17\) d, and manifesting both short (\(\sim\)1.7 yr) and long (\(\sim\)12.2 yr) activity cycles (Egeland et al., 2015). Gaidos et al. (2000) report time series photometry over 6 seasons, finding periods between 10.5 and 11.47 days and a mean rotation period of \(P_{\rm rot}\)= 11.3 d. 
RVSearch finds no significant signals in the radial velocity data, but one significant signal in the S-index activity data with \(P=71.46\pm 0.11\) d, \(e=0.31\pm 0.12\). There is a correlated peak in the radial velocity residual periodogram at 72 days, although it does not rise to the level of being a "significant detection". This signal does not correspond to the published rotation period, nor to either of the published activity cycles referenced above. Because of this, we classify this signal as SRC and recommend further study of the activity data for this target in a future work. #### 7.1.4 HD 50281 (GJ 250A) HD 50281 (GJ 250A, HIP 32984) is a K3.5V star (Gray et al., 2003) at \(d=8.74\) pc (\(\varpi=114.3547\pm 0.0418\) mas; Gaia Collaboration et al., 2020). The star is in a wide binary (separation 58\(\arcsec\).9; Mason et al., 2001) with the M dwarf GJ 250B. HD 50281 is an active star (\(\log R^{\prime}_{HK}\) = -4.554; Gondoin, 2020), and Fuhrmeister et al. (2022) predict a rotation period of \(P_{\rm rot}\) = 16.493 d based on the chromospheric activity. Analysis of the RV periodogram yielded no significant signals. The Ca H & K time series shows a very complicated periodic pattern, in which 7 significant periodic signals were detected. As the last couple appear amid a forest of slightly lower power peaks, we believe that our statistical criterion may be inadequate for picking out true signals from the background noise of this very active star.

\begin{table} \begin{tabular}{l l l} \hline \hline Identifier & Identifier & Identifier \\ \hline HD 4628 & HD 14412 & HD 30495 \\ HD 50281 & HD 72673 & HD 125072 \\ HD 149661 & HD 156026 & HD 216803 \\ \hline \hline \end{tabular} Stars from our sample for which RVSearch did not detect any significant Radial Velocity signals, but did return significant detections in their S-index or EW\({}_{H\alpha}\) analyses. Activity detections can be found in Tables 6 and 7. \end{table} Table 10: Targets with Activity Detections Only

We focus on the interpretation of the first five prominent peaks, which had periods of \(2264\pm 11\) d, \(2102\pm 12\) d, \(139.42\pm 0.05\) d, \(12.47954\pm 0.00046\) d, and \(16.49842\pm 0.00089\) d. The first three have similar semi-amplitudes in \(\Delta\)S at the \(\sim\)0.05-0.10 level, and appear to be attempts by our code to fit a single complicated activity cycle of \(P_{\rm cyc}\) \(\simeq\) 2264 d which is inadequately fit by a single Keplerian orbit model. The latter two are well-defined and similar to the predicted rotation period from Fuhrmeister et al. (2022). Hence, we consider the 12.5 d and 16.5 d signals to be from differential rotation. #### 7.1.5 HD 72673 (GJ 309) HD 72673 (GJ 309, HIP 41926) is a K1V star (Keenan & McNeil, 1989), with no known companions or planets. The star is fairly inactive (\(\log R^{\prime}_{HK}\) = -4.968) with a slow predicted rotation period (\(P_{\rm rot}\)= \(40.2\pm 4.1\) d; Lovis et al., 2011). We recover no significant RV signals but one S-index and one H\(\alpha\) activity detection. The S-index activity detection has parameters \(P=3217\pm 200\) d and \(e=0.14\pm 0.14\). This signal matches the magnetic activity cycle period previously reported by Lovis et al. (2011) (\(P_{\rm cyc}\)= \(3050^{+558}_{-408}\) d), although the uncertainty in our cycle period is 7\(\times\) smaller. We therefore report our detection as an update to this previously published magnetic activity cycle. 
The H\(\alpha\) activity detection has a much shorter period, with parameters \(P=341.2\pm 3.6\) d and \(e=0.16\pm 0.18\). This is obviously very close to one year, indicating a strong possibility that this signal is being driven by windowing effects similar to those seen with the _HARPS_ instrument. The peak is extremely well-defined, however, and highly significant, so we refrain from decisively calling this detection a false positive. #### 7.1.6 HD 125072 (GJ 542) HD 125072 (GJ 542, HIP 69972) is a K3V (Houk & Cowley, 1975) star at \(d=11.82\) pc (\(\varpi=84.6029\pm 0.0218\) mas; Gaia Collaboration et al., 2020). Gray et al. (2003) classified the star as a K3IV subgiant; however, the star's spectroscopic parameters (\(T_{\rm eff}\)= \(4899\pm 48\) K, \(\log g=4.55\pm 0.03\), [Fe/H] = \(0.28\pm 0.08\); Ramirez et al., 2013) and HR diagram position (\(B\) \(-\) \(V\)= 1.03, \(M_{V}=6.30\), \(\sim\)0.44 mag above the main sequence) clearly flag it as a very metal-rich dwarf. Lovis et al. (2011) report a magnetic activity cycle of \(P_{\rm cyc}=1146^{+982}_{-70}\) d and a predicted rotation period of \(42.0\pm 5.9\) d based on the low mean activity level (\(\log R^{\prime}_{HK}\) = -4.941). We recover no significant RV signals, two detections in the S-index activity data, and one in the EW\({}_{H\alpha}\) data. S-index signal I, with \(P_{I}=2989\pm 100\) d, loosely correlates with the magnetic activity cycle of Lovis et al. (2011). S-index signal II has \(P_{II}=40.49\pm 0.036\) d, which is most likely caused by stellar rotation, and agrees well with the rotation period predicted by Lovis et al. (2011). The EW\({}_{H\alpha}\) data analysis yields one significant detection with an initial \(\Delta\)BIC period of 5468.5 days, but fails to produce a well constrained orbital fit during the MCMC analysis (instead giving \(P=9483\pm 9400\) d). We therefore instead report the MAP best-fit solution, which has a period of 7137.76 days, which we attribute to the long-period _UCLES_ trend present in almost all the H\(\alpha\) data for all targets. We note additionally the presence of a signal in the RV residual periodogram that falls just below the detection threshold, at \(P=13.5\) d. #### 7.1.7 HD 149661 (12 Oph) 12 Oph (HD 149661, GJ 631, HR 6171, HIP 81300, V2133 Oph) is a K0V(k) (Gray et al., 2006) star at \(d\) = 9.89 pc (\(\varpi=101.0719\pm 0.0501\) mas; Gaia Collaboration et al., 2020). The star has dwarf surface gravity (\(\log g\) = 4.52 \(\pm\) 0.02) and metallicity just slightly more than solar ([Fe/H] = \(0.03\pm 0.01\); Soubiran et al., 2022). Analyses of the chromospheric activity level (\(\log R^{\prime}_{HK}\) index) show that it has varied widely over the past several decades. During the Mt. Wilson survey period of 1967-1983, the star had an average \(\log R^{\prime}_{HK}\) value of -4.583 (Baliunas et al., 1996); however, the survey by Radick et al. (2018) during 1994-2016 recorded an average of \(\log R^{\prime}_{HK}\) = -4.71, while analysis of HARPS observations during 2005-2012 by Gomes da Silva et al. (2021) estimated a median activity level of \(\log R^{\prime}_{HK}\) = -4.56. From analysis of the Mt. Wilson HK survey data, Donahue et al. (1996) report an average rotation period over 9 seasons of \(P_{\rm rot}\) = 21.07 d, with individual seasonal rotational periods ranging between 20.6 and 22.9 d. Boro Saikia et al. (2018) report two Ca HK activity cycles with periods \(P_{\rm cyc}\) = \(15.3\pm 0.4\) yr and \(P_{\rm cyc}\) = \(7.7\pm 0.12\) yr. 
Analysis of both RV and H\(\alpha\) data returns no detections for this target, but the S-index search yields two significant signals: \(P_{I}=1649\pm 55\) d, \(e_{I}=0.42\pm 0.12\); \(P_{II}=3874\pm 1200\) d, \(e_{II}=0.73\pm 0.21\). The first of these signals is likely to be a magnetic activity cycle, based on its long period and signal strength. The second signal is poorly constrained: the periodogram peak being fit is at 4062.0 days, which is approximately half of the observation baseline. RVSearch struggles to fit a Keplerian orbit to the signal, as there is insufficient data to constrain the orbit very well. This signal may be evidence of a longer period magnetic activity cycle, but additional data is needed to constrain the cycle well. #### 7.1.8 HD 156026 (36 Oph C) 36 Oph C (HD 156026, GJ 664, HIP 84478, V2215 Oph, WDS J17153-2636C) is a nearby (5.88 pc; \(\varpi=169.9617\pm 0.0311\) mas; Gaia Collaboration et al., 2021) K5V(k) (Gray et al., 2006) star which is a very wide separation (731''.54) companion to the bright K0V+K1V pair 36 Oph A & B (Cayrel de Strobel et al., 1989). The orbital motion of C around AB appears to be detectable astrometrically, as Kervella et al. (2022) show that C exhibits a tangential velocity anomaly between the _Hipparcos_ and _Gaia DR3_ data of 5.98 \(\pm\) 1.19 m s\({}^{-1}\) with a vector of PA = \(87^{\circ}.22\pm 7^{\circ}.25\) (compare to the PA between AB and C of PA = \(73^{\circ}.83\)). The difference in tangential velocities between AB and C is 0.63 km s\({}^{-1}\), which is similar to the predicted escape velocity of C from AB (0.61 km s\({}^{-1}\)) (Kervella et al., 2022). Photometric variability at the \(\sim\)0.02 mag level in \(V\)-band for 36 Oph C was reported by Lloyd Evans & Koen (1987), who estimated a period of 21.0 d. Independently, Baliunas et al. (1996) report an identical rotation period of 21 d based on analysis of Mt. Wilson Ca II H & K observations, and an average activity level of \(\log R^{\prime}_{HK}\) = -4.662. Boro Saikia et al. (2018) report a Ca II H & K activity cycle period of \(P_{\rm cyc}\) = 21.3 \(\pm\) 0.83 yr. 36 Oph C appears to be erroneously classified as an RS CVn variable in the General Catalog of Variable Stars (Samus' et al., 2017) and SIMBAD19, and while the star is clearly spotted and active, there is no evidence of the star being a chromospherically active binary (i.e., no sign of a short-period stellar binary). The radial velocity trend is flat, with scatter at the \(\sim\)2 m s\({}^{-1}\) level, consistent with the 1.57 m s\({}^{-1}\) jitter previously estimated by Isaacson & Fischer (2010). Footnote 19: [https://simbad.u-strasbg.fr/simbad/](https://simbad.u-strasbg.fr/simbad/) The S-index data shows one significant peak at \(P=378.9\pm 2.2\) d, which is likely caused by systematics, as the period is extremely close to one year. Additionally, there are weak peaks in the residual periodogram around 4.9 d, \(\sim\)22 d and \(\sim\)25 d, with the latter two suspiciously near the previously reported 21 day rotation period. #### 7.1.9 HD 216803 (TW PsA) TW PsA (Fomalhaut B) is a nearby (7.60 pc; \(\varpi=131.5525\pm 0.0275\) mas; Gaia Collaboration et al., 2021) K4Ve (Keenan & McNeil, 1989) spectral standard star which is within a very wide, young (\(\sim\)440 Myr-old) triple system with Fomalhaut A and C (LP 876-10) (Mamajek et al., 2013). The star has essentially solar metallicity and dwarf surface gravity (\(T_{\rm eff}\) = \(4601\pm 29\) K, \(\log g\) = \(4.68\pm 0.10\), [Fe/H] = \(0.04\pm 0.03\)) (Soubiran et al., 2022). 
The star is relatively fast-rotating (\(P_{\rm rot}\) = 10.3 d, 9.87 d; Busko & Torres, 1978; Wright et al., 2011) and chromospherically active (\(\log R^{\prime}_{HK}\) = -4.44; Gomes da Silva et al., 2021). De Rosa et al. (2019) reported an astrometric acceleration of TW PsA consistent with a \(1.2^{+0.7}_{-0.6}\)\(M_{\rm J}\) planet on a \(P_{orb}=25^{+52}_{-21}\) yr orbit based on comparison of the _Hipparcos_ and _Gaia_ DR2 astrometry. However, it is worth noting that an independent comparison of the _Hipparcos_ and _Gaia_ DR2 astrometry by Kervella et al. (2019) yielded a borderline-significance tangential velocity anomaly (\(18.67\pm 6.39\) m s\({}^{-1}\); 2.9\(\sigma\)), while a subsequent analysis using improved DR3 data by Kervella et al. (2022) yielded tighter, but less significant, constraints (\(2.15\pm 1.49\) m s\({}^{-1}\); \(1.4\sigma\)). Analysis of RV and H\(\alpha\) data returns no significant signals for this target. The S-index period search yields several detections: \(P_{I}=3.8913\pm 0.0002\) d, \(P_{II}=4.08499\pm 0.00049\) d, and \(P_{III}=2.8\pm 0.3\) d. However, the S-index data for this target is fairly sparse, so RVSearch is able to fit many different short-period signals to the data easily. We do not believe that any of these signals are astrophysical in origin, and we disregard them. ### Targets with No Detections Several targets included in this work did not return any significant detections in either the RVs or the activity indicators when run through RVSearch. In some cases, this is due to a lack of data on the given target. In other cases, the target is well studied and is likely simply a quiet system. Table 11 lists each of the stars that had no detections, and categorizes them as having insufficient data to make a detection (ID) or as being well studied but containing no signals (NS). Additionally, for each target we report the mean RV RMS, which serves as a proxy for stellar variability against which to compare our detection results. For targets designated "ID" in the table, we recommend further study for improved completeness in the future. Stars marked with an asterisk in Table 11 have signals that are close to but do not quite cross the detection threshold in their RV periodograms. HD 196761 shows strong periodicity around 26-28 days which falls just short of the False Alarm Probability mark. We believe this signal to be evidence of a rotation period for this target. HD 23356 has a strong peak at 2911.6 days. The observation baseline for this target is only about twice this period, so further observation of this target could constrain this signal better. 
By utilizing the full range of archival RV data up through present day, we are able to report updated orbital parameters for many of these previously confirmed planetary systems (Table 9) and find in many cases that the uncertainties on the planets' periods, RV semi-amplitudes and eccentricities improve when compared to previous publications (Fig. 10). Some select highlights of our updated analyses are summarized below: * We provide the most precise set of orbital parameters yet published for the three Neptune-mass planets orbiting HD 69830. * We assert that the 40 day planet HD 20794 c published in Pepe et al. (2011) is due to stellar activity and not a Keplerian signal as its statistical significance has not increased despite the addition of hundreds of new precise RV data points. * We show conclusively that the 58 day planet HD 85512 b published in Pepe et al. (2011) is due to stellar activity and not a Keplerian signal, because the signal changes in period by 10+\(\sigma\) over the decade of data collected here. * We present strong evidence that the 3827 day planet HD 114613 b reported by Wittenmyer et al. (2014) is not Keplerian in nature as its statistical significance decreases despite the addition of hundreds of precise RV measurements. * We improve the best fit error bars for the period, semi-amplitude, and eccentricity of the SB1 companion to HD 160346 by over an order of magnitude. * We present strong evidence that the planet HD 26965 b (\(\alpha^{2}\) Eri b, 40 Eri b) reported by Ma et al. (2018) is not a planet, and is rather caused by stellar activity. The 42.303 d RV signal is nearly identical to a periodicity detected in H\(\alpha\) of \(P=43.504\pm 0.066\) d, which overlaps previous estimates of the star's rotation (42-43 d; Baliunas et al., 1996; Frick et al., 2004). * We report two new planet candidates to be further studied and confirmed by future works: HD 192310 RV Signal IV (\(P_{IV}=24.559\pm 0.016\) d) and HD 146233 RV Signal III (\(P_{III}=19.8777\pm 0.0062\) d). Our analysis and results thus serve as encouragement for updated analysis on other previously-confirmed planetary systems in which significant amounts of new data have been acquired since publication. In addition, we report a number of new magnetic activity cycles and signals which are not yet complete enough to be classified, all of which invite further study. In this work, our goal was to analyze each star's RV sensitivity completeness, so that we might make recommendations with respect to future work in preparation of a Direct Imaging (DI) mission that aims to search for Earth analog planets around these stars. As time on future DI missions is likely to be highly oversubscribed, it is imperative that their target lists be as thoroughly vetted as possible in order to increase these future missions' efficiency and science output. One key component of this characterization is to identify the presence of any additional planets in the system and determine whether their orbital parameters preclude the existence of the temperate, terrestrial, planets that the future DI missions seek. If such planets are detected, then these stars should be down weighted in the mission's observing priority list. Figure 11, Figure 12, and Table 12 summarize our findings in this area. 
While it is clear from Figure 11 that even our most well-studied targets do not come close to the 1 \(M_{\oplus}\) limit for a 1 AU orbit, we are at least able to rule out the presence of Neptune to Jupiter mass planets at \(\sim\)1 AU; such bodies would eliminate the possibility of a dynamically stable Earth analog. Figure 12 shows the range of \(m\sin i\) and planetary insolation of the known planets, candidates, and SRCs in this work relative to the habitable zone for an Earth-like planet around a Sun-like star as defined in Kopparapu et al. (2014); very few of our detections fall within this region.

\begin{table} \begin{tabular}{l c c} \hline \hline Identifier & Classification & RMS [m s\({}^{-1}\)] \\ \hline HD 693 & ID & 2.81 \\ HD 7570 & NS & 5.68 \\ HD 23356 & NS* & 5.27 \\ HD 76151 & ID & 8.97 \\ HD 102870 & NS & 5.41 \\ HD 131977 & ID & 6.93 \\ HD 196761 & NS* & 4.60 \\ \hline \hline \end{tabular} Stars from our sample for which RVSearch did not detect any significant signals. (*: Targets marked with an asterisk have strong signals in their periodogram which almost cross the detection threshold; these are discussed more in depth in Section 7.2). \end{table} Table 11: Targets with No Significant Signals

\begin{table} \begin{tabular}{l l l l l|l l l} \hline \hline HD & GJ & Mass & Ref. & Lumin. & Ref. & EEID & Doppler Sens. \\ & & (\(M_{\odot}\)) & & (\(\log(L/L_{\odot}\))) & & (au) & (\(M_{\oplus}\)) \\ \hline [MISSING_PAGE_POST] \hline \hline \end{tabular} References: (1) Ramírez et al. (2012), (2) Ramírez et al. (2013), (3) Brewer et al. (2016), (4) Maldonado & Villaver (2016), (5) Soto & Jenkins (2018), (6) Luck (2017), (7) Anders et al. (2019), (8) Casagrande et al. (2011), (9) Fekel & Beavers (1983), (10) Delgado Mena et al. (2019), (11) Stassun et al. (2019), (12) Schöfield et al. (2019). \end{table} Table 12: RV Sensitivity

Figure 10: Comparison of the uncertainties in previously published works and our updated RV analyses for the planets listed in Table 5 for orbital period (top plot), RV semi-amplitude (middle plot) and orbital eccentricity (bottom plot). The grey dashed lines depict a 1:1 ratio, so planets above the line have more precise results in our analysis while planets below the line have less precise results here than previously published. The green lines denote a factor of two improvement, so planets above the green lines have uncertainties that decreased by 50%. This happens most commonly in the orbital period comparisons, as the additional months/years of data added here include many more orbits of each planet.

Figure 11: 50% detection sensitivity threshold for each target star at its Earth Equivalent Irradiation Distance (EEID) – the distance from the host star at which the planet will receive the same amount of energy as the Earth receives from the Sun. For the majority of targets the existing Doppler sets are not yet sensitive to Neptune-mass planets at their respective EEIDs, which would preclude the formation of stable Earth-analogs, let alone Earth-mass planets themselves.

Figure 12: Planet, Candidate, or SRC planetary irradiation relative to Earth (\(S/S_{\oplus}\)) versus Msini (\(M_{\oplus}\)). The habitable zone for an Earth analog in a Solar analog system (Kopparapu et al., 2014) is marked in the blue shaded region.

For the majority of our stars, the minimum detectable planet mass at 1 AU is well above the mass of Neptune or even Saturn. In some cases, where the stars have only a handful of existing RV observations, even a stellar companion could remain hidden in the data. We therefore recommend further study of all targets on this list. Future surveys could focus most strongly on those that have the least RV sensitivity in the 1 AU region. The stars with the least RV sensitivity, for this study, are those with the smallest number of RV observations. Our list contains 9 stars with under 50 RV epochs: HD 693 (16 epochs), HD 30495 (50 epochs), HD 76151 (7 epochs), HD 131977 (22 epochs), HD 147584 (1 epoch), HD 160346 (34 epochs), HD 165341 (7 epochs), HD 203608 (1 epoch), and HD 216803 (42 epochs). We recommend that future RV surveys focus strongly on these targets, in order to build up their RV baselines and thus increase RV sensitivity. For those targets which are closer to the \(1M_{\oplus}\) line, we suggest more in-depth analysis of the archival data in order to push this limit. The final uninformed search and injection/recovery figures created by RVSearch are presented in the accompanying Figure Sets so that targets' results may be examined on an individual basis. The radial velocity data used to perform these fits and analyses will be published in an accompanying machine readable table. 
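To make the EEID and Doppler-sensitivity columns of Table 12 concrete, the sketch below (our own illustration, not the injection/recovery machinery used in this work) converts a stellar luminosity into an EEID and translates an assumed RV semi-amplitude floor at that separation into a minimum detectable \(m\sin i\). The 2 m s\({}^{-1}\) floor and the solar-twin parameters are made-up placeholder inputs, not values from this paper.

```python
import math

def eeid_au(log_lum):
    """EEID: separation at which a planet receives the same flux as the
    Earth does from the Sun, d = sqrt(L / L_sun) au."""
    return math.sqrt(10.0 ** log_lum)

def min_msini_earth(k_floor, a_au, m_star):
    """Minimum detectable m*sin(i) [M_Earth] on a circular orbit of semi-major
    axis a_au [au] around a star of m_star [M_sun], for an RV semi-amplitude
    sensitivity floor k_floor [m/s]."""
    p_yr = math.sqrt(a_au ** 3 / m_star)                  # Kepler's third law
    msini_jup = k_floor / 28.4329 * m_star ** (2 / 3) * p_yr ** (1 / 3)
    return msini_jup * 317.8

# Placeholder example: a solar twin with a 2 m/s sensitivity floor.
m_star, log_lum, k_floor = 1.0, 0.0, 2.0
a = eeid_au(log_lum)
print(f"EEID = {a:.2f} au; minimum m sin i ~ {min_msini_earth(k_floor, a, m_star):.0f} M_Earth")
# -> EEID = 1.00 au; a 2 m/s floor corresponds to roughly 22 M_Earth at 1 au.
```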
## 9 Conclusion We expect the detection and characterization of Earth-analog planets to be an exceptionally difficult undertaking due to the challenges presented by observational constraints, instrument systematics, and, most importantly, the variability of the stars themselves. The list of stars that are well suited to future direct imaging searches for such planets is limited due to stringent requirements on the stars' distance from Earth, which in turn determines whether a temperate planet orbiting a given star falls outside the inner working angle of the DI instrument. There are \(\sim\)100 stars identified by the EPRV WG to meet the criteria both for being a suitable DI target and for being amenable to precision radial velocity observations. We have compiled archival radial velocity time series data from the majority of precision RV spectrographs that have operated in the southern hemisphere over the past two decades for 50 of these nearby, Sun-like stars that are likely to be targets of future, space-based, direct imaging missions. Our primary objective was to quantify each star's RV completeness via the use of an injection/recovery analysis applied to archival RV data. Our results show that the minimum detectable planet mass at 1 AU ranges from 6.5 to 818.5 \(M_{\oplus}\) depending on the star, showcasing the heterogeneous state of the archival RV data collected from these targets. While additional data from the spectrographs included in this study are unlikely to reveal the presence of a 10 cm s\({}^{-1}\) signal due to a true Earth analog, there is still room for significant improvements to the stars' RV completeness using these current generation instruments. Future surveys prioritizing those stars for which we are already sensitive to super-Earth/sub-Neptune type planets (M\({}_{\rm p}\)\(\sim\) 10-20\(M_{\oplus}\)) at 1 AU could increase our sensitivity closer to the 1 \(M_{\oplus}\) limit. Alternatively, focusing on the stars for which we have the least RV data (those where giant planets at 1 AU could remain hidden) could identify currently unknown planetary companions that would preclude the existence of a temperate, terrestrial planet. 
In the course of preparing each star's RV time series for the injection/recovery analyses we also performed an uninformed search of the RV data to identify and remove any significant signals. In doing so, we recovered 28 previously published planets. The orbital parameters of many of these planets have not been revisited since their original publication, often 5-10 years ago. Our updated analysis, which generally includes both additional data from different instruments and a longer observing baseline than previous fits, is able to increase the precision on the planets' periods, eccentricities and RV semi-amplitudes. Looking at the ratio of the previously published uncertainties to our updated orbital parameter uncertainties, we find mean uncertainty improvements of 2.7\(\times\) in period, 1.3\(\times\) in RV semi-amplitude, and 1.4\(\times\) in eccentricity. The third key component of this work is the identification and characterization of many stars' variability timescales and amplitudes using the same uninformed search methodology applied to each star's S-index and, for targets observed by the _UCLES_ instrument, \(EW_{H\alpha}\) time series. Understanding a star's rotationally modulated activity signals along with its long term magnetic activity cycles, both of which can mask the presence of low amplitude Keplerian signals, will inform the sampling baseline and cadence necessary in future EPRV surveys to model and mitigate these star-based signals. Our work is not an exhaustive analysis of the stars' activity, but in many cases it does provide an initial or refined characterization of the stars' rotation and magnetic cycles. Future work to better quantify these signals and their development over time is encouraged. If extreme precision RV follow-up of planets detected by DI missions around these stars will someday be required, then it would serve the exoplanet community well to begin new observing campaigns of these targets in the near term. Dedicated, high cadence, high precision (\(\sigma_{\rm RV}\leq 1\) m s\({}^{-1}\)) RV monitoring will enable the characterization, and potentially the mitigation, of stellar variability signals on time scales of hours to years alongside the detection of additional, currently unknown planetary companions. Knowledge of how to correctly model and remove signals of both types will be crucial for any future efforts to measure precise masses for Earth-analog planets.

KL, JB, and EM were supported by the NASA Exoplanet Exploration Program (ExEP). KL acknowledges support from the Whitman College Independent Study and Senior Thesis programs. The research was carried out in part at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004). The work herein is based on observations obtained at the W. M. Keck Observatory, which is operated jointly by the University of California and the California Institute of Technology, and we thank the UC-Keck and NASA-Keck Time Assignment Committees for their support. We also wish to extend our special thanks to those of Hawaiian ancestry on whose sacred mountain of Mauna Kea we are privileged to be guests. 
Without their generous hospitality, the Keck observations presented herein would not have been possible. The work herein is also based on observations obtained with the Automated Planet Finder (APF) telescope and its Levy Spectrometer at Lick Observatory, along with data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile. We acknowledge the traditional owners of the land on which the Anglo-Australian Telescope (AAT) stands, the Gamilaraay people, and pay our respects to elders past and present. Some observations in this paper made use of the High-Resolution Imaging instrument(s) 'Alopeke (and/or Zorro). 'Alopeke (and/or Zorro) was funded by the NASA Exoplanet Exploration Program and built at the NASA Ames Research Center by Steve B. Howell, Nic Scott, Elliott P. Horch, and Emmett Quigley. 'Alopeke (and/or Zorro) was mounted on the Gemini North (and/or South) telescope of the international Gemini Observatory, a program of NSF's NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. On behalf of the Gemini partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigacion y Desarrollo (Chile), Ministerio de Ciencia, Tecnologia e Innovacion (Argentina), Ministerio da Ciencia, Tecnologia, Inovacoes e Comunicacoes (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. UCO/Lick: The APF (Levy spectrograph), Magellan: Clay (Planet Finder Spectrograph), Keck:I (HIRES), Gemini North: 'Alopeke, Gemini South: Zorro
2308.05020
The sequentially Cohen-Macaulay property of edge ideals of edge-weighted graphs
Let $I(G_{\mathbf{w}})$ be the edge ideal of an edge-weighted graph $(G,\mathbf{w})$. We prove that $I(G_{\mathbf{w}})$ is sequentially Cohen-Macaulay for all weight functions $w$ if and only if $G$ is a Woodroofe graph.
Ly Thi Kieu Diem, Nguyen Cong Minh, Thanh Vu
2023-08-09T15:31:01Z
http://arxiv.org/abs/2308.05020v1
# The sequentially Cohen-Macaulay property of edge ideals of edge-weighted graphs ###### Abstract. Let \(I(G_{\mathbf{w}})\) be the edge ideal of an edge-weighted graph \((G,\mathbf{w})\). We prove that \(I(G_{\mathbf{w}})\) is sequentially Cohen-Macaulay for all weight functions \(\mathbf{w}\) if and only if \(G\) is a Woodroofe graph. Key words and phrases:sequentially Cohen-Macaulay; edge-weighted graph; monomial ideal 2010 Mathematics Subject Classification: 05E40, 13F55, 13D02 ## 1. Introduction Let \(S=K[x_{1},\ldots,x_{n}]\) be a standard graded polynomial ring over an arbitrary field \(K\). Let \(G\) be a simple graph with vertex set \(V=\{x_{1},\ldots,x_{n}\}\) and edge set \(E(G)\). By abuse of notation, we also use \(x_{i}x_{j}\) to denote an edge \(\{x_{i},x_{j}\}\) of \(G\). Assume that \(\mathbf{w}:E(G)\to\mathbb{Z}_{>0}\) is a weight function on edges of \(G\). The edge ideal of the edge-weighted graph \((G,\mathbf{w})\) is defined by \[I(G_{\mathbf{w}})=\big{(}(x_{i}x_{j})^{\mathbf{w}(x_{i}x_{j})}\mid\{i,j\}\in E (G)\big{)}\subseteq S.\] In particular, if every edge of \(G\) has weight one then \(I(G_{\mathbf{w}})\) becomes the usual edge ideal \(I(G)\). Edge ideals of edge-weighted graphs were introduced by Paulsen and Sather-Wagstaff [PS]. In this work, the authors described a primary decomposition of \(I(G_{\mathbf{w}})\) and studied the Cohen-Macaulay property of \(I(G_{\mathbf{w}})\) when the underlying graph \(G\) is a cycle, a tree, or a complete graph. In particular, they proved that \(I(G_{\mathbf{w}})\) is Cohen-Macaulay for all weight functions \(\mathbf{w}\) when \(G\) is a complete graph. In our first main result, we prove the converse of this result. **Theorem 1.1**.: _Let \(G\) be a simple graph. The following statements are equivalent:_ 1. \(I(G_{\mathbf{w}})\) _is Cohen-Macaulay for all weight functions_ \(\mathbf{w}\)_;_ 2. \(I(G_{\mathbf{w}})\) _is Cohen-Macaulay for all weight functions_ \(\mathbf{w}\) _such that_ \(\mathbf{w}(x_{i}x_{j})\in\{1,2\}\) _for all edges_ \(x_{i}x_{j}\in E(G)\)_;_ 3. \(G\) _is a disjoint union of finitely many complete graphs._ In [FSTY], Fakhari, Shibata, Terai, and S. Yassemi characterized the unmixed property of \(I(G_{\mathbf{w}})\) when \(G\) is a very well-covered graph and proved that this is equivalent to the Cohen-Macaulay property of \(I(G_{\mathbf{w}})\). In this context, Terai [T] proposed the following conjecture **Conjecture** (Terai).: Let \(G\) be a Cohen-Macaulay very well-covered graph. Then \(I(G_{\mathbf{w}})\) is sequentially Cohen-Macaulay for all weight functions \(\mathbf{w}\). We first recall the definition of sequentially Cohen-Macaulay modules over \(S\). **Definition 1**.: Let \(M\) be a graded module over \(S\). We say that \(M\) is sequentially Cohen-Macaulay if there exists a filtration \[0=M_{0}\subset M_{1}\subset\cdots\subset M_{r}=M\] of \(M\) by graded \(S\)-modules such that \(\dim(M_{i}/M_{i-1})<\dim(M_{i+1}/M_{i})\) for all \(i\), where \(\dim\) denotes Krull dimension, and \(M_{i}/M_{i-1}\) is Cohen-Macaulay for all \(i\). An ideal \(J\) is said to be sequentially Cohen-Macaulay if \(S/J\) is a sequentially Cohen-Macaulay \(S\)-module. A graph \(G\) (resp. \((G,\mathbf{w})\)) is said to be sequentially Cohen-Macaulay if \(I(G)\) (resp. \(I(G_{\mathbf{w}})\)) is. The notion of sequentially Cohen-Macaulay was introduced by Stanley [S] as a generalization of the Cohen-Macaulay property in connection with the work of Bjorner and Wachs on nonpure shellability [BW1, BW2]. 
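As a small illustration of the objects just defined (an example of ours, not taken from [PS] or from the results below), consider the path \(G\) on the vertices \(x_{1},x_{2},x_{3}\) with edges \(x_{1}x_{2}\) and \(x_{2}x_{3}\), and the weight function \(\mathbf{w}\) with \(\mathbf{w}(x_{1}x_{2})=2\) and \(\mathbf{w}(x_{2}x_{3})=1\). Then \[I(G_{\mathbf{w}})=\big{(}(x_{1}x_{2})^{2},x_{2}x_{3}\big{)}=(x_{1}^{2}x_{2}^{2},x_{2}x_{3})\subseteq K[x_{1},x_{2},x_{3}].\] Since a path has no cycles at all, it is in particular a Woodroofe graph in the sense of Definition 2 below, so Theorem 1.2 guarantees that this ideal is sequentially Cohen-Macaulay for every choice of weights. 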
When \(J\) is a sequentially Cohen-Macaulay ideal, it is well-known that \(J\) is Cohen-Macaulay if and only if \(J\) is unmixed. In motivation to study the conjecture of Terai, we can classify graphs for which \(G_{\mathbf{w}}\) are sequentially Cohen-Macaulay for all weight functions \(\mathbf{w}\). To introduce our result, we first define a special class of simple graphs which contain \(5\)-cycles and chordal graphs. A chordless cycle \(C_{t}\) of length \(t\) is a cycle with no chord \(\{i,j\}\) for \(j\neq i+1\). Equivalently, the induced graph of \(G\) on \(\{1,\ldots,t\}\) is the cycle on \(t\) vertices. **Definition 2**.: A simple graph \(G\) is said to be a Woodroofe graph if \(G\) has no chordless cycles of length other than \(3\) or \(5\). In [Wo, Theorem 1], Woodroofe proved that if \(G\) is a Woodroofe graph, then it is vertex-decomposable. So, it is sequentially Cohen-Macaulay. Our second main result of this paper states that Woodroofe graphs are precisely graphs for which \(G_{\mathbf{w}}\) are sequentially Cohen-Macaulay for all weight functions \(\mathbf{w}\). **Theorem 1.2**.: _Let \(G\) be a simple graph. The following statements are equivalent:_ 1. \((G,\mathbf{w})\) _is sequentially Cohen-Macaulay for all weight functions_ \(\mathbf{w}\)_;_ 2. \((G,\mathbf{w})\) _is sequentially Cohen-Macaulay for all weight functions_ \(\mathbf{w}\) _such that_ \(\mathbf{w}(x_{i}x_{j})\in\{1,2\}\) _for all edges_ \(x_{i}x_{j}\in E(G)\)_;_ 3. \(G\) _is a Woodroofe graph._ Now we explain the organization of the paper. In Section 2, we prove Theorem 1.1 and Theorem 1.2. In Section 3, we give some applications of our main results. In particular, we provide counterexamples to Terai's conjecture. ## 2. Proof of the main results Throughout the paper, we denote \(S=K[x_{1},\ldots,x_{n}]\) a standard graded polynomial ring over a field \(K\). Let \(\mathfrak{m}=(x_{1},\ldots,x_{n})\) be the maximal homogeneous ideal of \(S\). We first recall some notation and results. For a finitely generated graded \(S\)-module \(L\), the depth of \(L\) is defined to be \[\operatorname{depth}(L)=\min\{i\mid H^{i}_{\mathfrak{m}}(L)\neq 0\},\] where \(H^{i}_{\mathfrak{m}}(L)\) denotes the \(i\)-th local cohomology module of \(L\) with respect to \(\mathfrak{m}\). **Definition 3**.: A finitely generated graded \(S\)-module \(L\) is called Cohen-Macaulay if \(\operatorname{depth}L=\dim L\). A homogeneous ideal \(I\subseteq S\) is said to be Cohen-Macaulay if \(S/I\) is a Cohen-Macaulay \(S\)-module. Let \(I\) be a monomial ideal in \(S\). In [H], Hochster introduced the set of associated radical ideals of \(I\), namely \(\sqrt{I:u}\) for monomials \(u\notin I\) and proved that the Cohen-Macaulay property of \(I\) can be characterized in terms of its associated radicals. In [JS], Jafari and Sabzrou showed that the sequentially Cohen-Macaulay property of \(I\) can also be characterized in terms of its associated radicals. **Lemma 2.1**.: _A monomial ideal \(I\) is (sequentially) Cohen-Macaulay if and only if \(\sqrt{I:u}\) is (sequentially) Cohen-Macaulay for all monomials \(u\notin I\)._ Proof.: The statement for Cohen-Macaulay property follows from [H, Theorem 7.1] and the fact that \(\dim S/\sqrt{I:u}\leq\dim S/I\) for all monomials \(u\notin I\). See also [JS, Proposition 2.8]. The statement for sequentially Cohen-Macaulay is [JS, Proposition 2.23]. The associated radicals also play an important role in studying the regularity of monomial ideals (see [MNPTV]). The associated radicals can be computed as follows. 
For a monomial \(f\) in \(S\), the support of \(f\), denoted \(\operatorname{supp}(f)\), is the set of \(x_{i}\) such that \(x_{i}\mid f\). The radical of \(f\) is defined by \(\sqrt{f}=\prod_{x_{i}\in\operatorname{supp}f}x_{i}\). For an exponent \(\mathbf{a}\in\mathbb{N}^{n}\), we denote by \(x^{\mathbf{a}}\) the monomial \(x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\). **Lemma 2.2**.: _Let \(J\) be a monomial ideal in \(S\) generated by the monomials \(f_{1},\ldots,f_{r}\) and let \(x^{\mathbf{a}}\) be a monomial in \(S\). Then \(\sqrt{J:x^{\mathbf{a}}}\) is generated by_ \[\sqrt{f_{1}/\gcd(f_{1},x^{\mathbf{a}})},\ldots,\sqrt{f_{r}/\gcd(f_{r},x^{\mathbf{a}})}.\] Proof.: See [MNPTV, Lemma 2.24]. Let \(G\) denote a finite simple graph over the vertex set \(V(G)=\{x_{1},x_{2},\ldots,x_{n}\}\) and the edge set \(E(G)\). A subgraph \(H=G[W]\) is called an induced subgraph of \(G\) on \(W\subseteq V(G)\) if, for any vertices \(u,v\in W\), we have \(uv\in E(H)\) if and only if \(uv\in E(G)\). Let \(\mathbf{w}:E(G)\to\mathbb{Z}_{>0}\) be a weight function on the edges of \(G\). We first show that the property that \(I(G_{\mathbf{w}})\) is (sequentially) Cohen-Macaulay for all weight functions \(\mathbf{w}\) is equivalent to the property that all induced subgraphs of \(G\) are (sequentially) Cohen-Macaulay. **Lemma 2.3**.: _Let \(G\) be a simple graph. The following statements are equivalent._ 1. \((G,\mathbf{w})\) _is (sequentially) Cohen-Macaulay for all weight functions_ \(\mathbf{w}\)_;_ 2. \((G,\mathbf{w})\) _is (sequentially) Cohen-Macaulay for all weight functions_ \(\mathbf{w}\) _such that_ \(\mathbf{w}(x_{i}x_{j})\in\{1,2\}\) _for all edges_ \(x_{i}x_{j}\in E(G)\)_;_ 3. \(G[W]\) _is (sequentially) Cohen-Macaulay for all subsets_ \(W\subseteq V(G)\)_._ Proof.: It is obvious that \((1)\Rightarrow(2)\). Now, we prove \((2)\Rightarrow(3)\). Let \(W\) be any subset of \(V(G)\). Let \(\mathbf{w}\) be the weight function defined as follows: \[\mathbf{w}(e)=\begin{cases}2\text{ if }e\in G[W],\\ 1\text{ otherwise.}\end{cases}\] Let \(x^{\mathbf{a}}=\prod_{x_{j}\in W}x_{j}\). We first show that \[\sqrt{I(G_{\mathbf{w}}):x^{\mathbf{a}}}=I(G[W])+\text{(some variables not in }W)+I(G[W^{\prime}]), \tag{1}\] where \(W^{\prime}\) is a subset of \(V(G)\setminus W\). By Lemma 2.2, \(\sqrt{I(G_{\mathbf{w}}):x^{\mathbf{a}}}\) is generated by \(\sqrt{e^{\mathbf{w}(e)}/\gcd(e^{\mathbf{w}(e)},x^{\mathbf{a}})}\) for all edges \(e\) of \(G\). We have three cases: **Case 1.**\(e\in G[W]\). In this case, \(\mathbf{w}(e)=2\) and \(e^{2}/\gcd(e^{2},x^{\mathbf{a}})=e\). **Case 2.**\(|\operatorname{supp}e\cap W|=1\). Assume that \(e=xy\) with \(x\in W\) and \(y\notin W\). Then \(e/\gcd(e,x^{\mathbf{a}})=y\notin W\). **Case 3.**\(|\operatorname{supp}e\cap W|=0\). Then \(e/\gcd(e,x^{\mathbf{a}})=e\). Eq. (1) follows. By Lemma 2.1, \(\sqrt{I(G_{\mathbf{w}}):x^{\mathbf{a}}}\) is (sequentially) Cohen-Macaulay. In particular, \(I(G[W])+I(G[W^{\prime}])\) is (sequentially) Cohen-Macaulay. Since \(W\cap W^{\prime}=\emptyset\), by [V, Lemma 4.1] and [Wo, Lemma 20], we deduce that \(I(G[W])\) is (sequentially) Cohen-Macaulay. (3) \(\Rightarrow\) (1). By Lemma 2.2, any minimal generator of \(\sqrt{I(G_{\mathbf{w}}):x^{\mathbf{a}}}\), for any weight function \(\mathbf{w}\) and any monomial \(x^{\mathbf{a}}\) such that \(x^{\mathbf{a}}\notin I(G_{\mathbf{w}})\), is either \(xy\), where \(xy\) is an edge of \(G\), or a variable. Hence, \(\sqrt{I(G_{\mathbf{w}}):x^{\mathbf{a}}}=I(G[W])+\text{(some variables)}\) for some subset \(W\) of \(V(G)\). 
By assumption, they are (sequentially) Cohen-Macaulay. By Lemma 2.1, \(I(G_{\mathbf{w}})\) is (sequentially) Cohen-Macaulay. By Lemma 2.3, we see that studying the class of graphs for which \(I(G_{\mathbf{w}})\) is (sequentially) Cohen-Macaulay is equivalent to the problem of finding obstructions to (nonpure) shellability of flag complexes. This observation leads us to the proof of the main results. Proof of Theorem 1.1.: By Lemma 2.3, the Theorem follows from the following facts. 1. Disjoint unions of complete graphs are Cohen-Macaulay [V]. 2. Induced subgraphs of a disjoint union of complete graphs are disjoint unions of complete graphs. 3. \(P_{3}\), the path of length \(2\), is not Cohen-Macaulay. 4. If \(P_{3}\) is not an induced subgraph of \(G\), then \(G\) is a disjoint union of complete graphs. Proof of Theorem 1.2.: By Lemma 2.3, the Theorem follows from the definition of Woodroofe graphs and the following facts. 1. Woodroofe graphs are sequentially Cohen-Macaulay [Wo, Theorem 1]. 2. Induced subgraphs of a Woodroofe graph are Woodroofe graphs. 3. The cycles \(C_{t}\) are not sequentially Cohen-Macaulay for \(t\neq 3,5\) (see [FT, Proposition 4.1] and [Wo, Theorem 10]). **Remark 2.4**.: 1. The simplicial complexes corresponding to edge ideals of disjoint unions of complete graphs are matroids of rank \(2\). 2. One can show that classifying the hypergraphs \(\mathcal{H}\) for which the edge ideal \(I(\mathcal{H}_{\mathbf{w}})\) of the edge-weighted hypergraph \((\mathcal{H},\mathbf{w})\) is (sequentially) Cohen-Macaulay for all weight functions \(\mathbf{w}\) also reduces to studying the obstructions to (nonpure) shellability of simplicial complexes. 3. We can show that \(I(\mathcal{H}_{\mathbf{w}})\) is Cohen-Macaulay for all weight functions \(\mathbf{w}\) if and only if \(\Delta(I_{\mathcal{H}})\) is a matroid. 4. The obstruction to nonpure shellability is much more subtle (see [Wa] for some partial results), and we leave that for future work. ## 3. Applications In this section, we give some applications of our results. Firstly, recall that when \(J\) is sequentially Cohen-Macaulay, \(J\) is Cohen-Macaulay if and only if it is unmixed. Therefore, we obtain **Corollary 3.1**.: _Let \(G\) be a Woodroofe graph and \(\mathbf{w}:E(G)\to\mathbb{Z}_{>0}\) a weight function. Then \(I(G_{\mathbf{w}})\) is Cohen-Macaulay if and only if \(I(G_{\mathbf{w}})\) is unmixed._ When \(G\) is a Cohen-Macaulay graph, a weight function \(\mathbf{w}\) on the edges of \(G\) is called Cohen-Macaulay if \((G,\mathbf{w})\) is Cohen-Macaulay. Before giving our next application, we recall the result of Paulsen and Sather-Wagstaff [PS, Theorem 4.4] on the edge-weighted graph \((C_{5},\mathbf{w})\). They proved that \(\mathbf{w}\) is Cohen-Macaulay if and only if there exists a vertex \(v\) such that the weights on the edges of \(C_{5}\), starting from \(v\) in clockwise order, are of the form \(a,b,c,d,a\) with \(a\leq b\geq c\leq d\geq a\). We call such a vertex \(v\) a balancing vertex of \(\mathbf{w}\). Let \(H\) be a graph formed by connecting two \(5\)-cycles by a path. By [HMT, Theorem 2.4], \(H\) is Cohen-Macaulay if and only if this path is of length \(1\). We may assume that the vertices of \(H\) are \(\{x_{1},\ldots,x_{5},y_{1},\ldots,y_{5}\}\) and the edges of \(H\) are \(\{x_{1}x_{2},\ldots,x_{4}x_{5},x_{1}x_{5},y_{1}y_{2},\ldots,y_{4}y_{5},y_{1}y_{5},x_{1}y_{1}\}\). Note that \(I(H)+(x_{i})\) and \(I(H)+(y_{i})\) are not Cohen-Macaulay for \(i\in\{2,5\}\). 
With this assumption, we have **Proposition 3.2**.: _The edge-weighted graph \((H,\mathbf{w})\) is Cohen-Macaulay if and only if \(\mathbf{w}\) satisfies the following conditions:_ 1. \(\mathbf{w}(x_{1}y_{1})\leq\min\{\mathbf{w}(x_{1}x_{2}),\mathbf{w}(x_{1}x_{5}), \mathbf{w}(y_{1}y_{2}),\mathbf{w}(y_{1}y_{5})\},\)__ 2. _The induced edge-weighted graphs of_ \((H,\mathbf{w})\) _on_ \(\{x_{1},\ldots,x_{5}\}\) _and_ \(\{y_{1},\ldots,y_{5}\}\) _are Cohen-Macaulay._ 3. _Balancing vertices of_ \(\mathbf{w}\) _on_ \(\{x_{1},\ldots,x_{5}\}\) _and_ \(\{y_{1},\ldots,y_{5}\}\) _can be chosen among_ \(\{x_{1},x_{3},x_{4}\}\) _and_ \(\{y_{1},y_{3},y_{4}\}\) _respectively._ Proof.: Denote \(I=I(H_{\mathbf{w}})\). Let \((H_{1},\mathbf{w}_{1})\) and \((H_{2},\mathbf{w}_{2})\) be the induced edge-weighted graphs of \((H,\mathbf{w})\) on \(\{x_{1},\ldots,x_{5}\}\) and \(\{y_{1},\ldots,y_{5}\}\) respectively. First, we prove that if \(\mathbf{w}\) is Cohen-Macaulay, it must satisfy the above conditions. For (1), assume by contradiction that \(\mathbf{w}(x_{1}y_{1})=a>\mathbf{w}(y_{1}y_{2})=b\). Let \(c=\max(\mathbf{w}(y_{3}y_{4}),\mathbf{w}(y_{4}y_{5}))\). Then \[\sqrt{I:y_{1}^{a-1}y_{4}^{c}}=I(H[x_{1},\ldots,x_{5},y_{1}])+(y_{2},y_{3},y_{5}).\] In particular, it is not Cohen-Macaulay. By Lemma 2.1, \(I(H_{\mathbf{w}})\) is not Cohen-Macaulay, a contradiction. By symmetry, \(\mathbf{w}\) must satisfy condition (1). We now prove that \((H_{2},{\bf w}_{2})\) must be Cohen-Macaulay. Assume by contradiction that, \((H_{2},{\bf w}_{2})\) is not Cohen-Macaulay. By Lemma 2.1, there exists an exponent \(y^{\bf b}\) such that \(\sqrt{I(H_{2},{\bf w}_{2}):y^{\bf b}}\) is not Cohen-Macaulay. Then we have \[\sqrt{I(H_{\bf w}):x_{2}^{a_{2}}x_{4}^{a_{4}}y^{\bf b}}=\sqrt{I(H_{2},{\bf w}_{ 2}):y^{\bf b}}+(x_{1},x_{3},x_{5}),\] where \(a_{2}=\max({\bf w}(x_{2}x_{1}),{\bf w}(x_{3}x_{2}))\) and \(a_{4}=\max({\bf w}(x_{3}x_{4}),{\bf w}(x_{4}x_{5}))\). In particular, it is not Cohen-Macaulay. By Lemma 2.1, \(I(H_{\bf w})\) is not Cohen-Macaulay, a contradiction. By symmetry, \({\bf w}\) must satisfy condition (2). Now note that if \({\bf w}(x_{2}x_{3})<{\bf w}(x_{3}x_{4})\) then \(\sqrt{I:x_{3}^{b}}=I+(x_{2})\) where \(b={\bf w}(x_{3}x_{4})-1\). Since \(I+(x_{2})\) is not Cohen-Macaulay, this implies a contradiction. Hence, \({\bf w}(x_{2}x_{3})\geq{\bf w}(x_{3}x_{4})\). By symmetry, we deduce that \({\bf w}(x_{4}x_{5})\geq{\bf w}(x_{3}x_{4})\). By [PS, Theorem 4.4] and the previous claim that \((H_{1},{\bf w}_{1})\) is Cohen-Macaulay, we deduce that a balancing vertex of \({\bf w}\) on \(\{x_{1},\ldots,x_{5}\}\) can be chosen among \(\{x_{1},x_{3},x_{4}\}\). By symmetry, \({\bf w}\) must satisfy condition (3). It remains to prove that if \({\bf w}\) satisfies conditions (1), (2), (3), then \(I=I(H_{\bf w})\) is Cohen-Macaulay. By Lemma 2.1, it suffices to prove that \(\sqrt{I:x^{\bf a}y^{\bf b}}\) is Cohen-Macaulay for all exponents \({\bf a},{\bf b}\) such that \(x^{\bf a}y^{\bf b}\notin I\). Denote \(I_{{\bf a},{\bf b}}=\sqrt{I:x^{\bf a}y^{\bf b}}\). First, we have **Claim A.** Assume that \((G,{\bf w})\) is an edge-weighted graph and \(x^{\bf a}\) is an exponent such that \(a_{i}<{\bf w}(e)\) for all edges \(e\) adjacent to \(i\) then \[\sqrt{I(G_{\bf w}):x^{\bf a}}=\sqrt{I(G_{\bf w}):x^{\bf b}}, \tag{2}\] with \(x^{\bf b}=x_{1}^{a_{1}}\cdots x_{i-1}^{a_{i-1}}x_{i+1}^{a_{i+1}}\cdots x_{n}^{ a_{n}}\). In other words, we may assume that \(a_{i}=0\). By symmetry, we may assume that \(a_{1}\geq b_{1}\). 
Since \(x^{\bf a}y^{\bf b}\notin I(H_{\bf w})\), we must have \(b_{1}<{\bf w}(x_{1}y_{1})\leq\min({\bf w}(y_{1}y_{2}),{\bf w}(y_{1}y_{5}))\). By Claim A, we may assume that \(b_{1}=0\). There are two cases as follows. **Case 1.**\(a_{1}\geq{\bf w}(x_{1}y_{1})\). Then \[I_{{\bf a},{\bf b}}=\sqrt{I:x^{\bf a}y^{\bf b}}=(y_{1})+\sqrt{I(H_{1},{\bf w} _{1}):x^{\bf a}}+\sqrt{I(H_{2},{\bf w}_{2}):y^{\bf b}} \tag{3}\] Since \((H_{1},{\bf w}_{1})\) is Cohen-Macaulay by [PS, Theorem 4.4], \(I_{{\bf a},{\bf b}}\) is not Cohen-Macaulay if and only if \((y_{1})+\sqrt{I(H_{2},{\bf w}_{2}):y^{\bf b}}=(y_{1},y_{2})+I(H_{2})\) or \(I(H_{2})+(y_{1},y_{5})\). Assume by contradiction that \((y_{1})+\sqrt{I(H_{2},{\bf w}_{2}):y^{\bf b}}=(y_{1},y_{2})+I(H_{2})\). Then we must have \(b_{3}\geq{\bf w}(y_{2}y_{3})\geq{\bf w}(y_{3}y_{4})\). But then \(y_{4}\in\sqrt{I(H_{2},{\bf w}_{2}):y^{\bf b}}\), a contradiction. Hence, \(I_{{\bf a},{\bf b}}\) is Cohen-Macaulay. **Case 2.**\(a_{1}<{\bf w}(x_{1}y_{1})\). By Claim A, we may assume that \(a_{1}=0\). In particular, \[I_{{\bf a},{\bf b}}=\sqrt{I(H_{1},{\bf w}_{1}):x^{\bf a}}+\sqrt{I(H_{2},{\bf w }_{2}):y^{\bf b}}+(x_{1}y_{1}). \tag{4}\] If \(x_{1}\) or \(y_{1}\) appears in \(I_{{\bf a},{\bf b}}\), with an argument similar to Case 1, we deduce that \(I_{{\bf a},{\bf b}}\) is the sum of two Cohen-Macaulay ideals on different sets of variables and an ideal generated by some other variables. Hence, it is Cohen-Macaulay. Thus, we may assume that \(x_{1},y_{1}\) does not appear in \(I_{{\bf a},{\bf b}}\). By Lemma 2.2, \(\sqrt{I(H_{1},{\bf w}_{1}):x^{\bf a}}=I(H_{1})+(x_{i}\mid i\in W_{1})\), where \(W_{1}\subseteq\{1,\ldots,5\}\). By the following facts, \(W_{1}\) must belong to \(P=\{\{2,4\},\{2,3,4\},\{3,5\},\{3,4,5\},\{3\},\{4\},\emptyset\}\). 1. By assumption, \(1\notin W_{1}\). 2. Since the balancing vertex of \((H_{1},{\bf w}_{1})\) can be chosen in the set \(\{3,4,1\}\), \({\bf w}(x_{2}x_{3})\geq{\bf w}(x_{3}x_{4})\). Hence, if \(2\in W_{1}\) then \(4\in W_{1}\). Similarly, if \(5\in W_{1}\) then \(3\in W_{1}\). 3. By [PS, Theorem 4.4] and Lemma 2.1, \(I(H_{1})+(x_{i}\mid i\in W_{1})\) is a Cohen-Macaulay ideal, \(W_{1}\) cannot be \(\{3,4\}\). 4. \(W_{1}\) cannot contain \(\{2,5\}\). Proof of (4). Assume by contradiction that \(2,5\in W_{1}\). Since \(a_{1}=0\), we must have \(a_{3}\geq{\bf w}(x_{2}x_{3})\). Similarly, \(a_{4}\geq{\bf w}(x_{4}x_{5})\). Since the balancing vertex of \((H_{1},{\bf w}_{1})\) can be chosen in the set \(\{3,4,1\}\), we have \({\bf w}(x_{3}x_{4})\leq\min({\bf w}(x_{2}x_{3}),{\bf w}(x_{4}x_{5}))\). Hence, \({\bf w}(x_{3}x_{4})\leq a_{3},a_{4}\). But this implies that \(x^{\bf a}\in I\), a contradiction. Now, it is easy to check that if \(W_{1},W_{2}\) belong to \(P\), we have \(I(H)+(x_{i}\mid i\in W_{1})+(y_{j}\mid j\in W_{2})\) is Cohen-Macaulay. The Proposition follows. Finally, by Theorem 1.2, any Cohen-Macaulay very well-covered graph that is not Woodroofe is a counterexample to Terai's conjecture. We provide some concrete examples below. Recall that a simple graph is called very well-covered if the size of every minimal vertex cover is half the number of vertices. In particular, it is unmixed. 
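To illustrate this notion with a small example of our own: the \(4\)-cycle \(C_{4}\) with edges \(x_{1}x_{2},x_{2}x_{3},x_{3}x_{4},x_{4}x_{1}\) is very well-covered, since its minimal vertex covers are exactly \(\{x_{1},x_{3}\}\) and \(\{x_{2},x_{4}\}\), each of size \(2=4/2\). Similarly, in the suspension \(H\) of a cycle considered in Example 3.3 below, every minimal vertex cover contains exactly one vertex of each pendant edge \(x_{i}y_{i}\), hence exactly \(t\) of the \(2t\) vertices.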
**Example 3.3**.: Let \(H\) be the suspension of a cycle \(C_{t}\) for \(t\neq 3,5\); i.e., the set of edges and the set of vertices are \[E(H)=\{x_{1}x_{2},x_{2}x_{3},\ldots,x_{t-1}x_{t},x_{t}x_{1},x_{1}y_{1},\ldots,x_{t}y_{t}\}\mbox{ and }V(H)=\{x_{1},y_{1},\ldots,x_{t},y_{t}\}.\] Let \({\bf w}\) be a weight function on \(E(H)\) taking a value \(w\geq 2\) on the cycle edges \(x_{i}x_{i+1}\) (indices modulo \(t\)) and the value \(1\) otherwise. Then \(H\) is a Cohen-Macaulay very well-covered graph, but \((H,{\bf w})\) is not sequentially Cohen-Macaulay. Proof.: The graph \(H\) is Cohen-Macaulay by [SVV, Theorem 2.1] (also see [V]). By definition, \(H\) is very well-covered. Since \[\sqrt{I(H_{\bf w}):\prod_{i=1}^{t}x_{i}^{w-1}}=I(C_{t})+(y_{1},\ldots,y_{t})\] and \(I(C_{t})\) is not sequentially Cohen-Macaulay by [FT, Proposition 4.1], it follows from Lemma 2.1 that \(I(H_{\bf w})\) is not sequentially Cohen-Macaulay. ### Acknowledgments This work was done while the first author was visiting the Vietnam Institute for Advanced Study in Mathematics (VIASM). He would like to thank the VIASM for the hospitality and financial support, and he also thanks the Vietnam National Foundation for Science and Technology Development (NAFOSTED) for its support under grant number 101.04-2021.19. ### Conflict of interest The authors declare no potential conflict of interest.
2304.03784
Generative AI for learning: Investigating the potential of synthetic learning videos
Recent advances in generative artificial intelligence (AI) have captured worldwide attention. Tools such as Dalle-2 and ChatGPT suggest that tasks previously thought to be beyond the capabilities of AI may now augment the productivity of creative media in various new ways, including through the generation of synthetic video. This research paper explores the utility of using AI-generated synthetic video to create viable educational content for online educational settings. To date, there is limited research investigating the real-world educational value of AI-generated synthetic media. To address this gap, we examined the impact of using AI-generated synthetic video in an online learning platform on both learners content acquisition and learning experience. We took a mixed-method approach, randomly assigning adult learners (n=83) into one of two micro-learning conditions, collecting pre- and post-learning assessments, and surveying participants on their learning experience. The control condition included a traditionally produced instructor video, while the experimental condition included a synthetic video with a realistic AI-generated character. The results show that learners in both conditions demonstrated significant improvement from pre- to post-learning (p<.001), with no significant differences in gains between the two conditions (p=.80). In addition, no differences were observed in how learners perceived the traditional and synthetic videos. These findings suggest that AI-generated synthetic learning videos have the potential to be a viable substitute for videos produced via traditional methods in online educational settings, making high quality educational content more accessible across the globe.
Daniel Leiker, Ashley Ricker Gyllen, Ismail Eldesouky, Mutlu Cukurova
2023-04-07T12:57:42Z
http://arxiv.org/abs/2304.03784v2
# Generative AI for learning: Investigating the potential of synthetic learning videos ###### Abstract Recent advances in generative artificial intelligence (AI) have captured worldwide attention. Tools such as Dalle-2 and ChatGPT suggest that tasks previously thought to be beyond the capabilities of AI may now augment the productivity of creative media in various new ways, including through the generation of synthetic video. This research paper explores the utility of using AI-generated synthetic video to create viable educational content for online educational settings. To date, there is limited research investigating the real-world educational value of AI-generated synthetic media. To address this gap, we examined the impact of using AI-generated synthetic video in an online learning platform on both learners' content acquisition and learning experience. We took a mixed-method approach, randomly assigning adult learners (\(n=83\)) into one of two micro-learning conditions, collecting pre- and post-learning assessments, and surveying participants on their learning experience. The control condition included a traditionally produced instructor video, while the experimental condition included a synthetic video with a realistic AI-generated character. The results show that learners in both conditions demonstrated significant improvement from pre- to post-learning (\(p<.001\)), with no significant differences in gains between the two conditions (\(p=.80\)). In addition, no differences were observed in how learners perceived the traditional and synthetic videos. These findings suggest that AI-generated synthetic learning videos have the potential to be a viable substitute for videos produced via traditional methods in online educational settings, making high quality educational content more accessible across the globe. Keywords:Generative AI, AI in Education, AI-generated Learning Content. ## 1 Introduction The argument for using artificial intelligence (AI) to support learning and education is well established, with a growing body of evidence demonstrating the positive impacts of using AI to support learning, engagement, and metacognitive development [1, 2, 3, 4]. However, generative AI is a relatively new area with respect to its implementation in learning contexts, and the extent to which AI-generated media can be used to support human learning remains largely unexamined. Recent advances in generative AI have captured worldwide attention. Tools such as Dalle-2 and ChatGPT, developed by OpenAI, suggest that tasks previously thought to be beyond the capabilities of AI may now augment the productivity of creative media and educational content in various new ways. Since at least 2014, new methods in generative machine learning, such as Generative Adversarial Networks (GANs), have enabled the realistic synthesis of digital content [5]. Over time, these models are increasing in size and complexity resulting in greater sophistication of their generated outputs [6], including the generation of photo-realistic images, cloning of voices, and animation of faces [7, 8, 9, 10]. More recently, methods such as Generative Pre-trained Transformers (GPT) are ushering in a new era of generative AI capability [11, 12]. Generative AI technologies like these are already being leveraged across several industries including entertainment, customer services and marketing [13]. 
Their introduction at scale can help us address global educational challenges such as access to high quality content across the globe and make progress towards global sustainable development goals (e.g., SDG4). In addition, they have the potential to reshape our creative and knowledge-based workforces, improve online learning, and transform large sectors of our economies. The global demand for massive open online courses (MOOCs), online degree and training programs, and employee upskilling and reskilling is growing rapidly. This is particularly true for low- and middle-income countries, where access to well-designed, high quality educational resources is a major problem. This demand in turn drives the need for educational content for these online platforms, including a significant amount of instructional learning videos requiring periodic updates to keep up with trends and rapid innovation in research and technology. The purpose of instructional videos in online learning content is to enhance the pedagogy, or message [14]. According the to the cognitive theory of multimedia learning, for these instructional learning videos to be effective, they should be designed with human cognition in mind [15]. Additionally, they should not display unnecessary, excessive, extraneous elements (e.g., overabundance of motion graphics) that can distract from learning and overwhelm cognitive load [16]. Taking an evidence-based approach to creating synthetic learning videos using generative AI is an appealing way to meet these needs due to the challenges associated with producing high quality video media (e.g., lack of on-screen experience of the instructors, significant amount of time and resources needed, and relatively under-resourced nature of educational institutions and schools). For this study, we will specifically focus on the use of generative AI tools to create synthetic videos with virtual instructors, resembling traditional lecture videos found in online learning experiences. ## 2 Background Research and Context Virtual instructors (or animated pedagogical agents) are lifelike onscreen characters, enacted by a computer to support learning by providing guidance or instruction through an online learning experience [17, 18]. Previous research demonstrates that including an animated pedagogical agent can improve learning in online settings [19, 20, 21, 22]. Similarly, multiple studies have shown that the addition of a character to virtual learning can positively impact learners' behaviors, attitudes, and motivation [23]. Given the recent advancements in generative AI, a logical next step in this line of research is to examine whether AI-generated virtual instructors can effectively support online learning [24]. To date, we know of one study investigating the use of an AI-generated virtual instructor to support human learning. In that study, researchers compared two different AI-generated characters and found that character likeability influenced participants motivation towards learning [25]. While this is a promising finding, research comparing learning from an AI-generated virtual instructor to learning from a traditional instructor is needed to evaluate the real-world educational value of such AI-generated media. To the best of our knowledge, this is the first study to make that comparison. 
The experiment performed for this study was done in the context of online professional learning and in collaboration with EIT InnoEnergy, a European company promoting innovation and entrepreneurship in the fields of sustainable energy. InnoEnergy is part of the European Institute of Innovation and Technology which is itself a body of the European Union. They are spearheading efforts to decarbonize Europe through the leadership of the European Battery Alliance (EBA) Academy. The subject matter of the content used for this study was sampled from an introductory lesson from InnoEnergy's EBA Academy, which was designed using the basic principles of multimedia learning (e.g., contiguity, modality, coherence, segmenting, pre-training, practice) to support an effective learning experience [16]. The key audience for the introductory lesson we sampled from is technicians seeking employment in gigafactories producing lithium-ion batteries, engineers (e.g., electrical, chemical) looking to increase their knowledge in battery technology, and knowledge workers (e.g., upper-level managers, investors) looking to expand their knowledge of the battery industry for strategic decision making. The aim of courses like this one is to achieve the goal of accelerating workforce transitions toward a clean energy economy. One significant challenge in generating learning products and services to achieve this aim is that much of the content is in new and emerging fields (such as state-of-the-art battery manufacturing), where little to no prior content exists to draw from. Another challenge is the rapid pace at which research and technology in these industries are transforming, requiring fast and frequent iterations to certain subdomains of the curriculum sometimes as frequently as a couple of times a year. One potential solution to address these challenges is to apply the research that exists in the literature around multimedia learning [e.g., 15, 17, 19, 21] to the creation of new asynchronous online learning content while exploring the practical use of AI-generated synthetic videos as an alternative to traditional production methods. This study will help to determine the viability of this approach with the end goal of ensuring that the learner experience will be enhanced through these efforts. More specifically, in this research paper, we propose to address the following research questions. 1) To what extent does the use of AI-generated synthetic videos in an online learning platform differentially impact learning performance when compared to traditionally produced instructor videos? 2) What are the perceived differences between AI-generated synthetic videos and traditionally produced instructor videos for learners in an online educational setting? Methodology **Participants.** Our sample included 83 adult learners, recruited from a global professional learning community, ranging in age from 18 to 64 with an average age of 41.5 years. Of these learners, 73% identified as male, 19% identified as female, 0% identified as non-binary, and 8% preferred not to disclose their gender identity. With regards to education, 4% held an associate degree, 15% held a bachelor's degree, 58% held a master's degree, and 23% held a doctorate degree. Additionally, 68% identified as being unfamiliar with the subject matter prior to completing the micro-learning course, while 32% reported having prior knowledge of the subject matter. 
**Synthetic Video Creation.** The key focus of this experiment is the introduction of AI-generated synthetic video used as instructional video content in a micro-learning course. An instructor video, produced using traditional recording methods, was used both as a control for our experiment and as the source material for generating the synthetic videos. To create the video for our experimental condition, the AI video creation platform Synthesia was used to generate text-to-video (TTV) content with photo-realistic synthetic actors. To establish the photo-realistic quality of these generated videos, Synthesia first records live footage of an actor as training data and then establishes a synthetic clone of that actor. Neural video synthesis is then applied to add realistic gestures and movements to the synthetically generated representation of the actor, which is driven by the TTV input to produce the final production asset. This process could be applied to any "actor" or "instructor" to create a synthetic video clone of themselves for repurposing in this type of AI-generated synthetic video creation. The AI-generated character used in this study was matched to the instructor as closely as possible with respect to age, gender, and race. This resulted in two videos with identical instructional content, but with different visual representations of the instructor (see Figure 1).

Figure 1: Screenshots of the AI-generated virtual instructor in the experimental condition (left) and the traditional instructor in the control condition (right).

**Learning Design.** Using the two videos described above, two micro-learning courses were designed for this experiment on the topic of energy sources and vectors at an introductory level. The main goal of the micro-course was to provide learners with an easy-to-complete learning unit that could be accessed on a mobile device and centered around the instructional video content. The micro-course consisted of a series of activities, including a course introduction page, a pre-learning knowledge check, the instructional video content (4.5 minutes in length), a click-and-reveal application activity, and a post-learning assessment. The courses were delivered to learners via EdApp, a learning platform designed with a focus on both mobile users and supporting micro-learning formats. They were identical except for the instructional video content (i.e., the control condition included the traditional instructor video, while the experimental condition featured the synthetic video). **Procedure.** We utilized a mixed-method approach to this research by collecting and examining both quantitative and qualitative data. Participants were invited through an email campaign by EIT InnoEnergy via their mailing list for current students and alumni of their master's degree and EBA Academy programs, as well as their employees engaging in learning and development. All members of this mailing list had previously consented to receive promotional emails for research purposes. Upon clicking the received promotional email link, participants were directed to sign into the EdApp platform. After completing informed consent, participants were randomly assigned into either the experimental or the control condition using nimble links; they then completed the micro-learning course described above and were asked to complete a survey on their learning experience. 
As part of the survey, participants used a 5-point Likert Scale to indicate their level of agreement with the following questions: _1) I would consider my overall experience with this micro-learning course positive._ _2) The use of video in the course met my expectations._ _3) The use of video in the course improved my understanding of the material._ _4) I would be interested in taking other courses like this._ Additionally, participants were asked to respond to the open-ended question: _Do you have any overall suggestions for improving this course?_ ## 3 Results To evaluate Research Question 1, responses to the pre- and post-learning assessments were scored and a difference score (post minus pre) was calculated to represent knowledge gains. To confirm the appropriateness of proceeding with a parametric test, skew and kurtosis of the difference scores were estimated and the data were visually inspected for normality. Subsequently, a paired sample t-test was used to compare pre- and post-learning across the entire sample. Regression analyses were then used to test whether condition (experimental vs. control) significantly predicted knowledge gains; regression was chosen for this comparison because it allowed us to control for prior subject matter knowledge. To evaluate Research Question 2, quantitative and qualitative approaches were used to examine participant responses to the learning experience survey. For close-ended questions with Likert response options, Pearson Chi-Square analyses were used to examine differences between conditions. For the open-ended question, automated sentiment and thematic analyses were used to summarize the results and compare the findings between conditions. Automated coding was then cross-checked manually by researchers who co-authored the paper. The analyses for all quantitative data were completed using relevant packages in R, and the analyses for all qualitative data were completed using NVivo. ### Impacts of AI-generated synthetic videos on learning performance Based on paired sample t-tests, learners showed significant improvement from pre-learning (\(M=0.53\), \(SD=0.65\)) to post-learning (\(M=1.51\), \(SD=1.03\)) across the full sample of 83 participants, \(t(82)=8.31\), \(p<.001\), \(d=0.91\), demonstrating that the micro-learning course was effective at facilitating gains in content knowledge. Regression analyses indicated that condition (experimental vs. control) was not a significant predictor of knowledge gains (\(\beta=.03\), \(p=.80\), \(r=.03\)). This finding was unchanged when controlling for participants' pre-learning performance (\(\beta=-.03\), \(p=.79\), \(r=.03\)) or their self-reported prior knowledge (\(\beta=.01\), \(p=.92\), \(r=.01\)). This suggests that there was no significant difference in knowledge gains for participants who viewed the AI-generated synthetic video (\(M_{gains}=1.00\), \(SD=1.04\), \(n=52\)) compared to participants who viewed the traditional instructor video (\(M_{gains}=0.94\), \(SD=1.13\), \(n=31\)). The change in learning performance from pre- to post-learning in both conditions as well as across the full sample is presented in Table 1. ### Learner perceptions of AI-generated synthetic videos **Quantitative Findings.** Of the full sample (\(n=83\)), 80% provided agreement ratings for the close-ended statements. 
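For illustration only, the statistical pipeline used in this section (knowledge-gain scoring, the paired t-test and gain regressions reported above, and the chi-square comparisons of the Likert items reported next) could be sketched as follows. The authors carried out these analyses with R packages and NVivo; the Python sketch below simply mirrors the same steps, and the file and column names are hypothetical.

```python
# Illustrative sketch only; the study's analyses were run in R/NVivo.
# Assumes a per-learner table with hypothetical columns:
#   pre, post        - pre- and post-learning assessment scores
#   condition        - "synthetic" or "traditional"
#   q1, q2, q3, q4   - Likert responses to the four close-ended statements
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("learning_study.csv")    # hypothetical file name
df["gain"] = df["post"] - df["pre"]       # knowledge gains (post minus pre)

# Paired sample t-test: pre- vs. post-learning across the full sample.
ttest = stats.ttest_rel(df["post"], df["pre"])
print(f"paired t-test: t = {ttest.statistic:.2f}, p = {ttest.pvalue:.3f}")

# Regression: does condition predict gains, controlling for pre-learning score?
model = smf.ols("gain ~ C(condition) + pre", data=df).fit()
print(model.summary())

# Chi-square test of independence between condition and Likert agreement (one statement).
contingency = pd.crosstab(df["condition"], df["q1"])
chi2_stat, p_val, dof, _ = stats.chi2_contingency(contingency)
print(f"chi-square: chi2({dof}) = {chi2_stat:.2f}, p = {p_val:.3f}")
```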
Pearson Chi-Square analyses indicated that there were no significant differences in responses to the four close-ended questions between the experimental and control conditions (all \(p>.46\)). Specifically, agreement frequency was not significantly different between conditions for the following statements: I would consider my overall experience with this micro-learning course positive (\(\chi^{2}(3)=0.19\), \(p=.98\), \(V=.05\)); The use of video in the course met my expectations (\(\chi^{2}(4)=4.65\), \(p=.34\), \(V=.27\)); The use of video in the course improved my understanding of the material (\(\chi^{2}(4)=2.59\), \(p=.63\), \(V=.20\)); I would be interested in taking other courses like this (\(\chi^{2}(3)=0.08\), \(p=.99\), \(V=.03\)). Figure 2 presents the percentage of participants who agreed or strongly agreed with each statement for both conditions and the full sample. These findings suggest that participants who viewed the AI-generated synthetic video were just as likely as participants who viewed the traditional instructor video to agree that their overall learning experience was positive, that the video met their expectations, that the video improved their understanding of the content, and that they would be interested in taking other micro-learning courses like the one they took.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & **Pre-Learning** & **Post-Learning** & **Knowledge Gains** & \\ & **M (_SD_)** & **M (_SD_)** & **M (_SD_)** & **p (_d_)** \\ \hline _Experimental (\(n=52\))_ & 0.45 (_0.61_) & 1.45 (_0.95_) & 1.00 (_1.04_) & \(<.001\) (_0.96_) \\ _Control (\(n=31\))_ & 0.66 (_0.70_) & 1.59 (_1.16_) & 0.94 (_1.13_) & \(<.001\) (_0.83_) \\ _Full Sample (\(n=83\))_ & **0.53 (_0.66_)** & **1.51 (_1.03_)** & **1.03 (_1.11_)** & \(<.001\) **(0.91)** \\ \hline \hline \end{tabular} \end{table} Table 1: Learner Performance and Knowledge Gains from Pre to Post

In addition to the lack of significant difference in agreement frequency between the experimental and control conditions, the pattern of Likert responses was identical for three of the four statements as well. Only the statement _The use of video in the course met my expectations_ displayed a different pattern of responses between the experimental and control condition. That is, in the experimental condition participants were more likely to strongly agree with this statement than to remain neutral, while in the control condition, participants were more likely to remain neutral than to strongly agree. Although this difference was not significant with the current sample size, the effect size indicator (Cramer's \(V=.27\)) suggests a medium effect. **Qualitative Findings.** For the open-ended survey question asking for overall suggestions for improving this course, 20% of the sample provided a response. This response rate was similar for participants in the experimental condition (21%) and the control condition (18%). When completing the survey, participants were unaware of which condition they were randomly assigned into, and some of the participants in the AI condition did not realize the video was synthetic. For example, one participant in the AI-generated condition responded to the open-ended question with _"I do not see where the AI Content generated comes in?"_. A qualitative sentiment analysis revealed that 47% of responses displayed negative sentiment, 37% displayed neutral sentiment, and 16% displayed positive sentiment. 
Although many of the comments were negative in sentiment, many contained constructive criticism (e.g., _"The video could be improved by providing more examples to better illustrate (and for the learner to better grasp) the difference between the various concepts: energy source, energy vector, secondary vectors, etc."_).

Figure 2: Percentage of participants that agreed with each of the close-ended statements broken down by condition and across the full sample.

A qualitative thematic analysis identified five common themes in the open-ended responses listed in Figure 3, which serves as a visual aid demonstrating that each of the identified themes was clearly present in both the experimental and control conditions. The most common theme, demonstrated in the quote given above, was learners wanting more examples and more detailed explanations for concepts (26% of responses; 7% of the sample). However, another subsample of the participants felt that the course was already too long, with the pace being slightly quick (16% of responses; 4% of the full sample). Comments like this one, _"After the MCQs [multiple choice questions] at the end, it would have been good to have feedback that suggested what I should do to solidify my understanding where there were gaps"_, made up 21% of responses (5% of the full sample) giving suggestions related to the pre- and post-learning assessments. Another 16% of responses (4% of the full sample) made comments about accessibility issues, such as _"I'm not sure if there was a transcript available for the video, there might have been one that wasn't obvious to me"_, and the same proportion of participants explicitly stated that they enjoyed the course (16% of responses; 4% of the full sample). These qualitative findings suggest that the general perceptions of the instructional video content were similar for participants who viewed the AI-generated synthetic video and participants who viewed the traditional instructional video. ## 4 Discussion The current study aimed to investigate the impact of AI-generated synthetic videos on learning performance compared to traditional instructor videos in an online learning platform. Our results indicated significant improvement in both the traditional video and AI-generated video conditions between pre- and post-learning assessments, with no statistical difference in terms of learning gains. This study contributes to the growing body of research on the use of synthetic characters and avatars in education by specifically focusing on the use of AI-generated synthetic videos and comparing them to traditionally produced instructor videos. However, it is important to note that AI-generated synthetic videos, which are created using generative AI methods and are not human-made, represent a new approach that differs from the use of human-made pedagogical agents and avatars.

Figure 3: Proportion of open-ended responses for each of the five major themes broken down by condition.

Interestingly, while our study shows an equal acceptance of the control and experimental conditions, research on pedagogical agents and avatars has generally shown that their use leads to more positive learning outcomes [15, 17, 19, 21]. It is possible that the neutral effect of these AI-generated synthetic videos compared to pedagogical agents and avatars is because the former is explicitly intended to replicate the traditional talking head video format. 
In contrast, the literature around pedagogical agents and avatars has focused on improving the learner experience by moving away from the talking head format to include meaningful gestures, and emotional cues [e.g., 18, 22]. In addition, the findings from both our qualitative and quantitative results examining the differences between AI-generated synthetic videos and traditionally produced instructor videos for learners in an online educational setting, suggest that there was little to no difference between learner perceptions of the two videos. Learner responses to our close-ended ended questions were nearly identical on three of the four items. Learner responses to our open-end question tended to be negative in sentiment, but this was not surprising given the wording of the prompt asking for suggestions to improve the course. We noted that sometimes the themes conflicted with one another. For example, the theme of learners wanting more examples and longer/more detailed explanations for concepts is at odds with the theme of the course being too long and the pace being slightly quick. Most of the themes were logistical and gave insight into how the micro-learning course could be improved from an accessibility standpoint. Organizing the findings from the thematic analyses by the type of video watched highlighted that there were no themes that were unique to either condition. The only observed difference in response to our close-ended items was in the extent to which the instructional video met the learners' expectations. The pattern of results showed that in the synthetic video with the AI-generated virtual instructor, more learners strongly agreed with the statement than stayed neutral. However, in the video with the traditional instructor, more learners stayed neutral than strongly agreed. While this difference in patterns between the conditions was not significant with the current sample size, the effect size indicator suggests a medium effect that warrants further investigations into potential explanations for any differences seen with larger sample sizes. Having presented the value of AI-generated synthetic media, it is important to highlight that one of the greatest challenges of using generative AI in education is the fear of false or inaccurate information being presented to learners [13]. This is a valid concern as AI-generated content can be based on biased or incomplete data, which can lead to the dissemination of false or misleading information. However, in the current study, we addressed this challenge by using input data that was unchanged and came directly from material produced in collaboration with subject matter experts. That is, while the virtual instructor and video medium itself were synthetic in our experimental condition, the content of the media was not. Through the involvement of subject matter experts in the creation of the traditionally produced video, we ensured that the information presented in our AI-generated synthetic video was accurate and reliable. Furthermore, the utilization of AI-generated synthetic videos alone may not be sufficient to support effective learning. In order to fully leverage the potential of such videos in an educational setting, it is imperative to integrate them within a larger curriculum that is grounded in sound learning science and instructional design principles [16]. 
This holistic approach will ensure that the use of AI-generated synthetic videos is integrated in a manner that supports the overall learning objectives and outcomes, in conjunction with other pedagogical techniques such as formative and summative assessments, interactive activities, and opportunities for application and practice. **Limitations.** Due to the desire for brevity in the micro-learning session context we studied, our pre- and post- learning assessments are brief. This limits our ability to investigate how AI-generated synthetic videos might differentially support various aspects of learning (e.g., difficulty of the material or complexity of concept) or what types of material they are most beneficial for (i.e., introductory material vs. advanced material). Additionally, our relatively small sample size may lead us to be underpowered to detect more nuanced differences in how learners perceive the two videos. This limits our ability to run more complex models to predict learning gains, including the investigation of fit between the learner and virtual instructor's demographics. Future studies are warranted with a more robust learning assessment across several subject matters using larger sample sizes. This will allow for a deeper dive into how AI-generated virtual instructors are perceived differently than traditional instructors, and for what types of concepts they are most effective. Additionally, these types of designs would allow for exploration into whether human learning from AI-generated virtual instructors could be improved further by applying the research done with pedagogical agents. Future studies should consider factors such as learner preference at a larger scale, the use of AI-generated synthetic videos in longer duration learning paths, and the integration of more advanced generative AI techniques for improved quality to be evaluated in blind studies. ## 5 Conclusion The adoption of generative AI to create synthetic instructional videos has the potential to be a viable substitute for videos produced via traditional methods in online educational settings. In terms of cost and time efficiency, the AI-generated synthetic video method is highly advantageous. The cost of production is near zero, while the traditional video required hours of human labor, film equipment, and software to produce. Additionally, the time to produce the synthetic video took only minutes or hours, as compared to the multiple hours or days required for the traditional video. Furthermore, updating or correcting errors in the original video would require a new round of filming and editing, while the AI-generated synthetic video method only requires editing the text script input and generating a new video, which can be done in minutes. These advantages can help us deliver high quality educational content for all learners across the globe. The current study is the first to indicate that learners have equal gains and learning experiences with an AI-generated virtual instructor as they do with a traditional instructor.
2303.10039
On the Reconstructability and Rediscoverability of Typed Jackson Nets (Extended Version)
A process discovery algorithm aims to construct a model from data generated by historical system executions such that the model describes the system well. Consequently, one desired property of a process discovery algorithm is rediscoverability, which ensures that the algorithm can construct a model that is behaviorally equivalent to the original system. A system often simultaneously executes multiple processes that interact through object manipulations. This paper presents a framework for developing process discovery algorithms for constructing models that describe interacting processes based on typed Jackson Nets that use identifiers to refer to the objects they manipulate. Typed Jackson Nets enjoy the reconstructability property which states that the composition of the processes and the interactions of a decomposed typed Jackson Net yields a model that is bisimilar to the original system. We exploit this property to demonstrate that if a process discovery algorithm ensures rediscoverability, the system of interacting processes is rediscoverable.
Daniël Barenholz, Marco Montali, Artem Polyvyanyy, Hajo A. Reijers, Andrey Rivkin, Jan Martijn E. M. van der Werf
2023-03-17T15:09:41Z
http://arxiv.org/abs/2303.10039v1
# On the Reconstructability and Rediscoverability of Typed Jackson Nets ###### Abstract A process discovery algorithm aims to construct a model from data generated by historical system executions such that the model describes the system well. Consequently, one desired property of a process discovery algorithm is _rediscoverability_, which ensures that the algorithm can construct a model that is behaviorally equivalent to the original system. A system often simultaneously executes multiple processes that interact through object manipulations. This paper presents a framework for developing process discovery algorithms for constructing models that describe interacting processes based on typed Jackson Nets that use identifiers to refer to the objects they manipulate. Typed Jackson Nets enjoy the _reconstructability_ property which states that the composition of the processes and the interactions of a decomposed typed Jackson Net yields a model that is bisimilar to the original system. We exploit this property to demonstrate that if a process discovery algorithm ensures rediscoverability, the system of interacting processes is rediscoverable. ## 1 Introduction Business processes are fundamental to a wide range of systems. A business process is a collection of activities that, when performed, aims to achieve a business objective at an organization. Examples of business processes are an order-to-cash process at a retailer, a medical assessment process at a hospital, or a credit check process at a bank. Business processes are modeled using process modeling languages, such as Petri nets, and used for communication and analysis purposes [1]. process and can be used to model various types of concurrent and sequential behavior [18]. A process discovery algorithm aims to automatically construct a model from data generated by historical process executions captured in an event log of the system, such that the model describes the system well. A desired property of a discovery algorithm is _rediscoverability_. This property states that if a system \(S\), expressed as a model \(M\), generates an event log \(L\), then a discovery algorithm with the rediscoverability property should construct \(M\) from \(L\). In other words, the algorithm can reverse engineer the model of the system from the data the model has generated. Only a few existing algorithms guarantee this property. For example, if the model is a block-structured workflow net, and the event log is directly-follows complete, then the \(\alpha\)-Miner algorithm [22] can rediscover the net that generated the event log. Similarly, again under the assumption that the event log is directly-follows complete, Inductive Miner [16] can rediscover process trees without duplicate transitions, self-loops, or silent transitions. Most existing process discovery algorithms assume that a system executes a single process [4]. Consequently, an event log is defined as a collection of sequences where a sequence describes the execution of a single process instance. However, many information systems, such as enterprise resource planning systems, do not satisfy this assumption. A system often executes multiple interacting processes [10, 23]. For example, consider a retailer system that executes three processes: an order, product, and customer management process, as depicted in Fig. 1. These processes are intertwined. Specifically, only available products may be ordered, and customers can only have one order at a time. 
Consequently, events do not belong to a single process but relate to several processes. For instance, consider an event \(e\) in some event log that occurred as transition \(G\) was executed for some customer \(c\) and created a new order \(o\) in the system. Event \(e\) relates to the customer process instance \(c\) and the order process instance \(o\). Traditional process discovery techniques require event \(e\) to be stored in multiple event logs and generate multiple models, one for each process [7].

Figure 1: A retailer system of three interacting processes.

A different approach is taken in artifact or object-centric process discovery [5, 17] and agent system discovery [20, 21]. In object-centric process discovery, instead of linking each event to a single object, events can be linked to multiple objects stored in object-centric event logs [8]. Existing object-centric discovery algorithms project the input event log on each object type to create a set of "flattened" event logs. For each event log, a model is discovered, after which these models are combined into a single model [5]. In general, flattening is lossy [7], as in this step events can disappear [5], be duplicated (convergence) [3], or lead to wrong event orders (divergence) [3]. In agent system discovery, instead of interacting objects, a system is viewed as composed of multiple autonomous agents, each driving its own processes that interact to achieve an overall objective of the system [20]. An agent system discovery algorithm proceeds by decomposing the input event log into multiple event logs, each composed of events performed by one agent (type), together with an event log of interactions, and then discovering agent and interaction models and composing them into the resulting system [21]. In this paper, we study under what conditions projections of event logs can guarantee rediscoverability for interacting processes, represented as typed Jackson Nets, a subclass of typed Petri nets with identifiers [19, 23]. The class of typed Jackson Nets is inspired by Box Algebra [9] and Jackson Nets [14], which are (representations of) block-structured workflow nets that are _sound_ [2] by construction [16]. As we demonstrate, typed Jackson Nets exhibit a special property: they are _reconstructable_. Composing the projections of each type is insufficient for reconstructing a typed Jackson Net. Instead, if the subset-closed set of all type combinations is considered, the composition returns the original model of the system. We show how the reconstructability property can be used to develop a framework for rediscoverability of typed Jackson Nets using traditional process discovery algorithms. The framework builds upon a divide and conquer strategy, as depicted in Fig. 2.

Figure 2: The framework for rediscoverability of systems of interacting processes.

The principal idea of this strategy is to project an event log \(L\) generated by some model \(M\) of the system onto logs \(L_{1},\ldots,L_{n}\). Then, if these projected event logs satisfy the conditions of a process discovery algorithm, composition of the resulting models \(D_{1},\ldots,D_{n}\) into a model \(D^{\prime}\) should rediscover the original model of the system. In this framework, we show that every projected event log is also an event log of the corresponding projected model. 
Consequently, if a process discovery algorithm guarantees the rediscoverability of projected models, then the composition operator for typed Jackson Nets can be used to ensure the rediscoverability of the original system. The next section presents the basic notions. In Section 3, we introduce typed Jackson Nets, which, as shown in Section 4, are reconstructable. We define a framework for developing discovery algorithms that guarantee rediscoverability in Section 5. We conclude the paper in Section 6.

## 2 Preliminaries

Let \(S\) and \(T\) be two possibly infinite sets. The powerset of \(S\) is denoted by \(\mathcal{P}(S)=\{S^{\prime}\mid S^{\prime}\subseteq S\}\) and \(|S|\) denotes the cardinality of \(S\). Two sets \(S\) and \(T\) are _disjoint_ if \(S\cap T=\emptyset\), with \(\emptyset\) denoting the empty set. The Cartesian product of two sets \(S\) and \(T\) is defined by \(S\times T=\{(a,b)\mid a\in S,b\in T\}\). The generalized Cartesian product for some set \(S\) and sets \(T_{s}\) for \(s\in S\) is defined as \(\Pi_{s\in S}T_{s}=\big{\{}f:S\rightarrow\bigcup_{s\in S}T_{s}\mid\forall s\in S :f(s)\in T_{s}\big{\}}\). Given a relation \(R\subseteq S\times T\), its range is defined by \(\textsc{rng}(R)=\{y\in T\mid\exists x\in S:(x,y)\in R\}\). Similarly, the domain of \(R\) is defined by \(\textsc{dom}(R)=\{x\in S\mid\exists y\in T:(x,y)\in R\}\). Restricting the domain of a relation to a set \(U\) is defined by \(R_{|U}=\{(a,b)\in R\mid a\in U\}\). A _multiset_ \(m\) over \(S\) is a mapping of the form \(m:S\rightarrow\mathbb{N}\), where \(\mathbb{N}=\{0,1,2,\ldots\}\) denotes the set of natural numbers. For \(s\in S\), \(m(s)\in\mathbb{N}\) denotes the number of times \(s\) appears in multiset \(m\). We write \(s^{n}\) if \(m(s)=n\). For \(x\not\in S\), \(m(x)=0\). We use \(S^{\oplus}\) to denote the set of all finite multisets over \(S\) and overload \(\emptyset\) to also denote the empty multiset. The size of a multiset is defined by \(|m|=\sum_{s\in S}m(s)\). The support of \(m\in S^{\oplus}\) is the set of elements that appear in \(m\) at least once: \(\mathit{supp}\,(m)=\{s\in S\mid m(s)>0\}\). Given two multisets \(m_{1}\) and \(m_{2}\) over \(S\): _(i)_ \(m_{1}\subseteq m_{2}\) (resp., \(m_{1}\subset m_{2}\)) iff \(m_{1}(s)\leq m_{2}(s)\) (resp., \(m_{1}(s)<m_{2}(s)\)) for each \(s\in S\); _(ii)_ \((m_{1}+m_{2})(s)=m_{1}(s)+m_{2}(s)\) for each \(s\in S\); and _(iii)_ if \(m_{1}\subseteq m_{2}\), \((m_{2}-m_{1})(s)=m_{2}(s)-m_{1}(s)\) for each \(s\in S\). A _sequence_ over \(S\) of length \(n\in\mathbb{N}\) is a function \(\sigma:\{1,\ldots,n\}\to S\). If \(n>0\) and \(\sigma(i)=a_{i}\), for \(1\leq i\leq n\), we write \(\sigma=\langle a_{1},\ldots,a_{n}\rangle\). The length of a sequence \(\sigma\) is denoted by \(|\sigma|\). The sequence of length \(0\) is called the _empty sequence_, and is denoted by \(\epsilon\). The set of all finite sequences over \(S\) is denoted by \(S^{*}\). We write \(a\in\sigma\) if there is \(1\leq i\leq|\sigma|\) such that \(\sigma(i)=a\) and \(\mathit{supp}\,(\sigma)=\{a\in S\mid\exists 1\leq i\leq|\sigma|:\sigma(i)=a\}\). _Concatenation_ of two sequences \(\nu,\gamma\in S^{*}\), denoted by \(\sigma=\nu\cdot\gamma\), is a sequence defined by \(\sigma:\{1,\ldots,|\nu|+|\gamma|\}\to S\), such that \(\sigma(i)=\nu(i)\) for \(1\leq i\leq|\nu|\), and \(\sigma(i)=\gamma(i-|\nu|)\) for \(|\nu|+1\leq i\leq|\nu|+|\gamma|\).
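To make the multiset notation above concrete, here is a minimal Python sketch that mirrors these definitions with `collections.Counter`; the variable names are illustrative only.

```python
from collections import Counter

m1 = Counter({"a": 1, "b": 1})           # the multiset [a, b]
m2 = Counter({"a": 2, "b": 1, "c": 3})   # the multiset [a^2, b, c^3]

size = sum(m1.values())                            # |m1| = 2
support = {s for s, n in m1.items() if n > 0}      # supp(m1) = {a, b}
contained = all(m1[s] <= m2[s] for s in m1)        # m1 is a sub-multiset of m2
m_sum = m1 + m2                                    # pointwise sum m1 + m2
m_diff = m2 - m1                                   # pointwise difference, defined since m1 is contained in m2

print(size, support, contained, dict(m_sum), dict(m_diff))
```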
Projection of sequences on a set \(T\) is defined inductively by \(\epsilon_{|T}=\epsilon\), \(\left(\langle a\rangle\cdot\sigma\right)_{|T}=\langle a\rangle\cdot\sigma_{|T}\) if \(a\in T\) and \(\left(\langle a\rangle\cdot\sigma\right)_{|T}=\sigma_{|T}\) otherwise. Renaming a sequence with an injective function \(r:S\to T\) is defined inductively by \(\rho_{r}(\epsilon)=\epsilon\), and \(\rho_{r}(\langle a\rangle\cdot\sigma)=\langle r(a)\rangle\cdot\rho_{r}(\sigma)\). Renaming is extended to multisets of sequences as follows: given a multiset \(m\in(S^{*})^{\oplus}\), we define \(\rho_{r}(m)=\sum_{\sigma\in\mathit{supp}(m)}m(\sigma)\cdot\rho_{r}(\sigma)\). For example, \(\rho_{\{x\mapsto a,y\mapsto b\}}\left(\langle x,y\rangle^{3}\right)=\langle a,b\rangle^{3}\).

A _directed graph_ is a pair \((V,A)\) where \(V\) is the set of vertices, and \(A\subseteq V\times V\) the set of arcs. Two graphs \(G_{1}=(V_{1},A_{1})\) and \(G_{2}=(V_{2},A_{2})\) are _isomorphic_, denoted by \(G_{1}\cong G_{2}\), if a bijection \(b:V_{1}\to V_{2}\) exists, such that \((v_{1},v_{2})\in A_{1}\) iff \((b(v_{1}),b(v_{2}))\in A_{2}\).

Given a finite set \(A\) of (action) labels, a _(labeled) transition system_ (LTS) over \(A\) is a tuple \(\Gamma_{A}=(S,A,s_{0},\rightarrow)\), where \(S\) is the (possibly infinite) set of _states_, \(s_{0}\) is the _initial state_ and \(\rightarrow\subset(S\times(A\cup\{\tau\})\times S)\) is the _transition relation_, where \(\tau\not\in A\) denotes the silent action [12]. In what follows, we write \(s\xrightarrow{a}s^{\prime}\) for \((s,a,s^{\prime})\in\rightarrow\). Let \(r:A\rightarrow(A^{\prime}\cup\{\tau\})\) be an injective, total function. Renaming \(\Gamma\) with \(r\) is defined as \(\rho_{r}(\Gamma)=(S,A\setminus A^{\prime},s_{0},\rightarrow^{\prime})\) with \((s,r(a),s^{\prime})\in\rightarrow^{\prime}\) iff \((s,a,s^{\prime})\in\rightarrow\). Given a set \(T\), hiding is defined as \(\hat{\mathfrak{h}}_{T}(\Gamma)=\rho_{h}(\Gamma)\) with \(h:A\to A\cup\{\tau\}\) such that \(h(t)=\tau\) if \(t\in T\) and \(h(t)=t\) otherwise. Given \(a\in A\), \(p\xRightarrow{a}q\) denotes a _weak transition relation_ that is defined as follows: _(i)_ \(p\xRightarrow{a}q\) iff \(p(\xrightarrow{\tau})^{*}q_{1}\xrightarrow{a}q_{2}(\xrightarrow{\tau})^{*}q\); _(ii)_ \(p\xRightarrow{\tau}q\) iff \(p(\xrightarrow{\tau})^{*}q\). Here, \((\xrightarrow{\tau})^{*}\) denotes the reflexive and transitive closure of \(\xrightarrow{\tau}\). Let \(\Gamma_{1}=(S_{1},A,s_{01},\rightarrow_{1})\) and \(\Gamma_{2}=(S_{2},A,s_{02},\rightarrow_{2})\) be two LTSs. A relation \(R\subseteq(S_{1}\times S_{2})\) is called a _strong simulation_, denoted as \(\Gamma_{1}\prec_{R}\Gamma_{2}\), if for every pair \((p,q)\in R\) and \(a\in A\cup\{\tau\}\), it holds that if \(p\xrightarrow{a}_{1}p^{\prime}\), then there exists \(q^{\prime}\in S_{2}\) such that \(q\xrightarrow{a}_{2}q^{\prime}\) and \((p^{\prime},q^{\prime})\in R\).
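The sequence projection and renaming operators defined at the start of this passage can be sketched in a few lines of Python (function names are illustrative).

```python
def project(sigma, T):
    """sigma|_T: keep, in order, exactly the elements of sigma that belong to T."""
    return [a for a in sigma if a in T]

def rename(sigma, r):
    """rho_r(sigma): apply the (injective) renaming r element-wise."""
    return [r[a] for a in sigma]

sigma = ["x", "y", "x", "z"]
print(project(sigma, {"x", "z"}))                 # ['x', 'x', 'z']
print(rename(["x", "y"], {"x": "a", "y": "b"}))   # ['a', 'b']
```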
Relation \(R\) is a _weak simulation_, denoted by \(\Gamma_{1}\preccurlyeq_{R}\Gamma_{2}\), iff for every pair \((p,q)\in R\) and \(a\in A\cup\{\tau\}\) it holds that if \(p\xrightarrow{a}_{1}p^{\prime}\), then \(a=\tau\) and \((p^{\prime},q)\in R\), or there exists \(q^{\prime}\in S_{2}\) such that \(q\xRightarrow{a}_{2}q^{\prime}\) and \((p^{\prime},q^{\prime})\in R\). Relation \(R\) is called a strong (weak) _bisimulation_, denoted by \(\Gamma_{1}\sim_{R}\Gamma_{2}\) (\(\Gamma_{1}\approx_{R}\Gamma_{2}\)), if both \(\Gamma_{1}\prec_{R}\Gamma_{2}\) (\(\Gamma_{1}\preccurlyeq_{R}\Gamma_{2}\)) and \(\Gamma_{2}\prec_{R^{-1}}\Gamma_{1}\) (\(\Gamma_{2}\preccurlyeq_{R^{-1}}\Gamma_{1}\)). Given a strong (weak) (bi)simulation \(R\), we say that a state \(p\in S_{1}\) is strongly (weakly) rooted (bi)similar to \(q\in S_{2}\), written \(p\sim_{R}^{r}q\) (correspondingly, \(p\approx_{R}^{r}q\)), if \((p,q)\in R\). The relation is called _rooted_ iff \((s_{01},s_{02})\in R\). A rooted relation is indicated with a superscript \({}^{r}\).

A weighted Petri net is a \(4\)-tuple \((P,T,F,W)\) where \(P\) and \(T\) are two disjoint sets of _places_ and _transitions_, respectively, \(F\subseteq((P\times T)\cup(T\times P))\) is the _flow relation_, and \(W:F\rightarrow\mathbb{N}^{+}\) is a _weight function_. For \(x\in P\cup T\), we write \({}^{\bullet}x=\{y\mid(y,x)\in F\}\) to denote the _preset_ of \(x\) and \(x^{\bullet}=\{y\mid(x,y)\in F\}\) to denote the _postset_ of \(x\). We lift the notation of preset and postset to sets elementwise. If for a Petri net no weight function is defined, we assume \(W(f)=1\) for all \(f\in F\). A _marking_ of \(N\) is a multiset \(m\in P^{\oplus}\), where \(m(p)\) denotes the number of _tokens_ in place \(p\in P\). If \(m(p)>0\), place \(p\) is called _marked_ in marking \(m\). A _marked Petri net_ is a tuple \((N,m)\) with \(N\) a weighted Petri net with marking \(m\). A transition \(t\in T\) is enabled in \((N,m)\), denoted by \((N,m)\left[t\right\rangle\), iff \(W((p,t))\leq m(p)\) for all \(p\in{}^{\bullet}t\). An enabled transition can _fire_, resulting in marking \(m^{\prime}\) iff \(m^{\prime}(p)+W((p,t))=m(p)+W((t,p))\), for all \(p\in P\), and is denoted by \((N,m)\left[t\right\rangle(N,m^{\prime})\). We lift the notation of firings to sequences. A sequence \(\sigma\in T^{*}\) is a _firing sequence_ iff \(\sigma=\epsilon\), or markings \(m_{0},\ldots,m_{n}\) exist such that \((N,m_{i-1})[\sigma(i)\rangle(N,m_{i})\) for \(1\leq i\leq\left|\sigma\right|=n\), and is denoted by \((N,m_{0})[\sigma\rangle(N,m_{n})\). If the context is clear, we omit the weighted Petri net \(N\). The set of reachable markings of \((N,m)\) is defined by \(\mathcal{R}(N,m)=\{m^{\prime}\mid\exists\sigma\in T^{*}:m[\sigma\rangle m^{\prime}\}\). The set of all possible finite firing sequences of \((N,m_{0})\) is denoted by \(\mathcal{L}(N,m_{0})=\{\sigma\in T^{*}\mid\exists m^{\prime}:m_{0}[\sigma\rangle m^{\prime}\}\). The semantics of a marked Petri net \((N,m_{0})\) with \(N=(P,T,F,W)\) is defined by the LTS \(\Gamma_{N,m_{0}}=(P^{\oplus},T,m_{0},\rightarrow)\) with \((m,t,m^{\prime})\in\rightarrow\) iff \(m[t\rangle m^{\prime}\). A Petri net \(N=(P,T,F,W)\) has underlying graph \((P\cup T,F)\). Two Petri nets \(N\) and \(N^{\prime}\) are isomorphic, denoted using \(N\leftrightsquigarrow N^{\prime}\), if their underlying graphs are.
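The enabling and firing rule above can be phrased directly in terms of multisets. Below is a minimal, illustrative Python sketch (the toy net and its names are made up for the example) using `Counter` markings and unit arc weights.

```python
from collections import Counter

# A toy net with unit weights, given by the preset and postset of each transition.
pre  = {"t1": Counter({"p1": 1}), "t2": Counter({"p2": 1})}
post = {"t1": Counter({"p2": 1}), "t2": Counter({"p3": 1})}

def enabled(m, t):
    # t is enabled iff W((p, t)) <= m(p) for every input place p of t
    return all(m[p] >= n for p, n in pre[t].items())

def fire(m, t):
    # the new marking satisfies m'(p) + W((p, t)) = m(p) + W((t, p)) for all p
    assert enabled(m, t)
    return (m - pre[t]) + post[t]

m0 = Counter({"p1": 1})
m1 = fire(m0, "t1")
m2 = fire(m1, "t2")
print(dict(m1), dict(m2), enabled(m2, "t1"))   # {'p2': 1} {'p3': 1} False
```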
A _workflow net_ (WF-net for short) is a tuple \(N=(P,T,F,W,\mathit{in},\mathit{out})\) such that: _(i)_ \((P,T,F,W)\) is a weighted Petri net; _(ii)_ \(\mathit{in},\mathit{out}\in P\) are the source and sink place, respectively, with \({}^{\bullet}\mathit{in}=\mathit{out}^{\bullet}=\emptyset\); _(iii)_ every node in \(P\cup T\) is on a directed path from _in_ to _out_. \(N\) is called \(k\)_-sound_ for some \(k\in\mathbb{N}\) iff _(i)_ it is proper completing, i.e., for all reachable markings \(m\in\mathcal{R}(N,[\mathit{in}^{k}])\), if \([\mathit{out}^{k}]\subseteq m\), then \(m=[\mathit{out}^{k}]\); _(ii)_ it is weakly terminating, i.e., for any reachable marking \(m\in\mathcal{R}(N,[\mathit{in}^{k}])\), the final marking is reachable, i.e., \([\mathit{out}^{k}]\in\mathcal{R}(N,m)\); and _(iii)_ it is quasi-live, i.e., for all transitions \(t\in T\), there is a marking \(m\in\mathcal{R}(N,[\mathit{in}])\) such that \(m[t\rangle\). The net is called _sound_ if it is \(1\)-sound. If it is \(k\)-sound for all \(k\in\mathbb{N}\), it is called _generalized sound_ [13].

## 3 Typed Jackson Nets to Model Interacting Processes

In this section, we introduce typed Jackson Nets as a subclass of typed Petri nets with identifiers. This class is a natural extension of Jackson Nets, which are representations of block-structured workflow nets. Typed Jackson Nets are identifier sound and live by construction.

### Jackson Nets

Whereas WF-nets do not put any restriction on the control flow of activities, block-structured WF-nets divide the control flow in logical blocks [15]. Each "block" represents a single unit of work that can be performed, where this unit of work is either atomic (single transition), or one involving multiple steps (multiple transitions). An example block-structured WF-net is shown in Fig. 3.

Figure 3: An example block-structured WF-net. Each block corresponds to a node in the Jackson type \((p_{1};(t_{1};(((p_{2};((t_{2}+t_{3})\,;p_{3}))\,\#t_{4})\,;(t_{5};p_{4}))))\). As an example, the choice between transitions \(t_{2}\) and \(t_{3}\) corresponds to the node \((p_{2};((t_{2}+t_{3})\,;p_{3}))\).

The main advantage of block-structured WF-nets is that the block structure ensures that the WF-net is sound by definition [15, 16, 14]. In this paper, we consider Jackson Types and Jackson Nets [14]. A Jackson Type is a data structure used to capture all information involved in a single execution of a WF-net.

Definition 1 (Jackson Type [14]): The set of _Jackson Types_ \(\mathcal{J}\) is recursively defined by the following grammar: \[\mathcal{J} ::=\mathscr{A}^{p}\mid\left(\mathscr{A}^{p};\left(\mathcal{J}^{t}; \mathscr{A}^{p}\right)\right)\] \[\mathcal{J}^{t} ::=\mathscr{A}^{t}\mid\left(\mathcal{J}^{t};\left(\mathcal{J}^{p };\mathcal{J}^{t}\right)\right)\mid\left(\mathcal{J}^{t}+\mathcal{J}^{t}\right)\] \[\mathcal{J}^{p} ::=\mathscr{A}^{p}\mid\left(\mathcal{J}^{p};\left(\mathcal{J}^{t };\mathcal{J}^{p}\right)\right)\mid\left(\mathcal{J}^{p}\parallel\mathcal{J}^{ p}\right)\mid\left(\mathcal{J}^{p}\#\mathcal{J}^{t}\right)\] where \(\mathscr{A}=\mathscr{A}^{p}\cup\mathscr{A}^{t}=\{a,b,c,\ldots\}\) is the union of two disjoint sets of atomic types for places and transitions, resp., and the symbols \(;,\|,+,\#\) stand for sequence, parallelism, choice, and loop. \(\triangleleft\)

Multiple Jackson Types may exist for the same WF-net.
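The grammar of Definition 1 can be mirrored directly as a recursive data structure. The following Python sketch is illustrative only (the encoding as nested tuples is an assumption, not the paper's notation); it spells out the Jackson Type of the WF-net in Fig. 3.

```python
# Operators of Jackson Types: sequence, choice, parallelism, loop.
def seq(a, b):    return (";", a, b)
def choice(a, b): return ("+", a, b)
def par(a, b):    return ("||", a, b)
def loop(a, b):   return ("#", a, b)

# The type of the block-structured WF-net of Fig. 3:
# (p1 ; (t1 ; (((p2 ; ((t2 + t3) ; p3)) # t4) ; (t5 ; p4))))
fig3 = seq("p1",
           seq("t1",
               seq(loop(seq("p2", seq(choice("t2", "t3"), "p3")), "t4"),
                   seq("t5", "p4"))))
print(fig3)
```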
For example, the Jackson Type \(\left(\left(p_{1};t_{1}\right);\left(\left(\left(p_{2};\left(\left(t_{2}+t_{3} \right);p_{3}\right)\right)\#t_{4}\right);\left(t_{5};p_{4}\right)\right)\right)\) describes the WF-net of Fig. 3 as well. Each net has a unique representation [14], called its normal form. We define an algebraic equivalence between types to allow rewriting into the normal form. Definition 2 (Algebraic equivalence, normal form [14]): The _algebraic equivalence_\(\equiv_{alg}\) is the smallest equivalence relation on the set of Jackson Types that satisfies the following six rules: \[\begin{array}{ccc}\left(\left(J_{0};J_{1}\right);J_{2}\right)\equiv_{alg} \left(J_{0};\left(J_{1};J_{2}\right)\right)&\left(\left(J_{0}+J_{1}\right)+J_ {2}\right)\equiv_{alg}\left(J_{0}+\left(J_{1}+J_{2}\right)\right)\\ \left(\left(J_{0}\parallel J_{1}\right)\parallel J_{2}\right)\equiv_{alg}\left( J_{0}\parallel\left(J_{1}\parallel J_{2}\right)\right)&\left(J_{0}+J_{1}\right) \equiv_{alg}\left(J_{1}+J_{0}\right)\\ \left(J_{0}\parallel J_{1}\right)\equiv_{alg}\left(J_{1}\parallel J_{0}\right)& \left(\left(J_{0}\#J_{1}\right)\#J_{2}\right)\equiv_{alg}\left(J_{0}\#\left(J _{1}\#J_{2}\right)\right)\end{array}\] with \(J_{0},J_{1},J_{2}\in\mathcal{J}\) three Jackson Types. A Jackson Type is in _normal form_ iff all brackets are moved to the right using the above rules. \(\triangleleft\) The class of Jackson Nets is obtained by recursively applying _generation rules_, starting from a singleton net with only one place. These generation rules are similar to those defined by Murata [18] and preserve soundness [14]. Thus, any Jackson Net is sound by construction. Definition 3 (Jackson Net [14]): A WF-net \(N=\left(P,T,F,\text{in},\text{out}\right)\) is called a _Jackson Net_ if it can be generated from a single place \(p\) by applying the following five generation rules recursively: \[\begin{array}{ccc}\text{J1:}\,p\leftrightarrow\left(p_{1};\left(t;p_{2}\right) \right)&\text{J4:}\,p\leftrightarrow\left(p_{1}\parallel p_{2}\right)\\ \text{J2:}\,t\leftrightarrow\left(t_{1};\left(p_{1};t_{2}\right)\right)&\text{J5: }\,t\leftrightarrow\left(t_{1}+t_{2}\right)\\ \text{J3:}\,p\leftrightarrow\left(p\#t\right)&\end{array}\] We say that \(N\) is generated by \(p\). \(\triangleleft\) As shown in [14], Jackson Nets are completely determined by Jackson Types, and vice versa. Theorem 3 (Jackson Nets and Jackson Types are equivalent [14]): _Let \(N_{1}\) and \(N_{2}\) be two Jackson Nets that are generated by the Jackson Types \(J_{1}\) and \(J_{2}\), resp. Then \(N_{1}\) and \(N_{2}\) are isomorphic iff \(J_{1}\equiv_{alg}J_{2}\). \(\triangleleft\)_ ### Petri Nets with Identifiers Whereas WF-nets describe all possible executions for a single case, systems typically consist of many interacting processes. The latter can be modeled using typed Petri nets with identifiers (t-PNIDs for short) [23]. In this formalism, each object is typed and has a unique identifier to be able to refer to it. Tokens carry vectors of identifiers, which are used to relate objects. Variables on the arcs are used to manipulate the identifiers. Definition 4 (Identifiers, Types and Variables): Let \(\mathcal{I}\), \(\Lambda\), and \(\mathcal{V}\) denote countably infinite sets of identifiers, type labels, and variables, respectively. 
We define:

* the _domain assignment_ function \(I:\Lambda\rightarrow\mathcal{P}(\mathcal{I})\), such that \(I(\lambda_{1})\) is an infinite set, and \(I(\lambda_{1})\cap I(\lambda_{2})\neq\emptyset\) implies \(\lambda_{1}=\lambda_{2}\) for all \(\lambda_{1},\lambda_{2}\in\Lambda\);
* the _id typing_ function \(\mathtt{type}_{\mathcal{I}}:\mathcal{I}\rightarrow\Lambda\) s.t. if \(\mathtt{type}_{\mathcal{I}}(\mathtt{id})=\lambda\), then \(\mathtt{id}\in I(\lambda)\);
* a _variable typing_ function \(\mathtt{type}_{\mathcal{V}}:\mathcal{V}\rightarrow\Lambda\), prescribing that \(x\in\mathcal{V}\) can be substituted only by values from \(I(\mathtt{type}_{\mathcal{V}}(x))\).

When clear from the context, we omit the subscripts of \(\mathtt{type}\). We lift the \(\mathtt{type}\) functions to sets, vectors, and sequences by applying the function on each of their constituents. \(\triangleleft\)

In a t-PNID, each place is annotated with a label, called the _place type_. A place type is a vector of types, indicating the types of identifier tokens the place can carry. Similar to Jackson Types, we use \([p,\lambda]\) to denote that place \(p\) has type \(\alpha(p)=\lambda\). Each arc is inscribed with a multiset of vectors of variables, such that the type of each variable coincides with the place types. If the inscription is empty or contains a single element, we omit the brackets.

Definition 5 (Typed Petri net with identifiers): A _typed Petri net with identifiers_ (t-PNID) \(N\) is a tuple \((P,T,F,\alpha,\beta)\), where:

* \((P,T,F)\) is a classical Petri net;
* \(\alpha:P\rightarrow\Lambda^{*}\) is the _place typing function_;
* \(\beta:F\rightarrow(\mathcal{V}^{*})^{\oplus}\) defines for each arc a multiset of _variable vectors_ s.t. \(\alpha(p)=\mathtt{type}(x)\) for any \(x\in\mathit{supp}\left(\beta((p,t))\right)\) and \(\mathtt{type}(y)=\alpha(p^{\prime})\) for any \(y\in\mathit{supp}\left(\beta((t,p^{\prime}))\right)\) where \(t\in T\), \(p\in{}^{\bullet}t\), \(p^{\prime}\in t^{\bullet}\). \(\triangleleft\)

A marking of a t-PNID is the configuration of tokens over the set of places. Each token in a place should be of the correct type, i.e., the vector of identifiers carried by a token in a place should match the corresponding place type. The set \(\mathtt{C}(p)\) defines all possible vectors of identifiers a place \(p\) may carry.

Definition 6 (Marking): Given a t-PNID \(N=(P,T,F,\alpha,\beta)\), and place \(p\in P\), its _id set_ is \(\mathtt{C}(p)=\prod_{1\leq i\leq|\alpha(p)|}I(\alpha(p)(i))\). A _marking_ is a function \(m\in\mathbb{M}\left(N\right)\), with \(\mathbb{M}\left(N\right)=P\rightarrow(\mathcal{I}^{*})^{\oplus}\), such that \(m(p)\in\mathtt{C}(p)^{\oplus}\), for each place \(p\in P\). The set of identifiers used in \(m\) is denoted by \(Id(m)=\bigcup_{p\in P}\textsc{rng}(\mathit{supp}\left(m(p)\right))\). The pair \((N,m)\) is called a _marked t-PNID_. \(\triangleleft\)

To define the semantics of a t-PNID, the variables need to be valuated with identifiers.
**Definition 7** (Variable sets [23]): _Given a t-PNID \(N=(P,T,F,\alpha,\beta)\), \(t\in T\) and \(\lambda\in\Lambda\), we define the following sets of variables:_ * input variables _as_ \(\mathit{In}(t)=\bigcup_{x\in\beta((p,t)),p\in\bullet t}\mathit{RNG}(\mathit{ supp}\left(x\right))\)_;_ * output variables _as_ \(\mathit{Out}(t)=\bigcup_{x\in\beta((t,p)),p\in t\bullet}\mathit{RNG}(\mathit{ supp}\left(x\right))\)_;_ * variables _as_ \(\mathit{Var}(t)=\mathit{In}(t)\cup\mathit{Out}(t)\)_;_ * emitting variables _as_ \(\mathit{Emit}(t)=\mathit{Out}(t)\setminus\mathit{In}(t)\)_;_ * collecting variables _as_ \(\mathit{Collect}(t)=\mathit{In}(t)\setminus\mathit{Out}(t)\)_;_ * emitting transitions _as_ \(E_{N}(\lambda)=\{t\mid\exists x\in\mathit{Emit}(t)\land\mathtt{type}(x)=\lambda\}\)_;_ * collecting transitions _as_ \(C_{N}(\lambda)=\{t\mid\exists x\in\mathit{Collect}(t)\land\mathtt{type}(x)=\lambda\}\)_;_ * types in \(N\) _as_ \(\mathtt{type}(N)=\{\vec{\lambda}\mid\exists p\in P:\vec{\lambda}\in\alpha(p)\}\)_._ \(\triangleleft\)__ A valuation of variables to identifiers is called a _binding_. Bindings are used to inject new fresh data into the net via variables that emit identifiers, i.e., via variables that appear only on the output arcs of that transition. Note that in this definition, freshness of identifiers is local to the marking, i.e., disappeared identifiers (those fully removed from the net through collecting transitions) may be reused, as it does not hamper the semantics of the t-PNID. **Definition 8** (Firing rule for t-PNIDs): _Given a marked t-PNID \((N,m)\) with \(N=(P,T,F,\alpha,\beta)\), a binding for transition \(t\in T\) is an injective function \(\psi:\mathcal{V}\rightarrow\mathcal{I}\) such that \(\mathtt{type}(v)=\mathtt{type}(\psi(v))\) and \(\psi(v)\not\in Id(m)\) iff \(v\in\mathit{Emit}(t)\). Transition \(t\) is enabled in \((N,m)\) under binding \(\psi\), denoted by \((N,m)[t,\psi)\) iff \(\rho_{\psi}(\beta(p,t))\leq m(p)\) for all \(p\in\bullet t\). Its firing results in marking \(m^{\prime}\), denoted by \((N,m)[t,\psi)(N,m^{\prime})\), such that \(m^{\prime}(p)+\rho_{\psi}(\beta(p,t))=m(p)+\rho_{\psi}(\beta(t,p))\). \(\triangleleft\)_ The firing rule is inductively extended to sequences. A marking \(m^{\prime}\) is _reachable_ from \(m\) if there exists \(\eta\in(T\times(\mathcal{V}\rightarrow\mathcal{I}))^{*}\) such that \((N,m)[\eta)(N,m^{\prime})\). We denote with \(\mathcal{R}(N,m)\) the set of all markings reachable from \(m\) for \((N,m)\). We use \(\mathcal{L}\left(N,m\right)\) to denote all possible firing sequences of \((N,m)\), i.e., \(\mathcal{L}\left(N,m\right)=\{\eta\mid(N,m)[\eta)\}\) and \(Id(\eta)=\bigcup_{(t,\psi)\in\eta}\mathit{RNG}(\psi)\) for the set of identifiers used in \(\eta\). The execution semantics of a t-PNID is defined as an LTS that accounts for all possible executions starting from a given initial marking. We say two t-PNIDs are bisimilar if their induced transition systems are. Definition 9: Given a marked t-PNID \((N,m_{0})\) with \(N=(P,T,F,\alpha,\beta)\), its induced transition system is \(\Gamma_{N,m_{0}}=(\mathbb{M}(N),(T\times(\mathcal{V}\rightarrow\mathcal{I})),m _{0},\rightarrow)\) with \(m\xrightarrow{(t,\psi)}m^{\prime}\) iff \((N,m)\left[t,\psi\right)(N,m^{\prime})\). \(\triangleleft\)_ Soundness properties for WF-nets typically consist of proper completion, weak termination, and quasi-liveness [6]. Extending soundness to t-PNIDs gives _identifier soundness_[23]. 
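Before turning to identifier soundness, the firing rule of Definition 8 can be illustrated with a small example. The Python sketch below is illustrative only; the `create`/`close` net fragment, the order identifiers, and all names are invented for the example.

```python
from collections import Counter

# Toy fragment of a t-PNID: transition "create" emits a fresh order identifier o,
# transition "close" collects it again.  Tokens are tuples of identifiers.
arcs_in  = {"create": {},                 "close": {"order": ("o",)}}
arcs_out = {"create": {"order": ("o",)},  "close": {}}

def fire(marking, t, binding):
    """Fire t under binding psi: consume/produce the instantiated arc inscriptions."""
    m = {p: Counter(ts) for p, ts in marking.items()}
    for p, vars_ in arcs_in[t].items():
        tok = tuple(binding[v] for v in vars_)
        assert m.get(p, Counter())[tok] > 0, "transition not enabled under this binding"
        m[p][tok] -= 1
    for p, vars_ in arcs_out[t].items():
        tok = tuple(binding[v] for v in vars_)
        m.setdefault(p, Counter())[tok] += 1
    return m

m0 = {"order": Counter()}
# o is an emitting variable of "create", so psi must bind it to an id not in Id(m0).
m1 = fire(m0, "create", {"o": "o1"})   # token (o1,) appears in place "order"
m2 = fire(m1, "close",  {"o": "o1"})   # token (o1,) is collected again
print(m1["order"], m2["order"])
```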
In t-PNIDs, each object of a given type "enters" the system through an emitting transition, binding it to a unique identifier. Identifier soundness intuitively states that it should always be possible to remove objects (weak type termination), and that once a collecting transition fires for an object, there should be no remaining tokens referring to the removed object (proper type completion).

**Definition 10** (Identifier Soundness [23]).: _Let \((N,m_{0})\) be a marked t-PNID and \(\lambda\in\Lambda\) some type. \((N,m_{0})\) is \(\lambda\)-sound iff it is_

* _Proper_ \(\lambda\)_-completing, i.e., for all_ \(t\in C_{N}(\lambda)\)_, bindings_ \(\psi:\mathcal{V}\to\mathcal{I}\) _and markings_ \(m,m^{\prime}\in\mathcal{R}(N,m_{0})\)_, if_ \(m[t,\psi)m^{\prime}\)_, then for all identifiers_ \(\mathtt{id}\in\textsc{rng}(\psi|_{\mathit{Collect}(t)})\cap Id(m)\) _with_ \(\mathtt{type}(\mathtt{id})=\lambda\)_, it holds that_ \(\mathtt{id}\not\in Id(m^{\prime})\)_;_ (Footnote 5: here, we constrain \(\psi\) only to objects of type \(\lambda\) that are only consumed.)
* _Weakly_ \(\lambda\)_-terminating, i.e., for every_ \(m\in\mathcal{R}(N,m_{0})\) _and identifier_ \(\mathtt{id}\in I(\lambda)\) _such that_ \(\mathtt{id}\in Id(m)\)_, there exists a marking_ \(m^{\prime}\in\mathcal{R}(N,m)\) _with_ \(\mathtt{id}\not\in Id(m^{\prime})\)_._

_If it is \(\lambda\)-sound for all \(\lambda\in\mathtt{type}(N)\), then it is identifier sound._

### Typed Jackson Nets

In general, identifier soundness is undecidable for t-PNIDs [23]. Just as Jackson Nets restrict WF-nets to blocks, _typed Jackson Nets_ (t-JNs) restrict t-PNIDs to blocks, while guaranteeing identifier soundness and liveness. For t-JNs, we disallow multiplicity on arcs and variables, i.e., \(\beta(f)(v)\leq 1\) for all \(f\in F\) and \(v\in\mathcal{V}\), and assume a bijection between variables and identifier types. This prevents place types like \(\lambda=\langle x,x\rangle\). Assuming a Gödel-like number on types (cf. [14]), place types and arc inscriptions can be represented as sets. Just as Jackson Types describe Jackson Nets, we apply a notation based on Jackson Types to denote typed Jackson Nets.

**Definition 11** (Typed Jackson Net).: _A t-PNID \(N\) is a typed Jackson Net if it can be generated from a set of transitions \(T^{\prime}\) by applying any of the following six generation rules recursively. If \(N\) is generated from a singleton set of transitions (i.e., \(\left|T^{\prime}\right|=1\)), \(N\) is called atomic._

_R1 Place Expansion:_ \(\left[p,\lambda\right]\leftrightarrow\left(\left[p_{1},\lambda\right];(t_{1};\left[p_{2},\lambda\right])\right)\)

_R2 Transition Expansion:_ \(t\leftrightarrow\left(t_{1};\left(\left[p,\lambda\right];t_{2}\right)\right)\), with \(\mathit{Var}(t)\subseteq\lambda\)

_R3 Place Duplication:_ \(\left(t_{1};\left(\left[p,\lambda\right];t_{2}\right)\right)\leftrightarrow\left(t_{1};\left(\left(\left[p,\lambda\right]\parallel\left[p^{\prime},\lambda^{\prime}\right]\right);t_{2}\right)\right)\), with \(\lambda^{\prime}\cap\mathit{Emit}(p^{\bullet})=\emptyset\)
_R4 Transition Duplication:_ \(t\leftrightarrow(t+t^{\prime})\)

_R5 Self Loop Addition:_ \([p,\lambda]\leftrightarrow([p,\lambda]\,\#t)\)

_R6 Identifier Introduction:_ \(t\leftrightarrow(t\triangleleft(N_{1},[p,\lambda]\,,N_{2}))\), with \((N_{1};([p,\lambda]\,;N_{2}))\) a t-JN and \(\lambda\cap\mathit{Var}(t)=\emptyset\)

An example t-JN is given in Fig. 1. Starting with the product process, transitions \(C\) and \(D\) can be reduced using rule \(R2\). The resulting transition is a self-loop transition, and can be reduced using \(R5\), resulting in the block \((E\triangleleft(A,\text{\emph{product}},B))\). This block can be reduced using \(R6\), leaving transition \(E\). Transition \(E\) is again a self-loop, and can be reduced using \(R5\). The block containing transitions \(H\), \(J\), \(L\), \(O\), \(N\) and \(K\) can be reduced to a single place by applying rules \(R1\), \(R2\) and \(R5\) repeatedly. The remaining place is a duplicate place with respect to place \(p\), and can be reduced using \(R3\). Applying \(R2\) on \(G\) and \(Z\) results in the block \((G\triangleleft(T,\text{\emph{customer}},V))\), which can be reduced to the transition \(G\). Hence, the net in Fig. 1 is an atomic t-JN.

Theorem 3.1 (Identifier Soundness of typed Jackson Nets [23]): _Let \(N\) be a t-JN. Then \(N\) is identifier sound and live. \(\triangleleft\)_

## 4 Decomposability of t-JNs

t-PNIDs specify a class of nets with explicitly defined interactions between objects of different types within one system. However, sometimes one may want to focus only on some behaviors exhibited by a given set of object types, by extracting a corresponding net from the original t-PNID model. We formalize this idea below.

Definition 12 (Type projection): Let \(N=(P_{N},T_{N},F_{N},\alpha,\beta)\) be a t-PNID and \(\Upsilon\subseteq\Lambda\) be a set of identifier types.
The _type projection_ of \(\Upsilon\) on \(N\) is a t-PNID \(\pi_{\Upsilon}\left(N\right)=(P_{\Upsilon},T_{\Upsilon},F_{\Upsilon},\alpha_{\Upsilon},\beta_{\Upsilon})\), where:

* \(P_{\Upsilon}=\{p\in P_{N}\mid\Upsilon\subseteq\alpha(p)\}\);
* \(T_{\Upsilon}=\{t\in T_{N}\mid({}^{\bullet}t\cup t^{\bullet})\cap P_{\Upsilon}\neq\emptyset\}\);
* \(F_{\Upsilon}=F_{N}\cap((P_{\Upsilon}\times T_{\Upsilon})\cup(T_{\Upsilon}\times P_{\Upsilon}))\);
* \(\alpha_{\Upsilon}(p)=\Upsilon\), for each \(p\in P_{\Upsilon}\);
* \(\beta_{\Upsilon}(f)=\left.\beta(f)\right|_{\mathtt{type}_{\mathcal{V}}^{-1}(\Upsilon)}\), for each \(f\in((P_{\Upsilon}\times T_{\Upsilon})\cup(T_{\Upsilon}\times P_{\Upsilon}))\). \(\triangleleft\)

With the next lemma we explore a property of typed Jackson nets that, in a nutshell, shows that t-JNs are closed under type projection. This also indirectly witnesses that t-JNs provide a suitable formalism for specifying and manipulating systems with multiple communicating components.

Lemma 1: _If \(N=(P_{N},T_{N},F_{N},\alpha,\beta)\) is a t-JN, then \(\pi_{\Upsilon}\left(N\right)\) is a t-JN as well, for any \(\Upsilon\subseteq\mathtt{type}_{\Lambda}(N)\). \(\triangleleft\)_

Proof: (sketch) Let us assume for simplicity that \(N\) is atomic. Then, using rules from Def. 11, \(N\) can be reduced to a single transition. Starting from this transition, one can construct a t-JN following the net graph construction from Def. 12 using the same rules (but the identifier introduction one), provided that arc inscriptions are always of type \(\Upsilon\). Then, it is easy to check that the constructed net is indeed the type projection of \(\Upsilon\) on \(N\).

We define next how t-PNIDs can be composed and show that t-JNs are not closed under composition.

Definition 13 (Composition): Let \(N=(P_{N},T_{N},F_{N},\alpha_{N},\beta_{N})\) and \(M=(P_{M},T_{M},F_{M},\alpha_{M},\beta_{M})\) be two t-PNIDs. Their _composition_ is defined by: \(N\uplus M=(P_{N}\cup P_{M},T_{N}\cup T_{M},F_{N}\cup F_{M},\alpha_{N}\cup\alpha_{M},\beta_{N}\cup\beta_{M})\).

The composition of two t-JNs does not automatically result in a t-JN. Consider the nets in Fig. 4. Both \(N\) and \(M\) can be obtained by applying R2 from Def. 11. However, their composition cannot be reduced to a single transition by consecutively applying rules from Def. 11.

Figure 4: Although both \(N\) and \(M\) are t-JNs, their composition is not.

A more surprising observation is that composing type projections of a t-JN may not result in a t-JN. Take for example the net from Figure 5. Both its projections on \(\{\lambda_{1}\}\) and \(\{\lambda_{2}\}\) are t-JNs. However, bringing them together using the composition operator results in a t-PNID that is not a t-JN: indeed, since the "copies" of place \(p\) appear in three places, and all such copies have the same pre- and post-sets (and only differ by their respective types), it is impossible to apply the identifier elimination rule _R6_ from Def. 11. As one may observe from the above example, the only difference between \([p_{xy},\langle\lambda_{1},\lambda_{2}\rangle]\) and its copies \(p_{x}\) and \(p_{y}\) is in their respective types, whereas the identifiers carried by \(p_{x}\) and \(p_{y}\) are always contained in \(p_{xy}\), and thus both \(p_{x}\) and \(p_{y}\) can be seen as subsidiary with respect to \(p_{xy}\).
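The projection and composition operators of Definitions 12 and 13 can be prototyped on a simplified net representation. The Python sketch below is illustrative only: it records just place types and arcs, uses invented names (`l1`, `l2`, `p_xy`), and glosses over the per-projection copies of places (such as \(p_{x}\) and \(p_{y}\)) that the discussion above is concerned with.

```python
# Simplified t-PNID skeleton: place -> set of carried types, plus transitions and arcs.
net = {
    "places": {"p_xy": {"l1", "l2"}, "q": {"l1"}},
    "transitions": {"t1", "t2"},
    "arcs": {("t1", "p_xy"), ("p_xy", "t2"), ("t1", "q"), ("q", "t2")},
}

def project(net, types):
    """pi_Upsilon(N): places whose type contains Upsilon, retyped to Upsilon,
    together with the transitions and arcs around them (cf. Definition 12)."""
    P = {p for p, tp in net["places"].items() if types <= tp}
    T = {t for t in net["transitions"]
         if any(t in arc and (set(arc) & P) for arc in net["arcs"])}
    F = {arc for arc in net["arcs"] if set(arc) & P and set(arc) & T}
    return {"places": {p: set(types) for p in P}, "transitions": T, "arcs": F}

def compose(n1, n2):
    """Component-wise union of two nets (cf. Definition 13)."""
    return {"places": {**n1["places"], **n2["places"]},
            "transitions": n1["transitions"] | n2["transitions"],
            "arcs": n1["arcs"] | n2["arcs"]}

# The singleton projections forget that p_xy carries both types; only the
# subset-closed family of projections retains the interaction place with its full type.
print(project(net, {"l1"})["places"])          # p_xy and q, both retyped to {'l1'}
print(project(net, {"l1", "l2"})["places"])    # only p_xy, with its full type {'l1', 'l2'}
```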
We formalize this observation using the notion of _minor places_: a place \(p\) is minor to some place \(q\) if both \(p\) and \(q\) have identical pre- and post-sets, and the type of \(q\) subsumes the one of \(p\).

Definition 14 (Minor places): Let \(N=(P,T,F,\alpha,\beta)\) be a t-PNID. A place \(p\in P\) is _minor to_ a place \(q\in P\) iff the following holds:

* \({}^{\bullet}p={}^{\bullet}q\), \(p^{\bullet}=q^{\bullet}\) and \(\alpha(p)\subset\alpha(q)\);
* \(\beta((t,p))=\,\beta((t,q))|_{\texttt{type}^{-1}(\alpha(p))}\), for each \(t\in{}^{\bullet}p\);
* \(\beta((p,t))=\,\beta((q,t))|_{\texttt{type}^{-1}(\alpha(p))}\), for each \(t\in p^{\bullet}\). \(\triangleleft\)

We show next that minor places can be added or removed without altering the overall behavior of the net.

Lemma 2: _Let \(N=(P,T,F,\alpha,\beta)\) be a t-PNID with initial marking \(m_{0}\) s.t. \(m_{0}(p)=m_{0}(q)=\emptyset\), for \(p,q\in P\), where \(p\) is minor to \(q\). Let \(N^{\prime}=(P\setminus\{p\},T,F\setminus(\{(p,t)\mid t\in p^{\bullet}\}\cup\{(t,p)\mid t\in{}^{\bullet}p\}),\alpha,\beta)\) be the t-PNID obtained by eliminating place \(p\) from \(N\). Then \(\Gamma_{N,m_{0}}\sim^{r}\Gamma_{N^{\prime},m_{0}}\). \(\triangleleft\)_

Proof: (sketch) It is enough to define a relation \(Q\subseteq\mathcal{R}(N,m_{0})\times\mathcal{R}(N^{\prime},m_{0})\) s.t. \((m,m^{\prime})\in Q\) iff \(m(r)=m^{\prime}(r)\), for \(r\in P\setminus\{p\}\), and \(m(p)(\texttt{id})=m^{\prime}(q)(\texttt{id})\), for all \(\texttt{id}\in\texttt{C}(p)\), and \(|m(p)|=|m^{\prime}(q)|\). Then the lemma statement directly follows from the firing rule of t-PNIDs and the fact that the pre- and post-sets of \(p\) and \(q\) coincide.

Let us now address the reconstructability property. In a nutshell, a net is reconstructable if composing all of its type projections returns the same net. This property is not that trivial to obtain. For example, let us consider singleton projections (that is, projections \(\pi_{\{\lambda\}}\left(N\right)\) obtained for each \(\lambda\in\mathtt{type}_{\Lambda}(N)\)) of the net in Fig. 6. It is easy to see that such projections "ignore" interactions between objects (or system components). Thus, the composition of the singleton projections \(\pi_{\{\lambda_{1}\}}\left(N\right)\) and \(\pi_{\{\lambda_{2}\}}\left(N\right)\) from Fig. 6 does not result in a model that merges \(p_{x}\) and \(p_{y}\) into one place, as the composition operator cannot recognize component interactions between such projections. This is reflected in Fig. 5(d). To be able to reconstruct the original model from its projections (or at least do it approximately well), one needs to consider a projection reflecting component interactions. In the case of the net from Figure 5(a), its non-singleton projection \(\pi_{\{\lambda_{1},\lambda_{2}\}}\left(N\right)\) is depicted in Figure 6(a). Now, using this projection we can obtain a composition (see Figure 6(b)) that closely resembles \(N\). Notice that, in this composition, copies of the interaction place \(p\) appear three times as places \(p_{x}\), \(p_{y}\) and \(p_{xy}\), respectively. It is also easy to see that places \(p_{x}\) and \(p_{y}\) are minor to \(p_{xy}\), and \(\alpha(p)=\alpha(p_{xy})\) witnesses that \(\pi_{\{\lambda_{1},\lambda_{2}\}}\left(N\right)\) is the maximal projection defined over types of \(N\) s.t. the correct type of \(p\) is "reconstructed". This leads us to the following result stipulating the reconstructability property of typed Jackson nets.
Theorem 3.1: _Let \(N=\left(P,T,F,\alpha,\beta\right)\) be a t-JN. Then \(\Gamma_{N,\emptyset}\sim^{r}\Gamma_{N^{\prime},\emptyset}\), where \(N^{\prime}=\underset{\emptyset\subset\Upsilon\subseteq\mathtt{type}_{\Lambda}(N)}{\biguplus}\pi_{\Upsilon}\left(N\right)\)._

Figure 6: t-PNID \(N\) (5(a)), its singleton projections and their composition.

Proof: (sketch) The proof immediately follows from the next observation. Among all possible projections, for each place \(p\in P\) there exists a projection \(\pi_{\Upsilon}\left(N\right)\) such that \(\alpha(p)=\Upsilon\). This also means that \(\pi_{\Upsilon}\left(N\right)\) contains \(p\) and that all other projections \(\pi_{\Upsilon^{\prime}}\left(N\right)\) with \(\Upsilon^{\prime}\subset\Upsilon\) will at most include the minors of \(p\). Following Def. 13, it is easy to see that the composition of all the projections yields a t-JN identical to \(N\) modulo additional place minors introduced by some of the projections. Showing that the obtained net is bisimilar to \(N\) can be done by analogy with Lemma 2.

Notice that the above result can be made stronger if all the additional minors (i.e., minors that were not present originally in \(N\)) are removed using reduction rules from Def. 11. For simplicity, given a t-PNID \(N\) with the set of places \(P\), we denote by \(\lfloor P\rfloor\) the set of its minor places.

Corollary 1: _Let \(N\) be a t-JN and let \(N^{\prime}\) be as in Thm. 3. Then \(\left(N,\emptyset\right)\leftrightsquigarrow(N^{\prime},\emptyset)\), if \(\lfloor P\rfloor=\lfloor P^{\prime}\rfloor\), where \(P\) and \(P^{\prime}\) are respectively the sets of places of \(N\) and \(N^{\prime}\)._

The above result can be obtained by complementing the proof of Thm. 3 with a step that applies finitely many t-JN reduction rules to all the minor places that are in \(N^{\prime}\) and not in \(N\).

Figure 7: Adding the projection \(\pi_{\{\lambda_{1},\lambda_{2}\}}\left(N\right)\) reflecting interactions to the composition results in the original net \(N\) modulo places minor to \(p\) (such as \(p_{x}\) and \(p_{y}\)).

## 5 A Framework for Rediscoverability

In the previous section, we showed that t-JNs enjoy the reconstructability property: given a t-JN, a composition of _all_ its (proper) type projections yields a t-JN that is strongly bisimilar to the original one (Footnote 6: such nets are also isomorphic if minor places of the composition are removed by consecutively applying the reduction rules from Def. 11). In this section, we propose a framework, relying on this property, to rediscover systems of interacting processes. The framework builds upon a divide and conquer strategy [21]. The first step of the approach is to divide the event logs over all possible projections. For this, we translate the notion of event logs to event logs of interacting systems, and show that if these event logs are generated by a t-JN, projections on these event logs have a special property: the projected event log can be replayed by the projected net. In other words, one cannot distinguish whether an event log was obtained by projecting the original log or generated by the projected net. This observation forms the basis of the proposed framework for rediscoverability. In the second step, we conquer the discoverability problem of the system of interacting processes by first discovering a model for each of the projections, and then composing these projections into the original system. If the event log and discovery algorithm guarantee the defined properties, composition yields rediscoverability.

### Event Logs and Execution Traces

In process discovery, an event log is represented as a (multi)set of sequences of events (called traces), where each sequence represents an execution history of a process instance. Traditional process discovery assumes the process to be a WF-net. Consequently, each trace in an event log should correspond to a sequence of transition firings of the workflow net. If this is the case, the event log is said to be generated by the WF-net. We generalize this notion to marked Petri nets.

Definition 15 (Event Log): Given a set of transitions \(T\), a set of traces \(L\subseteq T^{*}\) is called an _event log_. An event log \(L\) is _generated by_ a marked Petri net \((N,m)\) if \((N,m)[\sigma\rangle\) for all \(\sigma\in L\), i.e., \(L\subseteq\mathcal{L}(N,m)\). \(\triangleleft\)

Each sequence in a single-process event log is assumed to start from the initial marking of the WF-net. A marked t-PNID, instead, represents a continuously executing system, for which, given a concrete identifier, there exists a single observable execution that can be recorded in an event log. Thus, event logs are partial observations of a larger execution within the system: an event log for a certain type captures only the relevant events that contain identifiers of that type, and stores these in order of their execution. Since each transition firing consists of a transition and a binding, a t-PNID firing sequence induces an event log for each set of types \(\Upsilon\). Intuitively, this induced event log is constructed by a filtering process. For each possible identifier vector for \(\Upsilon\) we keep a firing sequence. Each transition firing is inspected, and if its binding satisfies an identifier vector of \(\Upsilon\), it is added to the corresponding sequence.

Table 1: A firing sequence of the net in Fig. 1.

Definition 16 (Induced Event Log): Let \((N,m_{0})\) be a marked t-PNID. Given a non-empty set of types \(\Upsilon\subseteq\mathtt{type}_{\Lambda}(N)\), the \(\Upsilon\)-induced event log of a firing sequence \(\eta\in\mathcal{L}(N,m_{0})\) is defined by: \(\mathit{Log}_{\Upsilon}(\eta)=\{\eta_{|i}\mid i\in(Id(\eta)\cap I(\Upsilon))^{|\Upsilon|}\}\), where \(\eta_{|i}\) is inductively defined by (1) \(\epsilon_{|i}=\epsilon\), (2) \((\langle(t,\psi)\rangle\cdot\eta)_{|i}=\langle(t,\psi)\rangle\cdot\eta_{|i}\) if \(\mathit{supp}(i)\subseteq\textsc{rng}(\psi)\), and (3) \((\langle(t,\psi)\rangle\cdot\eta)_{|i}=\eta_{|i}\) otherwise. \(\triangleleft\)

Different event logs can be induced from a firing sequence. Consider, for example, the firing sequence of the net from Fig. 1 represented as a table in Tbl. 1. As we cannot deduce the types for each of the variables from the firing sequences in Tbl. 1, we assume that there is a bijection between variables and types, i.e., that each variable is uniquely identified by its type, and vice-versa. This way, we can create an induced log for each variable, as the type and variable name are interchangeable. For example, the \(x\)-induced event log is \(\mathit{Log}_{\{x\}}=\{\langle A,E,B\rangle\,,\langle A,C,D,B\rangle\}\), and the \(z\)-induced event log is \(\mathit{Log}_{\{z\}}=\{\langle T,G,Z,V\rangle\,,\langle T,V\rangle\}\). Similarly, event logs can also be induced for combinations of types. In this example, the only non-empty induced event logs on combined types are \(\mathit{Log}_{\{y,z\}}=\{\langle G,Z\rangle\}\) and \(\mathit{Log}_{\{x,y\}}=\{\langle E\rangle\}\). As the firing sequence in Tbl. 1 shows, transition firings (and thus also events) only show bindings of variables to identifiers. For example, for firing \(G\) with binding \(y\mapsto o1\) and \(z\mapsto c1\), it is not possible to derive the token types of the consumed and produced tokens directly from the table. Therefore, we make the following assumptions for process discovery on t-PNIDs:

1. There are no "black" tokens: all places carry tokens with at least one type, and all types occur at most once in a place type, i.e., all places refer to at least one process instance.
2. There is a bijection between variables and types, i.e., for each type exactly one variable is used.
3. A Gödel-like number \(\mathscr{G}\) is used to order the types in place types, i.e., for any place \(p\), we have \(\mathscr{G}(\alpha(p)(i))<\mathscr{G}(\alpha(p)(j))\) for \(1\leq i<j\leq|\alpha(p)|\) and \(p\in P\).

### Rediscoverability of Typed Jackson Nets

Whereas traditional process discovery approaches relate events in an event log to a single object, the process instance, object-centric approaches can relate events to many objects [11]. Most object-centric process discovery algorithms (e.g., [5, 17]) use a divide and conquer approach, where "flattening" is the default implementation to divide the event data into smaller event logs. The flattening operation creates a trace for each object in the data set, and combines the traces of objects of the same type into an event log. As we have shown in Section 4, singleton projections, i.e., those just considering types in isolation, are insufficient to reconstruct the t-JN that induced the object-centric event log. A similar observation is made for object-centric process discovery (cf. [3, 5, 7]): flattening the event data into event logs generates inaccurate models. Instead, reconstructability can only be achieved if all possible combinations of types are considered. Hence, for a divide and conquer strategy, the divide step should involve all possible combinations of types, i.e., each interaction between processes requires its own event log. In the remainder of this section, we show that if all combinations of types are considered, flattening is possible, and traditional process discovery algorithms can be used to rediscover a system of interacting processes. For a system of interacting processes, we consider execution traces, i.e., firing sequences from the initial marking. In this way, event logs for specific types or combinations of types are induced from the firing sequence. The projection of the system on a type or combination of types results again in a t-JN. Similarly, if we project a firing sequence of a t-JN \(N\) on a set of types \(\Upsilon\), then this projection is a firing sequence of the \(\Upsilon\)-projection on \(N\). The property follows directly from the result that t-JN \(N\) is weakly simulated by its \(\Upsilon\)-projection.

Lemma 3: _Let \(N\) be a t-JN, and let \(\Upsilon\subseteq\mathtt{type}_{\Lambda}(N)\). Then \(\hat{\mathfrak{h}}_{U}(\Gamma_{N,\emptyset})\prec^{r}\Gamma_{\pi_{\Upsilon}(N),\emptyset}\), with \(U=T_{N}\setminus T_{\Upsilon}\). \(\triangleleft\)_

Proof: (sketch) Let \(N_{\Upsilon}=\pi_{\Upsilon}\left(N\right)=(P_{\Upsilon},T_{\Upsilon},F_{\Upsilon},\alpha_{\Upsilon},\beta_{\Upsilon})\).
We can define a relation \(Q\subseteq\mathbb{M}\left(N\right)\times\mathbb{M}\left(\pi_{\Upsilon}\left(N\right)\right)\) s.t. \(Q(m)(p)(a_{\mid I(\Upsilon)})=m(p)(a)\) if \(p\in P_{\Upsilon}\) and \(Q(m)(p)=m(p)\) otherwise. That \(Q\) is a rooted simulation follows directly from the firing rule of t-PNIDs.

As the lemma shows, projecting a firing sequence yields a firing sequence for the projected net. A direct consequence of the simulation relation is that, no matter whether we induce an event log from a firing sequence on the original net, or induce it from the projected firing sequence, the resulting event logs are the same.

Corollary 2: _Let \((N,m_{0})\) be a marked t-PNID. Given a set of types \(\Upsilon\subseteq\mathtt{type}_{\Lambda}(N)\). Then \(\mathit{Log}_{\Upsilon}(\eta)=\mathit{Log}_{\Upsilon}(\pi_{\Upsilon}\left(\eta\right))\). \(\triangleleft\)_

Hence, it is not possible to observe whether an induced event log stems from the original model, or from its projection. Note that the projection may exhibit more behavior, so the reverse does not hold. In general, not every induced event log of the projection can be induced from the original model. In general, a projection does not need to be an atomic t-JN (that is, a t-JN that can be reduced by applying rules from Def. 11 to a single transition). However, if the projection is atomic, then its structure is a transition-bordered WF-net: a WF-net that, instead of having source and sink places, has a set of start and finish transitions, such that the pre-sets (resp., post-sets) of start (resp., finish) transitions are empty. The closure of a transition-bordered WF-net is constructed by adding a new source place \(i\) so that each start transition consumes from \(i\), and a new sink place \(f\) so that each finish transition produces in \(f\).

Lemma 4: _Let \(N\) be a t-JN and \(\pi_{\Upsilon}\left(N\right)=(P_{\Upsilon},T_{\Upsilon},F_{\Upsilon},\alpha_{\Upsilon},\beta_{\Upsilon})\) for some \(\Upsilon\subseteq\mathtt{type}_{\Lambda}(N)\) such that \(\pi_{\Upsilon}\left(N\right)\) is atomic. Let \(\eta\in\mathcal{L}(N,\emptyset)\) be a firing sequence. Then \(\mathit{Log}_{\Upsilon}(\eta)\) is generated by \((N_{\Upsilon},\emptyset)\) with \(N_{\Upsilon}=(P_{\Upsilon}\cup\{i,f\},T_{\Upsilon},F_{\Upsilon}\cup\{(i,t)\mid{}^{\bullet}t=\emptyset\}\cup\{(t,f)\mid t^{\bullet}=\emptyset\})\). \(\triangleleft\)_

Proof: (sketch) Let \(\sigma\in\mathit{Log}_{\Upsilon}(\eta)\). By construction, each firing sequence in \(\mathit{Log}_{\Upsilon}(\eta)\) has some corresponding identifier vector that generated the sequence. Assume \(\vec{v}\in\mathcal{I}^{|\Upsilon|}\) is such a vector for \(\sigma\). Observe that for any transition \(t\in T\), if \({}^{\bullet}t=\emptyset\), then \(\mathit{Emit}(t)\cap\Upsilon\neq\emptyset\), and similarly, if \(t^{\bullet}=\emptyset\), then \(\mathit{Collect}(t)\cap\Upsilon\neq\emptyset\). As \(N\) is identifier sound, only \({}^{\bullet}\sigma(1)=\emptyset\) and \(\sigma(|\sigma|)^{\bullet}=\emptyset\). Define relation \(R=\{(M,m)\mid\forall p\in P:M(p)(\vec{v})=m(p)\}\) and \(U=\{(t,\psi)\mid\vec{v}\not\subseteq\textsc{rng}(\psi)\}\), i.e., \(U\) contains all transition firings that do not belong to \(\sigma\). Then \(R\) is a weak simulation, i.e., \(\hat{\mathfrak{h}}_{U}(\Gamma_{N,\emptyset})\preccurlyeq_{R}^{r}\Gamma_{N_{\Upsilon},\emptyset}\) and thus \((N_{\Upsilon},\emptyset)[\sigma\rangle\).
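The construction of \(\mathit{Log}_{\Upsilon}(\eta)\) in Definition 16 is essentially a filter over the firing sequence. The Python sketch below illustrates it on a small, made-up fragment of the firing sequence of Tbl. 1 (the full table is not reproduced here); all names and data are for illustration only.

```python
from itertools import product

def induced_log(firing_sequence, ids_of_type, types):
    """Log_Upsilon(eta): one trace per identifier vector over the chosen types,
    keeping exactly the firings whose binding mentions that whole vector."""
    log = []
    for vec in product(*(ids_of_type[t] for t in types)):
        trace = [t for (t, binding) in firing_sequence
                 if set(vec) <= set(binding.values())]
        if trace:
            log.append(trace)
    return log

# An illustrative fragment of a firing sequence: (transition, binding) pairs.
eta = [("A", {"x": "p1"}), ("T", {"z": "c1"}), ("G", {"y": "o1", "z": "c1"}),
       ("E", {"x": "p1", "y": "o1"}), ("Z", {"y": "o1", "z": "c1"}), ("B", {"x": "p1"})]

print(induced_log(eta, {"x": {"p1"}}, ["x"]))                    # [['A', 'E', 'B']]
print(induced_log(eta, {"y": {"o1"}, "z": {"c1"}}, ["y", "z"]))  # [['G', 'Z']]
```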
Given a set of types \(\Upsilon\), if its projection is atomic, the projection can be transformed into a workflow net, and for any firing sequence of the original net, this WF-net can generate the \(\Upsilon\)-induced event log. Suppose we have a discovery algorithm _disc_ that can rediscover models, i.e., given an event log \(L\) that was generated by some model \(M\), _disc_ returns the original model. Rediscoverability of an algorithm requires some property \(P_{\mathit{disc}}(M)\) on the generating model \(M\), and some property \(Q_{\mathit{disc}}(L,M)\) on the quality of event log \(L\) with respect to the generating model \(M\). In other words, \(P(M)\) and \(Q(L,M)\) are premises to conclude rediscoverability for discovery algorithm _disc_. For example, the \(\alpha\)-miner [22] requires for \(P(M)\) that model \(M\) is well-structured, and for \(Q(L,M)\) that event log \(L\) is directly-follows complete with respect to model \(M\). Similarly, Inductive Miner [16] requires the generating model \(M\) to be a process tree without silent actions or self-loops (\(P(M)\)), and that event log \(L\) is directly-follows complete with respect to the original model \(M\) (\(Q(L,M)\)).

Definition 17 (Rediscovery): An algorithm _disc_ can _rediscover_ a WF-net \(W=(P,T,F,\mathit{in},\mathit{out})\) from event log \(L\subseteq T^{*}\) if \(P_{\mathit{disc}}(W)\) and \(Q_{\mathit{disc}}(L,W)\) imply \(\mathit{disc}(L)\leftrightsquigarrow W\). \(\triangleleft\)

Thus, suppose there exists a discovery algorithm _disc_ that is, under conditions \(P\) and \(Q\), able to reconstruct a workflow model given an event log. In other words, given an event log \(L\) generated by some model \(M\), \(\mathit{disc}\) returns a model that is isomorphic to the generating model.

Figure 8: Framework for rediscoverability of typed Jackson Nets. Model \(M\) generates an event log \(L\). Log projections \(L_{1}\ldots L_{n}\) are generated from projected nets \(M_{1}\ldots M_{n}\). Discovery algorithm _disc_ results in nets \(D_{1}\ldots D_{n}\), isomorphic to \(M_{1}\ldots M_{n}\), which can be composed into \(D^{\prime}\). \(D^{\prime}\) is isomorphic to \(M^{\prime}\) and thus to \(M\).

Now, suppose we have a firing sequence \(\eta\) of some t-JN \(N\), and some projection \(\Upsilon\). Then, if \(P(\pi_{\Upsilon}\left(N\right))\) and \(Q(\mathit{Log}_{\Upsilon}(\eta),\pi_{\Upsilon}\left(N\right))\), then \(\mathit{disc}\) returns a model that is isomorphic to the closure of \(\pi_{\Upsilon}\left(N\right)\), as \(\mathit{disc}\) only returns WF-nets. With \(\overline{\mathit{disc}}\) we denote the model where the source and sink places are removed, i.e., \(\overline{\mathit{disc}}(\mathit{Log}_{\Upsilon}(\eta))\leftrightsquigarrow\pi_{\Upsilon}\left(N\right)\). Then, as shown in Fig. 8, if we discover for every possible combination of types, i.e., the subset-closed set of all type combinations, a model that is isomorphic to the type-projected model, then the composition results in a model that is bisimilar to the original model.

Theorem 5.3 (Rediscoverability of typed Jackson Nets): _Let \(N\) be a t-JN without minor places, and let \(\eta\in\mathcal{L}(N,\emptyset)\). Let _disc_ be a discovery algorithm with properties \(P\) and \(Q\) that satisfy Def. 17._
If for all \(\emptyset\subset\Upsilon\subseteq\mathtt{type}_{\Lambda}(N)\) the \(\Upsilon\)-projection is atomic and satisfies conditions \(P(\pi_{\Upsilon}\left(N\right))\) and \(Q(\mathit{Log}_{\Upsilon}(\eta),\pi_{\Upsilon}\left(N\right))\), then \(\Gamma_{N,\emptyset}\leftrightsquigarrow\Gamma_{N^{\prime},\emptyset}\) with \(N^{\prime}=\biguplus_{\emptyset\subset\Upsilon\subseteq\mathtt{type}_{\Lambda}(N)}\overline{\mathit{disc}}(\mathit{Log}_{\Upsilon}(\eta))\)._

Proof: (sketch) Let \(\emptyset\subset\Upsilon\subseteq\mathtt{type}_{\Lambda}(N)\) be a set of types in \(N\). Since \(P(\pi_{\Upsilon}\left(N\right))\) and \(Q(\mathit{Log}_{\Upsilon}(\eta),\pi_{\Upsilon}\left(N\right))\) hold, the closure of \(\pi_{\Upsilon}\left(N\right)\) and \(\mathit{disc}(\mathit{Log}_{\Upsilon}(\eta))\) are isomorphic. By construction of the closure, places \(\mathit{in}\) and \(\mathit{out}\) exist with \({}^{\bullet}\mathit{in}=\emptyset=\mathit{out}^{\bullet}\). As the nets are isomorphic, we have \(\pi_{\Upsilon}\left(N\right)\leftrightsquigarrow\overline{\mathit{disc}}(\mathit{Log}_{\Upsilon}(\eta))\). Combining the results gives \(\biguplus_{\emptyset\subset\Upsilon\subseteq\mathtt{type}_{\Lambda}(N)}\overline{\mathit{disc}}(\mathit{Log}_{\Upsilon}(\eta))\leftrightsquigarrow\bigcup_{\emptyset\subset\Upsilon\subseteq\mathtt{type}_{\Lambda}(N)}\pi_{\Upsilon}\left(N\right)\). The statement then follows directly from Cor. 1.

## 6 Conclusion

In this paper, we studied typed Jackson Nets to model systems of interacting processes, a class of well-structured process models describing manipulations of object identifiers. As we show, this class of nets has the important property of reconstructability: the composition of the projections on all possible type combinations returns the model of the original system. Ignoring the interactions between processes results in less accurate, or even wrong, models. Similar problems occur in the discovery of systems of interacting processes, such as object-centric process discovery, where event logs are flattened for each object. This paper provides a formal foundation for the composition of block-structured nets, and uses this to develop a framework for the discovery of systems of interacting processes. We link the notion of event logs used for process discovery to system executions, and show that it is not possible to observe whether an event log is generated by a system of interacting processes, or by a projection of the system. These properties form the key ingredients of the framework. We show under what conditions a process discovery algorithm (that guarantees rediscoverability) can be used to discover the individual processes and their interactions, and how these can be combined to rediscover a model of interacting processes that is bisimilar to the original system that generated the event logs. Although typed Jackson Nets have less expressive power than formalisms like Object-centric Petri nets [5], proclets [10] or interacting artifacts [17], this paper shows the limitations and potential pitfalls of discovering interacting processes. This work aims to lay formal foundations for object-centric process discovery. As a next step, we plan to implement the framework and tune our algorithms to discover useful models from industrial datasets.

**Acknowledgements.** Artem Polyvyanyy was in part supported by the Australian Research Council project DP220101516.
2301.10898
Double free boundary problem for defaultable corporate bond with credit rating migration risks and their asymptotic behaviors
In this work, a pricing model for a defaultable corporate bond with credit rating migration risk is established. The model turns out to be a free boundary problem with two free boundaries. The latter are the level sets of the solution but of different kinds. One is from the discontinuous second order term, the other from the obstacle. Existence, uniqueness, and regularity of the solution are obtained. We also prove that two free boundaries are $C^\infty$. The asymptotic behavior of the solution is also considered: we show that it converges to a traveling wave solution when time goes to infinity. Moreover, numerical results are presented.
Yuchao Dong, Jin Liang, Claude-Michel Brauner
2023-01-26T01:57:06Z
http://arxiv.org/abs/2301.10898v2
Double free boundary problem for defaultable corporate bond with credit rating migration risks and their asymptotic behaviors

###### Abstract

In this work, a pricing model for a defaultable corporate bond with credit rating migration risk is established. The model turns out to be a free boundary problem with two free boundaries. The latter are the level sets of the solution but of different kinds. One is from the discontinuous second order term, the other from the obstacle. Existence, uniqueness, and regularity of the solution are obtained. We also prove that two free boundaries are \(C^{\infty}\). The asymptotic behavior of the solution is also considered: we show that it converges to a traveling wave solution when time goes to infinity. Moreover, numerical results are presented.

keywords: Traveling wave; Free boundary problem; PDE with discontinuous leading order coefficient; Asymptotic behavior; Credit rating migration risk model

## 1 Introduction

Due to the globalization and complexity of financial markets, credit risks have become an increasingly important and destabilizing factor in the market, capable of triggering severe crises. For example, in the 2008 financial tsunami and the 2010 European debt crisis, credit rating migration risk played a key role. The first step in managing these risks is to model and measure them. Understanding these risks, especially default risk and credit rating migration risk, has therefore attracted growing attention both in academia and in industry.

Most credit risk research falls into one of two frameworks, namely the structural model and the intensity model. The intensity model assumes that the risk is due to exogenous factors, which are usually modeled by Markov chains; see [35]. In this way, the default and/or migration times are determined by an exogenous transition intensity; see Jarrow, Lando, and Turnbull [21; 20] and Duffie and Singleton [11], to mention a few. In implementations, intensity transition matrices are usually obtained from historical statistical data. However, it is well known that a company's current financial status plays a crucial role in default and credit rating migrations. For example, the main cause of the 2010 European debt crisis was that the sovereign debts of several European countries reached an unsustainable level due to their poor economic situation. The crisis unfolded in these countries through the downgrading of their credit ratings and the subsequent chain reactions. Therefore, Markov chain models alone cannot fully capture credit risks. To include the endogenous factor, the structural model, which can be traced back to Merton [34] in 1974, comes into consideration for credit risk modeling. In such models, credit rating migration and default are related to the firm's asset value and its obligation. For example, in Merton's model, it is assumed that the company's asset value follows a geometric Brownian motion and a default happens if the asset value drops below the debt at maturity. Thus, the corporate bond, representing the company's obligation, is a contingent claim on the asset value. Later, Black and Cox [2] extended Merton's model to the so-called first passage-time model, where default happens whenever the asset value reaches a given boundary; see also [25; 32; 26; 6; 38] for related works. Dai et al. [9] considered an optimal control problem in the case where a bank's asset is opaque.
Using the structural model, Liang and Zeng [31] studied the pricing problem of a corporate bond with credit rating migration risk, where a predetermined migration threshold is given to divide the asset value into high and low rating regions, in which the asset value follows different stochastic processes. Hu, Liang, and Wu [18] further developed this model, where the migration boundary is a free boundary governed by the ratio of the firm's asset value to its debt. Some theoretical results and traveling wave properties are also obtained in [29]. Li, Zhang, and Hu [27] studied numerical methods for solving the related variational inequality. Later, Fu, Chen, and Liang [16] provided more mathematical analysis and a detailed description of the free migration boundary. Further extensions of this model are considered in [30; 42; 39; 40]. Recently, Chen and Liang [8] also considered the case where the upgrade and downgrade boundaries are different. However, the reason behind credit rating migration is the possibility of default; hence, it is natural to consider a model involving both credit rating migration and default risks. In [40], as a first step, a predetermined default boundary at a given asset level is considered. In this paper, we let the default boundary also depend on the ratio between the stock price and the bond value. Therefore, the model contains two free boundaries. Both boundaries are level sets of the solution, but of different types: one comes from the discontinuous leading second order term, as in previous credit rating migration works (see, for example, [29]); the other comes from a more traditional free boundary problem, i.e. an obstacle problem. Using PDE techniques, existence, uniqueness, regularity, and asymptotic behavior of the solution are obtained, which, from a theoretical perspective, ensure the soundness of the model. Numerical results support our theoretical approach. The stability of the traveling wave will be studied in our future work [3] using the techniques of [1; 4; 5].

This paper is organized as follows. In Section 2, the model is established and the pricing problem is reduced to a system of two parabolic PDEs with two free boundaries. In Section 3, for the sake of both uniform estimates and asymptotic behavior, we consider a traveling wave solution to the original problem. In Section 4, we use a penalization method and simultaneously a regularization of the coefficient of the second order term to approximate the free boundary problem by a smooth Cauchy problem depending on a small parameter \(\varepsilon>0\). A series of lemmas is proved in order to establish estimates which are independent of \(\varepsilon\). The key point is that the two approximating free boundaries can be separated by a positive distance independent of \(\varepsilon\). In Section 5, the main results are stated, including the existence, uniqueness, and regularity of the solution. In particular, we prove that the two free boundaries are \(C^{\infty}\). The asymptotic behavior of the solution as time tends to infinity is examined in Section 6. Finally, a numerical method and some computational results are presented in Section 7.

## 2 The Model

### Assumptions

Let \((\Omega,\mathcal{F},P)\) be a complete probability space. We assume that the firm issues a corporate bond, which is a contingent claim on its value. The stock price of the firm admits different dynamics for different credit ratings.
**Assumption 2.1** (the firm asset with credit rating migration).: _Let \(S_{t}\) denote the firm's value in the risk neutral world. It satisfies_ \[dS_{t}=\left\{\begin{array}{ll}rS_{t}dt+\sigma_{H}S_{t}dW_{t},&\text{ in high rating region,}\\ rS_{t}dt+\sigma_{L}S_{t}dW_{t},&\text{ in low rating region,}\end{array}\right.\] _where \(r\) is the risk free interest rate, which is positive constant, and_ \[\sigma_{H}<\sigma_{L} \tag{2.1}\] _represent volatilities of the firm under the high and low credit grades respectively. They are also assumed to be positive constants. \(W_{t}\) is the Brownian motion which generates the filtration \(\{\mathcal{F}_{t}\}\)._ It is reasonable to assume (2.1), namely that the volatility in high rating region is lower than the one in the low rating region. The firm issues only one zero coupon corporate bond with face value \(F\). Let \(\Phi_{t}\) denote the discount value of the bond at time \(t\). Therefore, at the maturity time \(T\), an investor can get \(\Phi_{T}=\min\{S_{T},F\}\). For simplicity, we assume in the following sections \(F=1\). The rating criterion is based on the ratio between the stock price and liability. **Assumption 2.2** (the credit rating migration time).: _High and low rating regions are determined by the proportion between the debt and asset value. The credit rating migration time \(\tau_{1}\) and \(\tau_{2}\) are the first moments when the firm's grade is downgraded and upgraded respectively as follows:_ \[\tau_{1} =\inf\{t>0|\Phi_{0}/S_{0}<\gamma e^{-\delta T},\Phi_{t}/S_{t}\geqslant \gamma e^{-\delta(T-t)}\},\] \[\tau_{2} =\inf\{t>0|\Phi_{0}/S_{0}>\gamma e^{-\delta T},\Phi_{t}/S_{t} \leqslant\gamma e^{-\delta(T-t)}\},\] _where \(\Phi_{t}=\Phi_{t}(S_{t},t)\) is a contingent claim with respect to \(S_{t}\) and_ \[0<\gamma<1 \tag{2.2}\] _is a positive constant representing the threshold proportion of the debt and value of the firm's rating. Also_ \[\delta>0,\] _is the so-called credit discount rate. In this paper, we also make the assumption that_ \[\frac{1}{2}\sigma_{H}^{2}<\delta<\frac{1}{2}\sigma_{L}^{2}. \tag{2.3}\] Further, we assume that the bond will default when the stock price is too low, compared with the debt. **Assumption 2.3** (the defaultable corporate bond).: _The default time is also determined by the proportion of the debt and asset value. Here, we assume that the default happens whenever_ \[S_{t}e^{-\delta(T-t)}\leqslant\Phi_{t}.\] _The default time is defined as_ \[\tau=\inf\{t>0|\Phi_{0}>e^{-\delta T}S_{0},\Phi_{t}\geqslant e^{-\delta(T-t)} S_{t}\}.\] _At the default time, the contract is closed and the investor obtains the cash \(e^{-\delta(T-t)}S_{t}\)._ **Remark 2.4**.: _Condition (2.3) is also assumed in [29] to ensure the existence of the travelling wave equation. In finance, if \(\delta\) is too small or too large, it is possible that the company will always be low rating or high rating. To see this, assume that the stock price is_ \[S_{t}=e^{rt-\frac{1}{2}\int_{0}^{t}\sigma^{2}(u)du+\int_{0}^{t}\sigma(u)dW_{u}},\] _where \(\sigma(s)\) is the volatility of the stock taking values in \(\{\sigma_{H},\sigma_{L}\}\) depending on whether the stock is low rating or high rating. The present value of the bond is \(e^{-r(T-t)}\). 
Then, the company's discounted debt-to-asset ratio is_ \[e^{-\delta t}\frac{e^{-r(T-t)}}{S_{t}}=e^{-rT}e^{\int_{0}^{t}(\frac{1}{2}\sigma^{2}(u)-\delta)du-\int_{0}^{t}\sigma(u)dW_{u}}.\] _If \(\delta<\frac{1}{2}\sigma_{H}^{2}\), the right hand side will go to \(\infty\) as \(t\to\infty\) with probability \(1\). This implies that the company will always be low rating in the end. On the other hand, if \(\delta>\frac{1}{2}\sigma_{L}^{2}\), the right hand side will go to \(0\) and, hence, the company will always be high rating._

### The Cash Flow

If the bond does not default, once the credit rating migrates before the maturity \(T\), a virtual substitute termination happens, i.e., the bond is virtually terminated and substituted by a new one with a new credit rating. There is a virtual cash flow of the bond. We denote by \(\Phi_{H}(S,t)\) and \(\Phi_{L}(S,t)\) the values of the bond in high and low grades respectively, which are functions of \(S\) and \(t\). Then, they are given by the following conditional expectations: \[\Phi_{H}(S,t)= E\Big{[}e^{-r(T-t)}\min(S_{T},F)\cdot\mathbf{1}_{\{T<\tau_{1}\wedge\tau\}}\] \[+S_{t}e^{-\delta(T-\tau)}e^{-r(\tau-t)}\mathbf{1}_{\{\tau<T\wedge\tau_{1}\}}\] \[+e^{-r(\tau_{1}-t)}\Phi_{L}(S_{\tau_{1}},\tau_{1})\cdot\mathbf{1}_{\{\tau_{1}<T\wedge\tau\}}\Big{|}S_{t}=S>\frac{1}{\gamma e^{-\delta(T-t)}}\Phi_{H}(S,t)\Big{]}, \tag{2.4}\] \[\Phi_{L}(S,t)= E[e^{-r(T-t)}\min(S_{T},F)\cdot\mathbf{1}_{\{T<\tau_{2}\wedge\tau\}}\] \[+S_{t}e^{-\delta(T-\tau)}e^{-r(\tau-t)}\mathbf{1}_{\{\tau<T\wedge\tau_{2}\}}\] \[+e^{-r(\tau_{2}-t)}\Phi_{H}(S_{\tau_{2}},\tau_{2})\cdot\mathbf{1}_{\{\tau_{2}<T\wedge\tau\}}\Big{|}\frac{1}{e^{-\delta(T-t)}}\Phi_{L}(S,t)<S_{t}=S<\frac{1}{\gamma e^{-\delta(T-t)}}\Phi_{L}(S,t)\Big{]}, \tag{2.5}\] where \(\mathbf{1}_{\{event\}}=\left\{\begin{array}{ll}1,&\mbox{ if ``event'' happens,}\\ 0,&\mbox{ otherwise.}\end{array}\right.\)

### The PDE problem

During the lifetime of the bond, by the Feynman-Kac formula (see, e.g. [10]), it is not difficult to derive that the values \(\Phi_{H}\) and \(\Phi_{L}\) satisfy the following system of partial differential equations in their respective life regions: \[\frac{\partial\Phi_{H}}{\partial t}+\frac{1}{2}\sigma_{H}^{2}S^{2}\frac{\partial^{2}\Phi_{H}}{\partial S^{2}}+rS\frac{\partial\Phi_{H}}{\partial S}-r\Phi_{H}=0,\] \[S>\frac{1}{\gamma e^{-\delta(T-t)}}\Phi_{H},\;t>0, \tag{2.6}\] \[\frac{\partial\Phi_{L}}{\partial t}+\frac{1}{2}\sigma_{L}^{2}S^{2}\frac{\partial^{2}\Phi_{L}}{\partial S^{2}}+rS\frac{\partial\Phi_{L}}{\partial S}-r\Phi_{L}=0,\] \[\frac{1}{e^{-\delta(T-t)}}\Phi_{L}<S<\frac{1}{\gamma e^{-\delta(T-t)}}\Phi_{L},\;t>0. \tag{2.7}\] If the bond survives to maturity, \(\Phi_{H}\) and \(\Phi_{L}\) satisfy the terminal conditions: \[\Phi_{H}(S,T)=\Phi_{L}(S,T)=\min\{S,F\}.\] Define the function \(\Phi\) as \[\Phi(S,t)=\begin{cases}\Phi_{H}(S,t),\text{ in the high rating region;}\\ \Phi_{L}(S,t),\text{ in the low rating region;}\\ Se^{-\delta(T-t)},\text{ in the default region.}\end{cases}\] Then, it satisfies the following variational form \[\min\Big{\{}\frac{\partial\Phi}{\partial t}+\frac{1}{2}\sigma^{2}(\Phi,S,t)S^{2}\frac{\partial^{2}\Phi}{\partial S^{2}}+rS\frac{\partial\Phi}{\partial S}-r\Phi,\,-\Phi(S,t)+Se^{-\delta(T-t)}\Big{\}}=0,\] with \[\sigma(\Phi,S,t)=\sigma_{H}\mathbf{1}_{\{\Phi<\gamma Se^{-\delta(T-t)}\}}+\sigma_{L}\mathbf{1}_{\{\Phi\geqslant\gamma Se^{-\delta(T-t)}\}}.\] First, we make some transformations. Let \(\phi(x,t)=e^{rt}\Phi(e^{x},T-t)\).
Then, \(\phi\) satisfies \[\min\Big{\{}-\frac{\partial\phi}{\partial t}+\frac{1}{2}\sigma^{2}(e^{-rt} \phi,e^{x},t)\frac{\partial^{2}\phi}{\partial x^{2}}+(r-\frac{1}{2}\sigma^{2}) \frac{\partial\phi}{\partial x},\,-\phi(s,t)+e^{x+(r-\delta)t}\Big{\}}=0.\] As already indicated in [29], it is more convenient to work in the moving coordinate frame \[\xi=x+ct,\;c=r-\delta,\;u(\xi,t)=\phi(x,t).\] Then, the equation reads \[\min\left\{-\frac{\partial u}{\partial t}+\frac{1}{2}\sigma^{2}(u)\frac{ \partial^{2}u}{\partial\xi^{2}}+(\delta-\frac{1}{2}\sigma^{2})\frac{\partial u }{\partial\xi},\,-u+e^{\xi}\right\}=0. \tag{2.8}\] Let us introduce the weight \(e^{-\xi}\) and make the further transformation \(v=e^{-\xi}u\); we define \[\mathcal{L}:=-\frac{\partial}{\partial t}+\frac{1}{2}\sigma^{2}(v)\Big{(} \frac{\partial^{2}}{\partial\xi^{2}}+\frac{\partial}{\partial\xi}\Big{)}+ \delta\Big{(}\frac{\partial}{\partial\xi}+1\Big{)}.\] Thus, \(v\) satisfies the following problem: \[\min\{\mathcal{L}v,\,1-v\}=0,\quad v(\xi,0)=\min\{1,e^{-\xi}\}, \tag{2.9}\] with \[\sigma(v)=\sigma_{H}\mathbf{1}_{\{v<\gamma\}}+\sigma_{L}\mathbf{1}_{\{v \geqslant\gamma\}}.\] Let us finally define the free boundaries which will play a crucial role throughout the paper, respectively the _default boundary_ \[\hat{\kappa}(t):=\inf\{\xi\,|\,v(\xi,t)<1\},\] and the _transit boundary_ \[\hat{\eta}(t):=\inf\{\xi\,|\,v(\xi,t)<\gamma\}.\] Our goal is not only to solve (2.9) but also to study the properties of these boundaries. If the solution is smooth enough, system (2.9) can be rewritten as the _free boundary problem_ \[\left\{\begin{aligned} &-\frac{\partial v}{\partial t}+\frac{1}{2} \sigma_{L}^{2}\Big{(}\frac{\partial^{2}v}{\partial\xi^{2}}+\frac{\partial v}{ \partial\xi}\Big{)}+\delta\Big{(}\frac{\partial v}{\partial\xi}+v\Big{)}=0, \quad\hat{\kappa}(t)<\xi<\hat{\eta}(t);\\ &-\frac{\partial v}{\partial t}+\frac{1}{2}\sigma_{H}^{2}\Big{(} \frac{\partial^{2}v}{\partial\xi^{2}}+\frac{\partial v}{\partial\xi}\Big{)}+ \delta\Big{(}\frac{\partial v}{\partial\xi}+v\Big{)}=0,\quad\xi>\hat{\eta}(t); \\ & v(\hat{\kappa}(t)+)=1,\quad\frac{\partial v}{\partial\xi}(\hat{ \kappa}(t)+)=0;\\ & v(\hat{\eta}(t)+)=v(\hat{\eta}(t)-)=\gamma,\quad\frac{\partial v }{\partial\xi}(\hat{\eta}(t)+)=\frac{\partial v}{\partial\xi}(\hat{\eta}(t)-).\end{aligned}\right. \tag{2.10}\] For convenience, we set \[c_{L}=\frac{2\delta}{\sigma_{L}^{2}},\qquad c_{H}=\frac{2\delta}{\sigma_{H}^{ 2}}.\] It follows from (2.3) that \(c_{L}<1\) and \(c_{H}>1\). ## 3 Traveling Wave Solution In this section, we will consider the steady state of (2.9), i.e. the traveling wave solution for the original problem. In addition to giving the asymptotic behavior of (2.9), the traveling wave equation is also useful for constructing sub-solutions. The traveling wave solution \(K\) satisfies \[\min\bigg{\{}\frac{1}{2}\sigma^{2}(K)\Big{(}\frac{dK}{d\xi^{2}}+\frac{dK}{d \xi}\Big{)}+\delta\Big{(}\frac{dK}{d\xi}+K\Big{)},\,1-K\bigg{\}}=0. 
\tag{3.1}\] Denoting the two free boundaries respectively by \(\kappa^{*}\) and \(\eta^{*}\), and assuming that the solution is sufficiently smooth, we may reformulate Equation (3.1) as the following free boundary problem \[\left\{\begin{aligned} &\frac{d^{2}K}{d\xi^{2}}+\frac{dK}{d\xi}+c_{ H}\Big{(}\frac{dK}{d\xi}+K\Big{)}=0,\quad\xi>\eta^{*},\\ &\frac{d^{2}K}{d\xi^{2}}+\frac{dK}{d\xi}+c_{L}\Big{(}\frac{dK}{d \xi}+K\Big{)}=0,\quad\kappa^{*}<\xi<\eta^{*},\\ & K(\kappa^{*}+)=1,\quad\frac{\partial K}{\partial\xi}(\kappa^{*} )=0,\\ & K(\eta^{*}+)=K(\eta^{*}-)=\gamma,\quad\frac{dK}{d\xi}(\eta^{*}+ )=\frac{dK}{d\xi}(\eta^{*}-),\\ & K(\xi)=1,\text{for }\xi<\kappa^{*},\,\text{and }\lim_{\xi\to+\infty}e^{\xi}K(\xi)=1,\end{aligned}\right. \tag{3.2}\] Note that we also add a growth condition at \(+\infty\) due to the financial nature of our problem. **Theorem 3.1**.: _System (3.2) has a unique solution \((K,\eta^{*},\kappa^{*})\) such that \(K\) belongs to \(C^{1}([\kappa^{*},+\infty))\) and the respective restrictions of \(K\) to \([\kappa^{*},\eta^{*}]\) and \([\eta^{*},+\infty]\) are \(C^{\infty}\)._ Proof.: It is elementary to solve the second order system in (3.2): \[K(\xi)=\begin{cases}e^{-\xi}+Be^{-c_{H}\xi},\;\xi>\eta^{*},\\ Ce^{-\xi}+De^{-c_{L}\xi},\;\kappa^{*}<\xi<\eta^{*}.\end{cases} \tag{3.3}\] From the boundary conditions at \(\kappa^{*}\), it comes \[Ce^{-\kappa^{*}}+De^{-c_{L}\kappa^{*}}=1,\text{ and }-Ce^{-\kappa^{*}}-c_{L}De^{-c_{L} \kappa^{*}}=0.\] This implies that \(C=-\frac{c_{L}}{1-c_{L}}e^{\kappa^{*}}\) and \(D=\frac{1}{1-c_{L}}e^{c_{L}\kappa^{*}}\). Then, from \(K(\eta^{*}-)=\gamma\), we have that \[-\frac{c_{L}}{1-c_{L}}e^{\kappa^{*}-\eta^{*}}+\frac{1}{1-c_{L}}e^{-c_{L}(\eta^ {*}-\kappa^{*})}=\gamma. \tag{3.4}\] Define the mapping \[\Psi(x):x\mapsto-\frac{c_{L}}{1-c_{L}}e^{-x}+\frac{1}{1-c_{L}}e^{-c_{L}x}, \tag{3.5}\] hence \(\Psi^{\prime}(x)=\frac{c_{L}}{1-c_{L}}(e^{-x}-e^{-c_{L}x})\). Since \(c_{L}<1\), we have that the mapping \(\Psi\) is decreasing on \([0,\infty)\). Since \(\Psi(0)=1\) and \(\lim_{x\to+\infty}\Psi(x)=0\), the transcendental equation (3.4) admits a unique positive solution \[\eta^{*}-\kappa^{*}=\Psi^{-1}(\gamma), \tag{3.6}\] The interface condition \(\left[\frac{dK}{d\xi}\right]_{\eta^{*}}=0\) yields that \[e^{-\eta^{*}}+c_{H}Be^{-c_{H}\eta^{*}}=-\frac{c_{L}}{1-c_{L}}e^{-(\eta^{*}- \kappa^{*})}+\frac{c_{L}}{1-c_{L}}e^{-c_{L}(\eta^{*}-\kappa^{*})}=\gamma-e^{-c _{L}(\eta^{*}-\kappa^{*})},\] where the last equality is due to (3.6). Combining with the condition \(\gamma=K(\eta^{*}+)=e^{-\eta^{*}}+Be^{-c_{H}\eta^{*}}\), we have that \[B=-\frac{1}{c_{H}-1}e^{-c_{L}(\eta^{*}-\kappa^{*})+c_{H}\eta^{*}}\text{ and }(c_{H}-1)e^{-\eta^{*}}=(c_{H}-1)\gamma+e^{-c_{L}(\eta^{*}-\kappa^{*})}.\] This implies that \[\eta^{*}=-\log\left(\gamma+\frac{1}{c_{H}-1}e^{-c_{L}\Psi^{-1}(\gamma)}\right). \tag{3.7}\] Thus, \(\kappa^{*}\), \(B,C\) and \(D\) are determined. Summarizing, it comes \[K(\xi)=\left\{\begin{array}{ll}e^{-\xi}+(\gamma-e^{-\eta^{*}})e^{-c_{H}(\xi -\eta^{*})},&\xi>\eta^{*},\\ -\frac{c_{L}}{1-c_{L}}e^{-(\xi-\kappa^{*})}+\frac{1}{1-c_{L}}e^{-c_{L}(\xi- \kappa^{*})},&\kappa^{*}<\xi<\eta^{*}.\end{array}\right. \tag{3.8}\] Some properties of \(K\) are needed in the sections below. We list them in the following proposition. 
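Before turning to those properties, the closed-form solution just derived lends itself to a quick numerical check. The following sketch, with illustrative parameter values of our own choosing (selected so that (2.2) and (2.3) hold), inverts \(\Psi\) by bisection to obtain \(\eta^{*}-\kappa^{*}\), evaluates (3.7) and (3.8), and verifies \(K(\kappa^{*})=1\) and \(K(\eta^{*})=\gamma\); it is a sanity check only, not part of the argument.

```python
import math

def traveling_wave(gamma, sigma_H, sigma_L, delta):
    """Closed-form traveling wave (3.8) with free boundaries from (3.6)-(3.7)."""
    cL, cH = 2 * delta / sigma_L**2, 2 * delta / sigma_H**2     # cL < 1 < cH by (2.3)
    Psi = lambda x: (-cL * math.exp(-x) + math.exp(-cL * x)) / (1 - cL)
    # invert the decreasing map Psi on [0, +inf) by bisection: Psi(d) = gamma
    lo, hi = 0.0, 1.0
    while Psi(hi) > gamma:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Psi(mid) > gamma else (lo, mid)
    d = 0.5 * (lo + hi)                                          # d = eta* - kappa*, see (3.6)
    eta = -math.log(gamma + math.exp(-cL * d) / (cH - 1))        # (3.7)
    kappa = eta - d
    def K(xi):
        if xi < kappa:
            return 1.0
        if xi <= eta:
            return (-cL * math.exp(-(xi - kappa)) + math.exp(-cL * (xi - kappa))) / (1 - cL)
        return math.exp(-xi) + (gamma - math.exp(-eta)) * math.exp(-cH * (xi - eta))
    return K, kappa, eta

K, kappa, eta = traveling_wave(gamma=0.8, sigma_H=0.2, sigma_L=0.4, delta=0.05)
print(kappa, eta, K(kappa), K(eta))   # K(kappa*) = 1 and K(eta*) = gamma up to round-off
```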
**Proposition 3.2**.: _(i) for \(\xi>\kappa^{*}\), \(\frac{dK}{d\xi}<0\), \(K+\frac{dK}{d\xi}>0\), and \(\frac{d^{2}K}{d\xi^{2}}+\frac{dK}{d\xi}<0\) if \(\xi\neq\eta^{*}\); (ii) \(\gamma<K(\xi)<1\) if \(\xi\in(\kappa^{*},\eta^{*})\) and \(K(\xi)<\gamma<1\) if \(\xi>\eta^{*}\); (iii) for \(\xi\geqslant\kappa^{*}\), \(K(\xi)\leqslant\min\{1,e^{-\xi}\}\); (iv) \(\eta^{*}\) is a decreasing function of \(\gamma\). Moreover, \(\lim_{\gamma\to 0}\eta^{*}=+\infty\) and \(\lim_{\gamma\to 1}\eta^{*}=-\log\frac{c_{H}}{c_{H}-1}\)._ Proof.: (i) It is straightforward to compute \[\frac{dK}{d\xi}=\left\{\begin{array}{ll}-e^{-\xi}-c_{H}(\gamma-e^{-\eta^{*}})e^{-c_{H}(\xi-\eta^{*})},&\xi>\eta^{*},\\ \frac{c_{L}}{1-c_{L}}e^{-(\xi-\kappa^{*})}-\frac{c_{L}}{1-c_{L}}e^{-c_{L}(\xi-\kappa^{*})},&\kappa^{*}<\xi<\eta^{*}.\end{array}\right.\] Since \(c_{L}<1\), it holds that \(\frac{dK}{d\xi}<0\) for \(\kappa^{*}<\xi<\eta^{*}\). For \(\xi>\eta^{*}\), we rewrite \[\frac{dK}{d\xi}=-e^{-\eta^{*}}e^{-(\xi-\eta^{*})}-c_{H}(\gamma-e^{-\eta^{*}})e^{-c_{H}(\xi-\eta^{*})}.\] With the notation from Theorem 3.1, we have that \[c_{H}Be^{-c_{H}\eta^{*}}=c_{H}(\gamma-e^{-\eta^{*}})\] and \[e^{-\eta^{*}}+c_{H}Be^{-c_{H}\eta^{*}}=-\frac{c_{L}}{1-c_{L}}e^{-(\eta^{*}-\kappa^{*})}+\frac{c_{L}}{1-c_{L}}e^{-c_{L}(\eta^{*}-\kappa^{*})}>0.\] Since \(c_{H}>1\), it holds that \(\frac{dK}{d\xi}<0\) for \(\xi>\eta^{*}\). Next, it comes \[K+\frac{dK}{d\xi}=\left\{\begin{array}{ll}(1-c_{H})(\gamma-e^{-\eta^{*}})e^{-c_{H}(\xi-\eta^{*})},&\xi>\eta^{*},\\ e^{-c_{L}(\xi-\kappa^{*})},&\kappa^{*}<\xi<\eta^{*},\end{array}\right.\] and \[\frac{dK}{d\xi}+\frac{d^{2}K}{d\xi^{2}}=\left\{\begin{array}{ll}(c_{H}^{2}-c_{H})(\gamma-e^{-\eta^{*}})e^{-c_{H}(\xi-\eta^{*})},&\xi>\eta^{*},\\ -c_{L}e^{-c_{L}(\xi-\kappa^{*})},&\kappa^{*}<\xi<\eta^{*}.\end{array}\right.\] Noting that \(e^{-\eta^{*}}=\gamma+\frac{1}{c_{H}-1}e^{-c_{L}\Psi^{-1}(\gamma)}>\gamma\), \(c_{L}<1\) and \(c_{H}>1\), we achieve the desired results. (ii) It follows immediately from (i). (iii) We know from (ii) that \(K(\xi)\leq 1\). On the one hand, thanks to (3.7), \(\gamma-e^{-\eta^{*}}<0\) hence \(K(\xi)<e^{-\xi}\) if \(\xi>\eta^{*}\) (see (3.8)). On the other hand, note that \(K+\frac{dK}{d\xi}>0\) implies that \(\xi\mapsto e^{\xi}K(\xi)\) is increasing, which indicates that \(K(\xi)<e^{-\xi}\) for \(\kappa^{*}<\xi<\eta^{*}\). (iv) Since \(\Psi^{-1}\) is decreasing with respect to \(\gamma\) and \(c_{H}>1\), it follows from (3.7) that \(\eta^{*}\) is decreasing with respect to \(\gamma\). It also holds that \(\lim_{\gamma\to 0}\Psi^{-1}(\gamma)=+\infty\) and \(\lim_{\gamma\to 1}\Psi^{-1}(\gamma)=0\), hence the result.

## 4 Penalized and Regularized Cauchy Problem

Problem (2.9) has singularities: at \(v=\gamma\) due to the indicator function in the definition of \(\sigma\); at \(v=1\) as in a usual obstacle problem; and at \(t=0\) because of the lack of regularity of the initial condition. To address these issues, we introduce \(H_{\varepsilon}\), \(\beta_{\varepsilon}\) and \(\psi_{\varepsilon}\) which depend upon a small positive parameter \(\varepsilon\). These smooth functions are chosen as follows. Let \(H(s)\) be the Heaviside function, i.e., \(H(s)=0\) for \(s<0\) and \(H(s)=1\) for \(s>0\).
Then, \(\sigma(v)\) in (2.9) reads \[\sigma(v)=\sigma_{H}+(\sigma_{L}-\sigma_{H})H(v-\gamma).\] First, we approximate \(H\) by a \(C^{\infty}\) function \(H_{\varepsilon}\) such that \[H_{\varepsilon}(s)=0\,\mbox{ for }s<-\varepsilon,\,H_{\varepsilon}(s)=1\mbox{ for }s>0,\,0\leqslant H_{ \varepsilon}^{\prime}(s)\leqslant 2/\varepsilon\ \mbox{ for }-\infty<s<\infty.\] Second, let \(\beta_{\varepsilon}(y)\) be a smooth penalty function satisfying the following condition: \[\beta_{\varepsilon}(y)\in C^{\infty}(\mathbb{R}),\,\beta_{\varepsilon}(y) \geqslant 0,\,\beta_{\varepsilon}(y)=0\mbox{ if }y\leqslant-\varepsilon;\] \[\beta_{\varepsilon}(0)=C_{0}\geqslant 2\delta;\,\beta_{\varepsilon}^{\prime}(y) \geqslant 0;\,\,\beta_{\varepsilon}^{\prime\prime}(y)\geqslant 0;\] \[\lim_{\varepsilon\to 0}\beta_{\varepsilon}(y)=0\mbox{ if }y<0;\,\mbox{and}\,\,\lim_{ \varepsilon\to 0}\beta_{\varepsilon}(y)=+\infty\mbox{ if }y>0.\] Let \(\varepsilon_{\beta}>0\) be the unique solution of \(\beta_{\varepsilon}(-\frac{\varepsilon_{\beta}}{2})=\delta\). It is easy to see that \(\varepsilon_{\beta}\to 0\) when \(\varepsilon\to 0\). Finally, let us define \(\psi_{\varepsilon}(y):=1+\varepsilon_{\beta}\psi(\frac{y-1}{\varepsilon_{ \beta}})\), where \(\psi\in C^{\infty}\), \(\psi(y)=0\) for \(y\geqslant 1/2\); \(\psi(y)=y\) for \(y<-1/2\) and \(\psi(y)\leqslant y\), \(0\leqslant\psi^{\prime}(y)\leqslant 1\), \(\psi^{\prime\prime}(y)\leqslant 0\) for \(-1/2\leqslant y\leqslant 1/2\). From the construction of \(\psi_{\varepsilon}\), we have the following lemma. **Lemma 4.1**.: _(i) For \(y\geqslant 0\), \(0\leqslant y\psi_{\varepsilon}^{\prime}(y)\leqslant(1+\varepsilon_{\beta})\); (ii) \(0\leqslant\psi_{\varepsilon}(y)-y\psi_{\varepsilon}^{\prime}(y)\leqslant 1\)._ Proof.: (i) It is easy to see that \(y\psi_{\varepsilon}^{\prime}(y)=y\psi^{\prime}(\frac{y-1}{\varepsilon_{\beta}})\), hence positive for \(y\geqslant 0\). Note that \(\psi^{\prime}(\frac{y-1}{\varepsilon_{\beta}})=0\) for \(y\geqslant 1+\frac{\varepsilon_{\beta}}{2}\) and \(\psi^{\prime}(\frac{y-1}{\varepsilon_{\beta}})\leqslant 1\) for \(y\leqslant 1+\frac{\varepsilon_{\beta}}{2}\). Then, we shall have the second inequality. (ii) Differentiating \(\psi_{\varepsilon}(y)-y\psi_{\varepsilon}^{\prime}(y)\), we have that \[(\psi_{\varepsilon}(y)-y\psi_{\varepsilon}^{\prime}(y))^{\prime}=-\frac{y}{ \varepsilon_{\beta}}\psi^{\prime\prime}(\frac{y-1}{\varepsilon_{\beta}}). \tag{4.1}\] This implies that the minimum is achieved at \(y=0\). Thus, \[\psi_{\varepsilon}(y)-y\psi_{\varepsilon}^{\prime}(y)\geqslant\psi_{ \varepsilon}(0)=0.\] It is easy to verify that \(\psi_{\varepsilon}(y)-y\psi_{\varepsilon}^{\prime}(y)=1\) for \(y<1-\frac{\varepsilon_{\beta}}{2}\) or \(y>1+\frac{\varepsilon_{\beta}}{2}\). From (4.1), we see that \(\psi_{\varepsilon}(y)-y\psi_{\varepsilon}^{\prime}(y)\leqslant 1\) for any \(y\). 
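As a concrete illustration, one admissible choice of the three regularizing functions is sketched below. The specific formulas are our own illustrative choices (only \(C^{1}\), which is enough for a numerical experiment, whereas the construction above requires \(C^{\infty}\)), they satisfy the inequalities listed above, and the expression for \(\varepsilon_{\beta}\) is derived from \(\beta_{\varepsilon}(-\varepsilon_{\beta}/2)=\delta\) for this particular \(\beta_{\varepsilon}\). The assertions at the end are numerical counterparts of Lemma 4.1 for this choice.

```python
import numpy as np

delta, C0, eps = 0.05, 0.10, 1e-2            # C0 = beta_eps(0) >= 2*delta

def H_eps(s):
    """Smoothed Heaviside: 0 for s < -eps, 1 for s > 0, slope at most 1.5/eps <= 2/eps."""
    u = np.clip((s + eps) / eps, 0.0, 1.0)
    return 3 * u**2 - 2 * u**3

def beta_eps(y):
    """Penalty: vanishes for y <= -eps, nonnegative, nondecreasing, convex, beta_eps(0) = C0."""
    return C0 * np.clip((y + eps) / eps, 0.0, None) ** 2

# for this beta_eps, solving beta_eps(-eps_beta/2) = delta gives eps_beta explicitly
eps_beta = 2 * eps * (1 - np.sqrt(delta / C0))

def psi_eps(y):
    """Smoothed cap psi_eps(y) = 1 + eps_beta * psi((y - 1)/eps_beta), with psi(z) = z
    for z < -1/2, psi(z) = 0 for z > 1/2, and a concave quadratic in between."""
    z = (y - 1.0) / eps_beta
    psi = np.where(z >= 0.5, 0.0, np.where(z <= -0.5, z, -0.5 * (0.5 - z) ** 2))
    return 1.0 + eps_beta * psi

# numerical check of Lemma 4.1 on a grid of nonnegative y:
# 0 <= y*psi_eps'(y) <= 1 + eps_beta  and  0 <= psi_eps(y) - y*psi_eps'(y) <= 1
y = np.linspace(0.0, 3.0, 30001)
dpsi = np.gradient(psi_eps(y), y)
assert np.all((y * dpsi >= -1e-6) & (y * dpsi <= 1 + eps_beta + 1e-6))
assert np.all((psi_eps(y) - y * dpsi >= -1e-6) & (psi_eps(y) - y * dpsi <= 1 + 1e-6))
```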
Now, for \(\varepsilon\) small, we consider the following approximated Cauchy problem: \[\mathcal{L}_{\varepsilon}[v_{\varepsilon}]=-\frac{\partial v_{ \varepsilon}}{\partial t}+\frac{1}{2}\sigma_{\varepsilon}^{2}(v_{\varepsilon} )\Big{(}\frac{\partial^{2}v_{\varepsilon}}{\partial\xi^{2}}+\frac{\partial v _{\varepsilon}}{\partial\xi}\Big{)}+\delta\Big{(}\frac{\partial v_{ \varepsilon}}{\partial\xi}+v_{\varepsilon}\Big{)}=\beta_{\varepsilon}(v_{ \varepsilon}-1), \tag{4.2}\] where \((\xi,t)\in\Omega_{T}=\mathbb{R}\times(0,T]\), \(T>0\), and \[\sigma_{\varepsilon}(v_{\varepsilon})=\sigma_{H}+(\sigma_{L}- \sigma_{H})H_{\varepsilon}(v_{\varepsilon}-\gamma), \tag{4.3}\] together with the initial condition \[v_{\varepsilon}(\xi,0)=\psi_{\varepsilon}(e^{-\xi}),\quad\xi\in \mathbb{R}. \tag{4.4}\] Hence, from the definition of \(\psi_{\varepsilon}\) in the previous, we have that \(v_{\varepsilon}(\xi,0)=1\) for \(\xi\leqslant-\log(1+\frac{\varepsilon_{\beta}}{2})\); \(v_{\varepsilon}(\xi,0)=e^{-\xi}\) for \(\xi\geqslant-\log(1-\frac{\varepsilon_{\beta}}{2})\). We have the following existence result: **Theorem 4.2**.: _For \(\varepsilon>0\) fixed, problem (4.2)-(4.4) has a unique bounded classical solution \(v_{\varepsilon}\). Moreover, \(v_{\varepsilon}\in C^{\infty}(\mathbb{R}\times[0,T])\)._ Proof.: First, we turn Equation (4.2) into a quasilinear equation whose principal part is in divergence form: \[\frac{\partial v_{\varepsilon}}{\partial t}-\frac{\partial}{ \partial\xi}a\big{(}\xi,v_{\varepsilon},\frac{\partial v_{\varepsilon}}{ \partial\xi}\big{)}+A\big{(}\xi,v_{\varepsilon},\frac{\partial v_{ \varepsilon}}{\partial\xi}\big{)}=0, \tag{4.5}\] with \[a(\xi,v,p)=\frac{1}{2}\sigma_{\varepsilon}^{2}(v)p,\quad A(\xi,v,p)=\beta_{ \varepsilon}(v-1)-\delta v-\big{(}\frac{1}{2}\sigma_{\varepsilon}^{2}(v)+ \delta\big{)}p+\sigma_{\varepsilon}\sigma_{\varepsilon}^{\prime}(v)p^{2}.\] One can check that \(a\) and \(A\) satisfy the assumptions of [24, Chapter V, Theorem 8.1]. Thus, there exists a unique bounded solution \(v_{\varepsilon}\in C^{2+\alpha,1+\frac{\alpha}{2}}(\mathbb{R}\times[0,T])\) for any \(0<\alpha<1\).3 Then, \(\sigma_{\varepsilon}(v_{\varepsilon})\) and \(\beta_{\varepsilon}(v_{\varepsilon})\) belong to the same function class. Further Holder regularity follows from classical results for linear problems (see [24, Chapter IV, Theorem 5.1], [33, Theorem 5.1.10]), which yields that \(v_{\varepsilon}\in C^{4+\alpha,2+\frac{\alpha}{2}}(\mathbb{R}\times[0,T])\). The result follows by bootstrapping. Footnote 3: For usual parabolic Hölder spaces, see, e.g., [24, Chapter 1, Section 1],[33, Section 5.1]). **Remark 4.3**.: _From the definition of \(H_{\varepsilon}\) and \(\beta_{\varepsilon}\), it is easy to see that \(\sigma_{\varepsilon}(v_{\varepsilon})=\sigma_{L}\) when \(v_{\varepsilon}>\gamma\) and \(\beta_{\varepsilon}(v_{\varepsilon})=0\) when \(v_{\varepsilon}<1-\varepsilon\). Thus, when \(\varepsilon\) is small enough, at least one of these two equations holds._ ### Estimates on the approximating solution We now proceed to derive necessary estimates on \(v_{\varepsilon}\) independent of \(\varepsilon\), via the the maximum principle for parabolic equations in unbounded domains (see, e.g., (15, Chapter 2), (36, Chapter 7)). These properties will be inherited by the limit \(v\) when taking \(\varepsilon\to 0\) and, thus, are crucial for the analysis of the bond value and free boundaries. 
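For readers who want to see the approximating problem in action, here is a rough explicit finite-difference sketch of (4.2)-(4.4) on a truncated interval. The grid, the artificial Dirichlet values at the cut-offs, and the \(C^{1}\) surrogates for \(H_{\varepsilon}\), \(\beta_{\varepsilon}\), \(\psi_{\varepsilon}\) are illustrative choices of ours (the same ones as in the previous sketch); no accuracy near the free boundaries is claimed.

```python
import numpy as np

sigma_H, sigma_L, delta, gamma = 0.2, 0.4, 0.05, 0.8        # (2.1)-(2.3) hold
eps, C0 = 1e-2, 0.10
eps_beta = 2 * eps * (1 - np.sqrt(delta / C0))

H_eps    = lambda s: 3 * np.clip((s + eps) / eps, 0, 1)**2 - 2 * np.clip((s + eps) / eps, 0, 1)**3
beta_eps = lambda y: C0 * np.clip((y + eps) / eps, 0, None)**2
sigma    = lambda v: sigma_H + (sigma_L - sigma_H) * H_eps(v - gamma)          # (4.3)

xi = np.linspace(-4.0, 6.0, 801)                                               # truncated domain
h = xi[1] - xi[0]
z = (np.exp(-xi) - 1.0) / eps_beta                                             # initial datum (4.4)
v = 1.0 + eps_beta * np.where(z >= 0.5, 0.0, np.where(z <= -0.5, z, -0.5 * (0.5 - z)**2))

T = 1.0
dt = 0.4 * h**2 / sigma_L**2                                                   # explicit-scheme stability
for _ in range(int(T / dt)):
    v_x  = (v[2:] - v[:-2]) / (2 * h)
    v_xx = (v[2:] - 2 * v[1:-1] + v[:-2]) / h**2
    rhs = 0.5 * sigma(v[1:-1])**2 * (v_xx + v_x) + delta * (v_x + v[1:-1]) - beta_eps(v[1:-1] - 1.0)
    v[1:-1] += dt * rhs
    v[0], v[-1] = 1.0, np.exp(-xi[-1])      # crude Dirichlet values at the artificial cut-offs

# the profile should stay between 0 and min(1, e^{-xi}) up to discretization error (Lemma 4.4)
print(float(v.min()), float((v - np.minimum(1.0, np.exp(-xi))).max()))
```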
**Lemma 4.4**.: _For \(\varepsilon\) sufficiently small, it holds in \(\mathbb{R}\times[0,T]\):_ \[0\leqslant v_{\varepsilon}\leqslant\min(1,e^{-\xi}).\] Proof.: Recall that we have introduced a smooth cut-off function \(\psi\) in the beginning of this section. Define a function \(h\) as \(h(y):=\varepsilon\psi(\frac{y-\frac{1}{2}}{\varepsilon})+\frac{1}{2}\). Then, we see that \(h(y)=\frac{1}{2}\) for \(y\geqslant\frac{1}{2}(1+\varepsilon)\); \(h(y)=y\) for \(y\leqslant\frac{1}{2}(1-\varepsilon)\) and \(0\leqslant h^{\prime}(y)\leqslant 1\), \(h^{\prime\prime}(y)\leqslant 0\) for \(y\in\mathbb{R}\). Thus, it holds that \(h(y)\geqslant 0\) if and only if \(y\geqslant 0\). Furthermore, one can directly check that \(|\frac{yh^{\prime}(y)}{h(y)}|\) is bounded. Let \(w=h(v_{\varepsilon})\) and we have that \[\mathcal{L}_{\varepsilon}[w]=h^{\prime}(v_{\varepsilon})\beta_{\varepsilon}( v_{\varepsilon}-1)+\frac{1}{2}\sigma_{\varepsilon}^{2}(v_{\varepsilon})h^{ \prime\prime}(v_{\varepsilon})(\frac{\partial v_{\varepsilon}}{\partial\xi}) ^{2}+\delta(w-h^{\prime}(v_{\varepsilon})v_{\varepsilon}),\] which can be rewritten as \[\mathcal{L}_{\varepsilon}[w]-\delta(1-\frac{v_{\varepsilon}h^{\prime}(v_{ \varepsilon})}{h(v_{\varepsilon})})w=h^{\prime}(v_{\varepsilon})\beta_{ \varepsilon}(v_{\varepsilon}-1)+\frac{1}{2}\sigma_{\varepsilon}^{2}(v_{ \varepsilon})h^{\prime\prime}(v_{\varepsilon})(\frac{\partial v_{\varepsilon} }{\partial\xi})^{2}.\] Since \(\beta_{\varepsilon}(v_{\varepsilon}-1)=0\) when \(v_{\varepsilon}<1-\varepsilon\) and \(h^{\prime}(v_{\varepsilon})=0\) when \(v_{\varepsilon}>\frac{1}{2}(1+\varepsilon)\), we see that \(h^{\prime}(v_{\varepsilon})\beta_{\varepsilon}(v_{\varepsilon}-1)=0\) if \(\varepsilon\) is sufficiently small. Noting that \(h^{\prime\prime}\leqslant 0\), it holds that \(\mathcal{L}_{\varepsilon}[w]-\delta(1-\frac{v_{\varepsilon}h^{\prime}(v_{ \varepsilon})}{h(v_{\varepsilon})})w\leqslant 0\). As the coefficient of zeroth order term is bounded, one can apply maximum principle to get that \(w\geqslant 0\), which is equivalent to \(v_{\varepsilon}\geqslant 0\). Next, set \(w=v_{\varepsilon}-1\). Then, \(w\) verifies \[\mathcal{L}_{\varepsilon}[w]=\beta_{\varepsilon}(w)-\delta=\frac{\beta_{ \varepsilon}(w)-\beta_{\varepsilon}(0)}{w}w+\beta_{\varepsilon}(0)-\delta.\] From the definition of \(\beta_{\varepsilon}\), it holds that \(\beta_{\varepsilon}(0)=C_{0}\geqslant 2\delta\). Hence, this leads to \(w\leqslant 0\) according again to the maximum principle. Finally, Let \(w=v_{\varepsilon}-e^{-\xi}\). Then, it holds that \[\mathcal{L}_{\varepsilon}[w]=\beta_{\varepsilon}(v_{\varepsilon}-1)=\frac{ \beta_{\varepsilon}(v_{\varepsilon}-1)-\beta_{\varepsilon}(e^{-\xi}-1)}{w}w+ \beta_{\varepsilon}(e^{-\xi}-1).\] Noting that \(w(\xi,0)\leqslant 0\) and \(\beta_{\varepsilon}(e^{-\xi}-1)\geqslant 0\), we deduce that \(w\leqslant 0\) according to the maximum principle. 
**Lemma 4.5**.: _It holds in \(\Omega_{T}\):_ \[-(1+\varepsilon_{\beta})e^{\delta t}\leqslant\frac{\partial v_{\varepsilon}}{ \partial\xi}<0.\] Proof.: Differentiating (4.2), it comes \[\mathcal{L}_{\varepsilon}\Big{[}\frac{\partial v_{\varepsilon}}{\partial\xi} \Big{]}=-\sigma_{\varepsilon}(v_{\varepsilon})\,\sigma_{\varepsilon}^{\prime}( v_{\varepsilon})\frac{\partial v_{\varepsilon}}{\partial\xi}\,(\frac{\partial^{2}v_{ \varepsilon}}{\partial\xi^{2}}+\frac{\partial v_{\varepsilon}}{\partial\xi})+ \beta_{\varepsilon}^{\prime}(v_{\varepsilon}-1)\frac{\partial v_{\varepsilon}}{ \partial\xi}.\] At \(t=0\), \(\frac{\partial v_{\varepsilon}}{\partial\xi}=-e^{-\xi}\psi_{\varepsilon}^{ \prime}(e^{-\xi})\), which lies between \(-(1+\varepsilon_{\beta})\) and \(0\) from the proof of Lemma 4.1. By the maximum principle, one can deduce that \(-(1+\varepsilon_{\beta})e^{\delta t}\leqslant\frac{\partial v_{\varepsilon}}{ \partial\xi}\leqslant 0\). Furthermore, the strict inequality in \(\Omega_{T}\) holds due to strong maximum principle. **Lemma 4.6**.: _It holds in \(\Omega_{T}\):_ \[1\geqslant\frac{\partial v_{\varepsilon}}{\partial\xi}+v_{\varepsilon}>0.\] Proof.: By Lemma 4.4 and 4.5, we have the first inequality of the lemma. Then, set \(w=\frac{\partial v_{\varepsilon}}{\partial\xi}+v_{\varepsilon}\), \(w(\xi,0)=-e^{-\xi}\psi_{\varepsilon}^{\prime}(e^{-\xi})+\psi_{\varepsilon}(e^ {-\xi})\). It follows from Lemma 4.1 that \(w\geqslant 0\) at \(t=0\). Also, \(w\) verifies \[\mathcal{L}_{\varepsilon}[w]+\sigma_{\varepsilon}(v_{\varepsilon})\sigma_{ \varepsilon}^{\prime}(v_{\varepsilon})\,\frac{\partial v_{\varepsilon}}{ \partial\xi}\frac{\partial w}{\partial\xi}=\beta_{\varepsilon}^{\prime}(v_{ \varepsilon}-1)(w-v_{\varepsilon})+\beta_{\varepsilon}(v_{\varepsilon}-1). \tag{4.6}\] Using Taylor expansion of \(\beta_{\varepsilon}(-\varepsilon)\) at \(y\), one has that \[0=\beta_{\varepsilon}(-\varepsilon)=\beta_{\varepsilon}(y)-\beta_{ \varepsilon}^{\prime}(y)(y+\varepsilon)+\frac{1}{2}\beta_{\varepsilon}^{ \prime\prime}(\theta)(y+\varepsilon)^{2}.\] That is, \[\beta_{\varepsilon}(y)-(y+\varepsilon)\beta_{\varepsilon}^{\prime}(y) \leqslant 0.\] Replacing \(y\) by \(v_{\varepsilon}(\xi)-1\) in the above formula, we have, \[\beta_{\varepsilon}(v_{\varepsilon}-1)-(v_{\varepsilon}-1+\varepsilon)\beta_{ \varepsilon}^{\prime}(v_{\varepsilon}-1)\leqslant 0.\] Thus, (4.6) reads \[-\mathcal{L}_{\varepsilon}[w]-\sigma_{\varepsilon}(v_{\varepsilon} )\sigma_{\varepsilon}^{\prime}(v_{\varepsilon})\,\frac{\partial v_{\varepsilon }}{\partial\xi}\frac{\partial w}{\partial\xi}+\beta_{\varepsilon}^{\prime}(v_ {\varepsilon}-1)w\] \[=\beta_{\varepsilon}^{\prime}(v_{\varepsilon}-1)v_{\varepsilon} -\beta_{\varepsilon}(v_{\varepsilon}-1)\geqslant v_{\varepsilon}\beta_{ \varepsilon}^{\prime}(v_{\varepsilon}-1)-(v_{\varepsilon}-1+\varepsilon)\beta_ {\varepsilon}^{\prime}(v_{\varepsilon}-1)=(1-\varepsilon)\beta_{\varepsilon}^ {\prime}(v_{\varepsilon}-1)\geqslant 0.\] By the strong maximum principle, \(w>0\) in \(\Omega_{T}\). **Lemma 4.7**.: _It holds in \(\mathbb{R}\times[0,T]\):_ \[\frac{\partial^{2}v_{\varepsilon}}{\partial\xi^{2}}+\frac{\partial v_{ \varepsilon}}{\partial\xi}\leqslant 0.\] Proof.: At \(t=0\), \(\frac{\partial^{2}v_{\varepsilon}}{\partial\xi^{2}}+\frac{\partial v_{ \varepsilon}}{\partial\xi}=e^{-\xi}\psi_{\varepsilon}^{\prime\prime}(e^{-\xi})\) is non-positive. 
Now, consider the function \(w=\frac{\partial v_{\varepsilon}}{\partial t}-\delta\big{(}\frac{\partial v_{ \varepsilon}}{\partial\xi}+v_{\varepsilon}\big{)}\). According to Remark 4.3, two cases must be distinguished. **Case 1:**\(\beta_{\varepsilon}=0\) \[\mathcal{L}_{\varepsilon}[w]+\sigma_{\varepsilon}(v_{\varepsilon}) \sigma_{\varepsilon}^{\prime}(v_{\varepsilon})\Big{(}\frac{\partial^{2}v_{ \varepsilon}}{\partial\xi^{2}}+\frac{\partial v_{\varepsilon}}{\partial\xi} \Big{)}\Big{(}\frac{\partial v_{\varepsilon}}{\partial t}-\delta\frac{ \partial v_{\varepsilon}}{\partial\xi}\Big{)}\] \[=\mathcal{L}_{\varepsilon}[w]+\frac{2\sigma_{\varepsilon}^{ \prime}(v_{\varepsilon})}{\sigma_{\varepsilon}(v_{\varepsilon})}(w+\delta v_{ \varepsilon})w=0. \tag{4.7}\] **Case 2:**\(\sigma_{\varepsilon}=\sigma_{L}\) \[\mathcal{L}_{\varepsilon}[w]-\beta_{\varepsilon}^{\prime}w=\beta_{\varepsilon }^{\prime}\delta v_{\varepsilon}-\delta\beta_{\varepsilon}\geqslant\beta_{ \varepsilon}^{\prime}\delta v_{\varepsilon}-\delta(v_{\varepsilon}-1+ \varepsilon)\beta_{\varepsilon}^{\prime}=\delta(1-\varepsilon)\beta_{ \varepsilon}^{\prime}\geqslant 0, \tag{4.8}\] where \(\beta_{\varepsilon}^{\prime}=\beta^{\prime}(v_{\varepsilon}-1)\) and the first inequality is due to the convexity of \(\beta_{\varepsilon}\). Combining the two cases, we have \[\mathcal{L}_{\varepsilon}[w]+\Big{(}\frac{2\sigma_{\varepsilon}^{\prime}(v_{ \varepsilon})}{\sigma_{\varepsilon}(v_{\varepsilon})}(w+\delta v_{\varepsilon })1_{\{\beta_{\varepsilon}=0\}}-\beta_{\varepsilon}^{\prime}1_{\{\sigma_{ \varepsilon}=\sigma_{L}\}}\Big{)}w\geqslant 0.\] Then, by the maximum principle, \(w\leqslant 0\) **Lemma 4.8**.: _It holds in \(\mathbb{R}\times(0,T]\):_ \[\frac{\partial v_{\varepsilon}}{\partial t}<0.\] Proof.: Set \(w=\frac{\partial v_{\varepsilon}}{\partial t}\). Then, we see that \[\mathcal{L}_{\varepsilon}[w]=-\sigma_{\varepsilon}(v_{e})\sigma_{\varepsilon} ^{\prime}(v_{e})(\frac{\partial^{2}v_{\varepsilon}}{\partial\xi^{2}}+\frac{ \partial v_{\varepsilon}}{\partial\xi})w+\beta_{\varepsilon}^{\prime}(v_{ \varepsilon}-1)w. \tag{4.9}\] Because \(v_{\varepsilon}(\xi,0)=\psi_{\varepsilon}(e^{-\xi})\), we have that \[w(\xi,0)=\frac{1}{2}\sigma_{\varepsilon}^{2}(v_{\varepsilon})e^{-2\xi}\psi_{ \varepsilon}^{\prime\prime}(e^{-\xi})+\delta\big{(}\psi_{\varepsilon}(e^{- \xi})-e^{-\xi}\psi_{\varepsilon}^{\prime}(e^{-\xi})\big{)}-\beta_{\varepsilon }(\psi_{\varepsilon}(e^{-\xi})-1). \tag{4.10}\] Since \(\psi_{\varepsilon}^{\prime\prime}(\cdot)\leqslant 0\), the first term is negative. Then, it is easy to check that when \(e^{-\xi}<1-\frac{\varepsilon_{\beta}}{2}\), the second term is zero. Hence, \(w(\xi,0)\leqslant 0\) in this case. Now, it remains only to check the case \(e^{-\xi}\geqslant 1-\frac{\varepsilon_{\beta}}{2}\). From Lemma 4.1 and monotonicity of \(\beta_{\varepsilon}\), we have that \[\delta(\psi_{\varepsilon}(e^{-\xi})-e^{-\xi}\psi_{\varepsilon}^{\prime}(e^{- \xi}))-\beta_{\varepsilon}(\psi_{\varepsilon}(e^{-\xi})-1)\leqslant\delta- \beta_{\varepsilon}(-\frac{\varepsilon_{\beta}}{2}).\] According to our choice of \(\varepsilon_{\beta}\), we see that the above term is non-positive. Thus, we proved that \(w(\xi,0)\leqslant 0\), which yields the desired result thanks to the strong maximum principle. 
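The sign and monotonicity information collected in Lemmas 4.5-4.8 is easy to probe numerically. The helper below is a small, self-contained check of discrete counterparts of these properties on two consecutive time slices of a profile (for instance, slices produced by the explicit scheme sketched above); the tolerances and the trivial demonstration at the end are our own.

```python
import numpy as np

def check_monotonicity(xi, v_old, v_new, tol=1e-6):
    """Discrete counterparts of Lemmas 4.5-4.8 for two consecutive time slices
    v_old = v(., t) and v_new = v(., t + dt) on the grid xi."""
    dv = np.diff(v_new) / np.diff(xi)                         # one-sided d v / d xi
    return {
        "dv/dxi <= 0                (Lemma 4.5)": bool(np.all(dv <= tol)),
        "v + dv/dxi >= 0            (Lemma 4.6)": bool(np.all(v_new[:-1] + dv >= -tol)),
        "e^xi dv/dxi non-increasing (Lemma 4.7)": bool(np.all(np.diff(np.exp(xi[:-1]) * dv) <= tol)),
        "dv/dt <= 0                 (Lemma 4.8)": bool(np.all(v_new - v_old <= tol)),
    }

# trivial illustration on the (unsmoothed) initial profile min(1, e^{-xi})
xi = np.linspace(-2.0, 4.0, 601)
v0 = np.minimum(1.0, np.exp(-xi))
print(check_monotonicity(xi, v0, v0))
```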
**Lemma 4.9**.: _There are positive constants \(c_{1},C_{2}\) and \(C_{3}\), independent of \(\varepsilon\), such that it holds in \(\mathbb{R}\times(0,T]\)_ \[\frac{\partial v_{\varepsilon}}{\partial t}\geqslant-C_{3}-\frac{C_{2}}{ \sqrt{t}}\exp\Big{(}-c_{1}\frac{\xi^{2}}{t}\Big{)}.\] Proof.: Since \(v_{\varepsilon}(0,0)=1>\gamma\), and by Holder continuity of the solution (see Theorem 4.2), there exists a \(\rho>0\), independent of \(\varepsilon\), such that \[v_{\varepsilon}(x,t)>(1+\gamma)/2\;\;\text{in}\;B_{\rho},\] where \[B_{\rho}=\left\{(\xi,t),\,|\xi|\leqslant\rho,\;0\leqslant t\leqslant\rho^{2} \right\}.\] Thus, for \(\varepsilon\) small enough such that \(\varepsilon<(1-\gamma)/2\), \(\sigma_{\varepsilon}\equiv\sigma_{L}\) in \(B_{\rho}\). We observe that, in \(B_{\rho}\), the problem is reminiscent of a vanilla American option, which has a lower estimate (see, e.g., [17]) \[\frac{\partial v_{\varepsilon}}{\partial t}\geqslant-C_{2}-\frac{C_{2}}{ \sqrt{t}}\exp\Big{(}-c_{1}\frac{\xi^{2}}{t}\Big{)}\text{ in }B_{\rho}. \tag{4.11}\] Let us refer to Lemma 4.8 for the notation \(w=\frac{\partial v_{\varepsilon}}{\partial t}\). From (4.1), it is easy to verify that \(w(\xi,0)\) is uniformly bounded from below on \(|\xi|\geqslant\rho\). Combining with (4.11), there exists \(C_{3}>0\) such that \(w(x,t)\geqslant-C_{3}\) on \(\{|\xi|\geqslant\rho,t=0\}\cup\{|\xi|=\rho,0\leqslant t\leqslant\rho^{2}\} \cup\{|\xi|\leqslant\rho,t=\rho^{2}\}\). The Maximum Principle (see Lemma 4.8) yields that \(w(\xi,t)\geqslant-C_{3}\) in \(\Omega_{T}\setminus B_{\rho}\). Together with (4.11), we get the desired result. As an immediate corollary, we have **Lemma 4.10**.: _There are positive constants \(C_{4},C_{5}\) and \(C_{6}\), independent of \(\varepsilon\), such that it holds in \(\mathbb{R}\times(0,T]\)_ \[-C_{4}-\frac{C_{5}}{\sqrt{t}}\exp\Big{(}-c_{1}\frac{\xi^{2}}{t}\Big{)} \leqslant\frac{\partial^{2}v_{\varepsilon}}{\partial\xi^{2}}\leqslant C_{6}.\] ### The approximating transit boundary Let us denote by \(\eta_{\varepsilon}(t)\) the approximating transit boundary, which is implicitely defined by the equation \[v_{\varepsilon}(\eta_{\varepsilon}(t),t)=\gamma. \tag{4.12}\] We will construct the curve \(t\mapsto\eta_{\varepsilon}(t)\) via the Implicit Function Theorem. To begin with, we give a lower bound for \(v_{\varepsilon}\). From Lemma 4.4, it holds that \(v_{\varepsilon}(\xi,t)\leqslant\gamma-\varepsilon\) when \(\xi\geqslant\log\frac{1}{\gamma-\varepsilon}\). This implies that \(\sigma_{\varepsilon}=\sigma_{H}\) when \(\xi\geqslant\log\frac{1}{\gamma-\varepsilon}\) and \(\eta_{\varepsilon}(t)\leqslant\log\frac{1}{\gamma}\). Then, we give a lower bound for \(v_{\varepsilon}\). **Lemma 4.11**.: _Let \((\tilde{K},\tilde{\eta}^{*},\tilde{\kappa}^{*})\) be the solution of (3.2) as constructed in Theorem 3.1 with \(\gamma\) replaced by \(\tilde{\gamma}\). Choose \(\tilde{\gamma}\) properly such that \(\tilde{\eta}^{*}=\log\frac{2}{\gamma}\). Then, we have that \(v_{\varepsilon}\geqslant\tilde{K}-(\varepsilon\vee\varepsilon_{\beta})e^{ \delta t}\) when \(\varepsilon<\frac{\gamma}{2}\)._ Proof.: From Proposition 3.2 (iv), \(\tilde{\gamma}\) is well defined. We can rewrite that \[\frac{1}{2}\sigma_{2}^{2}\Big{(}\frac{d^{2}\tilde{K}}{d\xi^{2}}+\frac{d\tilde {K}}{d\xi}\Big{)}+\delta\Big{(}\frac{d\tilde{K}}{d\xi}+\tilde{K}\Big{)}=\delta 1 _{\{\xi\leqslant\tilde{\kappa}^{*}\}},\] with \(\sigma_{2}:=\sigma_{H}1_{\{\xi\geqslant\tilde{\eta}^{*}\}}+\sigma_{L}1_{\{\xi <\tilde{\eta}^{*}\}}\). 
Let \(w=v_{\varepsilon}-(\tilde{K}-(\varepsilon\vee\varepsilon_{\beta})e^{\delta t})\). Then, it holds that \[\mathcal{L}_{\varepsilon}[w]=\beta_{\varepsilon}(v_{\varepsilon}-1)-\delta 1_{\{\xi\leqslant\tilde{\kappa}^{*}\}}+\frac{1}{2}( \sigma_{2}^{2}-\sigma_{\varepsilon}^{2})\Big{(}\frac{d^{2}\tilde{K}}{d\xi^{2} }+\frac{d\tilde{K}}{d\xi}\Big{)}.\] Since we choose \(\tilde{\eta}^{*}=\log\frac{2}{\gamma}\), it holds that \(\sigma_{\varepsilon}^{2}\leqslant\sigma_{2}^{2}\). Combining with the fact that \(\frac{d^{2}\tilde{K}}{d\xi^{2}}+\frac{d\tilde{K}}{d\xi}\leqslant 0\), we see that the last term on the right hand side is non-positive. Since \(\tilde{K}\leqslant\min\{1,e^{-\xi}\}\) (see Proposition 3.2 (iii)), \(\beta_{\varepsilon}(\tilde{K}-(\varepsilon\vee\varepsilon_{\beta})e^{\delta t }-1)\leqslant\beta_{\varepsilon}(-\varepsilon)=0\). Thus, \[\mathcal{L}_{\varepsilon}[w]-\frac{\beta_{\varepsilon}(v_{\varepsilon}-1)- \beta_{\varepsilon}(\tilde{K}-\varepsilon\vee\varepsilon_{\beta}-1)}{w}w \leqslant 0.\] At \(t=0\), \(v_{\varepsilon}(\xi,0)=\psi_{\varepsilon}(e^{-\xi})\). It is easy to see that \(\psi_{\varepsilon}(e^{-\xi})=e^{-\xi}\) for \(e^{-\xi}\leqslant 1-\frac{\varepsilon_{\beta}}{2}\) and \(\psi_{\varepsilon}(e^{-\xi})\geqslant 1-\frac{\varepsilon_{\beta}}{2}\) for \(e^{-\xi}\geqslant 1-\frac{\varepsilon_{\beta}}{2}\). For both cases, we have \(v_{\varepsilon}(\xi,0)\geqslant\tilde{K}(\xi)-\varepsilon\vee\varepsilon_{\beta}\). The desired result follows from the maximum principle. **Theorem 4.12**.: _For fixed \(\varepsilon>0\), there exists an decreasing smooth function \(\eta_{\varepsilon}(t)\) such that_ \[\eta_{\varepsilon}(0)=\log(\frac{1}{\gamma}),\quad\tilde{\kappa}^{*}<\eta_{ \varepsilon}(t)\leqslant\log\frac{1}{\gamma}, \tag{4.13}\] _and (4.12) holds for all \(t\in[0,T]\)._ Proof.: To begin with, we compute \[v_{\varepsilon}(-\log\gamma,0)=\psi_{\varepsilon}(e^{\log\gamma})=\psi_{ \varepsilon}(\gamma)=1+\varepsilon_{\beta}\psi\big{(}\frac{\gamma-1}{ \varepsilon_{\beta}}\big{)}.\] Because \(\gamma-1<0\), it is clear that \(\frac{\gamma-1}{\varepsilon_{\beta}}<-\frac{1}{2}\) if \(\varepsilon_{\beta}\) small enough, hence \(\psi\big{(}\frac{\gamma-1}{\varepsilon_{\beta}}\big{)}=\frac{\gamma-1}{ \varepsilon_{\beta}}\) and \(\psi_{\varepsilon}(\gamma)=\gamma\). Therefore, \(v_{\varepsilon}(-\log\gamma,0)=\gamma\). We remind that the function \(\xi\mapsto v_{\varepsilon}(\xi,0)\) is smooth and non-increasing; however, in some neighborhood of \(-\log\gamma\) such that \(\frac{\gamma-1}{\varepsilon_{\beta}}<-\frac{1}{2}\), the function \(v_{\varepsilon}(\xi,0)\) is decreasing which yields that the initial position of \(\eta_{\varepsilon}\) is well-defined by (4.13). Next, we compute \[\frac{\partial v_{\varepsilon}}{\partial\xi}(-\log\gamma,0)=-\gamma\psi^{ \prime}_{\varepsilon}(\gamma)=-\gamma<0,\] and (see the proof of Lemma 4.8) \[\frac{\partial v_{\varepsilon}}{\partial t}(-\log\gamma,0)=-\beta_{\varepsilon}( \gamma-1){=0}.\] It is now an exercise to apply the Implicit Function Theorem, which shows that there exist \(\delta_{i},\tau_{i}>0,i=1,2\), and a unique function \(\varphi_{\varepsilon}\in C^{\infty}([-\tau_{1},\tau_{2}])\) such that, if \((\xi,t)\in[-\log\gamma-\delta_{1},-\log\gamma+\delta_{2}]\times[-\tau_{1},\tau _{2}]\) verifies \(v_{\varepsilon}(\xi,t)=\gamma\), then \(\xi=\varphi_{\varepsilon}(t)\). 
Note that \(\varphi_{\varepsilon}\) is a decreasing function because \[\varphi_{\varepsilon}^{\prime}(t)=-\frac{\partial v_{\varepsilon}}{\partial t }(\varphi_{\varepsilon}(t),t)\left(\frac{\partial v_{\varepsilon}}{\partial \xi}(-\varphi_{\varepsilon}(t),t)\right)^{-1}<0.\] As by product, taking the restriction of \(\varphi_{\varepsilon}\) to \([0,\tau_{2}]\), we have constructed a (small) branch of the curve \(\eta_{\varepsilon}\), of class \(C^{\infty}\), such that (4.12) holds for all \(t\in[0,\tau_{2}]\), \(\eta_{\varepsilon}(0)=\gamma\). Lemma 4.11 implies that \(v_{\varepsilon}\geqslant 1-(\varepsilon\vee\varepsilon_{\beta})e^{\delta t}\) for \(\xi\leqslant\tilde{\kappa}^{*}\). Combining with the fact that \(\frac{\partial\eta_{\varepsilon}}{\partial\xi}<0\), we see that \(\tilde{\kappa}^{*}<\eta_{\varepsilon}(t)\leqslant\log\frac{1}{\gamma}\). In view of Lemmas 4.5 and 4.8, we may reiterate the Implicit Function Theorem and continue this branch up to a endpoint achieved at time \(T\). **Lemma 4.13**.: _For any \(T>0\), there exists a constant \(C_{T}>0\), independent of \(\varepsilon\), such that \(\sup_{t\in[0,T]}|\eta_{\varepsilon}^{\prime}(t)|\leqslant C_{T}\)._ Proof.: From the Implicit Function Theorem, it holds: \[\eta_{\varepsilon}^{\prime}(t)=-\frac{\partial v_{\varepsilon}}{\partial t}( \eta_{\varepsilon}(t),t)\left(\frac{\partial v_{\varepsilon}}{\partial\xi}(( \eta_{\varepsilon}(t),t)\right)^{-1}.\] Note that Lemmas 4.8 and 4.9 implies that \(\frac{\partial v_{\varepsilon}}{\partial t}\) is bounded. To prove the desired results, we only need to show that \(\frac{\partial v_{\varepsilon}}{\partial\xi}(\eta_{\varepsilon}(t),t) \leqslant-c\), for some positive \(c\). In Lemma 4.7, we proved that \(\frac{\partial^{2}v_{\varepsilon}}{\partial\xi^{2}}+\frac{\partial v_{ \varepsilon}}{\partial\xi}\leqslant 0\), which implies that \(e^{\xi}\frac{\partial v_{\varepsilon}}{\partial\xi}\) is non-increasing in \(\xi\). Since \(v_{\varepsilon}\) is smooth, there exists a point \(\hat{\eta}_{\varepsilon}(t)\in(\tilde{\kappa}^{*},\eta_{\varepsilon}(t))\) such that \[\frac{\partial v_{\varepsilon}}{\partial\xi}(\hat{\eta}_{\varepsilon}(t),t)= \frac{v_{\varepsilon}(\tilde{\kappa}^{*},t)-v_{\varepsilon}(\eta_{ \varepsilon}(t),t)}{\tilde{\kappa}^{*}-\eta_{\varepsilon}(t)}=-\frac{v_{ \varepsilon}(\tilde{\kappa}^{*},t)-v_{\varepsilon}(\eta_{\varepsilon}(t),t)}{ \eta_{\varepsilon}(t)-\tilde{\kappa}^{*}}.\] We have shown that \(v_{\varepsilon}(\tilde{\kappa}^{*},t)\geqslant\tilde{K}(\tilde{\kappa}^{*})-( \varepsilon\vee\varepsilon_{\beta})e^{\delta t}\) and \(\eta_{\varepsilon}(t)\leqslant\log\frac{1}{\gamma}\). This yields that \[\frac{\partial v_{\varepsilon}}{\partial\xi}(\hat{\eta}_{\varepsilon}(t)) \leqslant-\frac{1-(\varepsilon\vee\varepsilon_{\beta})e^{\delta t}-\gamma}{ \log\frac{1}{\gamma}-\tilde{\kappa}^{*}}.\] Since \(e^{\xi}\frac{\partial v_{\varepsilon}}{\partial\xi}\) is non-increasing, it holds that \[\frac{\partial v_{\varepsilon}}{\partial\xi}(\eta_{\varepsilon}(t),t) \leqslant-e^{\hat{\eta}_{\varepsilon}(t)-\eta_{\varepsilon}(t)}\frac{1-( \varepsilon\vee\varepsilon_{\beta})e^{\delta t}-\gamma}{\log\frac{1}{\gamma}- \tilde{\kappa}^{*}}\leqslant-e^{\tilde{\kappa}^{*}-\log\frac{1}{\gamma}} \frac{1-(\varepsilon\vee\varepsilon_{\beta})e^{\delta t}-\gamma}{\log\frac{1}{ \gamma}-\tilde{\kappa}^{*}}.\] This completes the proof. 
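In practice, once a discrete approximation of \(v_{\varepsilon}(\cdot,t)\) is available (for example from the explicit scheme sketched earlier in this section), the approximating transit boundary defined by (4.12) can be read off as the level crossing \(v_{\varepsilon}=\gamma\) of the monotone profile; a level slightly below \(1\) gives, in the same way, a rough location of where the constraint detaches. The snippet below is a self-contained illustration of this recipe: on the unsmoothed initial profile \(\min(1,e^{-\xi})\) it recovers \(-\log\gamma\), in agreement with \(\eta_{\varepsilon}(0)=\log\frac{1}{\gamma}\) in (4.13). The helper name and grid are ours.

```python
import numpy as np

def level_crossing(xi, v, level):
    """Leftmost crossing of a non-increasing profile v(xi) through `level`,
    located by linear interpolation between neighbouring grid points."""
    below = np.flatnonzero(v < level)
    if below.size == 0 or below[0] == 0:
        return np.nan                        # no crossing inside the grid
    j = below[0]
    w = (v[j - 1] - level) / (v[j - 1] - v[j])
    return xi[j - 1] + w * (xi[j] - xi[j - 1])

gamma = 0.8
xi = np.linspace(-2.0, 4.0, 2001)
v0 = np.minimum(1.0, np.exp(-xi))            # profile at t = 0 (before smoothing)
print(level_crossing(xi, v0, gamma), -np.log(gamma))    # both approximately 0.2231
```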
From Theorem 4.12 and Lemma 4.13, we see that the sequence \((\eta_{\varepsilon})_{\varepsilon>0}\) is bounded in \(C^{1}([0,T])\), therefore, extracting a subsequence if necessary, it converges uniformly to a function \(\hat{\eta}(t)\). **Corollary 4.14**.: _Extracting a subsequence if necessary, the sequence \(\eta_{\varepsilon}\) converges uniformly to a limit \(\hat{\eta}(t)\)._ ## 5 Main Results ### Existence and Uniqueness Lemmas 4.4-4.7 provide estimates on the approximated solution \(v_{\varepsilon}\). By taking a limit as \(\varepsilon\to 0\), we are able to derive the existence of a solution to (2.9)-(2.10). **Theorem 5.1**.: _(i) For any \(T>0\), there exists a sequence \(\varepsilon\to 0\) such that \(v_{\varepsilon}\to v\) a.e. in \(\mathbb{R}\times[0,T]\), \(\frac{\partial v_{\varepsilon}}{\partial\xi}\to\frac{\partial v}{\partial\xi}\) a.e. in \(\mathbb{R}\times[0,T]\), \(v_{\varepsilon}\to v\) in \(W^{1,0}_{\infty}(\mathbb{R}\times[0,T])\) weak-\(*\) and \(W^{2,1}_{\infty}((\mathbb{R}\times[0,T])\setminus\overline{Q}_{\rho})\) weak-\(*\), for any \(\rho>0\), where \(Q_{\rho}=(-\rho,\rho)\times(0,\rho^{2})\). Moreover, extracting a subsequence if necessary, \(\eta_{\varepsilon}\) converges uniformly to \(\hat{\eta}\);4 Footnote 4: For \(\Omega\subset\mathbb{R}\times[0,T]\), \(W^{2,1}_{p}(\Omega)\), \(1<p<\infty\), is the space of elements of \(L^{p}(\Omega)\) whose derivatives are also in \(L^{p}(\Omega)\), respectively up to second order in \(\xi\) and to first order in \(t\). \(W^{2,1}_{\infty}(\Omega)\) is the space of bounded functions whose derivatives are bounded, respectively up to second order in \(\xi\) and first order in \(t\). \(W^{1,0}_{\infty}(\Omega)\) denotes the space of bounded functions whose derivative w.r.t. \(\xi\) is also bounded._ _(ii) \(v\) is a solution of the original free boundary problem (2.9);_ _(iii) \(v\) satisfies the estimates of Lemmas 4.4-4.7, and the inequality_ \[\frac{\partial^{2}v}{\partial\xi^{2}}+\frac{\partial v}{\partial\xi}\leqslant 0 \,\,\text{a.e. in}\,\,\mathbb{R}\times[0,T])\setminus\overline{Q}_{\rho}, \tag{5.1}\] _as well as the following growth condition: there exists a constant \(B>0\) such that \(v(\xi,t)=1\) when \(\xi<-B\) and \(v(\xi,t)\leqslant e^{-\xi}\) when \(\xi>B\), \(0\leqslant t\leqslant T\)._ Proof.: Let \((\varepsilon_{n})_{n\geqslant 1}\) be a sequence converging to \(0\) when \(n\to+\infty\) and consider the corresponding solutions \((v_{\varepsilon_{n}})\) of (4.2) and (4.4). For simplicity, we denote \(v_{\varepsilon_{n}}\) by \(v_{n}\). According to Lemmas 4.4-4.10, we first observe that the sequence \((v_{n})\) is bounded in the spaces \(W^{1,0}_{\infty}(\mathbb{R}\times[0,T])\cap W^{2,1}_{\infty}((\mathbb{R} \times[0,T])\setminus\bar{Q}_{\rho})\). Second, the sequence \((v_{n})\) is bounded in the space \(W^{2,1}_{p,\text{loc}}(\mathbb{R}\times[0,T])\), \(1<p<2\). Next, let \((A_{m})_{m\geqslant 1}\) be a sequence of positive numbers such that \(A_{m}\to+\infty\) as \(m\to+\infty\). Let us consider the restriction \(v_{n}^{m}\) of \(v_{n}\) to the interval \([-A_{m},A_{m}]\). At fixed \(m\geqslant 1\), the sequence \((v_{n}^{m})\) is bounded in the space \(W^{2,1}_{p}([-A_{m},A_{m}])\times[0,T])\) for any \(1<p<2\). One can extract a subsequence, denoted by \((v_{n_{j}}^{m})\), which converges a.e. in \([-A_{m},A_{m}]\times[0,T]\) and weakly in \(W^{2,1}_{p}([-A_{m},A_{m}]\times[0,T])\), \(1<p<2\), as \(j\to+\infty\). 
By a standard diagonal extraction procedure, one can eventually extract a subsequence, say \((v_{n_{k}})\), such that \(v_{n_{k}}\) and \(\frac{\partial}{\partial\xi}v_{n_{k}}\) converge respectively to \(v\) and \(\frac{\partial}{\partial\xi}v\) almost everywhere in \(\mathbb{R}\times[0,T]\) as \(k\to+\infty\). After a new extraction, \(v_{n_{k}}\to v\) in \(W^{1,0}_{\infty}(\mathbb{R}\times[0,T])\) weak-\(*\) and \(W^{2,1}_{\infty}((\mathbb{R}\times[0,T])\setminus\overline{Q}_{\rho})\) weak-\(*\). It is not difficult to see that \(v\) satisfies the properties of Lemmas 4.4-4.7. Set \(f_{\varepsilon}=\frac{\partial^{2}v_{\varepsilon}}{\partial\xi^{2}}+\frac{\partial v_{\varepsilon}}{\partial\xi}\); then \(f_{\varepsilon}\leqslant 0\) in \(\mathbb{R}\times[0,T]\) (see Lemma 4.7). According to the above results, \(f_{n^{\prime\prime}}\to f=\frac{\partial^{2}v}{\partial\xi^{2}}+\frac{\partial v}{\partial\xi}\) in \(L^{\infty}((\mathbb{R}\times[0,T])\setminus\overline{Q}_{\rho})\) weak-\(*\), which is non-positive in the distribution sense, and, hence, (5.1) holds. Since the sequence \(\eta_{n^{\prime\prime}}\) is bounded in \(C^{1}([0,T])\) (see Lemma 4.13), a subsequence converges to some \(\tilde{\eta}\) in \(C^{0}([0,T])\). More specifically, in the proof of Lemma 4.13, we showed that \(\frac{\partial v_{\varepsilon}}{\partial\xi}(\eta_{\varepsilon}(t),t)\leqslant-c\), where the constant \(c\) is independent of \(\varepsilon\). With the estimate of \(\frac{\partial^{2}v_{\varepsilon}}{\partial\xi^{2}}\) in Lemma 4.10, we deduce that, for any \(t>0\), there exists a small constant \(\Upsilon\) independent of \(\varepsilon\) such that, for \(0<x<\Upsilon\), \[v_{n^{\prime\prime}}(\eta_{n^{\prime\prime}}(t)+x,t)-v_{n^{\prime\prime}}(\eta_{n^{\prime\prime}}(t),t)\leqslant-\frac{c}{2}x,\] and \[v_{n^{\prime\prime}}(\eta_{n^{\prime\prime}}(t)-x,t)-v_{n^{\prime\prime}}(\eta_{n^{\prime\prime}}(t),t)\geqslant\frac{c}{2}x.\] Taking the limit as \(n^{\prime\prime}\to\infty\) and combining with the fact that \(v(\cdot,t)\) is non-increasing in \(\xi\), we see that \(v<\gamma\) if \(\xi>\tilde{\eta}(t)\) and \(v>\gamma\) if \(\xi<\tilde{\eta}(t)\). This yields that \(\tilde{\eta}=\hat{\eta}\) (see Corollary 4.14). Moreover, the convergence of \(\eta_{n^{\prime\prime}}\) to \(\hat{\eta}\) implies the almost everywhere convergence of \(\sigma_{n^{\prime\prime}}(v_{n^{\prime\prime}})\) to \(\sigma(v)\). Hence, we have that \(\mathcal{L}_{n^{\prime\prime}}[v_{n^{\prime\prime}}]\) converges to \(\mathcal{L}[v]\) in \(L^{\infty}((\mathbb{R}\times[0,T])\setminus\overline{Q}_{\rho})\) weak-\(*\). This implies that \(\mathcal{L}[v]\geqslant 0\). It is also easy to verify that \(\mathcal{L}[v]=0\) whenever \(v<1\). Thus, \(v\) is a solution to (2.9). Finally, let us check the growth condition as \(\xi\to\pm\infty\): according to Lemmas 4.4 and 4.11, \(v_{\varepsilon}\leqslant\min(1,e^{-\xi})\) and \(v_{\varepsilon}\geqslant\tilde{K}-(\varepsilon\vee\varepsilon_{\beta})e^{\delta t}\), respectively. At the limit \(\varepsilon\to 0\), it holds almost everywhere that \(v(\xi,t)\leqslant\min(1,e^{-\xi})\) and \(v(\xi,t)\geqslant\tilde{K}\), \(-\infty<\xi<+\infty\), \(0\leqslant t\leqslant T\). Therefore, \(v(\xi,t)=1\) when \(\xi\leqslant\tilde{\kappa}^{*}\) and \(v(\xi,t)\leqslant e^{-\xi}\) when \(\xi\geqslant 0\).
Then, the uniqueness of \(v\) given by Theorem 5.1 is a direct consequence of the following theorem: **Theorem 5.2**.: _Let \(v_{i}\in\Big{\{}\bigcap_{\rho>0}W^{2,1}_{\infty}((\mathbb{R}\times[0,T])\setminus\bar{Q}_{\rho})\Big{\}}\cap W^{1,0}_{\infty}(\mathbb{R}\times[0,T])\) be a solution to (2.9) satisfying_ \[\frac{\partial^{2}v_{i}}{\partial\xi^{2}}+\frac{\partial v_{i}}{\partial\xi}\leqslant 0,\] _for \(i=1,2\). Suppose that there exists \(B_{i}>0\) such that \(v_{i}=1\) for \(\xi<-B_{i}\) and \(v_{i}\leqslant e^{-\xi}\) for \(\xi>B_{i}\), \(i=1,2\). Then, it holds \(v_{1}=v_{2}\)._ Proof.: Set \(F=\frac{1}{2}\left(\sigma^{2}(v_{1})-\sigma^{2}(v_{2})\right)\left(\frac{\partial^{2}v_{1}}{\partial\xi^{2}}+\frac{\partial v_{1}}{\partial\xi}\right)\) and \(\mathcal{L}_{2}=-\frac{\partial}{\partial t}+\frac{1}{2}\sigma^{2}(v_{2})\Big{(}\frac{\partial^{2}}{\partial\xi^{2}}+\frac{\partial}{\partial\xi}\Big{)}+\delta\Big{(}\frac{\partial}{\partial\xi}+1\Big{)}\). We may then rewrite the equation satisfied by \(v_{1}\) as \[\min\{\mathcal{L}_{2}[v_{1}]+F,1-v_{1}\}=0.\] Let \(w=e^{-2\delta t}(v_{1}-v_{2})\). We prove that \(w\geqslant 0\). Due to the growth condition on \(v_{1}\) and \(v_{2}\), it holds that \(\lim_{\xi\to\pm\infty}w(\xi,t)=0\) for \(t\in[0,T]\). Therefore, if this conclusion were not true, \(w\) would achieve a negative minimum at some point \((\xi^{*},t^{*})\). By the parabolic version of Bony's maximum principle, it holds that \[\limsup_{(\xi,t)\to(\xi^{*},t^{*})}ess\left\{\frac{\partial w}{\partial t}-\frac{1}{2}\sigma^{2}(v_{2})\frac{\partial^{2}w}{\partial\xi^{2}}-\left(\frac{1}{2}\sigma^{2}(v_{2})+\delta\right)\frac{\partial w}{\partial\xi}\right\}\leqslant 0.\] This is equivalent to \[\limsup_{(\xi,t)\to(\xi^{*},t^{*})}ess\left\{\mathcal{L}_{2}[v_{1}-v_{2}]\right\}\geqslant-\delta(v_{1}-v_{2})>0.\] By the continuity of \(v_{i}\), we derive \(\sigma(v_{1})\leqslant\sigma(v_{2})\) in a small parabolic neighborhood of \((\xi^{*},t^{*})\). It follows that \(F\geqslant 0\) in this neighborhood. In this neighborhood, we also have that \[\mathcal{L}_{2}[v_{1}]+F=0\ \text{ and }\ \mathcal{L}_{2}[v_{2}]\geqslant 0,\quad\text{a.e.}\] Therefore, \[\limsup_{(\xi,t)\to(\xi^{*},t^{*})}ess\left\{\mathcal{L}_{2}[v_{1}-v_{2}]\right\}\leqslant\limsup_{(\xi,t)\to(\xi^{*},t^{*})}\left(-F(\xi,t)\right)\leqslant 0,\] which is a contradiction. Thus, we proved that \(w\geqslant 0\). Similarly, the reverse inequality holds, which yields the uniqueness result. ### Properties of the free boundaries For the original problem (2.9), we already introduced formally the default boundary \(\hat{\kappa}\) and the transit boundary \(\hat{\eta}\), see System (2.10). The goal of this subsection is to define the free boundaries rigorously and prove some basic properties. _The default boundary_ Let us recall that \(v_{\varepsilon}\geqslant\tilde{K}-(\varepsilon\vee\varepsilon_{\beta})e^{\delta t}\), see Lemma 4.11. Taking the limit as \(\varepsilon\to 0\), this implies that \(v\geqslant\tilde{K}\). Since \(\tilde{K}=1\) for \(\xi\leqslant\tilde{\kappa}^{*}\), it holds that the set \(\{\xi\,|\,v(\xi,t)<1\}\) is bounded from below. Now, we are in a position to define \[\hat{\kappa}(t):=\inf\{\xi\,|\,v(\xi,t)<1\}. \tag{5.2}\] Then, \(v\leqslant e^{-\xi}\) indicates that \(\hat{\kappa}(t)\) is also bounded from above. Thus, we have the following result. **Theorem 5.3**.: _For each \(t\in(0,T]\), \(\hat{\kappa}(t)\) is well-defined, i.e. we have \(-\infty<\hat{\kappa}(t)<\infty\).
Moreover, \(v(\xi,t)=1\) for \(\xi\leqslant\hat{\kappa}(t)\) and \(v(\xi,t)<1\) whenever \(\xi>\hat{\kappa}(t)\)._ _The transit boundary_ We recall that \(\hat{\eta}\in C^{0}([0,T])\) is the limit of \(\eta_{\varepsilon}\) (see Theorem 5.1). Thus, we have the following theorem. **Theorem 5.4**.: _The initial positions of the free boundaries are as follows:_ \[\hat{\eta}(0)=\log\frac{1}{\gamma},\quad\hat{\kappa}(0)=0.\] _Furthermore, \(\hat{\kappa}(t)\) and \(\hat{\eta}(t)\) are non-increasing with respect to \(t\)._ Proof.: On the one hand, we know that \(\eta_{\varepsilon}(0)=-\log\gamma\) and that \(\eta_{\varepsilon}(t)\) is decreasing, see Section 4.2. On the other hand, the properties of \(\hat{\kappa}(t)\) follow from Theorem 5.3 and the initial value of \(v\). In the following, we will prove the smoothness of the free boundaries. Note that the uniform lower bound in Lemma 4.5 implies that there exists a constant \(c\) such that \(v_{\varepsilon}(\xi,t)\leqslant\frac{1+\gamma}{2}\) whenever \(\eta_{\varepsilon}(t)-\xi\leqslant c\). Then, one can choose a smooth function \(\zeta\) such that \(\zeta(t)<\eta_{\varepsilon}(t)\) and \(\|\zeta-\eta_{\varepsilon}\|_{L^{\infty}[0,T]}\in[c/4,c/2]\) for sufficiently small \(\varepsilon\). Therefore, \(\zeta\) separates the default boundary \(\hat{\kappa}\) and the transit boundary \(\hat{\eta}\), so we can treat them one by one, applying cut-off functions when necessary. We first study the default boundary. The proof is essentially the same as that in [41], where the authors proved the smoothness of the free boundary in the American option problem. Thus, we just give a sketch of the proof for the reader's convenience. We make the change of variable \(\xi=\zeta(t)+x\) and set \(u(x,t)=v(\zeta(t)+x,t)\). For suitable \(a,b\in\mathbb{R}\), we have that \(\zeta(t)+a\leqslant\hat{\kappa}(t)\leqslant\zeta(t)+b<\hat{\eta}(t)\). It holds that \[\frac{\partial u}{\partial t}\in L^{\infty}(t_{1},t_{2};H^{1}(a,b)),\quad\frac{\partial^{2}u}{\partial t^{2}}\in L^{2}(t_{1},t_{2};L^{2}(a,b)),\] which implies the continuity of \(v_{t}\) at \(\xi=\hat{\kappa}(t)\). From the definition of \(\hat{\kappa}\), one can prove that \(\hat{\kappa}\) is continuous in \((0,T]\). Applying a result from Cannon et al. [7], we will have that \(\hat{\kappa}\in C^{1}((0,T])\). Then, we may use the theory of parabolic equations to improve the regularity of \(v\) by bootstrapping. Repeating the procedure yields the following result. **Theorem 5.5**.: \(\hat{\kappa}\in C^{\infty}((0,T])\)_._ Next, we consider the smoothness of the transit free boundary \(\hat{\eta}(t)\). For this purpose, we need the following lemma on the parabolic diffraction problem. The proof is essentially similar to that in [28]; hence, we just give a sketch. **Lemma 5.6**.: _In the domain \(Q=\{a<x<b,\ 0<t<T\}\), where \(a<b\) are some constants, consider the following initial-boundary value problem_ \[\begin{cases}u_{t}-(K_{f}(u_{x}+u))_{x}+f_{1}(x,t)u_{x}+f_{2}(x,t)u=0,\\ u(a,t)=g_{a}(t),\ u(b,t)=g_{b}(t),\quad u(x,0)=\phi(x),\\ g_{a}(0)=\phi(a),\quad g_{b}(0)=\phi(b),\end{cases} \tag{5.3}\] _where \(g_{a},g_{b}\in C^{2}[0,T]\), \(K_{f}(\phi_{x}+\phi)(x)\in C^{1}[a,b]\), \(f_{i}(x,t)\in C([a,b]\times[0,T])\), \(i=1,2\), \(K_{f}=\left\{\begin{array}{ll}\mu_{1},&\mbox{ if }x>f(t),\\ \mu_{2},&\mbox{ if }x<f(t),\end{array}\right.\), \(f(t)\in C^{0,1}(0,T)\), \(a<f(t)<b\) for \(t\in[0,T]\), and \(\mu_{1},\mu_{2}\) are positive constants.
Then, the problem (5.3) admits a solution, and_ \[u(f(t)-,t)=u(f(t)+,t),\quad\mu_{2}(u+u_{x})(f(t)-,t)=\mu_{1}(u+u_{x})(f(t)+,t).\] _Moreover, there exist a positive constant \(C\) and \(0<\alpha<1\), depending only on the given data, such that_ \[\|K_{f}(u+u_{x})\|_{C^{\alpha}(Q)}\leqslant C.\] Proof.: Make the transformation \(y=x-f(t)\), \(v=ue^{y}\); then problem (5.3) becomes \[\begin{cases}v_{t}-(K_{0}(v_{y}))_{y}-f^{\prime}(t)v_{y}+f_{1}v_{y}+(f_{2}-f_{1})v=0,\\ v(a-f,t)=g_{a}(t)e^{a-f},\ v(b-f,t)=g_{b}(t)e^{b-f},\quad v(y,0)=\phi(y+f)e^{y},\end{cases} \tag{5.4}\] where \(K_{0}=\mu_{1}\) if \(y>0\), \(\mu_{2}\) if \(y<0\). By well-known estimates for linear parabolic PDEs with discontinuous coefficients whose principal part is in divergence form (see [24, Chapter III, 5]), and the proof of [28, Theorem 1.1], the claim of this lemma follows. Now, we are in a position to prove the smoothness of \(\hat{\eta}\). **Theorem 5.7**.: \(\hat{\eta}\in C^{\infty}((0,T])\)_._ Proof.: In a neighborhood of \(\hat{\eta}\), \(v\) satisfies the system \[\left\{\begin{aligned} &-\frac{\partial v}{\partial t}+\frac{1}{2}\sigma_{H}^{2}\Big{(}\frac{\partial^{2}v}{\partial\xi^{2}}+\frac{\partial v}{\partial\xi}\Big{)}+\delta\Big{(}\frac{\partial v}{\partial\xi}+v\Big{)}=0,\quad\xi>\hat{\eta}(t),\\ &-\frac{\partial v}{\partial t}+\frac{1}{2}\sigma_{L}^{2}\Big{(}\frac{\partial^{2}v}{\partial\xi^{2}}+\frac{\partial v}{\partial\xi}\Big{)}+\delta\Big{(}\frac{\partial v}{\partial\xi}+v\Big{)}=0,\quad\xi<\hat{\eta}(t),\\ & v(\hat{\eta}(t)+,t)=v(\hat{\eta}(t)-,t)=\gamma,\quad v_{\xi}(\hat{\eta}(t)+,t)=v_{\xi}(\hat{\eta}(t)-,t).\end{aligned}\right. \tag{5.5}\] Thus, it holds that \[\hat{\eta}^{\prime}(t)=-\frac{v_{t}(\hat{\eta}(t)+,t)}{v_{\xi}(\hat{\eta}(t)+,t)}=-\frac{v_{t}(\hat{\eta}(t)-,t)}{v_{\xi}(\hat{\eta}(t)-,t)}, \tag{5.6}\] which means that \[v_{t}(\hat{\eta}(t)+,t)=v_{t}(\hat{\eta}(t)-,t). \tag{5.7}\] Set \(w=v_{\xi}\). From (5.5) and (5.7), it turns out that \(w\) verifies the system \[\left\{\begin{aligned} &-\frac{\partial w}{\partial t}+\frac{1}{2}\sigma_{H}^{2}\Big{(}\frac{\partial^{2}w}{\partial\xi^{2}}+\frac{\partial w}{\partial\xi}\Big{)}+\delta\Big{(}\frac{\partial w}{\partial\xi}+w\Big{)}=0,\quad\xi>\hat{\eta}(t),\\ &-\frac{\partial w}{\partial t}+\frac{1}{2}\sigma_{L}^{2}\Big{(}\frac{\partial^{2}w}{\partial\xi^{2}}+\frac{\partial w}{\partial\xi}\Big{)}+\delta\Big{(}\frac{\partial w}{\partial\xi}+w\Big{)}=0,\quad\xi<\hat{\eta}(t),\\ & w(\hat{\eta}(t)+,t)=w(\hat{\eta}(t)-,t),\quad\sigma_{H}^{2}(w_{\xi}+w)(\hat{\eta}(t)+,t)=\sigma_{L}^{2}(w_{\xi}+w)(\hat{\eta}(t)-,t).\end{aligned}\right. \tag{5.8}\] According to the free boundary condition, \(w\) satisfies a typical Verigin problem, see [28, 37]. In particular, the \(C^{\infty}\) regularity of the free boundary was proved in [28]. Therefore, we may obtain the same result for our problem in a similar manner.
To see this, note that the free boundary \(\hat{\eta}\) is Lipschitz continuous and satisfies \[\hat{\eta}^{\prime}(t)=-\frac{\frac{1}{2}\sigma_{H}^{2}\Big{(}w_{\xi}(\hat{\eta}(t)+,t)+w(\hat{\eta}(t)+,t)\Big{)}+\delta\Big{(}w(\hat{\eta}(t)+,t)+\gamma\Big{)}}{w(\hat{\eta}(t)+,t)}=-\frac{\frac{1}{2}\sigma_{L}^{2}\Big{(}w_{\xi}(\hat{\eta}(t)-,t)+w(\hat{\eta}(t)-,t)\Big{)}+\delta\Big{(}w(\hat{\eta}(t)-,t)+\gamma\Big{)}}{w(\hat{\eta}(t)-,t)}, \tag{5.9}\] which is a kind of Stefan condition, see [22; 23; 13] for references. Applying Lemma 5.6 to problem (5.8) (up to some simple transformation), \(w_{\xi}+w\in C^{\alpha}\) up to the free boundary. Furthermore, by Lemma 4.6, \(w\) has a negative upper bound. Then, the right hand side of (5.9) belongs to \(C^{\alpha}\). This implies in turn \(\hat{\eta}\in C^{1+\alpha}\). In this way, by an iteration process, one can further improve the regularity of \(\hat{\eta}\) and show eventually that it belongs to \(C^{\infty}\). ## 6 Asymptotic Convergence In this section, we will prove that \(v\) converges to the traveling wave solution as \(t\) goes to \(+\infty\). Since \(\frac{\partial v}{\partial t}\) is non-positive, we see that, for any \(t\), \[0\geqslant\int_{0}^{t}\frac{\partial v}{\partial t}(\xi,s)ds=v(\xi,t)-v(\xi,0)\geqslant\tilde{K}(\xi)-v(\xi,0).\] Note that for \(\xi<\tilde{\kappa}^{*}\), \(v(\xi,0)=\tilde{K}(\xi)=1\) and \(\tilde{K}(\xi),v(\xi,0)\leqslant e^{-\xi}\), which implies the integrability of \(\tilde{K}-v(\cdot,0)\) over \(\mathbb{R}\). Thus, we have that \[0\geqslant\int_{-\infty}^{\infty}\int_{0}^{t}\frac{\partial v}{\partial t}(\xi,s)dsd\xi\geqslant\int_{-\infty}^{\infty}(\tilde{K}(\xi)-v(\xi,0))d\xi.\] Letting \(t\) tend to infinity, we get that there exists a constant \(C>0\) such that \[0\geqslant\int_{-\infty}^{\infty}\int_{0}^{\infty}\frac{\partial v}{\partial t}(\xi,s)dsd\xi\geqslant-C. \tag{6.1}\] Now let \(v^{n}(\xi,t):=v(\xi,t+n)\) and consider \(v^{n}\) as a sequence of functions defined on \(\mathbb{R}\times[0,1]\). Lemmas 4.4-4.10 indicate that it is a bounded sequence in \(W^{2,1}_{\infty}(\mathbb{R}\times[0,1])\). As in the proof of Theorem 5.1, via a standard diagonal extraction procedure there exists a function \(\bar{K}\) and a subsequence \(n_{j}\) such that \(v^{n_{j}}\) and \(\frac{\partial}{\partial\xi}v^{n_{j}}\) converge respectively to \(\bar{K}\) and \(\frac{\partial}{\partial\xi}\bar{K}\) almost everywhere in \(\mathbb{R}\times[0,1]\).
After a new extraction if necessary, \[\frac{\partial v^{n_{j}}}{\partial t}\to\frac{\partial\bar{K}}{\partial t},\quad\frac{\partial^{2}v^{n_{j}}}{\partial\xi^{2}}\to\frac{\partial^{2}\bar{K}}{\partial\xi^{2}}\ \ \mbox{in}\ L^{\infty}(\mathbb{R}\times[0,1])\ \mbox{weak-}*.\] Since non-positivity is preserved under weak-\(*\) convergence and \(\frac{\partial v}{\partial t}\leqslant 0\), one can deduce that \(\frac{\partial\bar{K}}{\partial t}\leqslant 0\). Since (6.1) implies that \(\int_{0}^{1}\int_{-\infty}^{\infty}\frac{\partial v^{n_{j}}}{\partial t}(\xi,t)d\xi dt=\int_{n_{j}}^{n_{j}+1}\int_{-\infty}^{\infty}\frac{\partial v}{\partial t}(\xi,t)d\xi dt\to 0\) as \(n_{j}\to\infty\), we have that \(\int_{0}^{1}\int_{-\infty}^{\infty}\frac{\partial\bar{K}}{\partial t}d\xi dt=0\). Combining with the non-positivity of \(\frac{\partial\bar{K}}{\partial t}\), it follows that \(\frac{\partial\bar{K}}{\partial t}\equiv 0\), which means that \(\bar{K}\) is only a function of \(\xi\). Then, the following properties pass from \(v\) to \(\bar{K}\): \[\tilde{K}\leqslant\bar{K}\leqslant\min\{1,e^{-\xi}\},\ \ \frac{d\bar{K}}{d\xi}\leqslant 0,\ \mbox{and}\ \frac{d^{2}\bar{K}}{d\xi^{2}}+\frac{d\bar{K}}{d\xi}\leqslant 0.\] Since \(\hat{\eta}(\cdot)\) and \(\hat{\kappa}(\cdot)\) are also non-increasing with respect to \(t\), they also admit limits at \(\infty\), which are denoted as \(\bar{\eta}\) and \(\bar{\kappa}\) respectively. Then, one can verify that \(\bar{K}(\bar{\eta})=\gamma\) and \(\bar{K}(\bar{\kappa})=1\). For any interval \(I\) such that \(\bar{I}\subset(\bar{\kappa},\bar{\eta})\), there exists \(T\) such that \(\bar{I}\subset(\hat{\kappa}(t),\hat{\eta}(t))\) for any \(t>T\). In \(I\), it holds that \[-\frac{\partial v^{n}}{\partial t}+\frac{1}{2}\sigma_{L}^{2}\left(\frac{\partial^{2}v^{n}}{\partial\xi^{2}}+\frac{\partial v^{n}}{\partial\xi}\right)+\delta\left(\frac{\partial v^{n}}{\partial\xi}+v^{n}\right)=0.\] Taking the subsequence \(n_{j}\), we derive that \[\frac{1}{2}\sigma_{L}^{2}\left(\frac{d^{2}\bar{K}}{d\xi^{2}}+\frac{d\bar{K}}{d\xi}\right)+\delta\left(\frac{d\bar{K}}{d\xi}+\bar{K}\right)=0,\text{ for }\xi\in I.\] Since \(I\) is arbitrary, it holds that \[\frac{1}{2}\sigma_{L}^{2}\left(\frac{d^{2}\bar{K}}{d\xi^{2}}+\frac{d\bar{K}}{d\xi}\right)+\delta\left(\frac{d\bar{K}}{d\xi}+\bar{K}\right)=0,\text{ for }\bar{\kappa}<\xi<\bar{\eta}.\] Similarly, we can also show that \[\frac{1}{2}\sigma_{H}^{2}\left(\frac{d^{2}\bar{K}}{d\xi^{2}}+\frac{d\bar{K}}{d\xi}\right)+\delta\left(\frac{d\bar{K}}{d\xi}+\bar{K}\right)=0,\text{ for }\xi>\bar{\eta}.\] Note that \(\tilde{K}\leqslant\bar{K}\leqslant\min\{1,e^{-\xi}\}\) implies that \(\lim_{\xi\to\infty}e^{\xi}\bar{K}(\xi)=1\). Combining with the fact that \(\bar{K}\in C^{1+\alpha}\), we see that it is a solution to (3.2), i.e. \[\begin{cases}\frac{d^{2}\bar{K}}{d\xi^{2}}+\frac{d\bar{K}}{d\xi}+c_{H}(\frac{d\bar{K}}{d\xi}+\bar{K})=0,\ \xi>\bar{\eta},\\ \frac{d^{2}\bar{K}}{d\xi^{2}}+\frac{d\bar{K}}{d\xi}+c_{L}(\frac{d\bar{K}}{d\xi}+\bar{K})=0,\ \bar{\kappa}<\xi<\bar{\eta},\\ \bar{K}(\bar{\kappa})=1,\ \frac{d\bar{K}}{d\xi}(\bar{\kappa})=0,\\ \bar{K}(\bar{\eta}+)=\bar{K}(\bar{\eta}-)=\gamma,\ \frac{d\bar{K}}{d\xi}(\bar{\eta}+)=\frac{d\bar{K}}{d\xi}(\bar{\eta}-),\\ \lim_{\xi\to\infty}e^{\xi}\bar{K}(\xi)=1.\end{cases}\] Then, interior estimates imply that \(\bar{K}\) is smooth in \((\bar{\kappa},\bar{\eta})\) and \((\bar{\eta},\infty)\). Now, from the uniqueness of the solution, we derive that \(\bar{K}=K\). Since any sub-sequential limit must be the same, the full sequence must converge as \(n\) goes to \(\infty\).
We have proved the local convergence of \(v\). But, noting that \(v(\xi,t)\equiv 1\) for \(\xi<\tilde{\kappa}^{*}\) and \(v(\xi,t)\leqslant e^{-\xi}\), the convergence is also uniform over \(\mathbb{R}\). Finally, we have proved the following result. **Theorem 6.1**.: _As \(t\) goes to \(+\infty\), \(v(\cdot,t)\) converges uniformly to \(K\)._ ## 7 Numerical Results In this section, we will give some numerical results for illustration. As \(u\) represents the value of the bond, we will come back to (2.8) instead of (2.9), which gives a clearer financial meaning. ### Numerical Scheme As our problem is non-standard, we will introduce the numerical scheme first. To solve the free boundary problem, we use an explicit-implicit finite difference scheme combined with Newton iteration to solve the penalized equation. The first step is to discretize the equation. Let \(t_{i}=i\Delta t\), \(i=0,1,\ldots,M\), and \(\xi_{j}=j\Delta\xi\), \(j=0,\pm 1,\pm 2,\ldots,\pm N\). \(U_{i,j}\) will be the approximation of the solution \(u\) of (2.8) at mesh point \((t_{i},\xi_{j})\). Consider the approximating penalized equation \[\begin{cases}-\frac{\partial u}{\partial t}+\frac{1}{2}\sigma_{\varepsilon}^{2}(u,\xi)(\frac{\partial^{2}u}{\partial\xi^{2}}-\frac{\partial u}{\partial\xi})+\delta\frac{\partial u}{\partial\xi}=\varepsilon^{-1}(u-e^{\xi})^{+},\quad\xi\in[-N\Delta\xi,N\Delta\xi],t\geqslant 0;\\ u(\xi,0)=\min\{1,e^{\xi}\};\\ u(N\Delta\xi,t)=1,\ u(-N\Delta\xi,t)=0.\end{cases}\] Here \(\sigma_{\varepsilon}(u,\xi)=\sigma_{H}+(\sigma_{L}-\sigma_{H})H_{\varepsilon}(u-\gamma e^{\xi})\), with \(H_{\varepsilon}\) a suitable smoothed Heaviside function. For numerical convenience, we use the penalty function \(\varepsilon^{-1}(u-e^{\xi})^{+}\). In the numerical experiment, we choose \[H_{\varepsilon}(z)=\begin{cases}0,&z\leqslant-\varepsilon;\\ 6\varepsilon^{-5}z^{5}+15\varepsilon^{-4}z^{4}+10\varepsilon^{-3}z^{3}+1,&-\varepsilon<z<0;\\ 1,&z\geqslant 0,\end{cases}\] as proposed in [27]. Note that the left hand side is a nonlinear operator since its coefficients depend on \(u\). In the numerical implementation, we determine these coefficients using the function values from the previous time step. For illustration, let us perform the discretization at \((t_{i},\xi_{j})\). Denote \(\sigma_{i,j}:=\sigma_{\varepsilon}(U_{i,j},\xi_{j})\). The first order term is discretized by the upwind scheme, i.e. \[(\delta-\sigma_{i-1,j})\frac{\partial u}{\partial\xi}(t_{i},\xi_{j})\approx\begin{cases}(\delta-\sigma_{i-1,j})\frac{U_{i,j+1}-U_{i,j}}{\Delta\xi},&\text{ if }\delta-\sigma_{i-1,j}\geqslant 0;\\ (\delta-\sigma_{i-1,j})\frac{U_{i,j}-U_{i,j-1}}{\Delta\xi},&\text{ if }\delta-\sigma_{i-1,j}<0.\end{cases}\] We use the fully implicit approximation to the temporal term \[\frac{\partial u}{\partial t}(t_{i},\xi_{j})\approx\frac{U_{i,j}-U_{i-1,j}}{\Delta t},\] and the usual discretization for the second order term \[\frac{\partial^{2}u}{\partial\xi^{2}}\approx\frac{U_{i,j+1}+U_{i,j-1}-2U_{i,j}}{(\Delta\xi)^{2}}.\] Thus, given the values \(U_{i-1,\cdot}\) at the previous time step, the current values \(U_{i,\cdot}\) are obtained by solving the following equation \[[A_{i}U_{i,\cdot}]_{j}=\varepsilon^{-1}(U_{i,j}-e^{\xi_{j}})^{+} \tag{7.1}\] for \(j=0,\pm 1,\pm 2,\ldots,\pm N\). Here the matrix \(A_{i}\) is determined by \(U_{i-1,\cdot}\) and is a sparse \(M\)-matrix due to our discretization scheme. Now, we have to solve the nonlinear equation (7.1). We adopt the method used by [12] to value American options.
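For concreteness, the following is a minimal Python sketch of the assembly of the implicit operator for one time step of the scheme just described. It is only an illustration, not the authors' code: the helper names (`H_eps`, `assemble_system`), the parameter values, and the convention of keeping the known data \(U_{i-1,\cdot}\) on the right-hand side of (7.1) are assumptions made here, and the combined first-order coefficient is taken as \(\delta-\frac{1}{2}\sigma_{\varepsilon}^{2}\), which is what the penalized equation gives after grouping the \(u_{\xi}\) terms.

```python
import numpy as np
import scipy.sparse as sp

# Illustrative parameters (assumed; cf. the values reported in Section 7.2).
sigma_H, sigma_L, delta, gamma = 0.2, 0.3, 0.03, 0.6
N, dxi, dt, eps = 1000, 0.001, 0.01, 1e-8
xi = dxi * np.arange(-N, N + 1)              # grid points xi_j, j = -N, ..., N

def H_eps(z, e=eps):
    """Quintic smoothed Heaviside H_eps from the display above."""
    out = np.where(z >= 0.0, 1.0, 0.0)
    mid = (z > -e) & (z < 0.0)
    s = z[mid] / e
    out[mid] = 6 * s**5 + 15 * s**4 + 10 * s**3 + 1.0
    return out

def assemble_system(U_prev):
    """Tridiagonal operator A and source b for one implicit step.

    The nonlinear equation of the step is  A @ U = b - (U - e^xi)^+ / eps,
    i.e. (7.1) with the data from the previous time level kept on the
    right-hand side (sign conventions are ours)."""
    sig2 = (sigma_H + (sigma_L - sigma_H) * H_eps(U_prev - gamma * np.exp(xi)))**2
    a = delta - 0.5 * sig2                           # combined coefficient of u_xi
    diag = 1.0 / dt + sig2 / dxi**2 + np.abs(a) / dxi  # upwinding adds |a|/dxi
    up = -0.5 * sig2 / dxi**2 - np.maximum(a, 0.0) / dxi
    lo = -0.5 * sig2 / dxi**2 + np.minimum(a, 0.0) / dxi
    b = U_prev / dt
    # Dirichlet boundary rows: u(-N*dxi, t) = 0 and u(N*dxi, t) = 1.
    diag[0] = diag[-1] = 1.0
    up[0] = 0.0
    lo[-1] = 0.0
    b[0], b[-1] = 0.0, 1.0
    A = sp.diags([lo[1:], diag, up[:-1]], offsets=[-1, 0, 1], format="csr")
    return A, b
```

With this convention the interior rows have a positive diagonal and non-positive off-diagonal entries, which is the \(M\)-matrix property mentioned above.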
For illustration, let us recall the classical Newton iteration for finding the root of a convex function \(f\). Given an initial guess, the point is updated as \[z_{n+1}=z_{n}-\frac{f(z_{n})}{f^{\prime}(z_{n})},\] which is equivalent to saying that \(z_{n+1}\) solves \[f(z_{n})+f^{\prime}(z_{n})(z-z_{n})=0.\] It is easy to see that the left-hand side of the above equation is a first-order approximation of \(f\) at \(z_{n}\). Similarly, we can solve (7.1) with Newton iteration. Denote by \(U^{k}_{i,j}\) the approximation at \((t_{i},\xi_{j})\) at the \(k\)-th iteration. Then, \(U^{k}_{i,\cdot}\) solves the linearized equation \[[A_{i}U^{k}_{i,\cdot}]_{j}=\varepsilon^{-1}(U^{k-1}_{i,j}-e^{\xi_{j}})^{+}+\varepsilon^{-1}1_{\{U^{k-1}_{i,j}-e^{\xi_{j}}>0\}}(U^{k}_{i,j}-U^{k-1}_{i,j}). \tag{7.2}\] When the difference between \(U^{k}_{i,\cdot}\) and \(U^{k-1}_{i,\cdot}\) is small enough, we stop the iteration and set \(U_{i,\cdot}\) equal to \(U^{k}_{i,\cdot}\). Moreover, the initial guess \(U^{0}_{i,\cdot}\) is chosen to be \(U_{i-1,\cdot}\). In summary, we have the following iterative algorithm. ``` Require: \(N,M,L,\Delta t,\Delta\xi\), smooth function \(H_{\varepsilon}(\cdot)\) and tolerance \(tol\) Initialize \(U_{0,j}=\min\{1,e^{\xi_{j}}\}\) for \(i=1,2,\ldots,M\) do Construct the matrix \(A_{i}\) according to the upwind scheme with \(\sigma_{i-1,j}:=\sigma_{\varepsilon}(U_{i-1,j},\xi_{j})\) Set \(U_{i,\cdot}^{0}=U_{i-1,\cdot}\) and \(k=1\) while True do Solve \[[A_{i}U_{i,\cdot}^{k}]_{j}=\varepsilon^{-1}(U_{i,j}^{k-1}-e^{\xi_{j}})^{+}+\varepsilon^{-1}1_{\{U_{i,j}^{k-1}-e^{\xi_{j}}>0\}}(U_{i,j}^{k}-U_{i,j}^{k-1})\] If \(\frac{\|U_{i,\cdot}^{k}-U_{i,\cdot}^{k-1}\|_{\infty}}{\max\{1,\|U_{i,\cdot}^{k-1}\|_{\infty}\}}<tol\), quit; otherwise set \(k=k+1\) end while Set \(U_{i,\cdot}=U_{i,\cdot}^{k}\) end for ``` **Algorithm 1** Explicit-Implicit Finite-Difference Iterative Algorithm ### Numerical Results In the numerical experiment, we set the model parameters as \(\delta=0.03\), \(\sigma_{L}=0.3\), \(\sigma_{H}=0.2\) and \(\gamma=0.6\). For the discretization, we have \(\Delta t=0.01\), \(\Delta\xi=0.001\) and \(N=10^{3}\). We also choose \(\varepsilon=10^{-8}\) and \(tol=10^{-4}\). Having numerically solved (3.6), we are able to plot the traveling wave solution for (2.8), which is \(e^{\xi}K(\xi)\); a typical profile is shown in Figure 1. Figure 1: Typical traveling wave equation Next, we plot the numerical solution for (2.8) and compare it with the traveling wave solution in Figure 2. The solution appears to converge to the traveling wave solution as \(t\) goes to infinity, as the theoretical result indicates. To check this numerically, we compute the solution for large time \(t\) and plot the error between the solution and the traveling wave solution. The result is shown in Figure 3. The error is defined as the supremum norm of the difference between the traveling wave solution \(K\) and the value function at time \(t\). We see that the error is monotone decreasing with respect to \(t\). The final error is about \(3.6\times 10^{-3}\) at time \(t=1500\). Finally, we plot the default and transit boundaries as functions of \(t\) and compare them with those of the traveling wave solution. The result is shown in Figure 4. It is clear that the boundaries are decreasing with respect to \(t\), which is consistent with our previous theoretical analysis. We also see the convergence of the two boundaries. Figure 2: Solutions of the free boundary problem at time \(t=0,50,100,150\). Figure 3: Differences between the free-boundary problem and traveling wave equation.
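As a complement to Algorithm 1, the sketch below indicates one possible implementation of the Newton step (7.2) on top of the operator assembly sketched in Section 7.1. It is illustrative only: the function name `time_step`, the number of time steps taken, and the sign conventions are ours. With the penalty kept on the left-hand side, the linearization of (7.2) reads \((A+D)U^{k}=b+D\,e^{\xi}\), where \(D\) is the diagonal matrix \(\varepsilon^{-1}\mathrm{diag}(1_{\{U^{k-1}>e^{\xi}\}})\).

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def time_step(U_prev, tol=1e-4, max_iter=50):
    """One explicit-implicit step of Algorithm 1 (illustrative sketch).

    Solves A @ U = b - (U - e^xi)^+ / eps by the Newton iteration (7.2):
    the indicator of {U^{k-1} > e^xi} linearizes the penalty term."""
    A, b = assemble_system(U_prev)          # helper from the sketch in Section 7.1
    payoff = np.exp(xi)
    U = U_prev.copy()                       # initial guess U^0 = U_{i-1,.}
    for _ in range(max_iter):
        ind = (U > payoff).astype(float)    # active set of the penalty
        ind[0] = ind[-1] = 0.0              # keep the Dirichlet rows intact
        D = sp.diags(ind / eps)
        U_new = spla.spsolve((A + D).tocsr(), b + D @ payoff)
        err = np.max(np.abs(U_new - U)) / max(1.0, np.max(np.abs(U)))
        U = U_new
        if err < tol:
            break
    return U

# Time marching from the initial data u(xi, 0) = min(1, e^xi).
U = np.minimum(1.0, np.exp(xi))
for i in range(1, 101):                     # a few steps of size dt, for illustration
    U = time_step(U)
```

The stopping rule mirrors the relative \(\ell^{\infty}\) criterion of Algorithm 1.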
2303.06883
The mod 2 Seiberg-Witten invariants of spin structures and spin families
We completely determine the mod $2$ Seiberg-Witten invariants for any spin structure on any closed, oriented, smooth $4$-manifold $X$. Our computation confirms the validity of the simple type conjecture mod $2$ for spin structures. Our proof also works for families of spin $4$-manifolds and thus computes the mod $2$ Seiberg-Witten invariants for spin families. The proof of our main result uses $Pin(2)$-symmetry to define an enhancement of the mod $2$ Seiberg-Witten invariants. We prove a connected sum formula for the enhanced invariant using localisation in equivariant cohomology. Unlike the usual Seiberg-Witten invariant, the enhanced invariant does not vanish on taking connected sums and by exploiting this property, we are able to compute the enhanced invariant.
David Baraglia
2023-03-13T05:57:43Z
http://arxiv.org/abs/2303.06883v2
# The mod 2 Seiberg-Witten invariants of spin structures and spin families ###### Abstract. We completely determine the mod 2 Seiberg-Witten invariants for any spin structure on any closed, oriented, smooth 4-manifold \(X\). As a consequence it is shown that they depend only on the Betti numbers, signature and 4-fold cup products of elements of \(H^{1}(X)\). Our computation confirms the validity of the simple type conjecture mod 2 for spin structures. Our proof also works for families of spin 4-manifolds and thus computes the mod 2 Seiberg-Witten invariants for spin families. The proof of our main result uses \(Pin(2)\)-symmetry to define an enhancement of the mod 2 Seiberg-Witten invariants. We prove a connected sum formula for the enhanced invariant using localisation in equivariant cohomology. Unlike the usual Seiberg-Witten invariant, the enhanced invariant does not vanish on taking connected sums and by exploiting this property, we are able to compute the enhanced invariant. ## 1. Introduction The Seiberg-Witten invariant is an invariant of smooth 4-manifolds which has proven to be very effective in distinguishing smooth structures on homeomorphic 4-manifolds. In contrast it has been observed that in a number of cases, the mod 2 Seiberg-Witten invariant for spin-structures is a topological invariant [20, 25, 6, 18] and a similar "rigidity" phenomenon occurs for the families Seiberg-Witten invariants [15]. Thus the mod 2 Seiberg-Witten invariants of spin structures can not be used to distinguish smooth structures. Nevertheless, there are many good reasons for wanting to compute these invariants. Their vanishing can be used to obstruct the existence of symplectic structures. Their non-vanishing can be used to obstruct the existence of positive scalar curvature metrics and gives a lower bound for the genus of embedded surfaces through the adjunction inequality. Furthermore, using various operations such as blowup, rational blowdown [9] and Fintushel-Stern knot surgery [10], we can also calculate the mod 2 Seiberg-Witten invariants for various spin\({}^{c}\)-structures that are not spin. In this paper we completely determine the mod 2 Seiberg-Witten invariants for any spin structure and also the mod 2 families Seiberg-Witten invariants for spin families (under some mild assumptions on the family). Some consequences of these results and some related results are examined. ### Main results Let \(X\) be a closed, oriented, smooth 4-manifold with \(b_{+}(X)>1\) and \(\mathfrak{s}\) a spin\({}^{c}\)-structure. The mod 2 Seiberg-Witten invariant is a homomorphism \[SW_{X,\mathfrak{s}}:\mathbb{A}(X)\to\mathbb{Z}_{2}\] where \(\mathbb{A}(X)=\wedge^{*}H^{1}(X;\mathbb{Z})^{*}\otimes_{\mathbb{Z}}\mathbb{Z}[x]\) is the tensor product of the exterior algebra on \(H^{1}(X;\mathbb{Z})^{*}=Hom(H^{1}(X;\mathbb{Z}),\mathbb{Z})\) with the polynomial ring \(\mathbb{Z}[x]\cong H^{*}_{S^{1}}(pt;\mathbb{Z})\) on a single generator \(x\) (see eg. [23]). The Seiberg-Witten invariant is also defined when \(b_{+}(X)=1\) but depends in addition on the choice of a chamber. For our purposes it is convenient to recast this in the following form. Let \(Pic^{\mathfrak{s}}(X)\) denote the space of gauge equivalence classes of spin\({}^{c}\)-connections with curvature equal to a fixed \(2\)-form representing \(-2\pi ic(\mathfrak{s})\). 
This is a torsor over \(Pic(X)\), the group of flat unitary line bundles on \(X\) and hence there are canonical isomorphisms \[H^{*}(Pic^{\mathfrak{s}}(X);\mathbb{Z}_{2})\cong H^{*}(Pic(X);\mathbb{Z}_{2})\cong\wedge^{*}H^{1}(X;\mathbb{Z})^{*}\otimes_{\mathbb{Z}}\mathbb{Z}_{2}.\] So \(SW_{X,\mathfrak{s}}\) can be viewed as a map \(H^{*}(Pic^{\mathfrak{s}}(X);\mathbb{Z}_{2})\otimes_{\mathbb{Z}_{2}}H^{*}_{S^{1}}(pt;\mathbb{Z}_{2})\to\mathbb{Z}_{2}\), or as a map \(H^{*}_{S^{1}}(pt;\mathbb{Z}_{2})\to H^{*}(Pic^{\mathfrak{s}}(X);\mathbb{Z}_{2})^{*}\). By Poincare duality, \(H^{*}(Pic^{\mathfrak{s}}(X);\mathbb{Z}_{2})^{*}\cong H^{*}(Pic^{\mathfrak{s}}(X);\mathbb{Z}_{2})\), so \(SW_{X,\mathfrak{s}}\) is equivalent to a map \[SW_{X,\mathfrak{s}}:H^{*}_{S^{1}}(pt;\mathbb{Z}_{2})\to H^{*}(Pic^{\mathfrak{s}}(X);\mathbb{Z}_{2}).\] This map has degree \(-d(X,\mathfrak{s})\), where we set \[d(X,\mathfrak{s})=\frac{c(\mathfrak{s})^{2}-\sigma(X)}{4}-b_{+}(X)-1.\] This is the form of the Seiberg-Witten invariants that most naturally emerges from the Bauer-Furuta invariant and it is the form that we will use throughout this paper. Since \(H^{*}_{S^{1}}(pt;\mathbb{Z}_{2})\cong\mathbb{Z}_{2}[x]\), \(SW_{X,\mathfrak{s}}\) is determined by the classes \[SW_{X,\mathfrak{s}}(x^{m})\in H^{2m-d(X,\mathfrak{s})}(Pic^{\mathfrak{s}}(X);\mathbb{Z}_{2})\] where \(m\geq 0\). Let \(D_{\mathfrak{s}}\to Pic^{\mathfrak{s}}(X)\) denote the families index of the family of spin\({}^{c}\) Dirac operators parametrised by \(Pic^{\mathfrak{s}}(X)\). Let \(s_{j}(D_{\mathfrak{s}})\in H^{2j}(Pic^{\mathfrak{s}}(X);\mathbb{Z})\) denote the \(j\)-th Segre class of \(D_{\mathfrak{s}}\) (recall that the total Segre class \(s(V)=1+s_{1}(V)+s_{2}(V)+\cdots\) of a virtual bundle is defined by \(c(V)s(V)=1\), where \(c(V)=1+c_{1}(V)+c_{2}(V)+\cdots\) is the total Chern class). From the families index theorem (see Section 5 for details), it follows that \(s_{j}(D_{\mathfrak{s}})=0\) for odd \(j\) and \[s_{2j}(D_{\mathfrak{s}})=\frac{1}{j!}s_{2}(D_{\mathfrak{s}})^{j},\] where \[s_{2}(D_{\mathfrak{s}})=\sum_{i_{1}<i_{2}<i_{3}<i_{4}}\langle y_{i_{1}}y_{i_{2}}y_{i_{3}}y_{i_{4}},[X]\rangle x_{i_{1}}x_{i_{2}}x_{i_{3}}x_{i_{4}}. \tag{1.1}\] Here \(y_{1},\ldots,y_{b_{1}(X)}\) is a basis for \(H^{1}(X;\mathbb{Z})\) and \(x_{1},\ldots,x_{b_{1}(X)}\) is a corresponding dual basis for \(H^{1}(Pic^{\mathfrak{s}}(X);\mathbb{Z})\cong H^{1}(X;\mathbb{Z})^{*}\). Our first main result is a complete formula for \(SW_{X,\mathfrak{s}}\) for spin structures: **Theorem 1.1**.: _Let \(X\) be a compact, oriented, smooth \(4\)-manifold with \(b_{+}(X)>0\) and let \(\mathfrak{s}\) be a spin-structure on \(X\). If \(b_{+}(X)\neq 3\), then \(SW_{X,\mathfrak{s}}(x^{m})=0\) for all \(m\geq 0\). If \(b_{+}(X)=3\), then \(SW_{X,\mathfrak{s}}(x^{m})=0\) for all \(m>0\) and_ \[SW_{X,\mathfrak{s}}(1)=s_{2+\sigma(X)/8}(D_{\mathfrak{s}})\;(\mathrm{mod}\;2)\] _where we set \(s_{l}(D_{\mathfrak{s}})=0\) if \(l<0\)._ Recall that if \(X\) is a compact, oriented, smooth spin \(4\)-manifold with \(b_{+}(X)=3\), then \(\sigma(X)=0\) or \(-16\). When \(\sigma(X)=-16\), Theorem 1.1 gives \(SW_{X,\mathfrak{s}}(1)=1\ (\mathrm{mod}\ 2)\). The \(b_{1}(X)=0\) case of this result was proven by Morgan and Szabo [20]. When \(\sigma(X)=0\), Theorem 1.1 gives \(SW_{X,\mathfrak{s}}(1)=s_{2}(D_{\mathfrak{s}})\ (\mathrm{mod}\ 2)\). The \(b_{1}(X)=4\) case of this result was proven by Ruberman and Strle [25]. Other special cases of Theorem 1.1 were proven by Bauer [6] and Li [18].
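As a quick sanity check on the grading in Theorem 1.1 (this degree bookkeeping is added here for convenience and uses only the definitions above): for a spin structure \(c(\mathfrak{s})=0\), so \[d(X,\mathfrak{s})=-\frac{\sigma(X)}{4}-b_{+}(X)-1,\qquad SW_{X,\mathfrak{s}}(1)\in H^{-d(X,\mathfrak{s})}(Pic^{\mathfrak{s}}(X);\mathbb{Z}_{2}).\] When \(b_{+}(X)=3\) and \(\sigma(X)=-16\) this gives \(d(X,\mathfrak{s})=4-3-1=0\), so \(SW_{X,\mathfrak{s}}(1)\) is a degree-zero class and indeed \(s_{2+\sigma(X)/8}=s_{0}=1\). When \(b_{+}(X)=3\) and \(\sigma(X)=0\) it gives \(d(X,\mathfrak{s})=-4\), so \(SW_{X,\mathfrak{s}}(1)\) lies in \(H^{4}\), matching the degree of \(s_{2}(D_{\mathfrak{s}})\).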
In all cases where \(SW_{X,\mathfrak{s}}(x^{m})\) is not \(0\) or \(1\), it equals \(s_{2}(D_{\mathfrak{s}})\), which in light of Equation (1.1) can be expressed in terms of the \(4\)-fold cup product of elements of \(H^{1}(X;\mathbb{Z})\). So we obtain: **Corollary 1.2**.: _The mod \(2\) Seiberg-Witten invariants of a spin structure \(\mathfrak{s}\) on a compact, oriented, smooth \(4\)-manifold \(X\) depend only on the Betti numbers and signature of \(X\) and the \(4\)-fold cup product of elements of \(H^{1}(X;\mathbb{Z})\)._ For \(4\)-manifolds with \(b_{1}(X)=0\), there is only one possible value of \(m\) for which \(SW_{X,\mathfrak{s}}(x^{m})\) can be non-zero, which is \(m=d(X,\mathfrak{s})/2\), provided \(d(X,\mathfrak{s})\) is even and non-negative. When \(b_{1}(X)=0\), \(d(X,\mathfrak{s})\) is the dimension of the moduli space of solutions to the Seiberg-Witten equations, so \(m\) is half the dimension. In all cases where the Seiberg-Witten invariants have been computed and \(b_{+}(X)>1\), we have \(SW_{X,\mathfrak{s}}(x^{m})=0\) unless \(m=0\), that is, unless the dimension of the moduli space is zero. This has become known as the _simple type conjecture_. One can also formulate a simple type conjecture for \(4\)-manifolds with \(b_{1}(X)>0\). In this case one usually considers only the "pure" Seiberg-Witten invariant \[SW(X,\mathfrak{s})=\int_{Pic^{\mathfrak{s}}(X)}SW_{X,\mathfrak{s}}(x^{m})\in\mathbb{Z},\] where \(2m=d(X,\mathfrak{s})+b_{1}(X)\) is the dimension of the moduli space. The simple type conjecture then states that \(SW(X,\mathfrak{s})=0\) unless \(m=0\). From Theorem 1.1, we immediately see that the mod \(2\) simple type conjecture is true for spin structures: **Corollary 1.3**.: _Let \(X\) be a compact, oriented, smooth \(4\)-manifold with \(b_{+}(X)>0\) and let \(\mathfrak{s}\) be a spin structure on \(X\). Then \(SW_{X,\mathfrak{s}}(x^{m})=0\ (\mathrm{mod}\ 2)\), unless \(m=0\). In particular the mod \(2\) simple type conjecture holds for spin structures._ One application of non-vanishing Seiberg-Witten invariants is the adjunction inequality: **Theorem 1.4**.: _Let \(X\) be a compact, oriented, smooth, spin \(4\)-manifold with \(b_{+}(X)=3\) and \(\sigma(X)=0\). Suppose that the \(4\)-fold cup product \(\langle y_{1}y_{2}y_{3}y_{4},[X]\rangle\) is odd for some \(y_{1},y_{2},y_{3},y_{4}\in H^{1}(X;\mathbb{Z})\). Let \(\Sigma\subset X\) be a smooth, compact, oriented surface embedded in \(X\) and representing a non-torsion class \(a\in H^{2}(X;\mathbb{Z})\). Then the genus \(g\) of \(\Sigma\) satisfies_ \[2g-2\geq|a^{2}|.\] Proof.: By Theorem 1.1, the Seiberg-Witten invariant of any spin structure on \(X\) is non-zero (for either orientation of \(X\)). We choose the orientation on \(X\) for which the self-intersection of \(\Sigma\) is non-negative (and thus equals \(|a^{2}|\)). The adjunction inequality (eg. [16, Theorem 11]) gives \(2g-2\geq|a^{2}|\). Our techniques can also be used to compute the mod \(2\) families Seiberg-Witten invariants for spin families. Let \(B_{0}\) be a compact, smooth manifold and \(\pi_{E}:E\to B_{0}\) a smooth family of \(4\)-manifolds parametrised by \(B_{0}\). This means that \(E\) is a fibre bundle with fibres given by a compact, oriented smooth \(4\)-manifold \(X\), with transition functions given by orientation preserving diffeomorphisms of \(X\).
Suppose that \(\mathfrak{s}_{E}\) is a spin\({}^{c}\)-structure on the vertical tangent bundle \(T(E/B_{0})=Ker((\pi_{E})_{*}:TE\to TB_{0})\). One can consider the Seiberg-Witten equations of the family \(E\) with respect to the spin\({}^{c}\)-structure \(\mathfrak{s}_{E}\) [1, 2, 17, 19, 21, 24, 26]. Let \(B=Pic^{\mathfrak{s}_{E}}(E/B_{0})\) denote the space of gauge equivalence classes of spin\({}^{c}\)-connections on the fibres of \(E\). This is a torus bundle over \(B_{0}\) whose fibre over \(b\in B_{0}\) is \(Pic^{\mathfrak{s}_{E}|_{X_{b}}}(X_{b})\), where \(X_{b}=\pi_{E}^{-1}(b)\) is the fibre of \(E\) over \(b\). If \(b_{1}(X)>0\), then for technical reasons we also need to assume there exists a section \(s:B_{0}\to E\) [2, Example 2.4]. A chamber for the family \(E\to B_{0}\) is defined to be a homotopy class of non-vanishing section \(\phi:B\to H^{+}\), where \(H^{+}\to B\) is the pullback to \(B\) of the fibre bundle on \(B_{0}\) whose fibre over \(b\in B_{0}\) is \(H^{+}(X_{b})\), the space of harmonic self-dual \(2\)-forms (with respect to some given family of metrics on the fibres of \(E\)). Then we obtain the mod \(2\) families Seiberg-Witten invariant which is a homomorphism \[SW^{\phi}_{E,\mathfrak{s}_{E}}:H^{*}_{S^{1}}(pt;\mathbb{Z}_{2})\to H^{*}(B;\mathbb{Z}_{2})\] of degree \(-d(X,\mathfrak{s})\), where \(X\) is a fibre of \(E\) and \(\mathfrak{s}\) is the restriction of \(\mathfrak{s}_{E}\) to \(X\). The following result completely determines the mod \(2\) families Seiberg-Witten invariants of spin families for even powers of \(x\), under some mild assumptions on the family. The general result, Theorem 6.7, is slightly complicated to state so here we give the result only for \(b_{1}(X)=0\). **Theorem 1.5**.: _Let \(E\to B_{0}\) be a spin family and suppose that \(b_{1}(X)=0\). Then for any chamber \(\phi\) we have_ \[SW^{\phi}_{E,\mathfrak{s}_{E}}(x^{2m})=w_{b_{+}(X)-3}(H^{+}(X))s_{2(m+1+\sigma(X)/16)}(D_{\mathfrak{s}_{E}}).\] In particular, Theorem 1.5 implies that the mod \(2\) invariants \(SW^{\phi}_{E,\mathfrak{s}_{E}}(x^{2m})\) depend only on \(b_{+}(X),\sigma(X)\) and the \(K\)-theory classes \([H^{+}]\in KO^{0}(B)\), \([D_{\mathfrak{s}_{E}}]\in K^{0}(B)\). This recovers and generalises the rigidity theorem of Kato-Konno-Nakamura [15], which deals with the case \(b_{1}(X)=0\), \(m=0\), \(b_{+}(X)\geq dim(B)+2\). In the course of proving Theorem 1.5, we also obtain some constraints that spin families must satisfy. We state the result here only for the case \(b_{1}(X)=0\). **Theorem 1.6**.: _Let \(E\to B_{0}\) be a spin family and suppose that \(b_{1}(X)=0\). Then_ \[w_{l}(H^{+}(X))s_{2(j+1+\sigma(X)/16)}(D)=0\] _for all \(j\geq 0\) and \(b_{+}(X)-2\leq l\leq b_{+}(X)\)._ ### Outline of the proof of the main results We give an outline of the main steps in our computation of the Seiberg-Witten invariants: 1. For spin structures, the Seiberg-Witten equations possess an additional symmetry \(j\) known as _charge conjugation_. Since \(j^{2}=-1\), no irreducible solution to the Seiberg-Witten equations can be fixed by \(j\). If we could choose a \(j\)-invariant perturbation for which the Seiberg-Witten moduli space is smooth, then \(j\) could be used to pair off solutions, giving a mod \(2\) vanishing result. 2. Unfortunately, such a simple approach does not work as there are no non-zero \(j\)-invariant perturbations. One of the key ideas in this paper is to instead consider a \(j\)-invariant _family_ of perturbations.
Such families exist, in fact we can take the parameter space of the family to be the unit sphere \(S(H^{+}(X))\) in \(H^{+}(X)\) with \(j\) acting as the antipodal map on \(S(H^{+}(X))\). 3. In light of point (2), it is possible to keep track of charge conjugation symmetry, but the price to pay is that we must now consider the Seiberg-Witten equations for a family. More precisely, we construct an enhancement of the usual Seiberg-Witten invariant \(SW_{X,\mathfrak{s}}\) which takes the form of a map \[SW_{X,\mathfrak{s}}^{Pin(2)}:H_{Pin(2)}^{*}(pt;\mathbb{Z}_{2})\to H_{\mathbb{Z}_{2}}^{*-d(X,\mathfrak{s})}(Pic^{\mathfrak{s}}(X)\times S(H^{+}(X));\mathbb{Z}_{2}).\] Moreover \(SW_{X,\mathfrak{s}}^{\phi}\) can be recovered from \(SW_{X,\mathfrak{s}}^{Pin(2)}\) in that we have a commutative square of the form given in (3.1) below. 4. Now remains the task of computing \(SW_{X,\mathfrak{s}}^{Pin(2)}\). At first this might seem no easier than computing \(SW_{X,\mathfrak{s}}\). However, it turns out that \(SW_{X,\mathfrak{s}}^{Pin(2)}\), unlike the ordinary Seiberg-Witten invariants, behaves well under taking connected sums. Exploiting this property of \(SW_{X,\mathfrak{s}}^{Pin(2)}\) allows us to compute it, and in turn to compute \(SW_{X,\mathfrak{s}}\). 5. We need to prove a connected sum formula for \(SW_{X,\mathfrak{s}}^{Pin(2)}\). This is difficult if one chooses to work directly with moduli spaces, so instead we work throughout this paper with the Bauer-Furuta cohomotopy refinement of the Seiberg-Witten invariants. For spin structures, the Bauer-Furuta invariant has \(Pin(2)\)-symmetry and our enhanced Seiberg-Witten invariant \(SW_{X,\mathfrak{s}}^{Pin(2)}\) can be recovered from the \(Pin(2)\) Bauer-Furuta invariant. 6. The Bauer-Furuta invariant of a connected sum \(X\#Y\) is the smash product of the Bauer-Furuta invariants for \(X\) and \(Y\). Since the Bauer-Furuta invariants of \(X\) and \(Y\) are both equivariant, their smash product has \(S^{1}\times S^{1}\)-symmetry, or \(Pin(2)\times Pin(2)\) in the spin case. The usual \(S^{1}\) or \(Pin(2)\) symmetry group is obtained by restricting to the diagonal subgroup. However, it is beneficial to retain the larger symmetry group. Another key idea of this paper is to use localisation in equivariant cohomology with respect to the additional circle group of symmetry. This leads to a product formula for \(SW_{X\#Y,\mathfrak{s}_{X}\#\mathfrak{s}_{Y}}^{Pin(2)}\) which ultimately allows us to compute \(SW_{X,\mathfrak{s}}^{Pin(2)}\) and \(SW_{X,\mathfrak{s}}\). ### Structure of the paper In Section 2 we recall the Bauer-Furuta invariant and how the Seiberg-Witten invariants of a 4-manifold or a family of 4-manifolds can be recovered from the Bauer-Furuta invariant. This leads us to consider more generally Seiberg-Witten type invariants associated to any \(S^{1}\)-equivariant cohomotopy class. In Section 3 we consider the Seiberg-Witten invariants in the case of spin structures. In this case there is an additional symmetry, charge conjugation, which leads us to consider \(Pin(2)\)-equivariant cohomotopy classes. We will see that in the \(Pin(2)\)-equivariant case, the mod \(2\) Seiberg-Witten invariants admit an enhancement that we call the \(Pin(2)\)-equivariant Seiberg-Witten invariants. In Section 4, we consider the Seiberg-Witten invariants or their \(Pin(2)\)-equivariant enhancement for the smash product of two cohomotopy classes.
Such a cohomotopy class has an additional circle of symmetry and by applying localisation in equivariant cohomology with respect to this extra symmetry, we arrive at a product formula for the Seiberg-Witten invariants or their \(Pin(2)\)-equivariant enhancement of a smash product. In Section 5 we apply the product formula to arrive at a formula for the \(Pin(2)\)-equivariant Seiberg-Witten invariants of spin \(4\)-manifolds. From this we obtain the mod \(2\) Seiberg-Witten invariants of any spin structure. Finally, in Section 6 we use the same approach to compute the Seiberg-Witten invariants for spin families. ### Acknowledgements We thank Hokuto Konno for comments on a draft of this paper. ## 2. Monopole maps and Seiberg-Witten invariants In this section we recall how the Seiberg-Witten invariants of a \(4\)-manifold can be recovered from the Bauer-Furuta cohomotopy refinement. We will use a more general framework that is suitable for contructing the Seiberg-Witten invariants for families of \(4\)-manifolds. We will be concerned with maps of sphere bundles over a base space \(B\) which is assumed to be a compact manifold. If \(V\) is a complex vector bundle over \(B\) and \(U\) a real vector bundle, we let \(S^{V,U}\) denote the fibrewise compactification of \(V\oplus U\), or equivalently the unit sphere bundle of \(V\oplus U\oplus\mathbb{R}\). We let \(S^{1}\) act on \(V\) by scalar multiplication and trivially on \(U\). This determines an \(S^{1}\)-action on \(S^{V,U}\). We let \(s_{V,U}:B\to S^{V,U}\) denote the section at infinity. Consider an \(S^{1}\)-equivariant map of sphere bundles \[f:S^{V,U}\to S^{V^{\prime},U^{\prime}},\] where \(V,V^{\prime}\) are complex vector bundles on \(B\) and \(U,U^{\prime}\) are real vector bundles. Assume that \(f\) sends \(s_{V,U}\) to \(s_{V^{\prime},U^{\prime}}\) and that the restriction of \(f\) to \(S^{U}\) is homotopic to the map \(S^{U}\to S^{U^{\prime}}\) induced by an inclusion \(U\to U^{\prime}\) of vector bundles. We will refer to such a map of sphere bundles as a _monopole map_. Suppose that \(X\) is a compact, oriented, smooth \(4\)-manifold and that \(\mathfrak{s}\) is a spin\({}^{c}\)-structure on \(X\). By taking a finite dimensional approximation of the Seiberg-Witten equations as in [4], we obtain a monopole map \(f:S^{V,U}\to S^{V^{\prime},U^{\prime}}\) over \(B=Pic^{\mathfrak{s}}(X)\), the space of gauge equivalence classes of spin\({}^{c}\)-connections with curvature equal to a fixed \(2\)-form representing \(-2\pi ic(\mathfrak{s})\). This is a torsor over \(Pic(X)\), the group of flat unitary line bundles on \(X\) and hence is a torus of dimension \(b_{1}(X)\). The Bauer-Furuta invariant of \((X,\mathfrak{s})\) is the (twisted, equivariant) stable cohomotopy class of \(f\). We introduce some notation associated to a monopole map \(f:S^{V,U}\to S^{V^{\prime},U^{\prime}}\). Let \(a,a^{\prime}\) denote the complex ranks of \(V,V^{\prime}\) and let \(b,b^{\prime}\) denote the real ranks of \(U,U^{\prime}\). Further, set \(d=a-a^{\prime}\) and \(b_{+}=b^{\prime}-b\). Let \(D\) denote the virtual vector bundle \(D=V-V^{\prime}\) and let \(H^{+}=U^{\prime}-U\). Since \(f|_{U}\) is homotopy equivalent to an inclusion, we have an isomorphism \(H^{+}\cong U^{\prime}/U\), in particular \(H^{+}\) is a genuine vector bundle. 
In the case that \(f\) is the Bauer-Furuta monopole map for a \(4\)-manifold \(X\) and \(\operatorname{spin}^{c}\)-structure \(\mathfrak{s}\), \(D\to Pic^{\mathfrak{s}}\) is the families index of the family of Dirac operators parametrised by \(Pic^{\mathfrak{s}}(X)\) and \(H^{+}\) is the trivial bundle with fibre \(H^{+}(X)\), the space of harmonic self-dual \(2\)-forms on \(X\) for some Riemannian metric. In [2], we constructed cohomological invariants associated to a monopole map \(f:S^{V,U}\to S^{V^{\prime},U^{\prime}}\), which in the case of the Bauer-Furuta monople map for \((X,\mathfrak{s})\) recovers the Seiberg-Witten invariant. More generally, for a family of \(4\)-manifolds this procedure recovers the families Seiberg-Witten invariants. We recall the construction of these invariants. Since our interest is in the mod \(2\) Seiberg-Witten invariants, we will work throughout with \(\mathbb{Z}_{2}\)-coefficients. This has the benefit that we do not have to consider orientations. Since the restriction of \(f\) to \(S^{U}\) is homotopic to an inclusion, we can identify \(U\) with a subbundle of \(U^{\prime}\) and we can further assume that \(f|_{S^{U}}\) is given by the inclusion map. The cohomological invariants of \(f\) depend on a choice of a _chamber_ for \(f\), which by definition is a homotopy class of section \(\phi:B\to U^{\prime}\setminus U\). Equivalently as chamber can be regarded as a homotopy class of non-vanishing section of \(H^{+}=U^{\prime}/U\). Such a chamber determines a lift \(\tau^{\phi}_{V^{\prime},U^{\prime}}\in H^{2a^{\prime}+b^{\prime}}_{S^{1}}(S^{ V^{\prime},U^{\prime}},S^{U})\) of the Thom class \(\tau_{V^{\prime},U^{\prime}}\in H^{2a^{\prime}+b^{\prime}}_{S^{1}}(S^{V^{ \prime},U^{\prime}},s_{V^{\prime},U^{\prime}})\) as follows. Let \(N_{\phi}\) denote an \(S^{1}\)-invariant tubular neighbourhood of \(\phi(B)\) in \(S^{V^{\prime},U^{\prime}}\). The Thom class \(\tau_{N_{\phi}}\) of \(N_{\phi}\to\phi(B)\) is valued in \(H^{2a^{\prime}+b^{\prime}}_{S^{1}}(N_{\phi},N_{\phi}\setminus\phi(B))\) which by excision is isomorphic to \(H^{2a^{\prime}+b^{\prime}}_{S^{1}}(S^{V^{\prime},U^{\prime}},S^{V^{\prime},U^ {\prime}}\setminus\phi(B))\). Then since \(\phi(B)\) is disjoint from \(S^{U}\) we can map \(\tau_{N_{\phi}}\) to a class in \(H^{2a^{\prime}+b^{\prime}}_{S^{1}}(S^{V^{\prime},U^{\prime}},S^{U})\), which is \(\tau^{\phi}_{V^{\prime},U^{\prime}}\). Pulling back the lifted Thom class by \(f\) gives \(f^{*}(\tau^{\phi}_{V^{\prime},U^{\prime}})\in H^{2a^{\prime}+b^{\prime}}_{S^{1 }}(S^{V,U},S^{U})\). Let \(N^{U}\) denote a tubular neighbourhood of \(S^{U}\) in \(S^{V,U}\) and let \(\widetilde{Y}^{V,U}=S^{V,U}\setminus N^{U}\) be the complement (alternatively one can construct \(\widetilde{Y}^{V,U}\) as the real blowup of \(S^{V,U}\) along \(S^{U}\)). Then \(\widetilde{Y}^{V,U}\) is a manifold with boundary. Furthermore, the \(S^{1}\)-action is free and we set \(Y^{V,U}=\widetilde{Y}^{V,U}/S^{1}\). By excision and homotopy invariance we have isomorphisms \[H^{*}_{S^{1}}(S^{V,U},S^{U})\cong H^{*}_{S^{1}}(S^{V,U},N^{U})\cong H^{*}_{S^ {1}}(\widetilde{Y}^{V,U},\partial\widetilde{Y}^{V,U})\cong H^{*}(Y^{V,U}, \partial Y^{V,U}).\] Hence we can regard \(f^{*}(\tau^{\phi}_{V,U})\) as an element of \(H^{2a^{\prime}+b^{\prime}}(Y^{V,U},\partial Y^{V,U})\). Let \(\pi_{V,U}:Y^{V,U}\to B\) be the projection map. 
We have a pushforward map \[(\pi_{V,U})_{*}:H^{*}(Y^{V,U},\partial Y^{V,U})\to H^{*-(2a+b-1)}(B),\] which is obtained from the corresponding pushforward map in homology using Poincare-Lefschetz duality. Now we define the _Seiberg-Witten invariant of \(f\) with respect to the chamber \(\phi\)_ to be the homomorphism \[SW^{\phi}_{f}:H^{*}_{S^{1}}(pt)\to H^{*-2d+b_{+}+1}(B)\] given by \[SW^{\phi}_{f}(\theta)=(\pi_{V,U})_{*}(\theta f^{*}(\tau^{\phi}_{V^{\prime},U^ {\prime}})).\] Sometimes it is useful to consider a slightly more general invariant \(SW^{\phi}_{f}:H^{*}_{S^{1}}(\widetilde{Y}^{V,U})\to H^{*-2d+b_{+}+1}(B)\) by allowing \(\theta\) to be an element of \(H^{*}_{S^{1}}(\widetilde{Y}^{V,U})\). However, this is not a stable invariant of \(f\) since the space \(\widetilde{Y}^{V,U}\) depends on \(V\) and \(U\) As a special case, we can take any element in \(H^{*}_{S^{1}}(B)\) and pull it back to \(H^{*}_{S^{1}}(\widetilde{Y}^{V,U})\) and in this case we do get a stable invariant. Recall that \(H^{*}_{S^{1}}(pt)\cong\mathbb{Z}_{2}[x]\), where \(deg(x)=2\). Therefore \(SW^{\phi}_{f}\) is completely determined by the collection of cohomology classes \(SW^{\phi}_{f}(x^{m})\in H^{2m-2d+b_{+}+1}(B)\), where \(m\geq 0\). In the case that \(f\) is the monopole map associated to a \(4\)-manifold \(X\) with spin\({}^{c}\)-structure \(\mathfrak{s}\), we have \(d=(c(\mathfrak{s})^{2}-\sigma(X))/8\), \(b_{+}=b_{+}(X)\) and \(B=Pic^{\mathfrak{s}}(X)\). Since \(Pic^{\mathfrak{s}}(X)\) is a torsor for \(T_{X}=H^{1}(X;\mathbb{R})/H^{1}(X;\mathbb{Z})\), we have a canonical isomorphism \(H^{*}(Pic^{\mathfrak{s}}(X))\cong H^{*}(T_{X})\). So the Seiberg-Witten invariant takes the form \(SW^{\phi}_{X,\mathfrak{s}}\) : \(H^{*}_{S^{1}}(pt)\to H^{*-2d+b_{+}+1}(T_{X})\) and is equivalent to the collection of cohomology classes \(SW^{\phi}_{f}(x^{m})\in H^{2m-2d+b_{+}+1}(T_{X})\). ## 3. Spin structures and \(Pin(2)\)-symmetry For a spin-structure \(\mathfrak{s}\), the Seiberg-Witten equations have an additional symmetry known as charge conjugation, which we denote by \(j\). The corresponding monopole map is \(Pin(2)\)-equivariant, where \(Pin(2)=S^{1}\cup jS^{1}\) with relations \(je^{i\theta}=e^{-i\theta}j\), \(j^{2}=-1\). This motivates us to consider \(Pin(2)\)-equivariant monopole maps more generally. Let \(B\) be a compact manifold. Assume that \(B\) is equipped with an involution \(\iota:B\to B\) and let \(Pin(2)\) act on \(B\), where \(S^{1}\subset Pin(2)\) acts trivially and \(j\) acts as \(\iota\). Let \(E\to B\) be a complex vector bundle on \(B\). Suppose that \(E\) is equipped with an antilinear endomorphism \(J:E\to E\) covering \(\iota\) and satisfying \(J^{2}=-1\). Then we make \(E\) into a \(Pin(2)\)-equivariant vector bundle over \(B\) by letting \(S^{1}\subset Pin(2)\) act by scalar multiplication and \(j\) act by \(J\). Let \(F\to B\) be a real vector bundle and suppose \(F\) is equipped with an involutive endomorphism \(J:F\to F\) covering \(\iota\). Then we make \(F\) into a \(Pin(2)\)-equivariant vector bundle over \(B\) by letting \(S^{1}\subset Pin(2)\) act trivially and let \(j\) act by \(J\). Consider now a \(Pin(2)\)-equivariant map \(f:S^{V,U}\to S^{V^{\prime},U^{\prime}}\), where \(V,V^{\prime}\) are complex vector bundles equipped with anti-linear endomorphisms covering \(\iota\) and squaring to \(-1\) and \(U,U^{\prime}\) are real vector bundles equipped with involutive endomorphisms covering \(\iota\). 
Assume that \(f\) sends \(s_{V,U}\) to \(s_{V^{\prime},U^{\prime}}\) and that \(f|_{S^{U}}\) is \(Pin(2)\)-homotopic to the map \(S^{U}\to S^{U^{\prime}}\) induced by an inclusion of vector bundles \(U\to U^{\prime}\). We will refer to such a map as a \(Pin(2)\)_-equivariant monopole map_. In the case that \(f\) is the monopole map for a \(4\)-manifold \(X\) and spin-structure \(\mathfrak{s}\), recall that \(B=Pic^{\mathfrak{s}}(X)\). A spin connection defines an origin in \(Pic^{\mathfrak{s}}(X)\) and hence gives an identification \(Pic^{\mathfrak{s}}(X)\cong H^{1}(X;\mathbb{R})/H^{1}(X;\mathbb{Z})\). The involution \(\iota:B\to B\) acts on \(H^{1}(X;\mathbb{R})/H^{1}(X;\mathbb{Z})\) as \(-1\) (i.e. the inverse map of the group structure). Furthermore, \(U,U^{\prime}\) are trivial vector bundles over \(B\) and \(j\) is the map \(-\iota^{*}\), that is, a combination of pullback by \(\iota\) and multiplication by \(-1\). The construction of cohomological invariants given in Section 2 can be repeated for \(Pin(2)\)-monopole maps but now keeping track of the additional symmetry. A chamber in the \(Pin(2)\) sense is a \(Pin(2)\)-equivariant homotopy class \(\phi:B\to U^{\prime}\setminus U\). This determines a lifted Thom class \(\tau^{\phi}_{V^{\prime},U^{\prime}}\in H^{2a^{\prime}+b^{\prime}}_{Pin(2)}(S^ {V^{\prime},U^{\prime}},S^{U})\) which pulls back to \(f^{*}(\tau^{\phi}_{V^{\prime},U^{\prime}})\in H^{2a^{\prime}+b^{\prime}}_{Pin(2)}(S^ {V,U},S^{U})\). As before we have isomorphisms \[H^{*}_{Pin(2)}(S^{V,U},S^{U})\cong H^{*}_{Pin(2)}(\widetilde{Y}^{V,U},\partial \widetilde{Y}^{V,U})\cong H^{*}_{\mathbb{Z}_{2}}(Y^{V,U},\partial Y^{V,U}).\] Now the projection map \(\pi_{V,U}:Y^{V,U}\to B\) is \(\mathbb{Z}_{2}\)-equivariant and hence it determines a push-forward map \((\pi_{V,U})_{*}:H^{*}_{\mathbb{Z}_{2}}(Y^{V,U},\partial Y^{V,U})\to H^{*-(2a+b- 1)}_{\mathbb{Z}_{2}}(B)\) in equivariant cohomology. This requires some justification since Poincare-Lefschetz duality does not hold in equivariant cohomology. Consider more generally a compact Lie group \(G\) and a \(G\)-equivariant fibre bundle \(\pi:E\to B\), where the fibre \(F\) is a compact \(n\)-manifold with boundary. The transition functions for the fibre bundle are homeomorphisms so they must send the boundary of \(F\) to itself. Hence we also have a fibre bundle \(\partial E\to B\) whose fibres are the boundaries of the fibres of \(E\). Replacing \(E\) and \(B\) by their Borel models \(E_{G}=E\times_{G}EG\), \(B_{G}=B\times_{G}BG\), we have a fibre bundle \(\pi:E_{G}\to B_{G}\), with fibre \(F\). Let \(\partial E_{G}\) denote the Borel model for \(\partial E\) and let \(E^{\prime}_{G}\) be the space obtained by collapsing the boundary of each fibre to a disctint point. This is a fibre bundle over \(B_{G}\) with fibre \(F/\partial F\). Consider the Leray-Serre spectral sequence \(E^{p,q}_{r}\) for \(E^{\prime}_{G}\to B_{G}\). Since \(H^{k}(F/\partial F)=0\) for \(k>n\), we have \(E^{p,q}_{r}=0\) for \(q>n\) and hence there is a map \[H^{m}(E^{\prime}_{G})\to E^{m-n,n}_{2}=H^{m-n}(B_{G};H^{n}(F,\partial F)).\] Here \(H^{n}(F,\partial F)\) is to be understood as a local system on \(B\). However, since we are working with \(\mathbb{Z}_{2}\)-coefficients and \(F\) is a compact \(n\)-manifold, we have \(H^{n}(F,\partial F)\cong\mathbb{Z}_{2}\), the trivial local system with coefficient group \(\mathbb{Z}_{2}\). So we have a well defined map \(H^{m}(E^{\prime}_{G})\to H^{m-n}(B_{G})\). 
From the definition of \(E^{\prime}_{G}\), there is a quotient map \(E^{\prime}_{G}\to E_{G}/\partial E_{G}\) and hence a pullback map \(H^{m}(E_{G},\partial E_{G})\to H^{m}(E^{\prime}_{G})\). Composing, we get a map \[\pi_{*}:H^{m}(E_{G},\partial E_{G})\to H^{m}(E^{\prime}_{G})\to H^{m-n}(B_{G})\] or equivalently, a map in equivariant cohomology \[\pi_{*}:H^{m}_{G}(E,\partial E)\to H^{m}_{G}(E)\to H^{m-n}_{G}(B).\] Thus, to any \(Pin(2)\)-monopole map \(f\) and chamber \(\phi\), we may define the \(Pin(2)\)-equivariant Seiberg-Witten invariant of \(f\) with respect to \(\phi\) to be the map \[SW^{\phi}_{Pin(2),f}:H^{*}_{Pin(2)}(pt)\to H^{*-2d+b_{+}+1}_{\mathbb{Z}_{2}}(B)\] given by \[SW^{\phi}_{Pin(2),f}(\theta)=(\pi_{V,U})_{*}(\theta f^{*}(\tau^{\phi}_{V^{ \prime},U^{\prime}})).\] Forgetting the additional symmetry recovers the usual Seiberg-Witten invariant in the sense that we have a commutative diagram (3.1) However there may be some loss of information in passing to the \(Pin(2)\)-equivariant Seiberg-Witten invariant as the map \(H^{*}_{Pin(2)}(pt)\to H^{*}_{S^{1}}(pt)\) is not surjective. In fact, we have \(H^{*}_{Pin(2)}(pt)\cong\mathbb{Z}_{2}[u,q]/(u^{3})\) where \(deg(u)=1\), \(deg(q)=4\) (eg. [3, SS5]) and the map \(H^{*}_{Pin(2)}(pt)\to H^{*}_{S^{1}}(pt)\) sends \(u\) to zero and \(q\) to \(x^{2}\). The map \(SW^{\phi}_{Pin(2),f}\) is a morphism of \(H^{*}_{\mathbb{Z}_{2}}(pt)\)-modules. Recall that \(H^{*}_{\mathbb{Z}_{2}}(pt)\cong\mathbb{Z}_{2}[u]\), where \(deg(u)=1\). So we have, \(SW^{\phi}_{Pin(2),f}(u\theta)=uSW^{\phi}_{Pin(2),f}(\theta)\). As in the \(S^{1}\)-equivariant case, it is sometimes convenient to regard the domain of \(SW^{\phi}_{Pin(2),f}\) to be either \(H^{*}_{Pin(2)}(\widetilde{Y}^{V,U})\) or \(H^{*}_{Pin(2)}(B)\) In the case that \(f\) is the Seiberg-Witten monopole map for a \(4\)-manifold \(X\) with spin-structure \(\mathfrak{s}\), we run into an immediate problem. There are no \(Pin(2)\)-equivariant chambers because the action of \(j\) on \(Pic^{\mathfrak{s}}(X)\) always has fixed points, while \(j\) acts on \(H^{+}(X)\) as \(-1\). So it would appear that we can not take advantage of the \(Pin(2)\)-symmetry in this situation. Fortunately there is a simple way to circumvent this difficulty, which we previously made use of in [1, SS9.4]. The idea is to replace \(Pic^{\mathfrak{s}}(X)\) with \(B_{X,\mathfrak{s}}=Pic^{\mathfrak{s}}(X)\times S(H^{+}(X))\), where \(S(H^{+}(X))\) is the unit sphere in \(H^{+}(X)\). We define \(\iota:B_{X,\mathfrak{s}}\to B_{X,\mathfrak{s}}\) to be the product of \(-1\) on \(Pic^{\mathfrak{s}}(X)\) with the antipodal map on \(S(H^{+}(X))\). Then we simply pullback the monopole map from \(Pic^{\mathfrak{s}}(X)\) to \(B_{X,\mathfrak{s}}\). Then we have a tautological chamber \(\phi^{taut}:B_{X,\mathfrak{s}}\to H^{+}(X)\setminus\{0\}\) which given by the projection \(B_{X,\mathfrak{s}}\to S(H^{+}(X))\), followed by the inclusion \(S(H^{+}(X))\to H^{+}(X)\setminus\{0\}\). Hence to any \(4\)-manifold \(X\) with \(b_{+}(X)>0\) and with spin structure \(\mathfrak{s}\), we obtain a \(Pin(2)\)-equivariant Seiberg-Witten invariant \[SW^{\phi^{taut}}_{Pin(2),f}:H^{*}_{Pin(2)}(pt)\to H^{*}_{\mathbb{Z}_{2}}(B_{X,\mathfrak{s}}).\] To simplify notation we write \(SW^{Pin(2)}_{X,\mathfrak{s}}\) for \(SW^{\phi^{taut}}_{Pin(2),f}\). Recall that \(H^{*}_{\mathbb{Z}_{2}}(pt)\cong\mathbb{Z}_{2}[u]\), where \(deg(u)=1\). 
**Proposition 3.1**.: _We have an isomorphism of \(\mathbb{Z}_{2}[u]\)-algebras:_ \[H^{*}_{\mathbb{Z}_{2}}(B_{X,\mathfrak{s}})\cong H^{*}(Pic^{\mathfrak{s}}(X)) [u]/(u^{b_{+}(X)}).\] Proof.: Set \(n=b_{+}(X)-1\). Since the antipodal map acts freely on \(S(H^{+}(X))\cong S^{n}\), it follows that \(\iota\) acts freely and that the quotient \(B_{X,\mathfrak{s}}/\langle\iota\rangle\) has the structure of a fibre bundle over \(\mathbb{RP}^{n}\) with fibre \(Pic^{\mathfrak{s}}(X)\). Furthermore, the distinguished point of \(Pic^{\mathfrak{s}}(X)\) determined by the spin-connection is fixed by \(-1\) and hence defines a section \(\mathbb{RP}^{n}\to B_{X,\mathfrak{s}}/\langle\iota\rangle\). Let \(E_{2}^{p,q}\) denote the Leray-Serre spectral sequence for \(p:B_{X,\mathfrak{s}}/\langle\iota\rangle\to\mathbb{RP}^{n}\). The existence of a section implies that \(p^{*}:H^{*}(\mathbb{RP}^{n})\to H^{*}(B_{X,\mathfrak{s}}/\langle\iota\rangle)\) is injective. Hence there are no differentials into \(E_{r}^{p,0}\) for any \(p\) or \(r\). This implies that the pullback map \(H^{1}(B_{X,\mathfrak{s}}/\langle\iota\rangle)\to H^{1}(Pic^{\mathfrak{s}}(X))\) is surjective. But \(Pic^{\mathfrak{s}}(X)\) is a torus so \(H^{*}(Pic^{\mathfrak{s}})\) is generated by \(H^{1}(Pic^{\mathfrak{s}}(X))\), hence the pullback map \(H^{k}(B_{X,\mathfrak{s}}/\langle\iota\rangle)\to H^{k}(Pic^{\mathfrak{s}}(X))\) is surjective for all \(k\). The result now follows from the Leray-Hirsch theorem, the fact that \(H^{*}(\mathbb{RP}^{n})\cong\mathbb{Z}_{2}[u]/(u^{n+1})\) and the isomorphism \(H^{*}_{\mathbb{Z}_{2}}(B_{X,\mathfrak{s}})\cong H^{*}(B_{X,\mathfrak{s}}/ \langle\iota\rangle)\). Let us write \(SW^{\mathbb{Z},\phi}_{X,\mathfrak{s}}\) to distinguish the integral Seiberg-Witten invariant from the mod \(2\) Seiberg-Witten invariant \(SW^{\phi}_{X,\mathfrak{s}}\). **Lemma 3.2**.: _If \(b_{+}(X)=1\) and \(\mathfrak{s}\) is a spin-structure, then \(SW^{\mathbb{Z},\phi}_{X,\mathfrak{s}}\) does not depend on the chamber \(\phi\)._ Proof.: Since \(X\) is spin and \(b_{+}(X)=1\), we must have \(b_{1}(X)=1\) (by Donaldson's Theorem B in the simply connected case [8], or the \(10/8\) inequality more generally [12]). The wall-crossing formula (eg. [2]) implies that \(SW^{\mathbb{Z},\phi}_{X,\mathfrak{s}}(x^{m})-SW^{\mathbb{Z},-\phi}_{X,\mathfrak{ s}}(x^{m})=\pm s_{m+1}(D)\), where \(s_{j}(D)\) denotes the \(j\)-th Segre class of the index bundle \(D\to Pic^{\mathfrak{s}}(X)\). Since \(b_{+}(X)=1\) and \(c(\mathfrak{s})=0\), the calculation in [2, SS5.3] shows that \(s_{j}(D)=0\) for all \(j>0\), hence the result follows. By Lemma 3.2, for a spin structure, \(SW^{\mathbb{Z},\phi}_{X,\mathfrak{s}}\) and \(SW^{\phi}_{X,\mathfrak{s}}\) do not depend on \(\phi\) even when \(b_{+}(X)=1\) and so we will denote these invariants as \(SW^{\mathbb{Z}}_{X,\mathfrak{s}}\) and \(SW_{X,\mathfrak{s}}\). **Lemma 3.3**.: _Let \(X\) be a compact, oriented, smooth \(4\)-manifold with \(b_{+}(X)>0\) and \(\mathfrak{s}\) a spin-structure. If \(m\) is odd then \(SW^{\mathbb{Z}}_{X,\mathfrak{s}}(x^{m})=0\). If \(m\) is even then \(SW_{X,\mathfrak{s}}(x^{m})=SW^{Pin(2)}_{X,\mathfrak{s}}(q^{m/2})|_{u=0}\), where for a class \(\alpha\in H^{*}(Pic^{\mathfrak{s}}(X))[u]/(u^{b_{+}(X)})\), \(\alpha|_{u=0}\) denotes the class in \(H^{*}(Pic^{\mathfrak{s}}(X))\) obtained from \(\alpha\) by setting \(u=0\)._ Proof.: Let \(\iota:Pic^{\mathfrak{s}}(X)\to Pic^{\mathfrak{s}}(X)\) denote the inversion map. 
The charge conjugation symmetry of the Seiberg-Witten equations implies that \(SW^{\mathbb{Z}}_{X,\mathfrak{s}}(x^{m})=(-1)^{\sigma}\iota^{*}SW^{\mathbb{Z}} _{X,\mathfrak{s}}(x^{m})\), where \(\sigma=m+d+b_{+}(X)+1\). However \(SW^{\mathbb{Z}}_{X,\mathfrak{s}}(x^{m})\) has degree \(2m-2d+b_{+}(X)+1\) so \(\iota^{*}\) acts as \((-1)^{2m-2d+b_{+}(X)+1}\). So the formula simplifies to \(SW^{\mathbb{Z}}_{X,\mathfrak{s}}(x^{m})=(-1)^{m+d}SW^{\mathbb{Z}}_{X,\mathfrak{ s}}(x^{m})\). Furthermore, \(d=-\sigma(X)/8\) is even as \(\sigma(X)\) is a multiple of \(16\). So \(SW^{\mathbb{Z}}_{X,\mathfrak{s}}(x^{m})=(-1)^{m}SW^{\mathbb{Z}}_{X,\mathfrak{s }}(x^{m})\) and hence \(SW^{\mathbb{Z}}_{X,\mathfrak{s}}(x^{m})=0\) if \(m\) is odd, as \(H^{*}(Pic^{\mathfrak{s}}(X);\mathbb{Z})\) has no torsion. Now suppose \(m\) is even. Let \(f:S^{V,U}\to S^{V^{\prime},U^{\prime}}\) be the Seiberg-Witten monopole map over \(Pic^{\mathfrak{s}}(X)\). Pull this back to \(B_{X,\mathfrak{s}}=Pic^{\mathfrak{s}}(X)\times S(H^{+}(X))\). Adapting the commutative diagram (3.1) to this setting, we have a commutative diagram Then since \(q^{m/2}\in H^{*}_{Pin(2)}(B_{X,\mathfrak{s}})\) gets sent to \(x^{m}\in H^{*}_{S^{1}}(Pic^{\mathfrak{s}}(X))\), commutativity of the diagram gives \(SW_{X,\mathfrak{s}}(x^{m})=SW^{Pin(2)}_{X,\mathfrak{s}}(q^{m/2})|_{u=0}\). By this lemma, the task of computing the mod \(2\) Seiberg-Witten invariants for spin structures is reduced to calculating \(SW^{Pin(2)}_{X,\mathfrak{s}}|_{u=0}\). In fact, we will compute the whole invariant \(SW^{Pin(2)}_{X,\mathfrak{s}}\), not just its evalution at \(u=0\). However, before carrying this out we already obtain a strong vanishing theorem which implies that \(SW_{X,\mathfrak{s}}\) for a spin structure is usually zero mod \(2\). **Theorem 3.4**.: _Let \(X\) be a compact, oriented, smooth \(4\)-manifold with \(b_{+}(X)>0\) and \(\mathfrak{s}\) a spin-structure. If \(b_{+}(X)>3\), then \(SW_{X,\mathfrak{s}}(\theta)=0\) for all \(\theta\in H^{*}_{S^{1}}(pt)\)._ Proof.: Assume that \(b_{+}(X)>3\). By Lemma 3.3, it suffices to show that \(SW^{Pin(2)}_{X,\mathfrak{s}}(q^{m})|_{u=0}=0\) for all \(m\geq 0\). Recall that \(SW^{Pin(2)}_{X,\mathfrak{s}}\) is a map \[SW^{Pin(2)}_{X,\mathfrak{s}}:H^{*}_{Pin(2)}(pt)\to H^{*-2d+b_{+}(X)+1}_{ \mathbb{Z}_{2}}(B_{X,\mathfrak{s}})\cong H^{*-2d+b_{+}(X)+1}(Pic^{\mathfrak{s} })[u]/(u^{b_{+}(X)}).\] Recall also that \(u^{3}=0\) in \(H^{*}_{Pin(2)}(pt)\). Hence \[u^{3}SW^{Pin(2)}_{X,\mathfrak{s}}(q^{m})=SW^{Pin(2)}_{X,\mathfrak{s}}(u^{3}\theta )=0.\] This means that \(SW^{Pin(2)}_{X,\mathfrak{s}}(q^{m})\) is divisible by \(u^{b_{+}(X)-3}\) and hence \(SW^{Pin(2)}_{X,\mathfrak{s}}(q^{m})|_{u=0}=0\). Theorem 3.4 is a generalisation of the main result of [6] (see also [18]), which corresponds to the case that \(-\sigma(X)/4-b_{+}(X)-1+b_{1}(X)=0\), or equivalently the case that the moduli space of the Seiberg-Witten equations is zero-dimensional. ## 4. A product formula for Seiberg-Witten invariants Suppose that we have two \(S^{1}\)-equivariant monopole maps \[f_{i}:S^{V_{i},U_{i}}\to S^{V^{\prime}_{i},U^{\prime}_{i}},\quad i=1,2\] over a common base \(B\). Let \(f=f_{1}\wedge_{B}f_{2}:S^{V,U}\to S^{V^{\prime},U^{\prime}}\) be the fibrewise smash product of \(f_{1}\) and \(f_{2}\), where \(V=V_{1}\oplus V_{2}\) etc. It is \(S^{1}\)-equivariant where \(S^{1}\) acts on both factors. Our goal in this Section is to compute \(SW_{f}\) in terms of \(SW_{f_{1}}\) and \(SW_{f_{2}}\). 
Let \(\phi:B\to H^{+}\setminus\{0\}\) be a chamber for \(f\). Since \(H^{+}=H^{+}_{1}\oplus H^{+}_{2}\) we can write \(\phi=(\phi_{1},\phi_{2})\), where \(\phi_{1}\) and \(\phi_{2}\) do not simultaneously vanish. Perturbing \(\phi\) slightly, we may assume that \(\phi_{1}\) and \(\phi_{2}\) meet the zero sections of \(H^{+}_{1},H^{+}_{2}\) transversally. Let \(Z_{1},Z_{2}\subset B\) be the zero loci of \(\phi_{1},\phi_{2}\). So \(Z_{1},Z_{2}\) are disjoint and \(Z_{i}\) is Poincare dual to the Euler class \(e(H^{+}_{i})\in H^{b^{+}_{i}}(B)\), where \(b^{+}_{i}\) denotes the rank of \(H^{+}_{i}\). The key observation is that the map \(f\) is \(S^{1}\times S^{1}\)-equivariant, where the \(i\)-th copy of \(S^{1}\) acts as scalar multiplication on \(V_{i}\) and \(V^{\prime}_{i}\). Keeping track of this extra symmetry, we will be able to compute \(SW^{\phi}_{f}\) using localisation in equivariant cohomology. Carrying out all constructions with respect to this larger group, we get a lifted Thom class \[\tau^{\phi}_{V,U}\in H^{*}_{S^{1}\times S^{1}}(S^{V^{\prime},U^{\prime}},S^{U }),\] which pulls back to \[f^{*}(\tau^{\phi}_{V,U})\in H^{*}_{S^{1}\times S^{1}}(\widetilde{Y}^{V,U}, \partial\widetilde{Y}^{V,U})\cong H^{*}_{S^{1}}(Y^{V,U},\partial^{V,U})\] where on the right, we have identified the quotient of \(S^{1}\times S^{1}\) by the diagonal subgroup \(\Delta S^{1}\) with \(S^{1}\) via \[(S^{1}\times S^{1})/\Delta S^{1}\cong S^{1},\quad(a,b)\mapsto ab^{-1}.\] The quotient map \(S^{1}\times S^{1}\to S^{1}\) is split surjective with splitting map \(S^{1}\to S^{1}\times S^{1}\) given by \(a\mapsto(a,1)\). Hence we can identify the quotient group with the subgroup \(S^{1}\times\{1\}\). The projection map \(\pi_{V,U}:Y^{V,U}\to B\) is \(S^{1}\)-equivariant, where \(S^{1}\) acts trivially on \(B\), hence, as explained in Section 3, defines a push-forward map \[(\pi_{V,U})_{*}:H^{*}_{S^{1}}(Y^{V,U},\partial Y^{V,U})\to H^{*-(2a+b-1)}_{S^ {1}}(B).\] It follows that we can define an enhancement of \(SW_{f}^{\phi}\) valued in \(S^{1}\)-equivariant cohomology: \[SW_{S^{1}\times S^{1},f}^{\phi}:H_{S^{1}\times S^{1}}^{*}(pt) \to H_{S^{1}}^{*-2d+b_{+}+1}(B),\] \[SW_{S^{1}\times S^{1},f}^{\phi}(\theta) =(\pi_{V,U})_{*}(\theta f^{*}(\tau_{V,U}^{\phi})).\] Furthermore, the map is compatible with \(SW_{f}^{\phi}\) in the sense that we have a commutative diagram where the vertical maps are the forgetful maps in equivariant cohomology obtained by restricting to the subgroups \(\Delta S^{1}\subset S^{1}\times S^{1}\) and \(\{1\}\subset S^{1}\). Moreover, since the map \(H_{S^{1}\times S^{1}}^{*}(pt)\to H_{S^{1}}^{*}(pt)\) is surjective, we see that \(SW_{S^{1}\times S^{1},f}^{\phi}\) completely determines \(SW_{f}^{\phi}\). Let us establish notation for various subgroups of \(S^{1}\times S^{1}\). Write \(S^{1}_{i}\) for the subgroup given by the \(i\)-th copy of \(S^{1}\) and \(\Delta S^{1}\) for the diagonal copy of \(S^{1}\). If we write \(S^{1}\) without any further decoration, it will be understood as the quotient group \((S^{1}\times S^{1})/\Delta S^{1}\). We have \(H_{\Delta S^{1}}^{*}(pt)\cong\mathbb{Z}_{2}[x]\) and \(H_{S^{1}\times S^{1}}^{*}(pt)\cong\mathbb{Z}_{2}[x_{1},x_{2}]\), where \(x_{i}\) corresponds to the \(i\)-th copy of \(S^{1}\). More precisely, \(x_{i}\) is the pullback of the generator of \(H_{S^{1}_{i}}^{2}(pt)\). 
The restriction map \(H_{S^{1}\times S^{1}}^{*}(pt)\to H_{\Delta S^{1}}^{*}\) is the map \(\mathbb{Z}_{2}[x_{1},x_{2}]\to\mathbb{Z}_{2}[x]\) which sends \(x_{1}\) and \(x_{2}\) to \(x\). When thinking of \(S^{1}\) as the quotient \((S^{1}\times S^{1})/\Delta S^{1}\), we write \(H_{S^{1}}^{*}(pt)=\mathbb{Z}_{2}[y]\). Since the quotient map \(S^{1}\times S^{1}\to S^{1}\) is given by \((a,b)\to ab^{-1}\), it follows that the pullback of \(y\) equals \(x_{1}-x_{2}\). Let \((Y^{V,U})^{S^{1}}\) denote the fixed point set of the \(S^{1}\)-action on \(Y^{V,U}\) and \(\iota:(Y^{V,U})^{S^{1}}\to Y^{V,U}\) the inclusion. Then \((Y^{V,U})^{S^{1}}\) is a manifold with boundary and the boundary of \((V^{V,U})^{S^{1}}\) is the fixed point set of the \(S^{1}\)-action on \(\partial Y^{V,U}\). It is easily seen that \((Y^{V,U})^{S^{1}}=F_{1}\cup F_{2}\), where \(F_{1}=Y^{V_{2},U_{1}\oplus U_{2}}\) and \(F_{2}=Y^{V_{1},U_{1}\oplus U_{2}}\). Let \(\widetilde{F}_{i}\) denote the preimage of \(F_{i}\) in \(\widetilde{Y}^{V,U}\). Then \(\widetilde{F}_{i}\) is the fixed point set of \(S^{1}_{i}\) acting on \(\widetilde{Y}^{V,U}\). The normal bundle of \(\widetilde{F}_{i}\) in \(\widetilde{Y}^{V,U}\) is the pullback of \(V_{i}\) to \(\widetilde{F}_{i}\). Turning this around, the normal bundle of \(F_{i}\) is obtained by taking the normal bundle of \(\widetilde{F}_{i}\) and quotienting by the action of \(\Delta S^{1}\). Since \(\Delta S^{1}\) acts on \(V_{i}\) with weight \(+1\), we see that the normal bundle of \(F_{i}\) is \(N_{i}=V_{i}\otimes L\), where \(L\to Y^{V,U}\) is the line bundle associated to the circle bundle \(\widetilde{Y}^{V,U}\to Y^{V,U}\). Let \(c=c_{1,S^{1}\times S^{1}}(L)\) denote the \(S^{1}\times S^{1}\)-equivariant Chern class of \(L\). The image of \(c\) in \(\Delta S^{1}\)-equivariant cohomology is \(x\). If we restrict \(L\) to \(F_{1}\), then \(S^{1}_{1}\) acts trivially, hence \(c|_{F_{1}}=x_{2}\). Similarly, \(c|_{F_{2}}=x_{1}\). The localisation theorem [7, III (3.8)] says that the pullback \[\iota^{*}:H_{S^{1}}^{*}(Y^{V,U},\partial Y^{V,U})\to H_{S^{1}}^{*}((Y^{V,U})^{ S^{1}},\partial(Y^{V,U})^{S^{1}})\] is an isomorphism after localising with respect to \(y\). Similarly, the pushforward map \(\iota_{*}:H_{S^{1}}^{*}((Y^{V,U})^{S^{1}},\partial(Y^{V,U})^{S^{1}})\to H_{S^{1} }^{*}(Y^{V,U},\partial Y^{V,U})\) is an isomorphism after localising with respect to \(y\). At this point we should remark that since \(F_{1}\) and will typically have different dimensions, the pushforward does not respect degrees, only the degree mod \(2\). In any case, since the map is an isomorphism in the localised rings, there exists a class \(\mu\in y^{-1}H^{*}_{S^{1}}((Y^{V,U})^{S^{1}},\partial(Y^{V,U})^{S^{1}})\) of mixed degree such that \(\iota_{*}(\mu)=1\). Since \(\iota^{*}\iota_{*}(\mu)=e_{S^{1}}(N)\mu\), where \(N\) denotes the normal bundle of \((Y^{V,U})^{S^{1}}\) in \(Y^{V,U}\) and \(e_{S^{1}}(N)\) is the \(S^{1}\)-equivariant Euler class, we must have \(\mu=e_{S^{1}}(N)^{-1}\). We will make this more precise below. Let \(N_{i}\) denote \(N|_{F_{i}}\). We have already shown that \(N_{i}=V_{i}\otimes L\). Identify the quotient group \((S^{1}\times S^{1})/\Delta S^{1}\) with \(S^{1}_{1}\). This acts with weight \(+1\) on \(V_{1}\). It acts with weight \(-1\) on \(V_{2}\) because \((a,1)\sim(1,a^{-1})\) modulo \(\Delta S^{1}\). 
Hence \[e_{S^{1}}(N_{1}) =(y+x_{2})^{a_{1}}+(y+x_{2})^{a_{1}-1}c_{1}(V_{1})+\cdots+c_{a_{1} }(V_{1}),\] \[e_{S^{1}}(N_{2}) =(-y+x_{1})^{a_{2}}+(-y+x_{1})^{a_{2}-1}c_{1}(V_{2})+\cdots+c_{a_ {2}}(V_{2}).\] Recall that \(y=x_{1}-x_{2}\). Hence \(y+x_{2}=x_{1}\) and \(-y+x_{1}=x_{2}\), so the above expressions can be written as \[e_{S^{1}}(N_{1}) =x_{1}^{a_{1}}+x_{1}^{a_{1}-1}c_{1}(V_{1})+\cdots+c_{a_{1}}(V_{1}),\] \[e_{S^{1}}(N_{2}) =x_{2}^{a_{2}}+x_{2}^{a_{2}-1}c_{1}(V_{2})+\cdots+c_{a_{2}}(V_{2}).\] However, writing the Euler classes this way makes it less clear how to invert them. For this purpose, it is better to write \(e_{S^{1}}(N_{1}),e_{S^{1}}(N_{2})\) in the form \[e_{S^{1}}(N_{1}) =y^{a_{1}}+y^{a_{1}-1}c_{1}(V_{1}\otimes L)+\cdots,\] \[e_{S^{1}}(N_{2}) =(-y)^{a_{2}}+(-y)^{a_{2}-1}c_{1}(V_{2}\otimes L)+\cdots.\] We then have \[e_{S^{1}}(N_{1})^{-1} =y^{-a_{1}}+y^{-a_{1}-1}s_{1}(V_{1}\otimes L)+\cdots,\] \[e_{S^{1}}(N_{2})^{-1} =(-y)^{-a_{2}}+(-y)^{-a_{2}-1}s_{1}(V_{2}\otimes L)+\cdots\] where \(s_{j}(V_{1}\otimes L),s_{j}(V_{2}\otimes L)\) are the Segre classes of \(V_{1}\otimes L,V_{2}\otimes L\). Since \(F_{i}\) is finite dimensional, these are zero for all large enough \(j\) and hence the above expressions for \(e_{S^{1}}(N_{i})^{-1}\) have only finitely many terms. For a complex vector bundle \(E\) of rank \(r\), we have. \[c_{j}(E\otimes L)=\sum_{l=0}^{j}c_{l}(E)c_{1}(L)^{j-l}{r-l\choose j-l},\quad s _{j}(E\otimes L)=\sum_{l=0}^{j}s_{l}(E)c_{1}(L)^{j-l}{-r-l\choose j-l}\] In fact, the same expressions hold even when \(E\) is a virtual vector bundle. Applying these expressions to \(D_{1},D_{2}\), we find \[e_{S^{1}}(N_{1})^{-1} =\sum_{j\geq 0}y^{-a_{1}-j}\sum_{l=0}^{j}x_{1}^{j-l}s_{l}(D_{1}){- d_{1}-l\choose j-l}\in H^{*}(F_{1})[y,y^{-1}],\] \[e_{S^{1}}(N_{2})^{-1} =\sum_{j\geq 0}(-y)^{-a_{1}-j}\sum_{l=0}^{j}x_{1}^{j-l}s_{l}(D_{1}) {-d_{2}-l\choose j-l}\in H^{*}(F_{2})[y,y^{-1}].\] Let \(\pi_{1}:F_{1}\to B\), \(\pi_{2}:F_{2}\to B\) be the projections to \(B\). The localisation theorem then gives \[SW^{\phi}_{S^{1}\times S^{1},f}(\theta)=(\pi_{1})_{*}(e_{S^{1}}(N_{1})^{-1} \theta f^{*}(\tau^{\phi}_{V,U}))+(\pi_{2})_{*}(e_{S^{1}}(N_{2})^{-1}\theta f^{* }(\tau^{\phi}_{V,U})). \tag{4.1}\] Let \(j_{i}:S^{U_{i}}\to S^{V^{\prime}_{i},U^{\prime}_{i}}\) denote the restriction of \(f_{i}\) to \(S^{U_{i}}\). By the assumption that \(f_{1},f_{2}\) are monopole maps, we can assume that \(j_{1},j_{2}\) are given by inclusions \(U_{i}\to U^{\prime}_{i}\). Now to compute \((\pi_{1})_{*}(e_{S^{1}}(N_{1})^{-1}\theta f^{*}(\tau^{\phi}_{V,U}))\), note that we are restricting to \(S^{0}\subseteq S^{V_{1}}\), so \(f\) can be replaced by \(j_{1}\wedge f_{2}\). Let \(\iota_{1}:S^{0}\to S^{V_{1},H^{+}_{1}}\) be the inclusion map. Then \(j_{1}\wedge f_{2}\) is a suspension of \(\iota_{1}\wedge f_{2}:S^{V_{2},U_{2}}\to S^{V^{\prime}_{1},H^{+}_{1}}\wedge S^ {V^{\prime}_{2},U^{\prime}_{2}}\), so they have the same Seiberg-Witten invariants (as shown in [2, Proposition 3.8]). It remains to compute the Seiberg-Witten invariants of \(\iota_{1}\wedge f_{2}\) (and similarly \(f_{1}\wedge\iota_{2}\)). Recall the chamber \(\phi=(\phi_{1},\phi_{2})\) and recall that \(Z_{1}\) is the zero locus of \(\phi_{1}\). Recall also that \(\phi_{2}\) is non-vanishing on \(Z_{1}\). 
After a small perturbation, we may assume that \(\phi_{2}|_{Z_{1}}:Z_{1}\to S^{V^{\prime}_{2},U^{\prime}_{2}}\) is transverse to \(f_{2}|_{Z_{1}}:S^{V_{2},U_{2}}\to S^{V^{\prime}_{2},U^{\prime}_{2}}\) (the fact that this can be done for \(S^{1}\)-equivariant monopole maps is explained in [2, Pages 522-523]. The same argument also works in the \(Pin(2)\)-equivariant case, because the stabiliser of any point in \((f_{2}|_{Z_{1}})^{-1}(\phi_{2}|_{Z_{1}})\) is trivial). Let \(\widetilde{\mathcal{M}}_{2}\to Z_{1}\) denote the pre-image \((f_{2}|_{Z_{1}})^{-1}(\phi_{2}|_{Z_{1}})\) and \(\mathcal{M}_{2}=\widetilde{\mathcal{M}}_{2}/S^{1}\) the quotient. Then \((\iota_{1}\wedge f_{2})^{-1}(\phi(B))=\widetilde{\mathcal{M}}_{2}\). This is a smooth manifold, however it is not cut out transversally. The technique of obstruction bundles (eg. [11, Section 3]) can be used to overcome this difficulty. The obstruction bundle on \(\widetilde{\mathcal{M}}_{2}\) is \(V^{\prime}_{1}\). This descends to the bundle \(V^{\prime}_{1}\otimes L\) on \(\mathcal{M}_{2}\). Hence the Seiberg-Witten invariants of \(f_{2}|_{Z_{1}}\) and \(\iota_{1}\wedge f_{2}\) are related by: \[SW^{\phi}_{\iota_{1}\wedge f_{2}}(\theta)=(j_{1})_{*}SW^{\phi_{2}}_{f_{2}|_{Z_ {1}}}(e(V^{\prime}_{1}\otimes L)\theta)\] where \(j_{1}:Z_{1}\to B\) is the inclusion map. To apply this to the localisation formula, we need the \(S^{1}\times S^{1}\)-equivariant extension of this formula, \[SW^{\phi}_{S^{1}\times S^{1},\iota_{1}\wedge f_{2}}(\theta)=(j_{1})_{*}SW^{ \phi_{2}}_{S^{1}\times S^{1},f_{2}|_{Z_{1}}}(e_{S^{1}}(V^{\prime}_{1}\otimes L )\theta).\] Some care is required in interpreting the right hand side of this equation. First of all, we can identify \(S^{1}\times S^{1}\) with the product \(S^{1}_{1}\times\Delta S^{1}\) via the isomorphism \((a,b)\mapsto(ab^{-1},b)\). Next, the argument \(e_{S^{1}}(V^{\prime}_{1}\otimes L)\theta\) should be thought of as an element of \[y^{-1}H^{*}_{S^{1}\times S^{1}}(\widetilde{Y}^{V_{2},U_{2}}|_{Z_{1}})\cong y^{ -1}H^{*}_{S^{1}_{1}\times\Delta S^{1}}(\widetilde{Y}^{V_{2},U_{2}}|_{Z_{1}}) \cong\mathbb{Z}_{2}[y,y^{-1}]\otimes_{\mathbb{Z}_{2}}H^{*}_{\Delta S^{1}}( \widetilde{Y}^{V_{2},U_{2}}|_{Z_{1}}).\] Let \(\psi_{2}:y^{-1}H^{*}_{S^{1}\times S^{1}}(\widetilde{Y}^{V_{2},U_{2}}|_{Z_{1}}) \cong\mathbb{Z}_{2}[y,y^{-1}]\otimes_{\mathbb{Z}_{2}}H^{*}_{\Delta S^{1}}( \widetilde{Y}^{V_{2},U_{2}}|_{Z_{1}})\) denote this isomorphism. Note in particular that \(\psi_{2}(x_{1})=y+x\), \(\psi_{2}(x_{2})=x\), where \(x\) is the generator of \(H^{2}_{\Delta S^{1}}(pt)\) pulled back to \(H^{2}_{\Delta S^{1}}(\widetilde{Y}^{V_{2},U_{2}}|_{Z_{1}})\). From this, it follows that we have a commutative diagram One similarly defines \(\psi_{1}:y^{-1}H^{*}_{S^{1}\times S^{1}}(\widetilde{Y}^{V_{1},U_{1}}|_{Z_{2}}) \cong\mathbb{Z}_{2}[y,y^{-1}]\otimes_{\mathbb{Z}_{2}}H^{*}_{\Delta S^{1}}( \widetilde{Y}^{V_{1},U_{1}}|_{Z_{2}})\) where \(\psi_{1}(x_{1})=x\), \(\psi(x_{2})=-y+x\). Exchanging the roles of \(f_{1}\) and \(f_{2}\) gives a similar formula relating the Seiberg-Witten invariants of \(f_{1}|_{Z_{2}}\) and \(f_{1}\wedge t_{2}\). 
Substituting into (4.1) and noting that \(e_{S^{1}}(V_{i}\otimes L)^{-1}e_{S^{1}}(V^{\prime}_{i}\otimes L)=e_{S^{1}}(D_{ i}\otimes L)^{-1}\) gives: **Theorem 4.1**.: _For all \(\theta\in H^{*}_{S^{1}\times S^{1}}(B)\), we have_ \[SW^{\phi}_{S^{1}_{1}\times S^{1}_{2},f_{1}\wedge f_{2}}(\theta) =(j_{1})_{*}\left(id\otimes SW^{\phi_{2}}_{f_{2}|_{Z_{2}}}\right) (\psi_{2}(e_{S^{1}}(D_{1}\otimes L)^{-1}\theta))\] \[\qquad+(j_{2})_{*}\left(id\otimes SW^{\phi_{1}}_{f_{1}|_{Z_{1}}} \right)(\psi_{1}(e_{S^{1}}(D_{2}\otimes L)^{-1}\theta)).\] Some explanation of how to use the formula is required. Here \[e_{S^{1}}(D_{1}\otimes L)^{-1} =y^{-d_{1}}+y^{-d_{1}-1}s_{1}(D_{1}\otimes L)+\cdots,\] \[e_{S^{1}}(D_{2}\otimes L)^{-1} =(-y)^{-d_{2}}+(-y)^{-d_{2}-1}s_{1}(D_{2}\otimes L)+\cdots,\] where \(s_{j}(D_{i}\otimes L)\) is the \(j\)-th Segre class of \(D_{i}\otimes L\) and \(L|_{\mathcal{M}_{i}}\) is the line bundle corresponding to \(\widetilde{\mathcal{M}}_{i}\to\mathcal{M}_{i}\). Furthermore, since \(\psi_{i}(c_{1}(L)|_{F_{i}})=\psi_{i}(x_{i})=x\), the Segre classes can be expanded as \[\psi_{i}(s_{j}(D_{i}\otimes L))=\sum_{l=0}^{j}s_{l}(D_{i})x^{j-l}\binom{-d_{i} -l}{j-l}.\] We now consider adapting Theorem 4.1 to the case of a smash product of \(Pin(2)\)-equivariant monopole maps \(f_{1},f_{2}\) over a common base \(B\). We assume that the two involutions on \(B\) corresponding to \(f_{1}\) to \(f_{2}\) commute. In this case the smash product \(f=f_{1}\wedge_{B}f_{2}\) has \(Pin(2)\times Pin(2)\)-symmetry. Since we are ultimately interested in the diagonal copy of \(Pin(2)\), but want to retain the extra circle symmetry we consider the index \(2\) subgroup \(G\subset Pin(2)\times Pin(2)\) generated by \(S^{1}\times S^{1}\) and \((j,j)\). The diagonal circle \(\Delta S^{1}\subset S^{1}\times S^{1}\) is a normal subgroup of \(G\) and \(G/\Delta S^{1}\cong O(2)\). Carrying out the construction of the Seiberg-Witten invariant of \(f\), but with respect to the larger group \(G\) gives a map \[SW^{\phi}_{G,f}:H^{*}_{G}(pt)\to H^{*-2d+b_{+}+1}_{O(2)}(B)\] compatible with the \(Pin(2)\)-equivariant Seiberg-Witten invariant in the sense that we have a commutative square Let \(p_{i}:G\to Pin(2)\) be the inclusion \(G\to Pin(2)\times Pin(2)\), followed by projection to the \(i\)-th factor and let \(p:G\to O(2)\) be the quotient map \(G\to G/\Delta S^{1}\cong O(2)\). For \(i=1,2\), set \(q_{i}=p_{i}^{*}(q)\in H^{4}_{G}(pt)\). Recall that \(H^{*}_{O(2)}(pt)\cong\mathbb{Z}_{2}[u,y]\) where \(deg(u)=1\), \(deg(y)=2\). Abusing notation, we also write \(y\in H^{2}_{G}(pt)\) for the class \(p^{*}(y)\). **Proposition 4.2**.: _We have \(H^{*}_{G}(pt)\cong\mathbb{Z}_{2}[u,y,q_{1}]/(u^{3})\). Furthermore we have \(q_{2}+q_{1}=y^{2}+yu^{2}\)._ Proof.: We have a short exact sequence \(1\to S^{1}_{1}\to G\xrightarrow{p_{1}}Pin(2)\to 1\), where \(S^{1}_{1}\) denotes the subgroup \(S^{1}\times\{1\}\subset S^{1}\times S^{1}\subset G\). The Lyndon-Hochschild-Serre spectral sequence for \(H^{*}_{G}(pt)\) has \(E^{*,*}_{2}=H^{*}_{Pin(2)}(H^{*}_{S^{1}_{1}}(pt))\cong H^{*}_{Pin(2)}[y^{\prime}]\), where \(deg(y^{\prime})=2\) is the generator of \(H^{2}_{S^{1}_{1}}\). The composition \(S^{1}_{1}\to G\xrightarrow{p_{1}}O(2)\) is easily seen to be the inclusion of the circle subgroup of \(O(2)\). This implies that \(p^{*}(y)\in H^{2}_{G}(pt)\) restricts to \(y^{\prime}\). This implies that the spectral sequence degenerates and \(H^{*}_{G}(pt)\cong H^{*}_{Pin(2)}[y]\cong\mathbb{Z}_{2}[u,y,q_{1}]/(u^{3})\). 
It remains to prove the relation \(q_{1}+q_{2}=y^{2}+yu^{2}\). Since \(H^{4}_{G}(pt)\) is spanned by \(q_{1},y^{2},yu^{2}\), we have that \(q_{2}\) is a linear combination of \(q_{1},y^{2},yu^{2}\). Consider the subgroup \(S^{1}\times S^{1}\subset G\) which has cohomology \(\mathbb{Z}_{2}[x_{1},x_{2}]\). The restriction map \(H^{*}_{G}(pt)\to H^{*}_{S^{1}\times S^{1}}\) sends \(q_{i}\) to \(x_{i}^{2}\), \(y\) to \(x_{1}+x_{2}\) and \(u\) to zero. This shows that \(q_{2}\) must be either \(q_{1}+y^{2}\) or \(q_{1}+y^{2}+yu^{2}\). Next, we note that \(Pin(2)\) acts freely on \(S^{3}\cong SU(2)\) via the inclusion \(Pin(2)\to SU(2)\) and the left action of \(SU(2)\) on itself. The quotient space is \(\mathbb{RP}^{2}\), because the quotient of \(S^{3}\) by the \(S^{1}\)-subgroup of \(Pin(2)\) is \(S^{3}/S^{1}\cong S^{2}\) and the remaining action of \(Pin(2)/S^{1}\cong\mathbb{Z}_{2}\) acts on \(S^{2}\) as the antipodal map. We also have that \(G\) acts freely on \(S^{3}\times S^{3}\) via the inclusion \(G\to Pin(2)\times Pin(2)\) and the obvious product action of \(Pin(2)\times Pin(2)\) on \(S^{3}\times S^{3}\). Let \(M=(S^{3}\times S^{3})/G\) be the quotient. Clearly \(M\) is the quotient of \(S^{2}\times S^{2}\) by the involution which acts as the antipodal map on both factors. Projecting to either factor of \(S^{2}\) gives two fibrations \[S^{2}\to M\xrightarrow{\pi_{i}}\mathbb{RP}^{2},\quad i=1,2.\] Both fibrations admit a section \(s_{i}:\mathbb{RP}^{2}\to M\) which the image under the quotient map \((S^{2}\times S^{2})\to M\) of the diagonal copy of \(S^{2}\). From this it follows easily that the Leray-Serre spectral sequences for both fibrations degenerate at \(E_{2}\). Then \(H^{*}_{G}(S^{3}\times S^{3})\cong H^{*}(M)\) is a \(6\)-dimensional space over \(\mathbb{Z}_{2}\) with basis \(1,u,u^{2},c,cu,cu^{2}\), where \(c\in H^{2}(M)\) restricts non-trivially to the fibres of \(\pi_{1}:M\to\mathbb{RP}^{2}\). The diagonal \(S^{2}\to S^{2}\times S^{2}\) has normal bundle \(TS^{2}\) and taking the quotient by the antipodal map on both factors, it follows that the normal bundle of the sections \(s_{1},s_{2}\) are both equal to \(T\mathbb{RP}^{2}\). Since \(w_{2}(T\mathbb{RP}^{2})=u^{2}\), it follows that the mod \(2\) self-intersection of \(s_{i}\) is odd, but this can only happen if \(c^{2}\neq 0\), so \(c^{2}=cu^{2}\). Let \(\mathbb{H}_{1}\) be the standard representation of the \(i\)-th copy of \(SU(2)\) in the product \(SU(2)\times SU(2)\). By restriction, this defines a representation of \(G\) and we have that \(q_{i}=w_{4}(\mathbb{H}_{i})\). By taking the pullback of \(\mathbb{H}_{i}\) to \(S^{3}\times S^{3}\) and quotienting by \(G\), we have that \(\mathbb{H}_{i}\) descends to a rank \(4\) vector bundle \(\widetilde{\mathbb{H}}_{i}\to M\). In fact, \(\widetilde{\mathbb{H}}_{i}\) is the pullback under \(\pi_{i}:M\to\mathbb{RP}^{2}\) of a rank \(4\) vector bundle on \(\mathbb{RP}^{2}\), because the \(G\)-action on \(\mathbb{H}_{i}\) factors through \(p_{i}:G\to Pin(2)\). Now since \(\mathbb{RP}^{2}\) is \(2\)-dimensional, we must have that \(w_{4}(\widetilde{\mathbb{H}}_{i})=0\). So the pullback of \(q_{1},q_{2}\) to \(H^{4}_{G}(S^{3}\times S^{3})\cong H^{4}(M)\) are both zero. Let \(R\) denote the standard \(2\)-dimensional real representation of \(O(2)\). We have that \(y=w_{2}(R)\). By taking the pullback of \(R\) to \(S^{3}\times S^{3}\) and quotienting by \(G\), we obtain a vector bundle \(\widetilde{R}\to M\) on \(M\). 
The restriction of \(\widetilde{R}\) to any fibre of \(\pi_{2}\) is isomorphic to \(\mathcal{O}(1)\to S^{2}\). In particular, this means that \(y=w_{2}(\widetilde{R})\) equals either \(c\) or \(c+u^{2}\). In either case, we then have \(y^{2}=cu^{2}=yu^{2}\). So \(y^{2}+yu^{2}=0\), but \(y^{2}\neq 0\) in \(H^{*}(M)\). Then since \(q_{1}+q_{2}=0\) in \(H^{*}(M)\), we see that we must have \(q_{1}+q_{2}=y^{2}+yu^{2}\). The above proposition implies in particular that \(H^{*}_{G}(pt)\to H^{*}_{Pin(2)}(pt)\) is surjective, hence \(SW^{\phi}_{Pin(2),f}\) is completely determined by \(SW^{\phi}_{G,f}\). Recall that we have homomorphims \(p_{i}:G\to Pin(2)\) for \(i=1,2\) and also \(p:G\to O(2)\). **Lemma 4.3**.: _Let \(M\) be a space on which \(Pin(2)\) acts. Regard \(M\) as a \(G\)-space via \(p_{i}:G\to Pin(2)\). Then we have an isomorphism_ \[p^{*}\otimes p^{*}_{i}:H^{*}_{O(2)}(pt)\otimes_{H^{*}_{\mathbb{Z}_{2}}(pt)}H^{* }_{Pin(2)}(M)\cong H^{*}_{G}(M).\] Proof.: By symmetry, it suffices to prove this for \(i=2\). Since \(S^{1}_{1}\) acts trivially on \(M\), we get a fibration \(BS^{1}_{1}\to M_{G}\to M_{Pin(2)}\) and a Leray-Serre spectral sequence for \(H^{*}_{G}(M)\) which has \(E_{2}=H^{*}_{Pin(2)}(M)[y]\), where \(deg(y)=2\). The composition \(S^{1}_{1}\to G\to O(2)\) is the inclusion \(S^{1}_{1}\to O(2)\). Next, we observe that \(H^{*}_{O(2)}(pt)\cong\mathbb{Z}_{2}[u,y]\), where \(deg(u)=1\), \(deg(y)=2\). Moreover the pullback of \(y\) to \(H^{2}_{S^{1}}(pt)\) is a generator. Since \(y\in H^{2}_{O(2)}(pt)\) can be pulled back to a class in \(H^{2}_{G}(M)\) whose restriction to the fibre is a generator of \(H^{2}(BS^{1})\), it follows that the spectral sequence degenerates at \(E_{2}\) and we have an isomorphism \[H^{*}_{O(2)}(pt)\otimes_{H^{*}_{\mathbb{Z}_{2}}(pt)}H^{*}_{Pin(2)}(M)\cong H^{ *}_{Pin(2)}(M)[y]\cong H^{*}_{G}(M)\] and that this isomorphism is realised by the map \(p^{*}\otimes p^{*}_{i}\). Since \(H^{*}_{\mathbb{Z}_{2}}(pt)\cong\mathbb{Z}_{2}[u]\) and \(H^{*}_{O(2)}(pt)\cong\mathbb{Z}_{2}[u,y]\), where \(deg(y)=2\), Lemma 4.3 yields an isomorphism \[\psi_{i}:y^{-1}H^{*}_{G}(B)\cong\mathbb{Z}_{2}[y,y^{-1}]\otimes_{\mathbb{Z}_{ 2}}H^{*}_{Pin(2)}(B)\cong H^{*}_{Pin(2)}(B)[y,y^{-1}].\] Furthermore, \(\psi_{i}\) is a morphism of \(H^{*}_{\mathbb{Z}_{2}}(pt)\)-modules. We have \(\psi_{i}(q_{i})=q\). Furthermore, Proposition 4.2 implies that \(\psi_{1}(q_{2})=q+y^{2}+yu^{2}\) and \(\psi_{2}(q_{1})=q+y^{2}+yu^{2}\). To simplify notation, we define \[\mu=y^{2}+yu^{2}\] so that \(\psi_{1}(q_{2})=\psi_{2}(q_{1})=q+\mu\). Since \(y^{4}=\mu^{2}\), there is essentially no difference between localising with respect to \(y\) or with respect to \(\mu\). Repeating the localisation argument in \(G\)-equivariant cohomology gives: **Theorem 4.4**.: _For all \(\theta\in H^{*}_{G}(B)\), we have_ \[SW^{\phi}_{G,f_{1}\wedge f_{2}}(\theta) =(j_{1})_{*}\left(id\otimes SW^{\phi_{2}}_{Pin(2),f_{2}|_{2_{2}} }\right)(\psi_{2}(e_{G}(D_{1})^{-1}\theta))\] \[\qquad+(j_{2})_{*}\left(id\otimes SW^{\phi_{1}}_{Pin(2),f_{1}|_{ 2_{1}}}\right)(\psi_{1}(e_{G}(D_{2})^{-1}\theta)).\] The Euler classes on the right hand side of the formula should be understood as follows. First, \(V_{i},V^{\prime}_{i}\) are \(Pin(2)\)-equivariant bundles over \(B\). Then \(V_{i},V^{\prime}_{i}\) can be regarded as a \(G\)-equivariant vector bundles via the homomorphism \(p_{i}:G\to Pin(2)\). 
Then \(e_{G}(V_{i}),e_{G}(V^{\prime}_{i})\) are the images of \(e_{Pin(2)}(V_{i}),e_{Pin(2)}(V^{\prime}_{i})\) under the map \(p_{i}:H^{*}_{Pin(2)}(B)\to H^{*}_{G}(B)\) induced by \(p_{i}\). We have that \(e_{G}(V_{1})\) is invertible in \(y^{-1}H^{*}_{G}(\widetilde{Y}^{V_{1},U_{1}}|_{Z_{2}})\) and so \(e_{G}(D_{1})^{-1}=e_{G}(V_{1})^{-1}e_{G}(V_{1})\) is defined. Similarly \(e_{G}(D_{2})^{-1}\) is defined. We will mainly be interested in applying the product formula in situations where \(B\) satisfies the following assumptions: 1. \(B\) is a fibre bundle \(B\to B_{0}\) such that \(j:B\to B\) covers an involution \(j_{0}:B_{0}\to B_{0}\). 2. \(j_{0}\) does not act freely. 3. The bundles \(V_{i},V^{\prime}_{i}\) are pullbacks from \(B_{0}\) and \(j_{0}\) lifts to an antilinear endomorphism on \(V_{i},V^{\prime}_{i}\) squaring to \(-1\). 4. The map \(u:H^{*}_{\mathbb{Z}_{2}}(B_{0})\to H^{*+1}_{\mathbb{Z}_{2}}(B_{0})\) is injective. For instance, in the case of the \(Pin(2)\)-monopole map of a \(4\)-manifold \(X\) with spin-structure \(\mathfrak{s}\), we have \(B=B_{X,\mathfrak{s}}=Pic^{\mathfrak{s}}(X)\times S(H^{+}(X))\). Then the above assumptions are satisfied if we take \(B_{0}=Pic^{\mathfrak{s}}(X)\) and \(B\to B_{0}\) the projection. Note that condition (4) actually implies condition (2), for if the action of \(j_{0}\) were free, then \(H^{*}_{\mathbb{Z}_{2}}(B_{0})\) would be zero in degrees above \(dim(B_{0})\). **Lemma 4.5**.: _Let \(j_{0}:B_{0}\to B_{0}\) be an involution satisfying condition (4) above. Then \(j_{0}\) acts trivially on \(H^{*}(B_{0})\) and the Leray-Serre spectral sequence for the Borel fibration \((B_{0})_{\mathbb{Z}_{2}}\to B\mathbb{Z}_{2}\) degenerates at \(E_{2}\), giving an isomorphism \(H^{*}_{\mathbb{Z}_{2}}(B_{0})\cong H^{*}(B_{0})[u]\)._ Proof.: Suppose that \(A\) is a finite dimensional representation of \(\mathbb{Z}_{2}\) over \(\mathbb{Z}_{2}\). Any such \(A\) is a direct sum of copies of the trivial representation \(\mathbb{Z}_{2}\) and the regular representation \(R=\mathbb{Z}_{2}^{2}\). Since \(H^{*}(\mathbb{Z}_{2};\mathbb{Z}_{2})\cong\mathbb{Z}_{2}[u]\), \(H^{0}(\mathbb{Z}_{2};R)\cong R^{\mathbb{Z}_{2}}\cong\mathbb{Z}_{2}\) and \(H^{p}(\mathbb{Z}_{2};R)=0\) for \(p>0\), it follows that \(u:H^{p}(\mathbb{Z}_{2};A)\to H^{p+1}(\mathbb{Z}_{2};A)\) is surjective for all \(p\) and an isomorphism for \(p>0\). It also follows that \(u:H^{0}(\mathbb{Z}_{2};A)\to H^{1}(\mathbb{Z}_{2};A)\) is injective if and only if \(A^{\mathbb{Z}_{2}}=A\). Now consider the Leray-Serre spectral sequence for the Borel fibration \((B_{0})_{\mathbb{Z}_{2}}\to B\mathbb{Z}_{2}\). This has \(E_{2}^{p,q}=H^{p}(\mathbb{Z}_{2};H^{q}(B_{0}))\). Injectivity of \(u:H^{*}_{\mathbb{Z}_{2}}(B_{0})\to H^{*+1}_{\mathbb{Z}_{2}}(B_{0})\) implies that the map \(H^{*}_{\mathbb{Z}_{2}}(pt)\to H^{*}_{\mathbb{Z}_{2}}(B_{0})\) is an injection. Hence for all \(r\) there are no differentials mapping into the \(q=0\) row of \(E_{r}\). Consider the map \(u:E_{\infty}^{0,1}\to E_{\infty}^{1,1}\). If this map is not injective, then \(u:H^{1}_{\mathbb{Z}_{2}}(B_{0})\to H^{2}_{\mathbb{Z}_{2}}(B_{0})\) will also not be injective. So \(u:E_{\infty}^{0,1}\to E_{\infty}^{1,1}\) is injective. But \(E_{\infty}^{0,1}\cong H^{0}(\mathbb{Z}_{2};H^{1}(B_{0}))\), \(E_{\infty}^{1,1}\cong H^{1}(\mathbb{Z}_{2};H^{i}(B_{0}))\). For this map to be injective, the action of \(\mathbb{Z}_{2}\) on \(H^{1}(B_{0})\) must be trivial. This means \(E_{2}^{p,1}\cong H^{1}(B_{0})\) for all \(p\). 
Injectivity of \(u:H^{*}_{\mathbb{Z}_{2}}(B_{0})\to H^{*+1}_{\mathbb{Z}_{2}}(B_{0})\) then implies that there can be no differentials mapping into the \(q=1\) row of \(E_{r}\) for any \(r\). Continuing row by row in the same manner, we see that the action of \(\mathbb{Z}_{2}\) on \(H^{q}(B_{0})\) is trivial for all \(q\) and that there are no differentials in the spectral sequence. This gives the result. **Lemma 4.6**.: _Let \(j_{0}:B_{0}\to B_{0}\) be an involution and suppose that conditions (2) and (4) in the above list are satisfied. Then there is a natural map \(H^{*}_{\mathbb{Z}_{2}}(B_{0})\to H^{*}_{Pin(2)}(B_{0})\) which makes \(H^{*}_{Pin(2)}(B_{0})\) into a \(H^{*}_{\mathbb{Z}}(B_{0})\) module, and with respect to this module structure we have an isomorphism \(H^{*}_{Pin(2)}(B_{0})\cong H^{*}_{\mathbb{Z}_{2}}(B_{0})[q]/(u^{3})\)._ Proof.: First of all, Lemma 4.5 gives an isomorphism \(H^{*}_{\mathbb{Z}_{2}}(B_{0})\cong H^{*}(B_{0})[u]\). Next since \(S^{1}\subset Pin(2)\) acts trivially on \(B_{0}\), we see that the Borel model \((B_{0})_{Pin(2)}\) is given by \[(B_{0})_{Pin(2)}=(B_{0}\times EPin(2))/Pin(2)\cong(B_{0}\times EPin(2)/S^{1})/ \mathbb{Z}_{2}.\] Then since \(EPin(2)/S^{1}\cong BS^{1}\), we get a fibration \(BS^{1}\to(B_{0})_{Pin(2)}\to(B_{0})_{\mathbb{Z}_{2}}\). In particular, this makes \(H^{*}_{Pin(2)}(B_{0})\) into a \(H^{*}_{\mathbb{Z}_{2}}(B_{0})\)-module. Furthermore the associated Leray-Serre spectral sequence has \(E_{2}\cong H^{*}_{\mathbb{Z}_{2}}(B_{0})[x]\cong H^{*}(B_{0})[u,x]\), where \(deg(x)=2\). By condition (2), there exists a fixed point \(b\in B_{0}\). Since \(x\) has even degree, the spectral sequence has no differentials for even \(r\) and so \(E_{2}\cong E_{3}\). Restricting the fibration \((B_{0})_{Pin(2)}\to(B_{0})_{\mathbb{Z}_{2}}\) to \(\{b\}\subseteq B_{0}\), we see that \(d_{3}(x)=u^{3}+\cdots\), where \(\cdots\) denotes terms involving lower powers of \(u\). But we also know that \(u^{3}=0\) and since there are no other differentials mapping to \(E^{3,0}_{r}\), the only way this can happen is if there are no lower powers of \(u\) in \(d_{3}(x)\). So \(d_{3}(x)=u^{3}\). It follows that \(E_{5}\cong H^{*}(B_{0})[u,q]/(u^{3})\cong H^{*}_{\mathbb{Z}_{2}}(B_{0})[q]/(u^ {3})\), where \(q=x^{2}\). There can be no further differentials since \(q\) can be identified with the pullback of \(q\in H^{4}_{Pin(2)}(pt)\). Suppose \(B_{0}\) satisfies conditions (2) and (4). Let \(E\to B_{0}\) be any complex vector bundle on \(B_{0}\) and suppose there is an antilinear lift of \(j_{0}\) to \(E\) squaring to \(-1\). This makes \(E\) into a \(Pin(2)\)-equivariant vector bundle. Suppose \(E\) has complex rank \(a\). The fibre of \(E\) over a fixed point of \(j_{0}\) has a quaternionic structure, so \(a\) is even. Using Lemma 4.6, it follows that \(e_{Pin(2)}(E)\) can be uniquely expressed in the form \[e_{Pin(2)}(E)=\alpha_{0}(E)q^{a/2}+\alpha_{1}(E)q^{a/2-1}+\cdots+\alpha_{a/2}( E),\] where \(\alpha_{j}(E)\in H^{4j}_{\mathbb{Z}_{2}}(B_{0})/(u^{3})\). Consider the restriction to \(S^{1}\subset Pin(2)\). Since \(S^{1}\) acts trivially on \(B_{0}\), we have \(H^{*}_{S^{1}}(B_{0})\cong H^{*}(B_{0})[x]\), where \(deg(x)=2\). Under the forgetful map \(H^{*}_{Pin(2)}(B_{0})\to H^{*}_{S^{1}}(B_{0})\), \(u\) is sent to zero and \(q\) is sent to \(x^{2}\). 
On the other hand, we have \[e_{S^{1}}(E)=x^{a}+c_{1}(E)x^{a-1}+\cdots+c_{a}(E).\] This implies that the image of \(\alpha_{j}(E)\) under the map \(H^{*}_{\mathbb{Z}_{2}}(B_{0})/(u^{3})\to H^{*}(B_{0})\) is \(c_{2j}(E)\). It also implies that the odd Chern classes of \(E\) are zero mod \(2\). Because of this, we will denote \(\alpha_{j}(E)\) by \(c_{2j,\mathbb{Z}_{2}}(E)\) and refer to them as the \(\mathbb{Z}_{2}\)-equivariant Chern classes of \(E\). Note that this terminology is somewhat innacurate. Since the lift of \(j_{0}\) to \(E\) squares to \(-1\), we do not have a \(\mathbb{Z}_{2}\)-action on \(E\) and so we do not have \(\mathbb{Z}_{2}\)-equivariant Chern classes in the usual sense. Note also that we have only defined the classes \(c_{2j,\mathbb{Z}_{2}}(E)\) in cohomology with \(\mathbb{Z}_{2}\)-coefficients. Having defined the classes \(c_{2j,\mathbb{Z}_{2}}(E)\), one can also define equivariant Segre classes \(s_{2j,\mathbb{Z}_{2}}(E)\in H^{*}_{\mathbb{Z}_{2}}(B_{0})/(u^{3})\), characterised by the relation \(c_{\mathbb{Z}_{2}}(E)s_{\mathbb{Z}_{2}}(E)=1\), where \(c_{\mathbb{Z}_{2}}(E)=1+c_{2,\mathbb{Z}_{2}}(E)+\cdots\) and \(s_{\mathbb{Z}_{2}}(E)=1+s_{2,\mathbb{Z}_{2}}(E)+\cdots\) are the \(\mathbb{Z}_{2}\)-equivariant total Chern and total Segre classes. These classes are stable and so we can also define the \(\mathbb{Z}_{2}\)-equivariant Chern and Segre classes of a virtual bundle \(E_{1}-E_{2}\), where \(E_{1},E_{2}\) are complex vector bundles on \(B_{0}\) both admitting an antilinear lift of \(j_{0}\) squaring to \(-1\). Suppose now that assumptions (1)-(4) hold. Then by the above discussion, \(e_{Pin(2)}(V_{i})\) has the form \[e_{Pin(2)}(V_{i})=\sum_{j=0}^{a_{i}/2}q^{a_{i}/2-j}c_{2j,\mathbb{Z}_{2}}(V_{i})\] and similarly for \(e_{Pin(2)}(V_{i}^{\prime})\). Since \(p_{1}^{*}(q)=q_{1}\) and \(\psi_{2}(q_{1})=q+\mu\), we have \[\psi_{2}(e_{G}(V_{1})) =\sum_{j=0}^{a_{1}/2}(q+\mu)^{a/2-j}c_{2j,\mathbb{Z}_{2}}(V_{1})\] \[=\sum_{j=0}^{a_{1}/2}\sum_{l=0}^{a_{1}/2-j}\binom{a_{1}/2-j}{l} \mu^{a_{1}/2-j-l}q^{l}c_{2j,\mathbb{Z}_{2}}(V)\] \[=\sum_{k=0}^{a_{1}/2}\mu^{a_{1}/2-k}\sum_{j=0}^{k}q^{k-j}c_{2j, \mathbb{Z}_{2}}(V)\binom{a/2-j}{k-j}.\] Similar expressions hold for \(e_{Pin(2)}(V_{i}^{\prime})\). From this it follows that \[\psi_{2}(e_{G}(D_{1}))^{-1}=\sum_{k\geq 0}\mu^{-d_{1}/2-k}\sum_{j=0}^{k}q^{k-j} s_{2j,\mathbb{Z}_{2}}(D_{1})\binom{-d_{1}/2-j}{k-j}\] and similarly \[\psi_{1}(e_{G}(D_{2}))^{-1}=\sum_{k\geq 0}\mu^{-d_{2}/2-k}\sum_{j=0}^{k}q^{k-j} s_{2j,\mathbb{Z}_{2}}(D_{2})\binom{-d_{2}/2-j}{k-j}.\] Although these expressions appear to be infinite sums, they should really be regarded as elements of \(H_{Pin(2)}^{*}(F_{i})[y,y^{-1}]\). But \(Pin(2)\) acts freely on \(F_{i}\), so only finitely many terms are non-zero when pulled back to \(F_{i}\). ## 5. Seiberg-Witten invariants for spin structures In this section we will use the product formula to compute \(SW_{X,\mathfrak{s}}^{Pin(2)}\) for any compact, oriented, smooth 4-manifold and any spin-structure \(\mathfrak{s}\). The connected sum formula for the Bauer-Furuta invariant [5] says that the Seiberg-Witten monopole map \(f_{X\#Y}\) for a connected sum \(X\#Y\) is the external smash product of the monopole maps \(f_{X},f_{Y}\) for \(X\) and \(Y\). In other words \(f_{X\#Y}\) is obtained by pulling back \(f_{X}\) and \(f_{Y}\) to the product \(Pic^{\mathfrak{s}_{X}}(X)\times Pic^{\mathfrak{s}_{Y}}(Y)\) and then taking the fibrewise smash product. 
To simplify notation we will often omit mention of the spin structures \(\mathfrak{s}_{X},\mathfrak{s}_{Y}\). Write \(\widehat{f}_{X}\) for the pullback of \(f_{X}\) to \(B_{X}=Pic^{\mathfrak{s}_{X}}(X)\times S(H^{+}(X))\) and similarly define \(\widehat{f}_{Y}\) and \(\widehat{f}_{X\#Y}\). Let \(\phi=(\phi_{1},\phi_{2}):B_{X\#Y}\to S(H^{+}(X)\oplus H^{+}(Y))\) be the tautological chamber for \(X\#Y\). The zero loci \(Z_{1},Z_{2}\subset B_{X\#Y}\) are \(Z_{1}=Pic^{\mathfrak{s}_{X}}(X)\times B_{Y}\) and \(Z_{2}=B_{X}\times Pic^{\mathfrak{s}_{Y}}(Y)\), moreover \(\phi_{1}|_{Z_{2}}\) is the tautological chamber for \(X\) (pulled back under the projection \(Z_{2}\to B_{X}\)) and \(\phi_{2}|_{Z_{1}}\) is the tautological chamber for \(Y\) (pulled back under \(Z_{1}\to B_{Y}\)). Let \(j_{i}:Z_{i}\to B_{X\#Y}\) be the inclusions. Recall by Proposition 3.1 that we have isomorphisms \[H^{*}_{\mathbb{Z}_{2}}(B_{X}) \cong H^{*}(Pic^{\mathfrak{s}}(X))[u]/(u^{b_{+}(X)})\] \[H^{*}_{\mathbb{Z}_{2}}(B_{Y}) \cong H^{*}(Pic^{\mathfrak{s}}(X))[u]/(u^{b_{+}(Y)})\] \[H^{*}_{\mathbb{Z}_{2}}(B_{X\#Y}) \cong H^{*}(Pic^{\mathfrak{s}}(X)\times Pic^{\mathfrak{s}_{Y}}(Y) )[u]/(u^{b_{+}(X)+b_{+}(Y)}).\] By a similar argument, we have isomorphisms \[H^{*}_{\mathbb{Z}_{2}}(Z_{1}) \cong H^{*}(Pic^{\mathfrak{s}}(X)\times Pic^{\mathfrak{s}_{Y}}( Y))[u]/(u^{b_{+}(Y)})\] \[H^{*}_{\mathbb{Z}_{2}}(Z_{2}) \cong H^{*}(Pic^{\mathfrak{s}}(X)\times Pic^{\mathfrak{s}_{Y}}( Y))[u]/(u^{b_{+}(X)}).\] It is straightforward to see that the push-forward map \[(j_{1})_{*}:H^{*}(Pic^{\mathfrak{s}}(X)\times Pic^{\mathfrak{s}_{Y}}(Y))[u]/(u ^{b_{+}(Y)})\to H^{*}(Pic^{\mathfrak{s}}(X)\times Pic^{\mathfrak{s}_{Y}}(Y)) [u]/(u^{b_{+}(X)+b_{+}(Y)})\] is multiplication by \(u^{b_{+}(X)}\) and similarly \((j_{2})_{*}\) is multiplication by \(u^{b_{+}(Y)}\). Therefore, Theorem 4.4 takes the form: \[SW^{G}_{X\#Y}(\theta) =u^{b_{+}(X)}\left(id\otimes SW^{Pin(2)}_{Y}\right)(\psi_{2}(e_{G }(D_{X})^{-1}\theta))\] \[\qquad+u^{b_{+}(Y)}\left(id\otimes SW^{Pin(2)}_{X}\right)(\psi_{ 1}(e_{G}(D_{Y})^{-1}\theta)).\] **Lemma 5.1**.: _Let \(X\) be a compact, oriented, smooth \(4\)-manifold with \(b_{1}(X)=0\) and let \(\mathfrak{s}\) be a spin structure on \(X\). Then \(SW^{Pin(2)}_{X,\mathfrak{s}}(q^{j})=0\) unless \(j=-\sigma(X)/16-1\). In particular, \(SW^{Pin(2)}_{X,\mathfrak{s}}=0\) unless \(\sigma(X)<0\)._ Proof.: Observe that \(SW^{Pin(2)}_{X,\mathfrak{s}}(q^{j})\in H^{*}(\mathbb{RP}^{b_{+}(X)-1})\cong \mathbb{Z}_{2}[u]/(u^{b_{+}(X)})\) has degree \(4j+\sigma(X)/4+b_{+}(X)+1\). Furthermore, since \(u^{3}=0\) in \(H^{*}_{Pin(2)}(pt)\), we have \(u^{3}SW^{Pin(2)}_{X,\mathfrak{s}}(q^{j})=SW^{Pin(2)}_{X,\mathfrak{s}}(u^{3}q^ {j})=0\). Hence \(SW^{Pin(2)}_{X,\mathfrak{s}}(q^{j})\) is zero unless \(b_{+}(X)-3\leq 4j+\sigma(X)/4+b_{+}(X)+1\leq b_{+}(X)-1\). The only value of \(j\) satisfying this is \(j=-\sigma(X)/16-1\). For \(m\geq 1\), let \(mK3\) denote the connected sum of \(m\) copies of the \(K3\) surface. This has a unique spin structure \(\mathfrak{s}\) and so we will write \(SW^{Pin(2)}_{mK3}\) for \(SW^{Pin(2)}_{mK3,\mathfrak{s}}\). **Lemma 5.2**.: _For any \(m\geq 1\), we have_ \[SW^{Pin(2)}_{mK3}(q^{j})=\begin{cases}u^{3m-3}&j=m-1,\\ 0&\text{otherwise}.\end{cases}\] Proof.: By Lemma 5.1, \(SW^{Pin(2)}_{mK3}(q^{j})\) is zero unless \(j=m-1\). So it remains to compute \(SW^{Pin(2)}_{mK3}(q^{m-1})\) for each \(m\geq 1\). We will prove the result by induction. 
In the case \(m=1\), \(SW^{Pin(2)}_{K3}(1)\) has degree zero and by Lemma 3.3, \(SW^{Pin(2)}_{K3}(1)=SW_{K3}(1)\) is the ordinary mod \(2\) Seiberg-Witten invariant of \(K3\), which is \(1\). Now assume \(SW^{Pin(2)}_{mK3}(q^{m-1})=1\) for some \(m\geq 1\). Writing \((m+1)K3=K3\#mK3\), we can apply Theorem 4.4. We have \[\psi_{1}(e_{G}(D_{1})^{-1})=(\mu+q)^{-1},\quad\psi_{2}(e_{G}(D_{2})^{-1})=(\mu+ q)^{-m}.\] Then, taking \(\theta=q_{1}^{m}\), we find \[SW^{G}_{K3\#mK3}(q_{1}^{m})=u^{3}SW^{Pin(2)}_{mK3}((\mu+q)^{m-1})+u^{3m}SW^{Pin(2)} _{K3}((\mu+q)^{-m}q^{m}).\] Since \((\mu+q)^{-m}q^{m}\) is a multiple of \(q\), the second term drops out. Expanding \((\mu+q)^{m-1}\) and using the inductive hypothesis then gives \[SW^{G}_{K3\#mK3}(q_{1}^{m})=u^{3m}.\] Lastly, the forgetful map \(H^{*}_{G}(pt)\to H^{*}_{Pin(2)}(pt)\) sends \(q_{1}\) to \(q\) and hence \(SW^{Pin(2)}_{(m+1)K3}(q^{m})=u^{3m}\), which completes the inductive step. **Theorem 5.3**.: _Let \(X\) be a compact, oriented, smooth \(4\)-manifold with \(b_{+}(X)>0\) and let \(\mathfrak{s}\) be a spin-structure on \(X\). If \(b_{+}(X)\geq 3\), then_ \[SW^{Pin(2)}_{X,\mathfrak{s}}(q^{j})=u^{b_{+}(X)-3}s_{2(j+1+\sigma(X)/16), \mathbb{Z}_{2}}(D)\] _where we set \(s_{l,\mathbb{Z}_{2}}(D)=0\) if \(l<0\)._ _If \(b_{+}(X)<3\), then \(s_{2k,\mathbb{Z}_{2}}(D)\) is divisible by \(u^{3-b_{+}(X)}\) for all \(k>0\) and_ \[SW^{Pin(2)}_{X,\mathfrak{s}}(q^{j})=u^{b_{+}(X)-3}s_{2(j+1),\mathbb{Z}_{2}}(D).\] Proof.: Choose an \(m>0\) such that \(16m\geq 4b_{1}(X)-\sigma(X)\). Let \(M=X\#22m(S^{2}\times S^{2})\) and let \(\mathfrak{s}_{M}=(\mathfrak{s},\mathfrak{s}_{0})\), where \(\mathfrak{s}_{0}\) is the unique spin-structure on \(22m(S^{2}\times S^{2})\). We will compute \(SW^{Pin(2)}_{M,\mathfrak{s}_{M}}\) in two different ways. First note that \(SW^{Pin(2)}_{M,\mathfrak{s}_{M}}\) takes values in \(H^{*}_{\mathbb{Z}_{2}}(Pic^{*}(X)\times S^{b_{+}(X)+22m-1})\cong H^{*}(Pic^{ \mathfrak{s}}(X))[u]/(u^{b_{+}(X)+22m})\), by Proposition 3.1. We write \(M\) as the connected sum \(M=X\#22(S^{2}\times S^{2})\) and apply Theorem 4.4. By Lemma 5.1, \(SW^{Pin(2)}_{22(S^{2}\times S^{2}),\mathfrak{s}_{0}}(q^{j})=0\) for all \(j\) and hence \[SW^{G}_{X\#22(S^{2}\times S^{2}),\mathfrak{s}\#\mathfrak{s}_{0}}(q_{1}^{j})=u ^{22m}SW_{X,\mathfrak{s}}(q^{j}).\] Restricting to \(Pin(2)\subset G\), this reduces to \[SW^{Pin(2)}_{M,\mathfrak{s}_{M}}(q^{j})=u^{22m}SW_{X,\mathfrak{s}}(q^{j}). \tag{5.1}\] Next, we recall that \(22(S^{2}\times S^{2})\) is diffeomorphic to \(K3\#\overline{K3}\)[14]. Hence we can write \(M=(X\#m\overline{K3})\#mK3=M_{1}\#mK3\), where \(M_{1}=X\#m\overline{K3}\) and apply the product formula to this decomposition of \(M\). Write \(\mathfrak{s}_{1}\) for the spin-structure on \(M_{1}\) which is the connected sum of \(\mathfrak{s}\) with the unique spin-structure on \(\overline{K3}\) and write \(\mathfrak{s}_{2}\) for the unique spin-structure on \(mK3\). We claim that \(SW^{Pin(2)}_{M_{1},\mathfrak{s}_{1}}(q^{j})=0\) for all \(j\geq 0\). In fact, \(SW^{Pin(2)}_{M_{1},\mathfrak{s}_{1}}(q^{j})\in H^{*}_{\mathbb{Z}_{2}}(Pic^{*}(X )\times S^{19m+b_{+}(X)-1})\) has degree \[4j+(\sigma(X)+16m)/4+19m+b_{+}(X)+1\] \[\geq\sigma(X)/4+4m+19m+b_{+}(X)+1\] \[\geq b_{1}(X)+19m+b_{+}(X)+1,\] by the assumption that \(16m\geq 4b_{1}(X)-\sigma(X)\). But \(H^{*}(Pic^{*}(X)\times S^{19m+b_{+}(X)-1})\) is non-zero only in degree at most \(b_{1}(X)+19m+b_{+}(X)-1\), which proves the claim. 
Hence Theorem 4.4 gives: \[SW^{G}_{M_{1}\#mK3,\mathfrak{s}_{1}\#\mathfrak{s}_{2}}(q_{2}^{j})=u^{19m+b_{+ }(X)}SW_{mK3}(\psi_{2}(e_{G}(D_{M_{1}})^{-1}q_{2}^{j})).\] Next, using \(D_{M_{1}}=D_{X}-\mathbb{C}^{2m}\), we have that \[\psi_{2}(e_{G}(D_{M_{1}})^{-1}q_{2}^{j}) =\sum_{k\geq 0}\mu^{m-d_{X}/2-k}\sum_{l=0}^{k}q^{l}s_{2(k-l), \mathbb{Z}_{2}}(D_{M_{1}})\binom{m-d_{X}/2-k+l}{l}q^{j}\] \[=\sum_{k\geq 0}\mu^{m-d_{X}/2-k}\sum_{l=0}^{k}q^{l}s_{2(k-l), \mathbb{Z}_{2}}(D_{X})\binom{m-d_{X}/2-k+l}{l}q^{j}\] \[=\sum_{k\geq 0}\mu^{m-d_{X}/2-k}\sum_{l=0}^{k}q^{j+l}s_{2(k-l), \mathbb{Z}_{2}}(D_{X})\binom{m-d_{X}/2-k+l}{l}\] where \(d_{X}=-\sigma(X)/8\). Hence by Lemma 5.2, and assuming \(m\) is chosen large enough that \(j\leq m-1\), we have \[SW_{M_{1}\#mK3,s_{1}\#s_{2}}^{G}(q_{2}^{j})\] \[=u^{22m+b_{+}(X)-3}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! \(Sq^{2}(SW_{X,\mathfrak{s}}(x^{m}))\) is given by \[Sq^{2}(SW_{X,\mathfrak{s}}(x^{m}))=(-\sigma(X)/8+m)SW_{X,\mathfrak{s}}(x^{m+1})+(s _{1}(D)+w_{2}(H^{+}(X)))SW_{X,\mathfrak{s}}(x^{m}).\] This formula can be greatly simplified. First, since \(Pic^{\mathfrak{s}}(X)\) is a torus, the Steenrod squares are trivial and the left hand side is zero. Second since \(X\) is spin, \(\sigma(X)/8\) is even and \(s_{1}(D)=0\). Third, \(H^{+}(X)\to Pic^{\mathfrak{s}}(X)\) is a trivial bundle, so \(w_{2}(H^{+}(X))=0\). So we are left with \(mSW_{X,\mathfrak{s}}(x^{m+1})=0\) for all \(m\geq 0\). Taking \(m=2k-1\), we see that \(SW_{X,\mathfrak{s}}(x^{2k})=0\) for all \(k>0\). Now since \(b_{+}(X)=3\), Lemma 3.3 and Theorem 5.3 give \[0=SW_{X,\mathfrak{s}}(x^{2k})=SW_{X,\mathfrak{s}}^{Pin(2)}(q^{k})|_{u=0}=s_{2(k +1+\sigma(X)/16)}(D)\] for all \(k>0\). Hence \(s_{2j}(D)=0\ (\mathrm{mod}\ 2)\) for all \(j>1+\sigma(X)/16\). Combined with Lemma 3.3, Theorems 5.3 and 5.5 yield a complete calculation of the mod \(2\) Seiberg-Witten invariant for spin-structures: **Theorem 5.6**.: _Let \(X\) be a compact, oriented, smooth \(4\)-manifold with \(b_{+}(X)>0\) and let \(\mathfrak{s}\) be a spin-structure on \(X\). If \(b_{+}(X)\neq 3\), then \(SW_{X,\mathfrak{s}}(x^{j})=0\) for all \(j\geq 0\). If \(b_{+}(X)=3\), then \(SW_{X,\mathfrak{s}}(x^{j})=0\) for all \(j>0\) and_ \[SW_{X,\mathfrak{s}}(1)=s_{2(1+\sigma(X)/16)}(D).\] _Remark 5.7_.: If \(b_{+}(X)=3\) and \(X\) is spin, then \(\sigma(X)=0\) or \(-16\) by the \(10/8\)-inequality. In the case \(\sigma(X)=-16\), we get \(SW_{X,\mathfrak{s}}(1)=s_{0}(D)=1\) and in the case \(\sigma(X)=0\), we get \(SW_{X,\mathfrak{s}}(1)=s_{2}(D)=c_{2}(D)\in H^{4}(Pin^{\mathfrak{s}}(X);\mathbb{ Z}_{2})\). 
When \(\sigma(X)=0\), our result is a generalisation of a result of Morgan-Szabo [20], who proved the \(b_{1}(X)=0\) case. When \(\sigma(X)=-16\), our result is a generalisation of a result of a Ruberman-Strle [25], who proved the \(b_{1}(X)=4\) case. Theorems 5.6 and 5.3 give \(SW_{X,\mathfrak{s}}\) and \(SW_{X,\mathfrak{s}}^{Pin(2)}\) in terms of Segre classes of the index bundle \(D\to Pic^{\mathfrak{s}}(X)\). These can be computed using the families index theorem, as we will now show. Let \(T_{X}=H^{1}(X;\mathbb{R})/H^{1}(X;\mathbb{Z})\) be the moduli space of flat unitary line bundles on \(X\). Over \(X\times T_{X}\) we have a universal line bundle with connection, the _Poincare line bundle_\(L\to X\times T_{X}\) with the property that its restriction to \(X\times p\) is the line bundle corresponding to \(p\in T_{X}\). Let \(\Omega\in H^{2}(X\times T_{X};\mathbb{Z})\) be the first Chern class of the Poincare line bundle. We have that \(\Omega=\sum_{i}x_{i}\backsim y_{i}\), where \(\{y_{i}\}\) is a basis of \(H^{1}(X;\mathbb{Z})\) and \(\{x_{i}\}\) is the corresponding dual basis of \(H^{1}(T_{X};\mathbb{Z})\cong Hom(H^{1}(X;\mathbb{Z}),\mathbb{Z})\). The spin connection gives an identification \(Pic^{\mathfrak{s}}(X)\cong T_{X}\). Then the families index theorem gives: \[Ch(D)=\int_{X}e^{\Omega}\wedge\left(1-\frac{\sigma(X)}{8}vol_{X}\right),\] where \(\int_{X}\) means integration over the fibres of \(X\times T_{X}\to T_{X}\) and \(vol_{X}\) is a \(4\)-form on \(X\) such that \(\int_{X}vol_{X}=1\). Since each term in \(\Omega\) has degree \(1\) in \(X\), we have that \(\Omega^{5}=0\) and that \(\int_{X}\Omega^{n}\wedge vol_{X}=0\) for any \(n>0\). It follows that \[Ch(D)=-\frac{\sigma(X)}{8}+\frac{1}{24}\int_{X}\Omega^{4}.\] For any subset \(I\subset\{1,\ldots,b_{1}(X)\}\) of size \(4\), let \(c_{I}=\langle y_{i_{1}}y_{i_{2}}y_{i_{3}}y_{i_{4}},[X]\rangle\in\mathbb{Z}\) where \(I=\{i_{1},i_{2},i_{3},i_{4}\}\) ordered so that \(i_{1}<i_{2}<i_{3}<i_{4}\). Also set \(x_{I}=x_{i_{1}}x_{i_{2}}x_{i_{3}}x_{i_{4}}\). Then we have \[\frac{1}{24}\int_{X}\Omega^{4}=\sum_{|I|=4}c_{I}x_{I}\in H^{4}(T_{X};\mathbb{Z}).\] Let \(s=(1/24)\int_{X}\Omega^{4}\) and \(d=-\sigma(X)/8\). Then \(Ch(D)=d+s\). If we write \(Ch(D)=\sum_{i\geq 0}Ch_{i}(D)\), where \(Ch_{i}(D)\) has degree \(2i\), then \(Ch_{0}(D)=d\), \(Ch_{2}(D)=s\) and all other terms are zero. Since \(Ch_{1}(D)=c_{1}(D)\) and \(Ch_{2}(D)=(c_{1}(D)^{2}-2c_{2}(D))/2=-c_{2}(D)\), we see that \(c_{1}(D)=0\) and \(s=Ch_{2}(D)=s_{2}(D)\) is the second Segre class of \(D\). Using the splitting principle, one can express the total Segre class of a virtual bundle \(V\) in terms of the Chern character as: \[s(V)=exp\left(\sum_{n\geq 1}(-1)^{n}(n-1)!\;Ch_{n}(V)\right).\] Therefore, in the case \(V=D\), we have \(s(D)=e^{s_{2}(D)}\). Thus \(s_{j}(D)=0\) for odd \(j\) and \[s_{2j}(D)=\frac{1}{j!}s_{2}(D)^{j},\] where, as shown above, \(s_{2}(D)\) is given by \[s_{2}(D)=\sum_{|I|=4}c_{I}x_{I}\in H^{4}(T_{X};\mathbb{Z}). \tag{5.3}\] Choose an arbitrary ordering of subsets of \(\{1,\ldots,b_{1}(X)\}\) of size \(4\). Then it follows that \(s_{2j}(D)\) can be written as \[s_{2j}(D)=\sum_{I_{1}<\cdots<I_{j}}c_{I_{1}}\cdots c_{I_{j}}x_{I_{1}}\cdots x_ {I_{j}}.\] Using the above formula, Theorem 5.6 gives a complete description of the mod \(2\) Seiberg-Witten invariant of any spin structure, depending only on \(b_{+}(X),\sigma(X)\) and the \(4\)-fold cup products \(\langle y_{1}\,y_{2}\,y_{3}\,y_{4},[X]\rangle\), \(y_{1},y_{2},y_{3},y_{4}\in H^{1}(X;\mathbb{Z})\). 
We note here a consequence of Theorem 5.5 that is of independent interest: **Theorem 5.8**.: _Let \(X\) be a compact, oriented, smooth spin \(4\)-manifold with \(b_{+}(X)=3\) and \(\sigma(X)=-16\). Then \(\langle y_{1}\,y_{2}\,y_{3}\,y_{4},[X]\rangle\) is even for any \(y_{1},y_{2},y_{3},y_{4}\in H^{1}(X;\mathbb{Z})\)._ Proof.: Theorem 5.5 implies that \(s_{2}(D)=0\pmod{2}\), which by Equation (5.3) implies that all \(4\)-fold cup products \(\langle y_{1}\,y_{2}\,y_{3}\,y_{4},[X]\rangle\) are even. This result actually follows from a theorem of Furuta-Kametani [13, Theorem 5], proved by different means. See also [22, Theorem 4] for a related result. **Corollary 5.9**.: _Let \(M_{E_{8}}\) denote the compact, simply-connected topological \(4\)-manifold with intersection form the negative definite \(E_{8}\) lattice. Then \(T^{4}\#2M_{E_{8}}\#n(S^{1}\times S^{3})\) does not admit a smooth structure for any \(n\geq 0\)._ Proof.: Suppose that \(X=T^{4}\#2M_{E_{8}}\#n(S^{1}\times S^{3})\) admits a smooth structure. Since \(H^{2}(X;\mathbb{Z})\) has no \(2\)-torsion, the map \(\mathfrak{s}\to c(\mathfrak{s})\) sending a spin\({}^{c}\)-structure to its characteristic is a bijection. But the intersection form on \(H^{2}(X;\mathbb{Z})\) is even, so \(X\) is spin. We also have that \(\langle y_{1}\,y_{2}\,y_{3}\,y_{4},[X]\rangle=\pm 1\) for a basis \(y_{1},y_{2},y_{3},y_{4}\) of \(H^{1}(T^{4};\mathbb{Z})\subseteq H^{1}(X;\mathbb{Z})\). But this contradicts Theorem 5.8. ## 6. Seiberg-Witten invariants of spin families By adapting the arguments of Section 5, we will obtain a general formula for the \(Pin(2)\)-equivariant Seiberg-Witten invariants of spin families. Let \(B\) be a compact manifold with \(\mathbb{Z}_{2}\)-action defined by an involution \(j:B\to B\) and \(f:S^{V,U}\to S^{V^{\prime},U^{\prime}}\) a \(Pin(2)\)-monopole map. Since \(f\) might not admit a \(Pin(2)\)-equivariant chamber, we replace \(B\) by \(\widehat{B}=S(H^{+})\), the unit sphere bundle of \(H^{+}\) over \(B\) and let \(\pi:\widehat{B}\to B\). Consider the pullback \(\widehat{f}\) of \(f\) to \(\widehat{B}\). Then \(\widehat{f}\) admits a tautological chamber \(\phi^{taut}:\widehat{B}\to\pi^{*}(H^{+})\) which is simply given by the inclusion \(\widehat{B}=S(H^{+})\to\pi^{*}(H^{+})\). Using \(\phi^{taut}\) we get a \(Pin(2)\)-equivariant Seiberg-Witten invariant \[SW^{Pin(2)}_{\widehat{f}}:H^{*}_{Pin(2)}(pt)\to H^{*-2d+b_{+}+1}_{\mathbb{Z}_{ 2}}(\widehat{B})\] where we have written \(SW^{Pin(2)}_{\widehat{f}}\) in place of \(SW^{Pin(2),\phi^{taut}}_{\widehat{f}}\) to simplify notation. If \(\phi:B\to S(H^{+})\) is a \(Pin(2)\)-equivariant chamber for \(f\), then we clearly have the relation \[SW^{Pin(2),\phi}_{f}(\theta)=\phi^{*}(SW^{Pin(2)}_{\widehat{f}}(\theta))\] and hence it suffices to compute \(SW^{Pin(2)}_{\widehat{f}}\). We will make the following assumptions about the \(\mathbb{Z}_{2}\)-action on \(B\) which are satisfied for the Seiberg-Witten monopole map of a spin family, provided that the monodromy of the family acts trivially on \(H^{1}(X;\mathbb{Z})\). Namely, 1. \(j\) does not act freely on \(B\). 2. The map \(u:H^{*}_{\mathbb{Z}_{2}}(B)\to H^{*+1}_{\mathbb{Z}_{2}}(B)\) is injective. 3. Over each fixed point \(b\in B\) of \(j:B\to B\), the involution \(j:H^{+}_{b}\to H^{+}_{b}\) acts as \(-1\). Hence \(j\) acts freely on \(S(H^{+})\). 
One motivation for assumption (2) is given by the following lemma: **Lemma 6.1**.: _Let \(E_{1},E_{2}\to B\) be \(\mathbb{Z}_{2}\)-vector bundles over \(B\), \(\pi_{i}:S(E_{i})\to B\) the unit sphere bundles. Suppose that \(\iota:E_{1}\to E_{2}\) is a \(\mathbb{Z}_{2}\)-equivariant inclusion and that \(E_{2}/\iota E_{1}\cong\mathbb{R}^{k}_{-1}\), where \(\mathbb{R}_{-1}\) denotes the trivial real line bundle where \(\mathbb{Z}_{2}\) acts as multiplication by \(-1\). If \(u:H^{*}_{\mathbb{Z}_{2}}(B)\to H^{*+1}_{\mathbb{Z}_{2}}(B)\) is injective, then \(\iota_{*}:H^{*}_{\mathbb{Z}_{2}}(S(E_{1}))\to H^{*+k}_{\mathbb{Z}_{2}}(S(E_{2}))\) is also injective._ Proof.: The Gysin sequences for \(S(E_{1})\) and \(S(E_{2})\) are related through a commutative diagram with exact rows, where \(r_{1}\) and \(r_{2}=r_{1}+k\) are the ranks of \(E_{1},E_{2}\). The result follows by injectivity of \(u^{k}:H^{*}_{\mathbb{Z}_{2}}(B)\to H^{*+k}_{\mathbb{Z}_{2}}(B)\) and a diagram chase. Now we are ready to repeat the argument of Theorem 5.3. Given a \(Pin(2)\)-equivariant monopole map \(f:S^{V,U}\to S^{V^{\prime},U^{\prime}}\) over a base \(B\), we compute \(SW^{Pin(2)}_{\widehat{g}}\) in two different ways, where \(g=f\wedge f_{22m(S^{2}\times S^{2})}\) and \(f_{22m(S^{2}\times S^{2})}\) is the Seiberg-Witten \(Pin(2)\)-monopole map for \(22m(S^{2}\times S^{2})\), for some sufficiently large \(m\). First of all, let \(\mathbb{R}_{-1}\to B\) denote the trivial real line bundle where \(\mathbb{Z}_{2}\) acts as multiplication by \(-1\). Consider the inclusions \[\iota_{1}:S(H^{+})\to S(H^{+}\oplus\mathbb{R}_{-1}^{22m}),\quad\iota_{2}:S(\mathbb{R}_{-1}^{3m})\to S(H^{+}\oplus\mathbb{R}_{-1}^{22m})\] induced by the inclusions \(H^{+}\to H^{+}\oplus\mathbb{R}_{-1}^{22m}\) and \(\mathbb{R}_{-1}^{3m}\to(H^{+}\oplus\mathbb{R}_{-1}^{19m})\oplus\mathbb{R}_{-1}^{3m}\cong H^{+}\oplus\mathbb{R}_{-1}^{22m}\) of \(\mathbb{Z}_{2}\)-vector bundles. Theorem 4.4 gives \[SW^{Pin(2)}_{\widehat{g}}(\theta)=(\iota_{1})_{*}(SW^{Pin(2)}_{\widehat{f}}(\theta)). \tag{6.1}\] Now let \(f_{mK3}\), \(f_{m\overline{K3}}\) be the \(Pin(2)\) Seiberg-Witten monopole maps for \(mK3\) and \(m\overline{K3}\). We can assume that \(f_{22m(S^{2}\times S^{2})}=f_{m\overline{K3}}\wedge f_{mK3}\). We write \(g=f\wedge f_{22m(S^{2}\times S^{2})}\) in the form \(g=(f\wedge f_{m\overline{K3}})\wedge f_{mK3}\) and apply Theorem 4.4. First of all, note that for any \(\theta\), \(SW^{Pin(2)}_{\widehat{f\wedge f_{m\overline{K3}}}}(\theta)\in H^{*}_{\mathbb{Z}_{2}}(S(H^{+}\oplus\mathbb{R}_{-1}^{19m}))\) has degree \[deg(\theta)+\frac{\sigma(X)}{4}+4m+19m+b_{+}+1\geq\frac{\sigma(X)}{4}+4m+19m+b_{+}+1>dim(B)+19m+b_{+}-1,\] provided \(m\) satisfies \(m\geq dim(B)/4-\sigma(X)/16\). Our assumption on \(j:H^{+}\to H^{+}\) ensures that \(\mathbb{Z}_{2}\) acts freely on \(S(H^{+}\oplus\mathbb{R}_{-1}^{19m})\) and hence the quotient \(S(H^{+}\oplus\mathbb{R}_{-1}^{19m})/\mathbb{Z}_{2}\) is a manifold of dimension \(dim(B)+19m+b_{+}-1\). Hence the degree of \(SW^{Pin(2)}_{\widehat{f\wedge f_{m\overline{K3}}}}(\theta)\) is greater than the highest degree in which \(H^{*}_{\mathbb{Z}_{2}}(S(H^{+}\oplus\mathbb{R}_{-1}^{19m}))\) is non-zero. So \(SW^{Pin(2)}_{\widehat{f\wedge f_{m\overline{K3}}}}(\theta)=0\) for all \(\theta\). Together with Theorem 4.4, this implies that \[SW^{G}_{\widehat{g}}(q_{2}^{j})=(\iota_{2})_{*}(SW^{Pin(2)}_{mK3}(\psi_{2}(e_{G}(D_{1})^{-1}q_{2}^{j})))\] where \(D_{1}=D-\mathbb{C}^{2m}\) and \(D=V-V^{\prime}\). 
Expanding the Euler class \(e_{G}(D_{1})\), collecting \(\mu^{0}\)-terms and simplifying, exactly as in the proof of Theorem 5.3, we obtain \[SW^{Pin(2)}_{\widehat{g}}(q_{2}^{j})=(\iota_{2})_{*}(u^{3m-3}s_{2(j+1-d/2), \mathbb{Z}_{2}}(D)). \tag{6.2}\] Equating (6.1) and (6.2) yields \[(\iota_{1})_{*}(SW^{Pin(2)}_{\widehat{f}}(q^{j}))=(\iota_{2})_{*}(u^{3m-3}s_{2 (j+1-d/2),\mathbb{Z}_{2}}(D)). \tag{6.3}\] By Lemma 6.1, and the assumption that \(u:H^{*}_{\mathbb{Z}_{2}}(B)\to H^{*+1}_{\mathbb{Z}_{2}}(B)\) is injective, it follows that \((\iota_{1})_{*}\) is injective and thus Equation (6.3) completely determines \(SW^{Pin(2)}_{\widehat{f}}\). **Lemma 6.2**.: _If \(u:H^{*}_{\mathbb{Z}_{2}}(B)\to H^{*+1}_{\mathbb{Z}_{2}}(B)\) is injective then for any \(\theta\in H^{*}_{Pin(2)}(pt)\), \(SW^{Pin(2)}_{\widehat{f}}(\theta)\) is in the image of the pullback \(\pi^{*}:H^{*}_{\mathbb{Z}_{2}}(B)\to H^{*}_{\mathbb{Z}_{2}}(S(H^{+}))\), where \(\pi:S(H^{+})\to B\) is the projection. In particular, the invariants \(SW^{Pin(2),\phi}_{f}(\theta)\) do not depend on the choice of a chamber._ Proof.: To simplify notation, set \(\alpha=SW^{Pin(2)}_{\widehat{f}}(\theta)\in H^{*}_{\mathbb{Z}_{2}}(S(H^{+}))\). Since \(u^{3}=0\) in \(H^{*}_{Pin(2)}(pt)\), we have \(u^{3}\alpha=SW^{Pin(2)}_{\widehat{f}}(u^{3}\theta)=0\). Consider the Gysin sequence \[\cdots\to H^{*}_{\mathbb{Z}_{2}}(B)\xrightarrow{\pi^{*}}H^{*}_{\mathbb{Z}_{2}} (S(H^{+}))\xrightarrow{\pi_{*}}H^{*-(b_{+}-1)}_{\mathbb{Z}_{2}}(B)\to\cdots\] Now \(u^{3}\alpha=0\) implies \(u^{4}\pi_{*}(\alpha)=0\). But \(u:H^{*}_{\mathbb{Z}_{2}}(B)\to H^{*+1}_{\mathbb{Z}_{2}}(B)\) is injective, so \(\pi_{*}(\alpha)=0\). Hence by exactness of the Gysin sequence, \(\alpha=\pi^{*}(\beta)\) for some \(\beta\in H^{*}_{\mathbb{Z}_{2}}(B)\). Now suppose that \(\phi:B\to S(H^{+})\) is a chamber. Then \(SW^{Pin(2),\phi}_{f}(\theta)=\phi^{*}(\alpha)=\phi^{*}\pi^{*}(\beta)=\beta\), which does not depend on the choice of \(\phi\) (note that in general, \(\beta\) is only unique up to multiples of \(e_{\mathbb{Z}_{2}}(H^{+})\), but if a chamber exists then \(e_{\mathbb{Z}_{2}}(H^{+})=0\)). Let \(E_{1}\to E_{2}\) be an inclusion of \(\mathbb{Z}_{2}\)-vector bundles on \(B\) and \(\iota:S(E_{1})\to S(E_{2})\) the induced map of sphere bundles. Then for any \(\beta\in H^{*}_{\mathbb{Z}_{2}}(B)\), we have \[\iota_{*}(\pi_{1}^{*}(\beta))=\pi_{2}^{*}(e_{\mathbb{Z}_{2}}(E_{2}/E_{1})\beta), \tag{6.4}\] where \(\pi_{1},\pi_{2}\) are the projections \(\pi_{i}:S(E_{i})\to B\). By Lemma 6.2, we have that \(SW^{Pin(2)}_{\widehat{f}}(q^{j})=\pi^{*}(\alpha_{j})\) for some \(\alpha_{j}\in H^{*}_{\mathbb{Z}_{2}}(B)/\langle e_{\mathbb{Z}_{2}}(H^{+})\rangle\), where \(\pi\) is the projection \(\pi:S(H^{+})\to B\). Applying (6.4) to Equation (6.3) gives: \[u^{22m}\alpha_{j}=e_{\mathbb{Z}_{2}}(H^{+})u^{22m-3}s_{2(j+1-d/2),\mathbb{Z}_{ 2}}(D),\] which holds as an equality in \(H^{*}_{\mathbb{Z}_{2}}(B)/(u^{22m}e_{\mathbb{Z}_{2}}(H^{+}))\). Cancelling a common factor of \(u^{22m-3}\) gives \[u^{3}\alpha_{j}=e_{\mathbb{Z}_{2}}(H^{+})s_{2(j+1-d/2),\mathbb{Z}_{2}}(D)\] in \(H^{*}_{\mathbb{Z}_{2}}(B)/(u^{3}e_{\mathbb{Z}_{2}}(H^{+}))\). Since \(u:H^{*}_{\mathbb{Z}_{2}}(B)\to H^{*}_{\mathbb{Z}_{2}}(B)\) is injective, the right hand side must be a multiple of \(u^{3}\). Let \(u^{-3}:u^{3}H^{*}_{\mathbb{Z}_{2}}(B)\to H^{*-3}_{\mathbb{Z}_{2}}(B)\) denote the inverse of \(u^{3}\) on its image. 
Then \[\alpha_{j}=u^{-3}e_{\mathbb{Z}_{2}}(H^{+})s_{2(j+1-d/2),\mathbb{Z}_{2}}(D).\] Pulling back to \(H^{*}_{\mathbb{Z}_{2}}(S(H^{+}))\), we obtain \[SW^{Pin(2)}_{\widehat{f}}(q^{j})=\pi^{*}(u^{-3}e_{\mathbb{Z}_{2}}(H^{+})s_{2(j +1-d/2),\mathbb{Z}_{2}}(D)).\] We have proven the following: **Theorem 6.3**.: _Suppose that \(u:H^{*}_{\mathbb{Z}_{2}}(B)\to H^{*+1}_{\mathbb{Z}_{2}}(B)\) is injective and suppose that \(j:H^{+}\to H^{+}\) acts as \(-1\) over the fixed points of \(j:B\to B\). Then for each \(j\geq 0\), \(e_{\mathbb{Z}_{2}}(H^{+})s_{2(j+1-d/2),\mathbb{Z}_{2}}(D)\) is a multiple of \(u^{3}\) and_ \[SW^{Pin(2)}_{\widehat{f}}(q^{j})=\pi^{*}(u^{-3}e_{\mathbb{Z}_{2}}(H^{+})s_{2(j +1-d/2),\mathbb{Z}_{2}}(D)).\] Now suppose that \(\pi_{E}:E\to B_{0}\) is a smooth spin family of \(4\)-manifolds. This means that \(E\) is a fibre bundle with fibres given by a compact, oriented smooth \(4\)-manifold \(X\), with transition functions given by orientation preserving diffeomorphisms of \(X\) and in addition we are given a spin-structure \(\mathfrak{s}_{E}\) on the vertical tangent bundle \(T(E/B_{0})=Ker((\pi_{E})_{*}:TE\to TB_{0})\). If \(b_{1}(X)>0\) then we need to assume also that there exists a section \(s:B_{0}\to E\). In this case we get a families Seiberg-Witten monopole map \(f:S^{V,U}\to S^{V^{\prime},U^{\prime}}\) over \(B\), where \(B=Pic^{s_{E}}(E/B_{0})\) is the moduli space of gauge equivalence classes of spin\({}^{c}\)-connections on the fibres of \(E\). See [2, Example 2.4] for details of the construction, including an explanation of why the section \(s\) is needed. Thus \(B\to B_{0}\) is a fibre bundle whose fibre over \(b\in B_{0}\) is \(Pic^{s_{E}|_{X_{b}}}(X_{b})\), where \(X_{b}=\pi_{E}^{-1}(b)\) is the fibre of \(E\) over \(b\). This is a torus bundle over \(B_{0}\). Moreover, since the family has a spin structure, there is a section \(s:B_{0}\to B\) given by the spin connection. Thus \(B\) is completely determined by the degree \(1\) monodromy representation \(\pi_{1}(B_{0})\to Aut(H^{1}(X;\mathbb{Z}))\). The involution \(j:B\to B\) acts as the identity on \(B_{0}\) and as \(-1\) on the fibres of \(B\to B_{0}\). Assuming that the monodromy of the family \(E\to B_{0}\) acts trivially on \(H^{1}(X;\mathbb{Z})\), then \(B\cong B_{0}\times Pic^{s_{E}}(X)\) and it follows easily that \(H^{*}_{\mathbb{Z}_{2}}(B)\cong H^{*}(B)[u]\). In particular, \(u:H^{*}_{\mathbb{Z}_{2}}(B)\to H^{*+1}_{\mathbb{Z}_{2}}(B)\) is injective. To each spin\({}^{c}\)-connection, we have a corresponding Dirac operator. Thus \(B\) is the parameter space for a family of Dirac operators. The virtual bundle \(D=V-V^{\prime}\) is the families index of this family. The bundle \(H^{+}\to B\) is the pullback to \(B\) of the bundle \(H^{+}(X)\to B_{0}\) whose fibre over a point \(b\in B_{0}\) is the space \(H^{+}(X_{b})\) of self-dual harmonic \(2\)-forms on \(X_{b}\) (with respect to a given family of metrics). The involution \(j:H^{+}(X_{b})\to H^{+}(X_{b})\) acts as a combination of \(j:B\to B\) on the base, together with multiplication by \(-1\) on the fibres. Thus \(j\) acts as multiplication by \(-1\) over the fixed points of \(j:B\to B\). Our assumptions (1)-(3) are satisfied and hence Theorem 6.3 applies. To compute the equivariant Euler class \(e_{\mathbb{Z}_{2}}(H^{+})\), it is best to think of \(H^{+}\) as being the tensor product \(H^{+}(X)\otimes\mathbb{R}_{-1}\). Then the splitting principle gives \[e_{\mathbb{Z}_{2}}(H^{+})=\sum_{j=0}^{b_{+}(X)}u^{j}w_{b_{+}(X)-j}(H^{+}(X)) \in H^{*}(B)[u]. 
\tag{6.5}\] Since \(e_{\mathbb{Z}_{2}}(H^{+})\) is a monic polynomial in \(u\), multiplication \(e_{\mathbb{Z}_{2}}(H^{+}):H^{*}(B)[u]\to H^{*+b_{+}(X)}(B)[u]\) is injective. Hence, by the Gysin sequence for \(S(H^{+})\to B\), we have an isomorphism \[H^{*}_{\mathbb{Z}_{2}}(S(H^{+}))\cong H^{*}(B)[u]/(e_{\mathbb{Z}_{2}}(H^{+})).\] Write \(SW^{Pin(2)}_{E,\mathfrak{s}_{E}}\) for the \(Pin(2)\)-equivariant Seiberg-Witten invariants of \(\widehat{f}\). This is a map \[SW^{Pin(2)}_{E,\mathfrak{s}_{E}}:H^{*}_{Pin(2)}(pt)\to H^{*-2d+b_{+}(X)+1}_{\mathbb{Z}_{2}}(S(H^{+})).\] Applying Theorem 6.3 gives **Theorem 6.4**.: _Let \(E\to B_{0}\) be a spin family. If \(b_{1}(X)>0\), assume there exists a section \(s:B_{0}\to E\) and that the monodromy of the family acts trivially on \(H^{1}(X;\mathbb{Z})\). Then for any \(k\geq 1+\sigma(X)/16\), we have_ \[(w_{b_{+}}(H^{+}(X))+uw_{b_{+}-1}(H^{+}(X))+u^{2}w_{b_{+}-2}(H^{+}(X)))s_{2k,\mathbb{Z}_{2}}(D)=0\;(\mathrm{mod}\;u^{3}). \tag{6.6}\] _In particular, if \(\sigma(X)<0\), then \(w_{l}(H^{+}(X))=0\) for \(b_{+}(X)-2\leq l\leq b_{+}(X)\). Furthermore, we have_ \[SW^{Pin(2)}_{E,\mathfrak{s}_{E}}(q^{j})=\sum_{l=0}^{b_{+}(X)}u^{l-3}w_{b_{+}(X)-l}(H^{+}(X))s_{2(j+1+\sigma(X)/16),\mathbb{Z}_{2}}(D).\] Proof.: According to Theorem 6.3, \(e_{\mathbb{Z}_{2}}(H^{+}(X))s_{2(j+1+\sigma(X)/16),\mathbb{Z}_{2}}(D)\) is a multiple of \(u^{3}\) for any \(j\geq 0\). Using Equation (6.5), this gives \[(w_{b_{+}}(H^{+}(X))+uw_{b_{+}-1}(H^{+}(X))+u^{2}w_{b_{+}-2}(H^{+}(X)))s_{2(j+1+\sigma(X)/16),\mathbb{Z}_{2}}(D)=0\;(\mathrm{mod}\;u^{3})\] for all \(j\geq 0\). In particular, if \(\sigma(X)<0\), then taking \(j=-1-\sigma(X)/16\) gives \(w_{l}(H^{+}(X))=0\) for \(b_{+}(X)-2\leq l\leq b_{+}(X)\). Furthermore, Theorem 6.3 and Equation (6.5) give \[SW^{Pin(2)}_{E,\mathfrak{s}_{E}}(q^{j})=\sum_{l=0}^{b_{+}(X)}u^{l-3}w_{b_{+}(X)-l}(H^{+}(X))s_{2(j+1+\sigma(X)/16),\mathbb{Z}_{2}}(D).\] _Remark 6.5_.: The condition that \(w_{l}(H^{+}(X))=0\) for \(b_{+}(X)-2\leq l\leq b_{+}(X)\) when \(\sigma(X)<0\) was also shown in [3, Theorem 1.2]. The vanishing condition (6.6) is somewhat difficult to use because of the presence of the \(\mathbb{Z}_{2}\)-equivariant Segre classes which could possibly have \(u\) and \(u^{2}\)-terms. However, by looking at the lowest order term in \(u\) in (6.6), we obtain: **Corollary 6.6**.: _Let \(E\to B_{0}\) be a spin family. If \(b_{1}(X)>0\), assume there exists a section \(s:B_{0}\to E\) and that the monodromy of the family acts trivially on \(H^{1}(X;\mathbb{Z})\). Let \(l\geq 0\) be the largest non-negative integer for which \(w_{l}(H^{+}(X))\neq 0\). If \(l\geq b_{+}(X)-2\), then_ \[w_{l}(H^{+}(X))s_{2(j+1+\sigma(X)/16)}(D)=0\] _for all \(j\geq 0\)._ Proof.: If \(w_{k}(H^{+}(X))=0\) for \(k>l\) and \(l\geq b_{+}(X)-2\), then the left hand side of (6.6) has the form \(u^{b_{+}(X)-l}w_{l}(H^{+}(X))s_{2(j+1+\sigma(X)/16)}(D)+\cdots\), where \(\cdots\) denotes terms of higher order in \(u\). Since \(l\geq b_{+}(X)-2\), this leading term has degree at most \(2\) in \(u\), so the vanishing of (6.6) modulo \(u^{3}\) forces \(w_{l}(H^{+}(X))s_{2(j+1+\sigma(X)/16)}(D)=0\). Consider a spin family of \(4\)-manifolds \(E\to B_{0}\) with \(Pin(2)\)-equivariant monopole map \(f\). 
Restricting to the circle subgroup \(S^{1}\subset Pin(2)\), and choosing a chamber \(\phi:B\to S(H^{+})\), we may consider the \(S^{1}\)-equivariant Seiberg-Witten invariants of \(f\) (taken as usual with \({\mathbb{Z}}_{2}\)-coefficients) \[SW^{\phi}_{E,{\mathfrak{s}}_{E}}:H^{*}_{S^{1}}(pt)\to H^{*-2d+b_{+}(X)+1}(B).\] The cohomology classes \(SW_{E,{\mathfrak{s}}_{E}}(x^{m})\in H^{2m-2d+b_{+}(X)+1}(B)\) are the (mod \(2\)) _families Seiberg-Witten invariants_ of the spin\({}^{c}\)-family \((E,{\mathfrak{s}}_{E})\), as defined for instance in [2]. Using Theorem 6.4, we obtain **Theorem 6.7**.: _Let \(E\to B_{0}\) be a spin family. If \(b_{1}(X)>0\), assume there exists a section \(s:B_{0}\to E\) and that the monodromy of the family acts trivially on \(H^{1}(X;{\mathbb{Z}})\). Then for any chamber \(\phi\), we have_ \[SW^{\phi}_{E,{\mathfrak{s}}_{E}}(x^{2j})=\sum_{l=1}^{3}\left(u^{l-3}w_{b_{+}(X )-l}(H^{+}(X))s_{2(j+1+\sigma(X)/16),{\mathbb{Z}}_{2}}(D)\right)\big{|}_{u=0}.\] _In particular if \(\sigma(X)<0\) or if \(b_{1}(X)=0\), then_ \[SW^{\phi}_{E,{\mathfrak{s}}_{E}}(x^{2j})=w_{b_{+}(X)-3}(H^{+}(X))s_{2(j+1+ \sigma(X)/16)}(D).\] Proof.: Let \(\phi:B\to S(H^{+}(X))\) be a chamber. This defines a \({\mathbb{Z}}_{2}\)-equivariant map \(S^{0}\times B=S({\mathbb{R}}\phi)\to S(H^{+}(X))\), inducing a map \[\phi^{*}:H^{*}_{{\mathbb{Z}}_{2}}(S(H^{+}(X)))\to H^{*}_{{\mathbb{Z}}_{2}}(S^ {0}\times B)\cong H^{*}(B).\] By the same reasoning as in Lemma 3.3, we have that \(SW_{E,s_{E}}(x^{2j})=\phi^{*}(SW_{E,s_{E}}^{Pin(2)}(q^{j}))\), so it remains to describe the map \(\phi^{*}\). The existence of \(\phi\) implies that \(w_{b_{+}(X)}(H^{+}(X))=0\) and hence \(e_{\mathbb{Z}_{2}}(H^{+}(X))\) is divisible by \(u\), by Equation (6.5). Hence, the forgetful map \(H^{*}_{\mathbb{Z}_{2}}(B)\to H^{*}(B)\) factors through \(H^{*}_{\mathbb{Z}_{2}}(S(H^{+}(X)))\to H^{*}(B)\), and it is clear that this gives the map \(\phi^{*}\), because \(\phi^{*}\circ\pi^{*}:H^{*}_{\mathbb{Z}_{2}}(B)\to H^{*}(B)\) is the forgetful map, where \(\pi:S(H^{+}(X))\to B\) is the projection. It then follows that \(\phi^{*}(SW_{E,s_{E}}^{Pin(2)}(q^{j}))\) is given by extracting the \(u^{0}\)-term, giving \[SW_{E,s_{E}}^{\phi}(x^{2j})=\sum_{l=1}^{3}\left.\left(u^{l-3}w_{b_{+}(X)-l}(H^ {+}(X))s_{2(j+1+\sigma(X)/16),\mathbb{Z}_{2}}(D)\right)\right|_{u=0}.\] If \(\sigma(X)<0\), then Theorem 6.4 also gives \(w_{l}(H^{+}(X))=0\) for \(l>b_{+}(X)-3\), so the formula simplifies to \(SW_{E,s_{E}}^{\phi}(x^{2j})=w_{b_{+}(X)-3}(H^{+}(X))s_{2(j+1+\sigma(X)/16)}(D)\). If \(b_{1}(X)=0\), then \(j\) acts trivially on \(B\) and \(D\) is a quaternionic virtual bundle. Using the inclusion \(Pin(2)\subset Sp(1)\), it follows easily that \[e_{Pin(2)}(D)^{-1}=q^{d/2}+q^{d/2-1}s_{2}(D)+\cdots+s_{d/2}(D).\] Hence in this case the equivariant Segre classes of \(D\) are just equal to the usual Segre classes. So it follows that \(SW_{E,s_{E}}^{\phi}(x^{2j})=w_{b_{+}(X)-3}(H^{+}(X))s_{2(j+1+\sigma(X)/16)}(D)\).
2301.00453
Investigating the Dynamics of Social Norm Emergence within Online Communities
Although the effects of the social norm on mitigating misinformation are identified, scant knowledge exists about the patterns of social norm emergence, such as the patterns and variations of social tipping in online communities with diverse characteristics. Accordingly, this study investigates the features of social tipping in online communities and examines the correlations between the tipping features and characteristics of online communities. Taking the side effects of COVID-19 vaccination as the case topic, we first track the patterns of tipping features in 100 online communities, which are detected using Louvain Algorithm from the aggregated communication network on Twitter between May 2020 and April 2021. Then, we use multi-variant linear regression to explore the correlations between tipping features and community characteristics. We find that social tipping in online communities can sustain for two to four months and lead to a 50% increase in populations who accept the normative belief in online communities. The regression indicates that the duration of social tipping is positively related to the community populations and original acceptance of social norms, while the correlation between the tipping duration and the degrees among community members is negative. Additionally, the network modularity and original acceptance of social norms have negative relationships with the extent of social tipping, while the degree and betweenness centrality can have significant positive relationships with the extent of tipping. Our findings shed light on more precise normative interventions on misinformation in digital environments as it offers preliminary evidence about the timing and mechanism of social norm emergence.
Shangde Gao, Yan Wang, My T. Thai
2023-01-01T18:06:26Z
http://arxiv.org/abs/2301.00453v1
# Investigating the Dynamics of Social Norm Emergence within Online Communities ###### Abstract Although social norms' effect on mitigating misinformation is identified, scant knowledge exists about the patterns of social norm emergence, such as the patterns and variations of social tipping in online communities with diverse characteristics. Accordingly, this study investigates the features of social tipping in online communities and examines the correlations between the tipping features and characteristics of online communities. Taking "the side effects of COVID-19 vaccination" as the case topic, we first track the patterns of tipping features in 100 online communities, which are detected using Louvain Algorithm from the aggregated communication network on Twitter between May 2020 and April 2021. Then, we use multi-variant linear regression to explore the correlations between tipping features and communities' characteristics. We find that social tipping in online communities can sustain for two to four months and lead to a 50% increase in populations who accept the normative belief in online communities. The regression indicates that the duration of social tipping is positively related to the community populations and original acceptance of social norms, while the correlation between the tipping duration and the degrees among community members is negative. Additionally, the network modularity and original acceptance of social norms have negative relationships with the extent of social tipping, while the users' degree and betweenness centrality can have significant positive relationships with the extent of tipping. Our findings shed light on more precise normative interventions on misinformation in digital environments as it offers preliminary evidence about the timing and mechanism of social norm emergence. ## 1 Introduction The extensive development of online platforms has fostered the spread of messages generated by stakeholders at various levels, e.g., governmental agencies and individual users, during public events (Y. Wang et al., 2021). A large proportion of user-generated online messages contain inaccurate and misleading information, i.e., misinformation (Del Vicario et al., 2016; Wang et al., 2022). The wide diffusion of misinformation has threatened human society from multiple perspectives, e.g., interfering with collective decision-making on democratic, environmental, and public health issues (West & Bergstrom, 2021). There is an emergent need for suppressing misinformation spreading and mitigating the negative consequences of online misinformation on human society (West & Bergstrom, 2021). Existing studies (e.g., D. T. Nguyen et al. (2012), N. P. Nguyen et al. (2012), Zhang, Alim, et al. (2015, 2016), Zhang et al. (2018), Zhang, Kuhnle, et al. (2016), Zhang, Zhang, et al. (2015)) tend to suppress misinformation with (i) debunking, i.e., correcting the misinformation after people are exposed to it, and (ii) prebunking, i.e., helping people recognize the false/misleading contents (U. K. H. Ecker et al., 2022; Lewandowsky & van der Linden, 2021). The debunking strategy is widely adopted to provide targeted countermeasures for misinformation of specific topics (U. K. H. Ecker et al., 2022), e.g., provide messages with factual elaboration (Gao et al., 2021; van der Meer & Jin, 2020; Wang et al., 2022), fact-checking content (Humprecht, 2020), and messages that stimulate the health-protective measures (Humprecht, 2020). 
The debunking strategy is not always effective when the explanations that support the misinformation exist widely (Chan et al., 2017). The effect of debunking messages tends to be short-term and washed out by future exposure to misinformation (Mourali & Drake, 2022). Also, the debunking strategy can only be conducted after people's initial exposure to the misinformation (van der Meer & Jin, 2020), while the negative consequences of misinformation may already exist and cause notable social costs. On the contrary, the prebunking strategy is potentially an effective vehicle that overcomes the limitations of the debunking strategy and confers large-scale resistance against misinformation among the public (van der Linden et al., 2020). The prebunking strategy is based on the social psychological theory of "inoculation". If people are pre-warned and form the belief of rejecting misinformation, they might be "immune" to misinformation (Lewandowsky & van der Linden, 2021). Compared to the debunking strategy, the prebunking strategy focuses on influencing people's beliefs on the topics of misinformation, posing long-term effects on the public and reducing the occurrence of negative consequences of misinformation (Basol et al., 2021). When being implemented at a large scale, the pre-bunking strategy is conducted with _social norm interventions_, which aim to generate the social norms and consensus that support the factual evidence and reject misinformation (Dow et al., 2022). The basis of social norm interventions is people's adherence to the surrounding social norms (Constantino et al., 2022). Existing in both the digital and physical world (Gao et al., 2022), social norms, i.e., the shared beliefs or acceptable behaviors in communities, have shown a significant relationship with people's belief in the content of misinformation (Andt & Akesson, 2021; Gimpel et al., 2021; Lapinski & Rimal, 2005). Adhering to social norms can satisfy a desire to avoid sanctions, confer benefits by coordinating with others, and provide a simple heuristic about what is accepted/wise in a particular context (Constantino et al., 2022). Based on this psychological phenomenon, social norm interventions have been implemented to help form the belief of supporting factual evidence and rejecting misinformation in both the physical and digital realms (Andt & Akesson, 2021; Gimpel et al., 2021; Lapinski & Rimal, 2005), such as suppressing misinformation about climate actions and health behaviors (Constantino et al., 2022; U. K. Ecker et al., 2022). Specifically, by showing individuals the text that describes the "common beliefs" (i.e., social norms) towards the misinformation of a certain topic, individuals tend to modify their beliefs to match the "common beliefs" and reduce the reliance on the misinformation (U. K. Ecker et al., 2022). In another case, by showing individuals a message that "most responsible people think twice before sharing articles" (a social norm), individuals are not likely to share social media articles that contain misleading or contested content (Andi & Akesson, 2021). Though the role of the social norm in suppressing misinformation has been identified (Dow et al., 2022; Constantino et al., 2022; U. K. Ecker et al., 2022), scant empirical evidence has been provided to inform the implementation of social norm interventions. Several knowledge gaps and challenges remain. 
First, with the controlled experiments in physical worlds, recent works have identified that social norm emergence in their artificially designed communities tended to have a tipping process, i.e., social tipping (Berger, 2021; Centola et al., 2018; Ehret et al., 2022). Social tipping is a process that when the "tipping point" is reached, a small change in an individual community can create abrupt, nonlinear change in the acceptance of the normative beliefs across the community (Berger, 2021). By predicting the occurrence and extent of social tipping, policymakers can improve the effectiveness of the social norm interventions by adjusting the timing and efforts of implementing the interventions (Andreoni et al., 2021; Ehret et al., 2022). However, due to the lack of analysis of the online communities, it is unclear whether social tipping also exists in online communities and follows certain patterns regarding the tipping features, e.g., the duration and extent of social tipping. Little knowledge exists to guide the practices of social norm interventions regarding the timing and efforts that are needed to promote the tipping process of norm emergence. Second, experiments in existing studies have identified some evidence regarding the potential relationships between community characteristics and the diffusion of normative beliefs (Hu & Leung, 2017; Savarimuthu & Cranefield, 2011; Sen & Sen, 2010; Yu et al., 2014). However, these experiments were generally based on artificially designed communities in real-world or virtual scenarios, and the experiment findings may not be applicable in the communities of the online environment. Also, how the social tipping process varies in the community characteristics has not been disclosed in the existing studies. There is a need for empirical studies that explore the relationships between community characteristics and social tipping based on real-world communities, providing a reference for the design of social norm interventions. To fill this research gap, this study aims to answer the following research questions (RQ): * RQ1: Does social tipping exist during the social norm emergence of online communities? If so, what are the characteristics and patterns of social tipping? * RQ2: Do the features of social tipping correlate with different network characteristics of individual communities? This study takes the case of the norms on Twitter regarding the side effects of COVID-19 vaccines. The diffusion of vaccine-related misinformation has led to severe consequences during the pandemic (Loomba et al., 2021). A survey in 2020 showed that more than 55% of U.S. adult participants became hesitant in obtaining COVID-19 vaccines because they believed in the misinformation about the side effects, political issues, and safety issues of the vaccines (Graham et al., 2020). When exposed to misinformation about COVID-19 vaccines, people can become hesitant to take the COVID-19 vaccines, exacerbating their risks to be infected (Loomba et al., 2021). There is an emergent need for suppressing misinformation spreading and mitigating the negative consequences of online misinformation on human society. We utilize Louvain Algorithm (Blondel et al., 2008) to extract the communication communities between Twitter accounts from the tweets containing the topics of COVID-19 vaccines. 
We adopt the definition of "beliefs" from existing psychological studies (Camina et al., 2021; Durando et al., 2016; Herzog et al., 2013; Ritchie et al., 2021) and focus on whether a user thinks the manipulated "side effects" of COVID-19 vaccines exist and accepts/rejects the COVID-19 vaccination. Regarding this case, "supporting COVID-19 vaccination" is our desired online social norm, and we investigate the social tipping of the expressed normative belief across communities. We further examine how the dynamics of norm emergence vary across community characteristics, such as modularity and betweenness centrality (Winkelmann et al., 2022). The study contributes to disclosing the temporal patterns and mechanisms of social norm emergence in the online environment. Our findings can facilitate the strategic design of normative interventions for precisely mitigating the dissemination of misinformation in the online environment.

## 2 Data and Methods

### Overview

As shown in **Fig. 1**, this study starts by collecting real-time tweets regarding the COVID-19 vaccines and related misinformation using Twitter Streaming API (Twitter, 2022). We define communities in the online environment based on Newman (2003), i.e., groups of vertices that have a high density of edges within them, with a lower density of edges between other groups. Specifically for this study, we detect communities from the "retweeting" and "mentioning" networks among Twitter users in the whole study period. For example, if one Twitter user retweets/mentions another user within the whole study period, one edge will exist between these two users. Among the identified individual communities, we select those with a relatively large population (i.e., more than ten users) and long periods of existence (i.e., more than ten days). With these communities, we track the temporal change of the community population that follows the normative belief (i.e., tracking norm emergence) and extract the community characteristics (e.g., modularity, average degree). After preparation, we first answer RQ1 by observing whether social tipping can be identified in the temporal trend of social norm emergence in our detected individual communities. If tipping exists, we capture the patterns of the features of social tipping, which include the tipping extent and duration in this study. Based on the tipping features and community characteristics, we answer RQ2 and explore whether significant correlations exist between social tipping and community characteristics.

### Data Preparation

The basic dataset is collected with Twitter Streaming API between May 1, 2020, and April 30, 2021, regarding COVID-19 vaccines. Specifically, we use keywords of COVID-19 vaccinations to identify the tweets that are related to COVID-19 vaccines, including the keywords of "vaccine," "vax," "vaccination," and brands of COVID-19 vaccines, e.g., "Pfizer". We extract the online communities based on the communication networks such as "mentioning/replying" messages (i.e., "@username") and retweeting messages (i.e., "RT @username") for multiple reasons. First, retweeting/replying behaviors tend to happen between the users who have following relationships and represent the active social ties between online users (Ozer et al., 2016; B. Wang et al., 2021; Weitzeil et al., 2012). In particular, a study of retweets about COVID-19 (B. Wang et al., 2021) indicated that more than 50% of the retweets about COVID-19 information were generated between users with follower/following relationships. 
Second, retweeting/replying behaviors can well reflect the social influence of social media users, as users tend to retweet or reply to messages from others if they are influenced by the tweet content (Evkoski et al., 2021; Yuan and Crooks, 2018). We can potentially capture how a certain belief diffuses among social media users based on the interactions between the users (e.g., retweeting/replying to tweets) (Evkoski et al., 2021). Based on the summary of COVID-19 vaccine-related misinformation from Skafle et al. (2022), we focus on the "side effect" topic of COVID-19 vaccines from the collected tweets, which generally discuss: (a) whether COVID-19 vaccines have side effects that can heavily threaten human health, (b) whether COVID-19 vaccines can kill people, and (c) whether COVID-19 vaccines have not passed trials and are poisonous. We use keywords (**Table 1**) of these three topics to identify the related tweets in our collected dataset. Keywords in the pattern "word A + word B" represent queries in which a tweet is regarded as relevant to a topic if both "word A" and "word B" can be identified in the main text of the tweet. The nodes in the online individual communities are the users of the tweets in the basic dataset. We only keep the users whose tweets mentioned other users in the basic dataset, or the users who have been mentioned by other users in the basic dataset. The news bot accounts are also removed. We finally extract 19,839,188 tweets containing the keywords about the three topics of misinformation that were posted by 5,462,900 distinct users (see **Table 1**). We further detect individual communities and analyze the norm emergence with this dataset.

### Community Detection

In the retrieved communication network, the edges between users are formed when users reply to or retweet from other users. The weights of the edges are the frequencies of one user mentioning the other user within one day. We detect individual communities from social networks using Louvain Algorithm (Blondel et al., 2008). Louvain Algorithm is a combinatorial optimization algorithm that aims to maximize the modularity among the detected individual communities. The algorithm first assigns every node to its own community and then, for each node, searches for the maximum positive modularity gain obtained by moving that node into one of its neighboring communities. If no positive gain is achieved, the node remains in its original community (Blondel et al., 2008). Compared to other algorithms, Louvain Algorithm can efficiently capture the individual communities from a large-scale network, such as a social media network with millions of users. To better reveal the social tipping in large communities instead of small groups (e.g., a small group with fewer than ten members), we select the 100 communities with the largest populations among our detected communities for the following analysis.
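A minimal sketch of this detection step, assuming the weighted retweet/mention edges have already been aggregated into an edge list (the function and parameter names below are illustrative, not taken from the paper):

```python
import networkx as nx

def build_network(edges):
    """Build the weighted, undirected communication network from
    (user_a, user_b, frequency) tuples aggregated over the study period."""
    graph = nx.Graph()
    for user_a, user_b, frequency in edges:
        if graph.has_edge(user_a, user_b):
            graph[user_a][user_b]["weight"] += frequency
        else:
            graph.add_edge(user_a, user_b, weight=frequency)
    return graph

def detect_largest_communities(graph, top_k=100, min_size=10):
    """Louvain community detection (networkx >= 2.8), keeping communities with
    at least `min_size` members and returning the `top_k` largest ones."""
    communities = nx.community.louvain_communities(graph, weight="weight", seed=0)
    communities = [c for c in communities if len(c) >= min_size]
    return sorted(communities, key=len, reverse=True)[:top_k]
```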
### Classifying Individual Users' Expressed Beliefs towards Misinformation about COVID-19 Vaccines and Tracking Norm Emergence in Communities

Based on the users' tweets, we classify the expressed beliefs of individuals at a certain period regarding the side effects of COVID-19 vaccines. We first classify the expressed beliefs in the tweets of individual users. We train a Long Short-Term Memory (LSTM) model with 2,000 tweets related to COVID-19 vaccination and use this model to estimate whether the tweets from specific users express beliefs that support or reject misinformation about the side effects of COVID-19 vaccination. LSTM has a good performance in existing studies regarding text classification because it captures phrase-level and sentence-level feature patterns in the tweet text (Zhou et al., 2018). The validated accuracy and loss of the LSTM classifier during training are shown in **Fig. 2**, which reach 0.8892 and 0.2292, respectively, after training, and the RMSE of the classification outcomes is 0.3719. These metrics indicate that our LSTM classifier has an acceptable performance in classifying the expressed beliefs of individual users. After classifying the expressed beliefs delivered in the tweets, we obtain the overall expressed belief of each user on each day based on their tweets on that day. Specifically, we calculate the proportion of tweets that one user generates in one day that reject the misinformation about COVID-19 vaccines; if more than 50% of the tweets support the COVID-19 vaccination, we regard the user as accepting the COVID-19 vaccination on that day. If only one tweet is generated by one user on one day, we regard the expressed belief in that tweet as the expressed belief of that user on that date. We then aggregate the individuals' expressed beliefs to the community level and track the norm emergence in our sample communities. We regard the normative belief as "rejecting the misinformation about COVID-19 vaccines regarding side effects", and the emergence of norms within a community is tracked by the temporal trend of the proportion of community members who hold the normative belief. From the temporal trends, we may identify the tipping points where the acceptance increased rapidly.

### Characterizing the Emergence of Social Norm

Based on the temporal trends of norm emergence in the sample communities, we first observe the trends and detect whether social tipping exists in the communities (RQ1). We detect the existence of social tipping according to the definition of tipping, i.e., the increase of community members adopting the norms in specific periods is relatively more rapid than in the past periods (Berger, 2021). We calculate the daily increase in the proportion of community members adopting the normative belief, observing whether the increase in a certain period is relatively more rapid than in the previous periods. If so, we regard social tipping as existing during the norm emergence of our sample communities. If social tipping does exist in the sample communities, we adopt the measurements of social tipping in existing studies (Andrighetto and Vriens, 2022), including the _duration_ and the _extent_ of the social tipping (illustrated in **Fig. 3**). The duration represents the number of time steps that the social tipping lasts. The extent of social tipping is measured as the change in the proportion of community members adopting the normative belief before and after social tipping.

Figure 2: The accuracy and loss of the LSTM classifier during training

Figure 3: Illustration of duration and extent of social tipping
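A condensed sketch of this classification and tracking step (the model architecture, thresholds, and column names are illustrative assumptions, not the authors' exact configuration):

```python
import pandas as pd
import tensorflow as tf

def build_classifier(vocab_size=20000):
    """Binary LSTM classifier: 1 = the tweet rejects the vaccine misinformation
    (i.e., expresses the normative belief), 0 = it supports the misinformation."""
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, 128),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

def user_daily_belief(tweets: pd.DataFrame) -> pd.DataFrame:
    """Aggregate tweet-level predictions to a user-day belief: a user counts as
    holding the normative belief on a day if more than 50% of their tweets do."""
    daily = tweets.groupby(["user_id", "date"])["predicted_label"].mean().reset_index()
    daily["normative"] = daily["predicted_label"] > 0.5
    return daily

def tipping_features(acceptance: pd.Series, start, end):
    """Duration and extent of a tipping window, given a community's daily,
    date-indexed series of the proportion of members holding the normative belief."""
    duration = (pd.Timestamp(end) - pd.Timestamp(start)).days
    extent = float(acceptance.loc[end]) - float(acceptance.loc[start])
    return duration, extent
```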
### Investigating Relationships Between Community Characteristics and Tipping Features

Some characteristics of online individual communities (**Table 2**) may influence the duration of social tipping by increasing or decreasing the rapidness of the tipping process. Some community characteristics may also influence the extent of social tipping by causing large-scale social norm acceptance within the individual community, e.g., the modularity of online individual communities. This study investigates the statistical correlations between the characteristics of online communities (**Table 2**) and the duration and extent of social tipping regarding the proportion of community members accepting the social norms (RQ2). We specify the influence of each community characteristic on the duration and extent of social tipping when examining each hypothesis. We specifically test the following hypotheses that are designed for each of the community characteristics in **Table 2**.

* _H1: The modularity of a community has a positive relationship with the duration and extent of social tipping._
* _H2: The average messaging frequency among members in an online community has a positive relationship with the duration and extent of social tipping._
* _H3: The size, i.e., the number of members, of a community has a negative relationship with the duration and extent of social tipping._
* _H4: The original proportion of community members who accept the normative belief has a negative relationship with the duration and extent of social tipping._
* _H5.1: The average degree of network communities has a positive relationship with the duration and extent of social tipping._
* _H5.2: The average betweenness centrality of network communities has a positive relationship with the duration and extent of social tipping._

Before hypothesis testing, we check the statistical distributions of all the considered community characteristics and the features of social tipping. In this way, we can identify whether the data used for hypothesis testing have an obvious bias. As shown in **Fig. 4**, most communities have a modularity lower than 0.1. The network size of most communities is smaller than 200 users, and the messaging frequency among the community users tends to be lower than 10 messages a day. For the original acceptance of social norms, most communities have an acceptance level lower than 40% when the communities emerge. Still, more than twenty communities have an original acceptance higher than 80% when they emerge. Additionally, the average degree and betweenness centrality of communities tend to be evenly distributed within a small range, e.g., 1.8 to 2.0 for the average degree, and 0 to 0.12 for the betweenness centrality.

\begin{table}
\begin{tabular}{l l}
\hline \hline
Characteristics & Reference \\
\hline
Modularity & (Winkelmann et al., 2022) \\
Messaging frequency & (Centola et al., 2018) \\
Network size & (Sabarwal and Higgins, 2021) \\
Original acceptance levels of social norms & (Berger, 2021) \\
Degree and betweenness centrality of community members & (Winkelmann et al., 2022) \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Characteristics of online communities
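A rough sketch of how the Table 2 characteristics might be computed per community with networkx; the paper does not spell out its implementation, and in particular treating community-level modularity as the modularity of the two-block partition {community, rest of the network} is an assumption made here:

```python
import numpy as np
import networkx as nx

def community_characteristics(graph, members, daily_messages, initial_acceptance):
    """Per-community characteristics in the spirit of Table 2. `graph` is the full
    communication network; `daily_messages` and `initial_acceptance` are assumed
    to be computed separately from the tweet data."""
    members = set(members)
    subgraph = graph.subgraph(members)
    degrees = [d for _, d in subgraph.degree()]
    betweenness = nx.betweenness_centrality(subgraph)
    return {
        "modularity": nx.community.modularity(
            graph, [members, set(graph) - members], weight="weight"
        ),
        "network_size": subgraph.number_of_nodes(),
        "messaging_frequency": daily_messages,
        "original_acceptance": initial_acceptance,
        "average_degree": float(np.mean(degrees)),
        "average_betweenness": float(np.mean(list(betweenness.values()))),
    }
```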
We examine all the hypotheses mentioned above with multi-variant linear regression (Eq. 1 and 2). Based on the identified communities, we examine our proposed hypotheses based on the statistical significance (i.e., \(p\)-value) and whether the coefficients of community characteristics are positive or negative. For example, to examine hypothesis H1 regarding the duration of social tipping, if the \(p\)-value for the variable \(Modularity\) is low and the coefficient for this variable is positive, we can state that modularity has a significantly positive relationship with the duration of social tipping in an online community.

\[\text{Duration}\sim\text{Modularity}+\text{Message Frequency}+\text{Network Size}+\text{Original Acceptance}+\text{Average Degree}+\text{Average Betweenness Centrality}\tag{1}\]

\[\text{Extent}\sim\text{Modularity}+\text{Message Frequency}+\text{Network Size}+\text{Original Acceptance}+\text{Average Degree}+\text{Average Betweenness Centrality}\tag{2}\]

## 3 Results

### Trends and Patterns of Social Norm Emergence in the Sample Communities

To answer RQ1, we first check the temporal trends of social norm emergence, i.e., the change of norm acceptance among sample communities, aiming to identify whether "social tipping" occurs. Specifically, we determine that social tipping happened within a certain period (e.g., between two specific dates) if the daily change of the proportion of the population who adopt the normative belief (i.e., rejecting misinformation) in the community is much higher than in the past periods. From the temporal trends of social norm emergence in the largest ten sample communities (**Fig. 5**), we find that social tipping does exist, and the social tipping of different communities occurred nearly simultaneously between December 2020 (when the U.S. FDA first authorized emergency use of COVID-19 vaccines (HHS, 2022)) and April 2021. Especially at the end of December 2020, the daily increase of the population who adopt the norms exceeded 10%, which was much higher than the past daily increase (which tended to be lower than 4%). After tipping in these communities, the populations that hold the normative belief towards COVID-19 vaccination in each community generally reached 65% after three months of social tipping.

Figure 4: Distributions of community characteristics

Based on the social tipping we identified, we further check the statistical distributions of features of social tipping among our detected communities, shown in **Fig. 6**. The histograms of the tipping durations and extent indicate that, for norms related to the misinformation about COVID-19 vaccines, the social tipping in online communities tends to be relatively long-term and intense. Specifically, the average duration of tipping is 83.26 days, the median tipping duration is 96.5 days, and 95% of the sample communities have a tipping duration between 59 days and 103 days. 17% of the communities have durations that are shorter than one month, and the duration in 8% of the communities is longer than four months. For the extent of tipping, the increase in the population who adopt the normative belief exceeds 40% in 86% of the sample communities, and the tipping extent exceeds 50% in 56% of the sample communities. Overall, social tipping in online communities regarding the norms of rejecting misinformation tends to exist for two to four months, and the tipping extent in more than half of the online communities may exceed 50%.

Figure 5: Temporal trends (a) and daily change (b) of social norm emergence in the ten largest sample communities

Figure 6: Distributions of features of social tipping among detected communities

### Relationships Between Community Characteristics and Tipping Features

Before conducting the regression, we first check the dependence of the community characteristics, and the outcome is shown in **Fig. 7**. 
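A sketch of this dependence check and of the Eq. (1)–(2) regressions, assuming the per-community tipping features and characteristics have been collected into a pandas DataFrame (column names are illustrative):

```python
import pandas as pd
import statsmodels.formula.api as smf

CHARACTERISTICS = ["modularity", "messaging_frequency", "network_size",
                   "original_acceptance", "average_degree", "average_betweenness"]
PREDICTORS = " + ".join(CHARACTERISTICS)

def check_dependence(df: pd.DataFrame) -> pd.DataFrame:
    """Pairwise correlations among community characteristics (Fig. 7-style check)."""
    return df[CHARACTERISTICS].corr()

def fit_tipping_regressions(df: pd.DataFrame):
    """Fit Eq. (1) for tipping duration and Eq. (2) for tipping extent via OLS."""
    duration_model = smf.ols(f"duration ~ {PREDICTORS}", data=df).fit()
    extent_model = smf.ols(f"extent ~ {PREDICTORS}", data=df).fit()
    return duration_model, extent_model
```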
As shown in **Fig. 7**, the absolute value of the correlation between each pair of community characteristics is lower than 0.2. The test outcomes indicate that the considered community characteristics in this study are relatively independent of each other and can be included in the multi-variant linear regression models.

_Relationships between community characteristics and the duration of social tipping._ The table of the regression outcomes is shown in **Table 3**, and the estimated coefficients and 95% confidence intervals (CI) for each community characteristic are shown in **Fig. 8**. Among our selected characteristics of the detected communities, the network sizes (H3), original acceptance levels of social norms (H4), and the average degree (H5.1) among users have significantly positive impacts on the duration of social tipping. Although not significant, modularity and communication frequency among community members have a negative relationship with the duration of social tipping (H1, H2). The high-level betweenness centrality in online communities has a positive relationship with the duration of social tipping, but this relationship is not significant (H5.2). Based on these regression outcomes, we identify that social tipping is highly related to the context and interactions among the community members. Specifically, the high-level average degree indicates that each community member can communicate with a large number of peers within the community. The original proportion of community members adopting the normative belief indicates the context literacy of the community members regarding the topics of misinformation. Our results indicate that social norms can spread more easily when individuals are exposed to the information and interact with more peers than in communities with few interactions. Also, the community members who originally do not reject the misinformation may not easily change their belief if they are exposed to many interactions with peers who originally reject the misinformation (i.e., high-level original acceptance). Additionally, the speed of norm emergence may not increase in large-scale communities, making the duration of tipping longer in large-scale communities than in small communities.

_Relationships between community characteristics and the extent of social tipping._ The table of the regression outcomes is shown in **Table 4**, and the estimated coefficients and 95% confidence intervals (95% CI) for each variable of community characteristics are shown in **Fig. 9**. Among our selected characteristics of the detected communities, the average degree (H5.1) and betweenness centrality (H5.2) among users have a significantly positive impact on the extent of social tipping. Meanwhile, the modularity (H1) and original acceptance (H4) of social norms have a significantly negative relationship with the extent of social tipping. Different from the average degree and betweenness centrality, the significance of the relationships between other community characteristics and the extent of social tipping is not high. Specifically, network size and communication frequency among community members have an insignificant relationship with the extent of social tipping (H2, H3). 
Based on the regression outcomes, we identify that the extent of social tipping is also highly related to the context and interactions among the community members. Both the betweenness centrality and degree are related to how closely the community members are connected, and the original acceptance of the normative belief is related to the literacy of community members regarding the topics of misinformation. The positive and high-level influence of average degree and betweenness centrality on the tipping extent indicates that more community members will finally turn to the normative belief if they are exposed to heavy interactions with other community peers. Also, similar to the regression outcomes of tipping duration, the community members who originally do not reject the misinformation may not easily change their expressed belief if they are exposed to many interactions with peers who originally reject the misinformation (i.e., high-level original acceptance).

Figure 8: Estimated values and 95% CI of coefficients in regression for the duration of social tipping (significant variables are within red boxes)

\begin{table}
\begin{tabular}{l l l l l l l}
\hline \hline
Variables & Coefficient & Standard Error & t Value & P\(>|t|\) & [0.025 & 0.975] \\
\hline
Modularity & -0.3648 & 0.427 & -0.855 & 0.394 & -1.207 & 0.477 \\
**Network Size** & 0.0012 & 0.001 & 2.212 & 0.028* & 0 & 0.002 \\
Messaging Frequency & -0.0011 & 0.002 & -0.545 & 0.587 & -0.005 & 0.003 \\
**Original Accept Level** & 0.6879 & 0.305 & 2.258 & 0.025* & 0.087 & 1.289 \\
**Average Degree of Users** & 0.4688 & 0.08 & 5.879 & \(<0.001***\) & 0.311 & 0.626 \\
Average Betweenness Centrality of Users & 2.3743 & 2.324 & 1.022 & 0.308 & -2.212 & 6.961 \\
\hline
Significance levels: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Outcomes of multi-variant linear regression for the _duration_ of social tipping (Adjusted \(R^{2}\): 0.648)

\begin{table}
\begin{tabular}{l l l l l l l}
\hline \hline
Variables & Coefficient & Standard Error & t Value & P\(>|t|\) & [0.025 & 0.975] \\
\hline
**Modularity** & -0.1635 & 0.073 & -2.256 & 0.025* & -0.307 & -0.02 \\
Network Size & -0.0001 & 0.0000897 & -1.271 & 0.205 & 0 & 6E-05 \\
Messaging Frequency & 0.0001 & 0.000 & 0.272 & 0.786 & -0.001 & 0.001 \\
**Original Accept Level** & -0.1127 & 0.052 & -2.176 & 0.031* & -0.215 & -0.01 \\
**Average Degree of Users** & 0.4725 & 0.014 & 34.87 & \(<0.001***\) & 0.446 & 0.499 \\
**Average Betweenness Centrality of Users** & 1.2297 & 0.395 & 3.113 & 0.002** & 0.45 & 2.009 \\
\hline
Significance levels: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Outcomes of multi-variant linear regression for the _extent_ of social tipping (Adjusted \(R^{2}\): 0.972)

Figure 9: Estimated values and 95% CI of coefficients in regression for the extent of social tipping (significant variables are within red boxes)

## 4 Discussion

Social norm interventions can potentially mitigate the spread of misinformation, while insufficient knowledge exists regarding the existence and patterns of social tipping in the online environment, as well as how the tipping features vary in communities with different network characteristics. This study investigates the existence of social tipping in the norm emergence process of online communities, focusing on the norm of rejecting misinformation about COVID-19 vaccines' side effects. Our regression outcomes indicate that the duration of tipping is more strongly correlated with the size, average degree, and original acceptance of the normative belief among the community members. The extent of social tipping (i.e., the increase of community members adopting the normative belief) is more related to the average degree, average betweenness centrality, modularity, and the original acceptance of the normative belief among the community members. This study advances existing knowledge bodies from several perspectives. 
First, existing studies focused more on the physical world or artificially designed communities (Berger, 2021; Centola et al., 2018; Ehret et al., 2022), lacking exploration of the existence and patterns of social tipping in the online digital environment. As there can be a difference between the social norm emergence in digital and other environments, existing knowledge of social tipping may not be fully applicable to online social norm interventions. To fill this gap, we conduct empirical studies with the online communication dataset from Twitter and investigate the social norm emergence in 100 sample communities. To a certain extent, our study helps identify the statistical distributions of the duration and extent of social tipping in online communities. The large datasets and sample communities with various characteristics in this study make it possible to disclose the general patterns of social tipping in online environments. Second, the existing knowledge body (e.g., Hu & Leung 2017, Savarimuthu & Cranefield 2011, and Sen & Sen 2010) rarely analyzed the relationships between the patterns of social tipping and the network characteristics of online communities, e.g., the modularity of the communities or the degree of community members. To fill this gap, our hypothesis testing with 100 sample communities can help identify the characteristics of online communities that are significantly correlated with the tipping duration and extent. We highlight the significant correlation between the features of social tipping and the modularity, community size, average degree, average betweenness centrality, and original acceptance of the normative belief. Our findings can contribute to disclosing the general relationships between social tipping and community characteristics, supporting the future design of online social norm intervention strategies. Limitations still exist in this study and open opportunities for our further studies. First, there are still some external factors that can influence individuals' expressed beliefs (and changes in them), such as governmental policies, while this study does not include these factors. Our future studies will include the factors of physical communities to more accurately capture the relationships between social tipping and online community characteristics. 
Second, this study focuses on the topics of COVID-19 vaccine-related misinformation, for which the community characteristics and social tipping may follow temporal patterns distinct from those of other topics. To generate more generalizable findings regarding social tipping in online communities, our future studies will examine multiple topics of online communication, e.g., other prevention measures for COVID-19. Third, this study regards each individual community as relatively isolated from its neighboring communities, while it is possible that the norm emergence in neighboring communities also contributes to the social tipping of the individual communities. Our future studies will investigate norm emergence and social tipping in the setting of multi-community social networks and explore the relationships between social tipping in different communities. Fourth, we regard the characteristics of communities as relatively stable over the study period, while community characteristics may be temporally dynamic and have different levels of influence on norm emergence over different periods. Our future studies will capture the dynamics of online communities and investigate the temporal interactions between the network characteristics of individual communities and the trend of norm emergence. ## 5 Conclusion Exploring the patterns of social tipping and the relationship between social tipping and community characteristics is critical for tailoring social norm interventions to mitigate online misinformation. Our study contributes to the knowledge regarding the heterogeneous temporal patterns and mechanisms of social tipping in online communities. Our findings can guide public health authorities, emergency responders, and other crisis managers in suppressing online misinformation, for example by actively disseminating and endorsing messages delivering benign normative beliefs on online platforms. With tailored intervention strategies, crisis managers can motivate online populations to adopt appropriate prevention measures (e.g., taking COVID-19 vaccines) as well as mitigate the adverse impacts caused by ineffective prevention behaviors (e.g., rejecting vaccinations arbitrarily). With probunking interventions based on social norms, individuals can potentially form positive attitudes towards public health campaigns and proactively reject and suppress the spread of online misinformation.
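As a methodological aside on the regression analyses reported in Tables 3 and 4 above, the following is a minimal, illustrative sketch of how such a multivariate linear regression of a tipping feature on community characteristics could be set up with statsmodels; the column names and the synthetic data are placeholders for illustration, not the study's actual variables or data.

```python
# Illustrative sketch only: regress a tipping feature (e.g., tipping duration)
# on community-level characteristics with ordinary least squares.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100  # number of sample communities, matching the study design

# Placeholder community characteristics; real values would come from the
# measured Twitter communities.
df = pd.DataFrame({
    "modularity": rng.uniform(0.1, 0.9, n),
    "network_size": rng.integers(50, 5000, n),
    "messaging_frequency": rng.uniform(0.5, 20.0, n),
    "original_accept_level": rng.uniform(0.0, 1.0, n),
    "avg_degree": rng.uniform(1.0, 15.0, n),
    "avg_betweenness": rng.uniform(0.0, 0.1, n),
})
# Synthetic outcome so the example runs end to end.
df["tipping_duration"] = (
    0.5 * df["avg_degree"] + 0.7 * df["original_accept_level"]
    + rng.normal(0.0, 1.0, n)
)

X = sm.add_constant(df.drop(columns="tipping_duration"))
ols = sm.OLS(df["tipping_duration"], X).fit()
print(ols.summary())  # coefficients, standard errors, t values, adjusted R^2
```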
2310.16570
Give Me the Facts! A Survey on Factual Knowledge Probing in Pre-trained Language Models
Pre-trained Language Models (PLMs) are trained on vast unlabeled data, rich in world knowledge. This fact has sparked the interest of the community in quantifying the amount of factual knowledge present in PLMs, as this explains their performance on downstream tasks, and potentially justifies their use as knowledge bases. In this work, we survey methods and datasets that are used to probe PLMs for factual knowledge. Our contributions are: (1) We propose a categorization scheme for factual probing methods that is based on how their inputs, outputs and the probed PLMs are adapted; (2) We provide an overview of the datasets used for factual probing; (3) We synthesize insights about knowledge retention and prompt optimization in PLMs, analyze obstacles to adopting PLMs as knowledge bases and outline directions for future work.
Paul Youssef, Osman Alperen Koraş, Meijie Li, Jörg Schlötterer, Christin Seifert
2023-10-25T11:57:13Z
http://arxiv.org/abs/2310.16570v2
# Give Me the Facts! A Survey on Factual Knowledge Probing ###### Abstract Pre-trained Language Models (PLMs) are trained on vast unlabeled data, rich in world knowledge. This fact has sparked the interest of the community in quantifying the amount of factual knowledge present in PLMs, as this explains their performance on downstream tasks, and potentially justifies their use as knowledge bases. In this work, we survey methods and datasets that are used to probe PLMs for factual knowledge. Our contributions are: (1) We propose a categorization scheme for factual probing methods that is based on how their inputs, outputs and the probed PLMs are adapted; (2) We provide an overview of the datasets used for factual probing; (3) We synthesize insights about knowledge retention and prompt optimization in PLMs, analyze obstacles to adopting PLMs as knowledge bases and outline directions for future work. ## 1 Introduction Pre-trained language models have been a game changer in NLP. Their reliance on large unlabeled corpora for pre-training and the availability of computational resources have enabled a speedy scaling of these models. This scaling has been reflected in the performance of numerous downstream tasks in NLP Devlin et al. (2019); Chowdhery et al. (2022); Touvron et al. (2023), and led to the wide adoption of the _pre-train then finetune_ framework. The success of PLMs is attributed to the rich representations and the knowledge captured from the pre-training corpora De Cao et al. (2021); Han et al. (2021); Ye et al. (2022). There has, therefore, been a huge interest in investigating and quantifying the type and amount of knowledge present in PLMs, e.g., Davison et al. (2019); Jawahar et al. (2019); Petroni et al. (2019); Tenney et al. (2019); Roberts et al. (2020), in order to have a better understanding about which kinds of knowledge are internalized during pre-training, and to develop methods to make PLMs more knowledge-rich and obtain gains on various downstream tasks. Besides the interest in quantifying knowledge for better downstream task performance, there is a special interest in factual knowledge present in PLMs, because they are envisioned to become _soft knowledge bases_, from which one can easily extract relational knowledge that had been captured during pre-training Petroni et al. (2019); Sung et al. (2021). Querying PLMs for knowledge would eliminate the complex NLP pipelines used for knowledge extraction, the need for labeled data to train models for relational knowledge extraction, and schema designing Petroni et al. (2019). Furthermore, PLMs would allow users to formulate queries to knowledge bases (KBs) in natural language, which makes them accessible to a wider user base Heinzerling and Inui (2021). Despite recent advances enabling smooth conversational interactions, e.g., with Chat-GPT1, factuality is still an open issue Ray (2023). Footnote 1: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt) Many methods and datasets have been proposed to _probe_ PLMs for factual knowledge. Probing involves a PLM and a dataset. The dataset contains truthful facts. These facts are used to estimate the amount of knowledge in PLMs. More specifically, the dataset contains inputs that identify the fact we are looking for, in order to extract it from the PLM (e.g., "Dante was born in [MASK]"), and ground truth answers that help evaluate if the retrieved answers are indeed correct (e.g., Florence). Figure 1: An overview of our categorization scheme of factual knowledge probing methods. 
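To make the probing setup above concrete, the following is a minimal, illustrative sketch of cloze-style probing with an off-the-shelf masked language model via the HuggingFace fill-mask pipeline; the choice of bert-base-cased and the single example fact are assumptions for illustration, not the survey's experimental setup.

```python
# Minimal cloze-prompt probing sketch: query a masked LM with a fact-identifying
# prompt and check whether its top prediction matches the ground-truth object.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-cased")

prompt, gold = "Dante was born in [MASK].", "Florence"
predictions = unmasker(prompt, top_k=5)   # list of {token_str, score, ...}

for p in predictions:
    print(f"{p['token_str']:>12s}  {p['score']:.3f}")

# Precision@1 for this single fact: 1 if the top token equals the gold object.
print("P@1:", int(predictions[0]["token_str"].strip() == gold))
```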
The data is often described in terms of relations (e.g., "place-of-birth") between subjects (e.g., "Dante") and objects (e.g., "Florence"). To produce prompts, a template is created for each relation (e.g., "[X] was born in [MASK]"), that is then filled with subject entities. The inputs can also have other forms such as questions (e.g., "Where was Dante born?"). In this work, we review recent work about factual knowledge probing. For the survey, we considered papers that cite the seminal work by Petroni et al. (2019) which first introduced the concept of PLMs as KBs.2 We make the following contributions: (1) We provide a categorization of factual knowledge probing methods that is based on how inputs, PLMs and their outputs are adapted (see Figure 1 and Section 2); (2) We provide an overview of the datasets used for factual knowledge probing and categorize these under three classes based on their goal (Section 3); (3) We synthesize insights about knowledge retention and prompt optimization in PLMs (Section 4), analyze obstacles to adopting PLMs as knowledge bases (Section 5), and outline directions for future work (Section 7). We make our corpus of relevant papers publicly available. Footnote 2: For more details refer to Appendix A.1 ## 2 Methods for Factual Probing We categorize factual probing methods based on adaptations to i) input, ii) model, and iii) output. Categories are not mutually exclusive, i.e., one method could adapt input and model simultaneously. Figure 1 and Table 1 provide an overview of the probing methods. We only consider prompting methods that have been explicitly used for factual knowledge probing. For a general review of prompting methods, we refer to Liu et al. (2023). ### Probing Inputs We distinguish between non-optimized or fixed inputs, and optimized inputs that are adapted in various ways to elicit more facts from PLMs. #### 2.1.1 Non-optimized Inputs Extracting factual knowledge from PLMs depends on providing them with short inputs that indirectly describe the sought-after information. These methods can take various forms (cloze prompts Taylor (1953), questions, or entities). Non-optimized inputs represent the simplest case, where the probing inputs are not altered in any way. tities as inputs, and compare them to ground truth descriptions. #### 2.1.2 Optimized Inputs Probing inputs contribute substantially to the probing procedure. PLMs are sensitive to the inputs (Petroni et al., 2019; Jiang et al., 2020; Elazar et al., 2021), and even syntactical variations or distractors, that do not alter the meaning, cause the PLM's predictions to change (Heinzerling and Inui, 2021; Longpre et al., 2021; Pandia and Ettinger, 2021; Podkorytov et al., 2021; Li et al., 2022). Therefore, depending on the probing inputs, the estimate on factual knowledge we obtain may vary significantly. Optimized inputs represent variations of the inputs, where the inputs are changed to account for the sensitivity of the probed PLMs. Diversification and miningmethods aim to diversify and optimize prompts by mining Wikipedia or other resources, and selecting the best performing prompts or a combination of them. For example, Jiang et al. (2020) propose a mining-based and a paraphrasing-based approach to create alternative prompts that outperform manual ones. The final prompts are selected based on their performance on a training set, and can also be combined in an ensemble. Bouraoui et al. 
(2020) mine for prompts that contain the entities of interest, and filter these based on the ability of the probed PLMs to predict the masked objects. After the filtering step, the remaining prompts are utilized to create a dataset that consists of positive inputs, i.e., containing true subject-object pairs, and negative inputs, which contain false pairs. This dataset is then used for the final evaluation. Direct optimizationmethods aim to directly optimize existing prompts. This optimization happens either in a discrete space, to keep the prompts in natural language, or in a continuous space where the prompts do not have to correspond to specific tokens from the vocabulary. Optimization could also target only the masked token or the order of the examples in the prompt, in case a few examples are provided in the prompt to better indicate the task. Shin et al. (2020)'s AUTOPROMPT extends manually created prompts by prompts with a pre-defined number of trigger tokens, and employs gradient-based search to sequentially replace the trigger tokens with concrete tokens. These tokens are chosen to increase the probability of predicting the correct object. OPTIPROMPT (Zhong et al., 2021) is similar to AUTOPROMPT, but allows for the trigger tokens to be replaced with vectors from a continuous embedding space. In a similar fashion, Qin and Eisner (2021) propose learning an ensemble of continuous prompts per relation. Additionally, they perturb the representations of the prompts in each layer in the probed PLMs using small learnable vectors. The intuition is to have activation patterns that are similar to the ones encountered during pre-training, which would make it easier to elicit knowledge from PLMs. Newman et al. (2022) utilize adapters (Houlsby et al., 2019) to map the embedding vectors to continuous prompts in order to make the probed PLMs less sensitive to different phrasings of the same prompts. Saeed and Papotti (2022) augment the masked tokens with a special type of embeddings, called Type Embeddings. These embeddings are derived from several entities that share the same type, and are shown to help tie the probed PLM's predictions to the expected type of the masked entity. PERO (Kumar and Talukdar, 2021) depends on querying PLMs with prompts containing few training examples (or shots), which demonstrate the task to the queried PLMs. Since PLMs are quite sensitive to the order and the quality of the provided training examples in the prompt, PERO leverages a genetic algorithm to find an optimized prompt and a separator token to concatenate the examples in the prompts. (Li et al., 2022) exploit the symmetry of the task, and optimize prompts in a continuous space so that the probability of predicting both the subject and the object is maximized using the resulting prompts. Generation with PLMmethods re-write prompts with the help of a secondary PLM. Haviv et al. (2021) re-write manual prompts using another version of the probed model. The re-writing model is trained to produce prompts that help extract more knowledge from the probed one, which is kept unchanged. Zhang et al. (2022) leverage a generative PLM to produce optimized prompts. ### Probed PLMs PLMs are probed for knowledge using either their original pre-trained parameters (Petroni et al., 2019; Jiang et al., 2020), or after adapting these parameters (Roberts et al., 2020; Meng et al., 2022). 
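As an illustrative sketch of the continuous (soft) prompt optimization discussed above, in the spirit of OPTIPROMPT-style methods but not the original implementation, the snippet below learns a handful of prompt vectors for a single training fact while keeping the probed PLM's pre-trained parameters frozen; the model choice, prompt length, training fact, and the assumption of a single-token object are all placeholders.

```python
# Sketch: optimize continuous prompt vectors to elicit a fact from a frozen
# masked LM. Assumes bert-base-cased, one (subject, object) pair, and an object
# that is a single token in the vocabulary.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tok = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased")
for p in model.parameters():
    p.requires_grad = False          # the probed PLM stays frozen

emb = model.get_input_embeddings()   # token-embedding lookup table
n_prompt, dim = 5, emb.weight.shape[1]
soft_prompt = torch.nn.Parameter(0.02 * torch.randn(n_prompt, dim))

subject, obj = "Dante", "Florence"   # one training fact for "place of birth"
subj_ids = tok(subject, add_special_tokens=False, return_tensors="pt").input_ids[0]
obj_id = tok.convert_tokens_to_ids(obj)

def build_inputs():
    # [CLS] <subject tokens> <soft prompt vectors> [MASK] [SEP]
    pieces = [
        emb(torch.tensor([tok.cls_token_id])),
        emb(subj_ids),
        soft_prompt,
        emb(torch.tensor([tok.mask_token_id])),
        emb(torch.tensor([tok.sep_token_id])),
    ]
    inputs_embeds = torch.cat(pieces, dim=0).unsqueeze(0)
    mask_pos = 1 + len(subj_ids) + n_prompt
    return inputs_embeds, mask_pos

optimizer = torch.optim.Adam([soft_prompt], lr=3e-2)
for step in range(50):
    inputs_embeds, mask_pos = build_inputs()
    logits = model(inputs_embeds=inputs_embeds).logits[0, mask_pos]
    loss = torch.nn.functional.cross_entropy(
        logits.unsqueeze(0), torch.tensor([obj_id]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("P(object | soft prompt) =", torch.softmax(logits.detach(), -1)[obj_id].item())
```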
#### 2.2.1 Vanilla PLMs Methods in this category do not induce any changes to the probed PLMs, and depend on pre-training ob jectives to probe PLMs for factual knowledge. Using the pre-trained parameters is the most straightforward approach and is claimed to preserve the facts learned during pre-training Elazar et al. (2021); Newman et al. (2022). Most methods leverage the language modeling objectives from pre-training to probe for factual knowledge Petroni et al. (2019); Jiang et al. (2020); Shin et al. (2020); Haviv et al. (2021); Kumar and Talukdar (2021); Zhong et al. (2021); Kalo and Fichtel (2022); Newman et al. (2022); Onoe et al. (2022); Saeed and Papotti (2022). Other methods rely on representations that come from the model's body, discarding task-specific parameters altogether (e.g., the Masked Language Modeling head in BERT-like models) Lietard et al. (2021) or use representations of the subject and object entities in the case of static word embeddings Dufter et al. (2021). #### 2.2.2 Adapted PLMs Some works adapt the PLMs under evaluation to enable evaluation tasks, that do not correspond to any pre-training objective. The adaptation, however, is also coupled with risks such as train-test overlap Lewis et al. (2021); Wang et al. (2021). Supervised adaptation.Most methods finetune the probed PLMs in a supervised manner to adapt them to the probing task. Roberts et al. (2020) finetune T5 models for closed-book question answering, where models have only questions as inputs, while leaving out any context or external knowledge sources that might contain the answer. Similarly, Wang et al. (2021) finetune BART to output a related passage, and then the answer. Bouraoui et al. (2020) finetune BERT to classify prompts based on whether the relation between the subject and object entities truly holds or not. Fichtel et al. (2021) finetune a BERT model with its masked language modeling head to predict the masked tokens in the provided prompts. Abaho et al. (2022) propose an additional position-attention layer on top of transformer models, where the position of the masked token is kept constant, and the remaining tokens are given positions relative to the masked token. This approach is considered to put more focus on the masked tokens and its interaction with the remaining tokens in the prompt. Chen et al. (2022) leverage a task description that depends on the relation between the subject and object entity, alongside a few labeled examples to train the probed PLMs. At inference time, the PLMs are kept frozen and are provided with unseen task descriptions and labeled examples to adapt to the task. Elazar et al. (2021) further train BERT with a consistency loss to increase its robustness to paraphrases that describe the same relation. Shi et al. (2021) finetune generative PLMs to generate entity descriptions depending only on their knoweldge from pre-training. Qin and Eisner (2021) do not directly change any parameters in PLMs, but rather introduce additional trainable parameters in each layer that change the hidden representations of the prompts to help make them more suitable for knowledge extraction. Self-supervised adaptation.Adaptations in a self-supervised manner can introduce changes to the model without explicitly finetuning the model to the probing task. For example, Meng et al. (2022) propose to _re-wire_ the probed PLM in a self-supervised manner. 
Their method depends on using data from the pre-training phase, splitting each sentence into a head part and a tail part, and using a contrastive learning objective to push the representations of the matching head and tail pairs (positives) closer to one another, and that of the non-matching pairs (negatives) to be further apart. The evaluation is based on the similarity between the representations of the prompt and a predefined set of entities that represent potential answers. ### Outputs Methods focusing on the outputs of PLMs address restricting the output space of PLMs, debiasing their outputs, and handling multi-token entities. Typed querying. Kassner et al. (2021) propose to restrict the space of possible values for replacing the masked token (object) from the whole vocabulary to a specific set of tokens whose type matches the type of the ground truth object. For example, if the PLM is queried with the prompt: "The smallest country in the world is [MASK]", only entities of type country are considered to replace the [MASK] token. This method has two advantages: it reduces the number of objects under consideration and allows for a better comparison across PLMs with different vocabularies Kassner et al. (2021). Debiasing.Zhao et al. (2021) identify biases in the predictions of PLMs towards common and recent tokens, and propose a method that adapts the output probabilities by first estimating these biases using neutral examples and then correcting them. This debiasing method is shown to reduce the variance across prompts and has a positive effect on fact retrieval. Malkin et al. (2022) propose a method to increase the effect of distant tokens on the predictions of PLMs. The method depends on combining two output distributions over the vocabulary. One distribution is based on the full-length input, whereas the other is based on a shortened version of the same input. Wang et al. (2023) identify the problem of object bias in optimized prompts and propose to make all potential objects equally probable when no subject is provided, and increasing the probability of the correct object, when the subject is available. Yoshikawa and Okazaki (2023) output predictions only above a sufficient confidence threshold. This results in a less biased evaluation, and reflects the ability of PLMs in excluding uncertain predictions. To address the problems of multiple valid answers and frequency bias, i.e., the co-occurence of some subject and object entities despite not being in a factual relation to one another, Dong et al. (2022) use two templates, one contains the correct relation while the other contains an erroneous relation between the two entities, and compare the probability for the correct object under both relations. Multi-token entities.To handle multi-token entities, Jiang et al. (2020) propose using a pre-defined number of masked tokens and filling these using different strategies: 1) independent from each other, 2) sequentially (left-to-right for English), 3) starting with the most confident predictions. Kalinsky et al. (2023) leverage the masked token representation to generate multiple tokens using a small generative model. ## 3 Datasets for Factual Probing We found a variety of datasets (44 in our corpus) that have been proposed or used for probing factual knowledge in PLMs: 18 datasets for probing general knowledge, 8 for domain-specific knowledge and 18 datasets that target other aspects, e.g, consistency of PLMs (cf. Table 2). 
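Before turning to the individual datasets, a small illustrative sketch of the typed-querying idea from the Outputs subsection above: instead of ranking the full vocabulary, the mask logits are restricted to a type-consistent candidate set and renormalized. The model, the prompt, and the candidate list are assumptions for illustration, and each candidate is assumed to be a single token in the model vocabulary.

```python
# Typed-querying sketch: score only type-consistent candidates at the [MASK]
# position instead of the whole vocabulary.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

prompt = "The smallest country in the world is [MASK]."
candidates = ["Vatican", "Monaco", "Nauru", "Malta"]  # entities of type "country"

inputs = tok(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids[0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos].squeeze(0)

cand_ids = tok.convert_tokens_to_ids(candidates)
probs = torch.softmax(logits[cand_ids], dim=-1)       # renormalize over the set
for c, p in sorted(zip(candidates, probs.tolist()), key=lambda x: -x[1]):
    print(f"{c:>10s}  {p:.3f}")
```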
Datasets for **general knowledge** probing are used to quantify generic factual knowledge in PLMs with the most prominent being LAMA (Petroni et al., 2019). WIKI-UNI (Cao et al., 2021) is similar to LAMA, but with a uniform distribution of object entities. LAMA-UHN (Poerner et al., 2020) is a subset of LAMA without easy-to-guess examples. DLAMA (Keleg and Magdy, 2023) targets culturally diverse facts. While 16 datasets are solely English, there are three multilingual datasets (mLAMA (Kassner et al., 2021), X-FACTR (Jiang et al., 2020) and DLAMA (Keleg and Magdy, 2023)). IndicGLUE (Kakwani et al., 2020) contains 11 Indic languages. Most datasets consist of cloze prompts, while QA datasets (WebQuestions (Berant et al., 2013), TriviaQA (Joshi et al., 2017), NQ (Kwiatkowski et al., 2019)), PopQA and EntityQuestions (Mallen et al., 2023) are also used to quantify factual knowledge (Roberts et al., 2020). Wang et al. (2021) adapt SQuAD (Rajpurkar et al., 2018) for closed-book question answering. 6 out of 8 datasets used for probing **domain-specific** knowledge target the biomedical domain (e.g., MedQA (Jin et al., 2021), BioLAMA (Sung et al., 2021) and MedLAMA (Meng et al., 2022)). The multilingual dataset EXAMS (Hardalov et al., 2020) focuses on scientific QA, whereas LEFT (Ciosici et al., 2021) contains questions from humanities and social sciences. The community has constructed further datasets to investigate **other aspects** of using PLMs as knowledge bases. PARAREL (Elazar et al., 2021) and its multilingual counterpart mPARAREL (Fierro and Sogaard, 2022) target the sensitivity of PLMs to paraphrases. Negated/Misprimed LAMA (Kassner and Schutze, 2020) focuses on how negation/mispriming affects fact retrieval from PLMs, whereas Pandia and Ettinger (2021) target the effect of distractors. Updating knowledge in PLMs is considered by Jang et al. (2022, 2022); Lee et al. (2022); Meng et al. (2022); Hase et al. (2023); Hoelscher-Obermaier et al. (2023); Margatina et al. (2023). TEMPLAMA (Dhingra et al., 2022) is concerned with time-dependent facts retrieval, whereas SituatedQA (Zhang and Choi, 2021) considers both, temporal and geographical contexts. Heinzerling and Inui (2021) use a large dataset to evaluate the knowledge storing and retrieval capabilities of PLMs, and hence their use as KBs. Singhania et al. (2022) challenge the community to build a KB from PLMs, and provide a dataset to facilitate fact retrieval. Table 1: Overview of the surveyed factual probing methods, listing for each method an example probing input, the tested PLMs, and the evaluation metric. Table 2: Overview of the datasets used for factual probing, listing for each dataset its category, language(s), an example instance, the number of instances, and its public availability. ## 4 Insights about Knowledge Retention and Prompt Optimization Two further aspects emerged from the surveyed papers: i) factors affecting knowledge retention, and ii) whether prompts should be optimized. ### Factors Affecting Knowledge Retention PLMs are diverse with respect to their architectures, pre-training objectives and their pre-training data. A compelling question is: how do all these factors affect knowledge retention in PLMs? Large language models are known to perform generally better and hold more knowledge Brown et al. (2020); Roberts et al. (2020). However, the model's architecture and pre-training objectives are more decisive for knowledge retention than its size Li et al. (2022). For example, pre-training with the Salient Span Masking objective Guu et al. (2020) helps PLMs to absorb more facts Roberts et al. (2020); Cole et al. (2023). Similarly, Xiong et al. (2020) demonstrate that training the model to predict if the original entities in the text have been replaced with other entities is beneficial for fact retrieval. More generally, Ye et al. (2021) conclude that a masking strategy matching the downstream task positively affects the performance on that task. A larger pre-training corpus with an encoder-only model Liu et al. (2020) leads to higher knowledge retention Zhang et al. (2021), but with an encoder-decoder model Lewis et al. (2020), a larger corpus negatively affects knowledge retention Wang et al. (2021). Recency Chiang et al. (2020) and frequency Kandpal et al. (2023), i.e., _when_ and _how often_ the data is observed at training, are also essential for knowledge retention. 
Larger models and more pre-training data can improve knowledge retention if combined with the right choices for architecture and pre-training objective(s). However, scaling might not be sufficient Kandpal et al. (2023). Even though many works propose new architectures and pre-training objectives to increase factual knowledge retention in PLMs and their robustness to prompts Fevry et al. (2020); Hosseini et al. (2021); Sadeq et al. (2022); Whitehouse et al. (2022); Min et al. (2023); Zhong et al. (2023), this is a promising future work direction, as there is more room for improvement. ### Should Prompts be Optimized? Prompt Optimizing leads to better probing performance Jiang et al. (2020); Shin et al. (2020); Kumar and Talukdar (2021); Newman et al. (2022); Zhang et al. (2022). However, it remains unclear whether this improvement is due to optimized prompts leaking new knowledge into the probed PLMs. Optimized prompts can be mere paraphrases of manually created prompts Bouraoui et al. (2020); Jiang et al. (2020). These paraphrases might be better fact retrievers because of their similarity to the pre-training corpus Cao et al. (2022). Other prompt optimization methods find better prompts in discrete or continuous spaces Shin et al. (2020); Zhong et al. (2021). These prompts are largely uninterpretable, and can even retrieve facts from randomly initialized PLMs Zhong et al. (2021); Ishibashi et al. (2023). Performance improvements for optimized prompts can be attributed either to prompts becoming more similar to the pre-training data or overfitting the facts distribution. Evaluation should take the pre-training corpora and the facts distribution in the probing dataset into account Cao et al. (2021); Cao et al. (2022). Future work should consider adapting prompt optimization methods to produce more interpretable prompts. This would keep the performance gains, and increase the trustworthiness of optimized prompts. ## 5 Obstacles to Adopting PLMs as KBs Consistency.A challenge to relying on PLMs as knowledge bases is their sensitivity to the input queries Fierro and Sogaard (2022). PLMs rely on shallow surface features and lexical correlations Kassner and Schutze (2020); Misra et al. (2020); Poerner et al. (2020); Rogers et al. (2020); Li et al. (2022), which explains their high sensitivity to the way queries are formulated. Current solutions Elazar et al. (2021); Newman et al. (2022) train PLMs to be robust to variations in inputs, but further improvements are needed to make PLMs reliable knowledge bases. PLMs are known to be highly sensitive to prompts, especially in languages other than English Fierro and Sogaard (2022), where less resources are available. Making PLMs more robust to prompts in non-English languages is a promising future work direction. Interpretability.Identifying where facts are stored and how they are retrieved is essential to adopt PLMs as trustworthy knowledge sources. Several approaches locate knowledge in PLMs Wallat et al. (2020); Podkorytov et al. (2021); Alkhaldi et al. (2022); Dai et al. (2022); Meng et al. (2022), with different conclusions depending on the architecture (e.g., knowledge is located in the middle layers of GPT-like models Meng et al. (2022), or in the upper layers in BERT-like models Dai et al. (2022)). Another line of work focuses on the data aspect, showing the dependence of PLMs on word co-occurrences and positionally close words Li et al. (2022), or tracing back predictions to training data Akyurek et al. (2022); Park et al. (2023). 
Knowing how PLMs retrieve facts remains challenging, but necessary to make PLMs transparent fact retrievers. The introduction of a fact tracing benchmark Akyurek et al. (2022) opens the door for works in this direction. Updating Knowledge.PLMs come with a fixed set of pre-trained parameters that encode knowledge about the world. As time passes, this knowledge becomes partially outdated. Hence, editing existing knowledge in PLMs and augmenting them with new knowledge is crucial for their use as knowledge bases Zini and Awad (2022). One line of research locates the modules responsible for factual predictions and modifies these to update the corresponding facts Dai et al. (2022); De Cao et al. (2021); Meng et al. (2022). Other lines of research keep the original PLM unchanged, but augment it with additional parameters to induce the desired changes Wang et al. (2021); Lee et al. (2022), or encode facts with time stamps in PLMs to make them "time-aware" Dhingra et al. (2022). When updating facts in PLMs, it is crucial that only the targeted facts are affected and that these facts are retrievable using different paraphrases De Cao et al. (2021); Hase et al. (2023). However, current methods for facts editing Meng et al. (2022); Chen et al. (2023) still do not fulfill these requirements Hoelscher-Obermaier et al. (2023). Methods that introduce additional parameters should be made more scalable Jang et al. (2022). ## 6 Related Work AlKhamissi et al. (2022) elaborate requirements for PLMs as knowledge bases and review recent literature w.r.t. those requirements. These requirements are widely known (e.g., consistency Petroni et al. (2019) and updating knowledge De Cao et al. (2021)). Our analysis leads to similar general observations (cf. Section 5), and additionally reviews more recent solutions to these obstacles. Cao et al. (2023) cover probing PLMs as part of the knowledge cycle in PLMs, but do not address factual knowledge probing at the same level of detail as we do. Liu et al. (2023) survey prompting methods in detail. However, they cover only a part of factual knowledge probing methods. Safavi and Koutra (2021) survey how PLMs acquire relational knowledge, organizing knowledge representations strategies in PLMs based on different levels of KBs supervision. We provide a novel categorization scheme and conduct a systematic analysis of methods for factual knowledge probing that goes beyond all existing surveys. We additionally provide a categorization of factual probing datasets. Furthermore, we discuss recent findings on knowledge retention, the use of optimized prompts, and challenges with corresponding recent solutions to adopting PLMs as KBs, shedding light on several future work directions. In contrast to other work, we employed a systematic approach to curate and analyze relevant literature to a comprehensive and unbiased representation of existing work. ## 7 Discussion and Future Work Factual probing methods are developed to extract as many facts as possible from the new smart pools of knowledge, namely PLMs. This gives us an estimate about how much PLMs have learned from pre-training, and help us to assess their suitability for use cases such as PLMs-as-KBs. Improving probing methods should go hand-in-hand with advances in PLMs themselves, to help us better assess and make use of PLMs. Our analysis (cf. Section 2) shows that current probing methods focus mostly on one the the three dimensions we use in our categorization (inputs, PLMs, outputs). 
Introducing adaptations across two or more of these dimensions (e.g., optimizing inputs while also debiasing outputs) might lead to further improvements with respect to factual knowledge retrieval. Besides improving probing methods, it is also essential to pay attention to the benchmark datasets. Some probing datasets are shown to be biased towards certain entities Cao et al. (2021). Constructing unbiased probing datasets is crucial to have unbiased estimates of factual knowledge in PLMs. At the same time, developing comprehensive datasets which correspond to the capacity of the recently published large PLMs, e.g., OpenAI (2023); Penedo et al. (2023); Touvron et al. (2023), is an important future work direction. We also believe that it is necessary for current evaluation schemes to not be limited to counting how often PLMs answer correctly. Instead, we call for a comprehensive evaluation that includes further important factors such as the number and frequency of the answers in the pre-training corpus, creation period of the pre-training corpus, model size, and the number of training epochs. ## 8 Limitations For our corpus construction we relied on all the publications that cited Petroni et al. (2019). Although this represents the first work that sparked the community's interest in the factual knowledge present in PLMs and their use as KBs, there might be parallel works or works that go into the same direction but do not directly cite Petroni et al. (2019)'s work, which are not included in our corpus. Additionally, we relied on the venue information provided by Semantic Scholar's API to filter out irrelevant publications. These information are not always accurate and might have affected our initial corpus. In this work, we focused on works that revolve around factual knowledge, and excluded works that focus on other types of knowledge (e.g., linguistic knowledge and commonsense knowledge). However, there are methods that are used for other types of knowledge that could also be applied to factual knowledge and vice versa. We consciously excluded works that focused on other types of knowledge, but this does not mean that such methods are not applicable to factual knowledge probing. ## Acknowledgements We thank Jan Trienes, and the three anonymous reviewers for their insightful comments on this work.
2306.04692
Towards cosmological simulations of the magnetized intracluster medium with resolved Coulomb collision scale
We present the first results of one extremely high resolution, non-radiative magnetohydrodynamical cosmological zoom-in simulation of a massive cluster with a virial mass M$_\mathrm{vir} = 2.0 \times 10^{15}$ solar masses. We adopt a mass resolution of $4 \times 10^5$ M$_{\odot}$ with a maximum spatial resolution of around 250 pc in the central regions of the cluster. We follow the detailed amplification process in a resolved small-scale turbulent dynamo in the Intracluster medium (ICM) with strong exponential growth until redshift 4, after which the field grows weakly in the adiabatic compression limit until redshift 2. The energy in the field is slightly reduced as the system approaches redshift zero in agreement with adiabatic decompression. The field structure is highly turbulent in the center and shows field reversals on a length scale of a few 10 kpc and an anti-correlation between the radial and angular field components in the central region that is ordered by small-scale turbulent dynamo action. The large-scale field on Mpc scales is almost isotropic, indicating that the structure formation process in massive galaxy cluster formation is suppressing memory of both the initial field configuration and the amplified morphology via the turbulent dynamo in the central regions. We demonstrate that extremely high-resolution simulations of the magnetized ICM are in reach that can resolve the small-scale magnetic field structure which is of major importance for the injection of and transport of cosmic rays in the ICM. This work is a major cornerstone for follow-up studies with an on-the-fly treatment of cosmic rays to model in detail electron-synchrotron and gamma-ray emissions.
Ulrich P. Steinwandel, Klaus Dolag, Ludwig Böss, Tirso Marin-Gilabert
2023-06-07T18:00:11Z
http://arxiv.org/abs/2306.04692v1
Towards cosmological simulations of the magnetized intracluster medium with resolved Coulomb collision scale1 ###### Abstract We present the first results of one extremely high resolution, non-radiative magnetohydrodynamical cosmological zoom-in simulation of a massive cluster with a virial mass M\({}_{\rm vir}=2.0\times 10^{15}\) solar masses. We adopt a mass resolution of \(4\times 10^{5}\) M\({}_{\odot}\) with a maximum spatial resolution of around 250 pc in the central regions of the cluster. We follow the detailed amplification process in a resolved small-scale turbulent dynamo in the Intracluster medium (ICM) with strong exponential growth until redshift 4, after which the field grows weakly in the adiabatic compression limit until redshift 2. The energy in the field is slightly reduced as the system approaches redshift zero in agreement with adiabatic decompression. The field structure is highly turbulent in the center and shows field reversals on a length scale of a few 10 kpc and an anti-correlation between the radial and angular field components in the central region that is ordered by small-scale turbulent dynamo action. The large-scale field on Mpc scales is almost isotropic, indicating that the structure formation process in massive galaxy cluster formation is suppressing memory of both the initial field configuration and the amplified morphology via the turbulent dynamo in the central regions. We demonstrate that extremely high-resolution simulations of the magnetized ICM are in reach that can resolve the small-scale magnetic field structure which is of major importance for the injection of and transport of cosmic rays in the ICM. This work is a major cornerstone for follow-up studies with an on-the-fly treatment of cosmic rays to model in detail electron-synchrotron and gamma-ray emissions. Galaxy clusters (584), Magnetohydrodynamical simulations (1966), Intracluster Medium (858), Magnetic Fields (994), Cosmic magnetic field theory (321), Extragalactic magnetic fields (507) + Footnote †: journal: ApJ ## 1 Introduction Magnetic fields are omnipresent in the Universe and are observed in many astrophysical systems such as compact objects, accretion discs, proto-planetary and proto-stellar discs, the interstellar medium (ISM), planets, stars, and in the largest structures such as galaxies, and the intra-cluster medium (ICM) of galaxy clusters. While magnetic field strengths on the smaller scales in planets, stars compact objects, and accretion discs can reach values from several Gauss (G) to \(10^{15}\) G in pulsars, the magnetic fields on the larger scales are, generally speaking, more moderate and typically saturate at the canonical value of a few to a few tens of \(\mu\)G in galaxies and galaxy clusters but can reach strengths of mG in the dense ISM. Past and current research has developed the following picture of magnetic field amplification in the Universe. A small scale-turbulent dynamo is amplifying tiny seed fields to the values we observe nowadays in galaxies and galaxy clusters at \(\mu\)G-level. The exact origin of these seed fields is still under debate and several processes have been suggested that can generate seed fields of the order of around \(10^{-20}\) G (e.g. Biermann, 1950; Harrison, 1970; Demozzi et al., 2009; Gnedin et al., 2000; Durier and Dalla Vecchia, 2012). 
In galaxies, these fields are ordered and further influenced by a large-scale (mean-field) \(\alpha\)-\(\Omega\) dynamo (e.g Parker, 1955; Steenbeck et al., 1966; Parker, 1979; Ruzmaikin et al., 1988) and can be ejected in galactic outflows that can, in turn, magnetize the circumgalactic medium (CGM) (e.g. Bertone et al., 2006; Pakmor et al., 2017, 2020; van de Voort et al., 2021). However, on galaxy cluster scales in the ICM, turbulence driven by the structure formation processes and merger shocks will quickly generate a saturated magnetic field with a field strength of around \(\sim\mu\)G on the scales of Mpc without the need for an explicit seeding of these fields by galactic winds (e.g. Vazza et al., 2018; Steinwandel et al., 2021). In turn, these fields will then be ordered on larger scales by the structure formation process itself with some evidence that the Void magnetic field can "remember" some of the strucutre of the initial seed field on the scales of a few 10 Mpc (e.g. Mtchedlidze et al., 2022). The central process behind the turbulent amplification of magnetic fields is the stretch-twist-fold mechanism as introduced by Zel'dovich (1970) but researched by a number of groups (e.g. Kraichnan and Nagarajan, 1967; Kazantsev, 1968; Kazantsev et al., 1985; Kulsrud and Anderson, 1992; Brandenburg et al., 1995; Kulsrud et al., 1997; Xu and Lazarian, 2020). The process is schematically described in Fig. 11. Small-scale turbulence is first stretching field lines, which is increasing the field strength but at constant magnetic flux. The stretched field lines are then twisted. This step is crucial to note since it requires three spatial dimensions and makes subsequent simulations with reduced dimensions really tricky to interpret. Finally, the twisted field lines are folded, increasing the magnetic flux itself. If this process is repeated, it is easy to understand that it will yield exponential growth in the magnetic field strength. The exponential growth of the field can occur as long as the dynamo stays in the _linear (kinematic) regime_ in which the magnetic field is so weak that the tension force of the field lines is much weaker than the force stored in the small scale turbulent eddies. However, as the field strength grows the tension force becomes stronger, and the dynamo transits to the _non-linear regime_, in which the tension force is comparable to the forces exhibited by turbulence. Hence, eventually, the energy stored in turbulence is not enough to perform subsequent folding of field lines and the dynamo _saturates_. Obviously, this depends on the interplay between the amplification of the field and diffusion/dissipation of the field. The magnetic energy is then redistributed from the smaller scales to the larger scales in an inverse cascade. On both galaxy and galaxy cluster scales numerical simulations have well established that this process is dominant in amplifying magnetic fields using different numerical prescriptions (e.g. Dolag et al., 1999, 2002; Kotarba et al., 2009; Dubois and Teyssier, 2008; Wang and Abel, 2009; Pakmor and Springel, 2013; Pakmor et al., 2017; Butsky et al., 2017; Garaldi et al., 2021; Steinwandel et al., 2019, 2020, 2022, 2021; Vazza et al., 2014, 2018). However, small-scale dynamos can only generate correlated fields on the scale of the turbulence and require a process that is ordering the field on the largest scales. 
For instance, one can derive the outer scale of MHD turbulence by using the peak in the magnetic energy spectra that is essentially set by the magnetic Reynolds number that compares the advection time scale with the magnetic diffusion time scale. The exact interplay between those two timescales will set the peak of the magnetic power spectra and thus the outer scale of the underlying MHD turbulence. In other words, the scale of equipartition is set by the peak of the underlying magnetic power spectrum and the exact nature of MHD turbulence. Footnote 1: We put some effort into this Figure for educational purposes and hope that the community might deem it a useful illustration for the inner workings of the turbulent dynamo in the ICM. In this paper, we present the first results from a modern state-of-the-art galaxy cluster formation simulation that is specifically targeted to understand the complex properties of the ICM in a fully cosmological context at unprecedented resolution. Hereby, we will, for the first time demonstrate that the high resolution targeted in this paper marks the endpoint for a classic MHD treatment on galaxy cluster simulations as we will show that the coulomb mean free path is resolved over a vast regime of typical densities and temperatures in the ICM. While there is some progress in the modeling of low-mass galaxy clusters (e.g. Kannan et al., 2017; Tremmel et al., 2019; Pillepich et al., 2019; Ricarte et al., 2021; Butsky et al., 2019) massive galaxy clusters are only rarely studied in cosmological zoom simulations or large cosmological volume simulations. The reason for this is twofold. First, these objects often follow more complex accretion scenarios than lower-mass objects such as low-mass galaxy clusters (\(\sim 10^{14}\) M\({}_{\odot}\)) and galaxy groups (\(\sim 10^{13}\) M\({}_{\odot}\)) that can include several (major and minor) mergers of the latter objects. Second, the ICM on large scales is governed by plasma-astrophysical processes such as magnetic fields (e.g., Drake et al., 2021; Berlok, 2022; Squire et al., 2023; Kunz et al., 2016, 2019, 2022; Vazza et al., 2014, 2018; Steinwandel et al., 2021), (anisotropic) viscosity (e.g., Sijacki and Springel, 2006; Berlok et al., 2020; Marin-Gilabert et al., 2022), and (anisotropic) conduction (e.g., Kannan et al., 2017; Hopkins, 2017; Berlok et al., 2021) as well as cosmic rays (e.g., Sijacki et al., 2008; Boss et al., 2023), that can significantly contribute to its phase structure. Hence, these processes have to be modeled appropriately in simulations of more massive galaxy clusters, which is mostly done in detailed "turbulent driven studies" (e.g., Schekochihin et al., 2004; Porter et al., 2015; Mohapatra et al., 2021, 2022) and only rarely in large scale fully cosmological simulations. The complications when it comes to simulating such systems while including all the relevant plasma astrophysical processes are obvious. The numerical treatment of these processes is not only computationally expensive but requires also very high resolution to resolve the complex structure of the ICM. Moreover, the ultimate goal should be to push massive galaxy cluster simulations in a regime where they can start to capture the effects of smaller scale instabilities, such as the magneto thermal instability (MTI; e.g., McCourt et al., 2012) or the heat flux driven buoyancy instability (HBI; e.g., Parrish and Quataert, 2008), for which the presented simulation can be an important cornerstone to achieve this goal. 
In order to understand the detailed impact of such instabilities on the structure of the ICM, one first needs to understand fundamental plasma astrophysical mechanisms such as magnetic field amplification, and gauge that the a priori amplification of small seed fields provides the conditions that are necessary to produce the high \(\beta\) plasma needed for such instabilities to form. Furthermore, it remains unclear how important such instabilities remain for the thermal state of the ICM because most of the simulations carried out where these effects arise are much more idealized than a fully cosmological simulation. Hence one needs to work continuously towards higher resolution, fully cosmological simulations with the appropriate plasma astrophysical treatment in order to resolve the scale on which plasma instabilities can build up. The simulation presented in this paper represents an important link to achieving this goal with massive galaxy cluster simulations. In this paper, we present the first results of a simulation of a massive galaxy cluster with a total mass of \(2\times 10^{15}\) M\({}_{\odot}\) modeled with the full treatment for magnetohydrodynamics (MHD). We will study the amplification of the field at a resolution that is high enough to resolve the Coulomb collision scale in the ICM, and provide important insights into the effect of magnetic fields on the thermal pressure profiles of galaxy clusters. ## 2 Numerical Methods ### Simulation code We briefly present the numerical methods used and the numerical simulations discussed in this paper. We use the Tree-SPMHD code gadget-3 to carry out all the simulations presented in this work. Gravity is solved via the Tree-PM method where the long-range gravitational forces are computed on a PM mesh and the short-range forces are computed on the gravity tree. This reduces the workload on the tree significantly, most notably in terms of the memory imprint of the code (and a factor of around 2 in total run time for a given simulation, although this depends on the setup). Furthermore, the code has the option to use a split PM-mesh for zoom initial conditions that can have arbitrary resolution compared to the large-scale PM-mesh. However, for this simulation, we disabled the option of using a second PM grid and the force computation is somewhat more accurate as only the tree is used for updating forces on most of the high-resolution zoom region. Furthermore, truncation errors are suppressed in the zoom region as there is no need for interpolating between the forces computed by the gravity tree and the PM algorithm. The code utilizes a modern prescription for SPH that includes higher order kernels (e.g., Wendland, 1995, 2004; Dehnen and Aly, 2012) and a treatment that leads to an improved mixing behavior of SPH in shear flows based on artificial viscosity and artificial conduction (e.g. Price, 2012; Hopkins, 2013; Hu et al., 2014; Beck et al., 2016) following the implementation of Beck et al. (2016). Magnetohydrodynamics (MHD) is introduced based on the implementation of Dolag and Stasyszyn (2009) with the updates of Bonafede et al. (2011) that includes a treatment for non-ideal MHD. The non-ideal MHD is handled over a constant (physical) diffusion and dissipation. For the latter, we heat the gas with the magnetic field that is lost due to magnetic reconnection. 
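As a small aside on the higher-order SPH kernels mentioned above, the following is an illustrative sketch of the 3D Wendland C4 smoothing kernel (the variant adopted for this simulation, as stated below); the normalization constant 495/(32π) is quoted from Dehnen & Aly (2012), and the snippet is meant only to show the functional form, not the simulation code's actual implementation.

```python
# Illustrative 3D Wendland C4 kernel W(r, h), with compact support r <= h.
import numpy as np

def wendland_c4(r, h):
    """Wendland C4 kernel in 3D; normalization 495/(32*pi*h^3) per Dehnen & Aly (2012)."""
    q = np.asarray(r) / h
    w = (495.0 / (32.0 * np.pi * h**3)) * (1.0 - q)**6 * (1.0 + 6.0 * q + 35.0 / 3.0 * q**2)
    return np.where(q < 1.0, w, 0.0)

# Quick check that the kernel integrates to ~1 over its support.
r = np.linspace(0.0, 1.0, 2001)
h = 1.0
print("4*pi * int W r^2 dr =", np.trapz(4.0 * np.pi * r**2 * wendland_c4(r, h), r))
```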
We model (an)isotropic conduction via a conjugate gradient solver (Petkova and Springel, 2009; Steinwandel et al., 2020), which has been extended towards a bi-conjugate gradient solver in Arth et al. (2014) and Steinwandel et al. (2021), with the specific use case of massive galaxy cluster formation simulations that include magnetic fields. The simulation in this paper is carried out with a suppression factor of the Spitzer value for conduction of 5 per cent (e.g., Spitzer and Harm, 1953; Spitzer, 1956). We adopt a Wendland C4 kernel with 200 neighbors and bias correction as suggested by Dehnen and Aly (2012). We present the results of one high-mass galaxy cluster zoom simulation at unprecedented resolution, of a halo with a total mass of \(\rm M_{tot}=2\times 10^{15}\)\(\rm M_{\odot}\), where we reach a spatial resolution of around 0.250 kpc and a mass resolution of \(4\times 10^{5}\)\(\rm M_{\odot}\) in the ICM. The simulation "250X-MHD" has been performed in New York on the in-house cluster "rusty" of the Simons Foundation and consumed around 6 million core hours (excluding halo finding and debug runs). The simulation was performed in a non-radiative fashion without cooling and star formation. Hence, it is targeted more at understanding the complex plasma-physical aspects of the ICM, rather than the galaxy formation process in super-massive clusters. The reason for this is two-fold. First, we want to understand how magnetic fields can contribute to the structure formation process on the largest scales in a clean experiment that is not dominated by underlying subgrid prescriptions for star formation and feedback. Second, a full physics run for one of these clusters that includes a treatment for cooling, star formation, and the feedback of stars and active galactic nuclei (AGN) is \(\sim 10\) times more expensive. However, once we have a better understanding of the plasma-astrophysical aspects of the ICM in a fully cosmological simulation, we will attempt this simulation with a full feedback prescription, but we postpone the results to future work. The initial conditions for the cluster are chosen from a lower-resolution dark matter-only simulation of a Gpc volume (Bonafede et al., 2011). Only in a Gpc volume can one find massive clusters such as the one re-simulated here in abundance. The base dark matter simulation has a resolution of \(1024^{3}\) particles, leading to an overall mass resolution of \(10^{10}\)\(\rm M_{\odot}\). The cosmological parameters for the simulation are chosen based on WMAP7 cosmology with \(\Omega_{0}=0.24\), \(\Omega_{\Lambda}=0.76\), \(\Omega_{\rm baryon}=0.04\), \(h=0.72\) and \(\sigma_{8}=0.8\). We select dark matter particles at \(z=0\) for one of the most massive halos in the box and trace them back with the method described in Tormen et al. (1997) to obtain zoomed initial conditions. This cluster has been previously simulated with hydrodynamics only in Zhang et al. (2020). The domain from which we start the re-simulation is large enough to avoid massive intruder particles within 5 times the virial radius at redshift zero. The magnetic field is initialized as a constant comoving seed field of \(10^{-14}\) G. This choice marks a quite large seed field and leads to a saturated dynamo by redshift 2.

Figure 1: Schematic sketch of the small-scale turbulent dynamo that illustrates the stretching, twisting, and folding of field lines driven by ICM turbulence. As magnetic tension becomes stronger due to subsequent stretching, twisting, and folding, the process slows down and finally saturates when the turbulent kinetic energy in the smallest eddies that drive the process is in equipartition with the magnetic energy density.
We tested this in detail in our lower-resolution versions of this cluster in Steinwandel et al. (2021) and noted that a change of this seed field by a factor of 10 (lower or higher) produces similar results (although a lower seed field in combination with a higher magnetic diffusivity yielded lower mean magnetic fields in the radial profiles). We note that for arbitrarily small seed fields (values below \(10^{-20}\) G) we find no saturated dynamo in runs _without_ galaxy formation physics (cooling, star formation, stellar- and AGN-feedback). However, we note that we only tested this for the lowest-resolution version presented in Steinwandel et al. (2021), and this could obviously change in the higher-resolution versions of this cluster, of which we currently run a few micro-physics variations. However, for now, we just focus on our fiducial MHD run and specifically the detailed structure of the magnetic field itself.

## 3 Results

In this section, we will present our results and describe the major plots that are important for the study.

### Resolving the electron mean free path

We start the presentation of our results with Fig. 2, where we show the spatial resolution of our SPH simulation (smoothing length) as a function of the electron mean free path, color-coded by mass (we note that we omit the color bars since these are 2d joint PDFs). The pink and the turquoise line mark the regimes in which \(\lambda_{\rm MFP}\) and \(0.1\times\lambda_{\rm MFP}\) are marginally resolved. We compute the mean free path following Zel'dovich and Raizer (1967):

\[\frac{1}{\lambda_{\rm MFP}}=\frac{2\pi}{9}\cdot n_{\rm e}\frac{Z^{4}e^{4}}{k_{\rm B}^{2}T^{2}}\ln\Lambda, \tag{1}\]

where \(n_{\rm e}\) is the electron number density and \(T\) is the temperature of each particle in the simulation. \(Z\) is the atomic number, for which we adopt one (electrons, protons), and \(e\) is the elementary charge of \(4.803\times 10^{-10}\) statC (again electrons/protons). For \(\ln\Lambda\) we adopt 30, which seems to be in good agreement with typical values in the ICM. We find that the bulk of the mass is located between \(10^{-7}\) cm\({}^{-3}\) and \(10^{-2}\) cm\({}^{-3}\) and therefore lies in between the bounding lines for resolved \(\lambda_{\rm MFP}\) and \(0.1\times\lambda_{\rm MFP}\). This indicates that our simulation operates at the limit where the classic MHD equations are a good approximation. Hence, future simulations at our resolution need to investigate the inclusion of anisotropic viscosity and heat conduction. Simulations at 10 times higher mass resolution should push for a more sophisticated framework that includes a closure for the detailed plasma kinetics when the resolution of the MHD simulation becomes much smaller than the mean free path of the electrons (and ions, respectively). We will discuss the implications of this in greater detail in Section 4.1.
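As a concrete reference, the following minimal sketch evaluates Eq. (1) as a stand-alone post-processing helper (our own illustration in julia, not part of gadget-3); all inputs are assumed to be in cgs units.

```julia
# Electron mean free path following Eq. (1); n_e in cm^-3, T in K, result in cm.
const k_B   = 1.380649e-16    # Boltzmann constant [erg K^-1]
const e_cgs = 4.803e-10       # elementary charge [statC]

function lambda_mfp(n_e, T; Z = 1.0, lnLambda = 30.0)
    inv_lambda = (2π / 9) * n_e * Z^4 * e_cgs^4 / (k_B^2 * T^2) * lnLambda
    return 1 / inv_lambda
end

kpc = 3.0857e21               # [cm]
lambda_mfp(1e-3, 1e8) / kpc   # of order a few tens of kpc for typical ICM conditions
```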
Figure 2: Left: Global distribution of the resolution as a function of the electron mean free path. Right: Resolution as a function of the electron mean free path as a measure for the Coulomb diffusion length scale. We show this for the region of the ICM that carries most of the mass, with densities between \(10^{-7}\) cm\({}^{-3}\) and \(10^{-2}\) cm\({}^{-3}\) and temperatures above \(10^{6}\) K (see Fig. 8). It is apparent that our resolution in this regime resolves the Coulomb mean free path (\(\lambda_{\rm MFP}\)) over a broad density and temperature regime in the ICM. The pink and turquoise lines mark the transition regions at which the smoothing length h\(\approx\lambda_{\rm MFP}\) and h\(\approx 0.1\times\lambda_{\rm MFP}\), indicating that we transition into a regime where kinetic-aware MHD becomes an interesting addition. To the right of the turquoise line, one should consider a fully kinetic treatment. This indicates that our current resolution marks the limit for an MHD-only treatment for galaxy cluster simulations.

Figure 3: Surface density (top left, projection), temperature (top right, projection), magnetic field strength (bottom left, slice of 100 kpc height), as well as turbulent energy (bottom right, slice of 100 kpc height) for the simulation at redshift 0. The magnetic field at redshift \(z=0\) is in rough agreement with observed values in the Coma cluster as reported, for example, by Bonafede et al. (2010). However, we note that the field is slightly higher than in observations (by a factor of three). Similar to previous galaxy cluster simulations, the magnetic field drops quickly as a function of radius. The turbulent energy reveals the complex shock structure we find in the ICM, which is dominated in the center by internal weakly supersonic shocks and in the outskirts by the strongly supersonic accretion shock. The dotted and dashed white circles indicate R\({}_{200}\) and R\({}_{500}\), respectively.

Figure 4: Central structure of the magnetic field in a very thin slice of \(\Delta x=100\) kpc along the xy-plane of the cluster. We choose this orientation to demonstrate how the small- and large-scale field structures are related to one another. The magnetic field in the center reaches a magnitude of around 20 \(\mu\)G on the smallest scales. The magnetic field is highly turbulent and not correlated. However, there are regions that show large-scale orientation on Mpc scales that likely originate from shocked gas. This indicates that the field is ordered on larger scales by the large-scale structure formation process.

Figure 5: Radial profiles of the magnetic field (top left), the plasma-\(\beta\) parameter (top right), the temperature (bottom left), and the entropy (bottom right) for the simulated system at redshift zero. The cluster shows well-behaved ICM properties with magnetic field strengths ranging around 10 \(\mu\)G. The values of plasma-\(\beta\) are high in the center, with values between 40 and 100, and increase quickly towards the outskirts to a few 100, which is in good agreement with the general belief that the ICM is a high-\(\beta\) plasma. Temperatures in the center are high, between \(10^{7}\) and \(10^{8}\) K. The entropy profile drops strongly in the center due to a steep drop in temperature.

### Structure and Morphology

In Fig. 3 we show the MHD simulation at redshift zero. The panels show the gas surface density (top left), the temperature (top right), the structure of the magnetic field strength (bottom left), as well as the turbulent kinetic energy (bottom right). The white circles indicate R\({}_{200}\) (dotted) and R\({}_{500}\) (dashed).
The virial radius of the system is \(\sim 2.3\) Mpc h\({}^{-1}\) at redshift zero. All the panels are obtained by binning the data onto a uniform rectangular grid, using an SPH interpolation to the cell center, based on a slice in the z-direction centered 100 kpc around the cluster's x and y position. Generally, we note that the cluster is a very relaxed system at redshift zero (there are no major mergers happening at this redshift) but shows a rather complex history of merger events, with several significant major merger processes in its formation history. The most significant ones are at redshifts of \(z=0.3\), \(z=0.8\), \(z=1.2\), and \(z=2.7\) and are discussed in more detail for the lower-resolution versions of this system in Steinwandel et al. (2021). The density projection reveals a lot of substructure beyond the virial radius falling into the cluster center, even at redshift \(z=0\), that will lead to subsequent major merger events in the near future. For instance, the second most massive halo in the simulation still has a mass of around \(10^{14}\) M\({}_{\odot}\), and its outskirts can be seen at the top right of the top left panel of Fig. 3. The temperature distribution shows peaks of a few times \(10^{8}\) K. The magnetic field structure at \(z=0\) is fully developed and saturates at the level of a few \(\mu\)G, but shows significantly higher values at larger redshift (not discussed in detail in this paper), which is consistent with earlier galaxy cluster simulations and the higher RM values typically observed under high-redshift conditions. Additionally, the structure in the turbulent kinetic energy reveals the detailed structure of both internal MHD shocks as well as the external high Mach number accretion shock located beyond the virial radius. It is noteworthy how sharply the detailed MHD cluster shocks are captured. In Fig. 4 we show a very thin slice along the xy-plane to illustrate the turbulent, highly uncorrelated but fully developed magnetic field structure, especially within the virial radius. The complex field structure observed within the virial radius indicates that the field is organized on larger scales by the large-scale structure formation. It is quite clear from the visualization that the magnetic field has a typical correlation length of several 100 kpc. This is larger than the typical size of the smallest turbulent eddies in the simulation, with characteristic sizes of around 1 kpc. In Fig. 5 we show the radial profiles of central plasma parameters such as the magnetic field strength (upper left panel), plasma-\(\beta\) (upper right panel), the temperature (bottom left panel), and the entropy (bottom right panel), mass-weighted (black) and volume-weighted (blue). We find central magnetic field strengths of up to 20 \(\mu\)G, comparable to those reported in our previous work (Steinwandel et al., 2021) for our lower-resolution runs 25X-MHD and 10X-MHD. The plasma-\(\beta\) parameter is typically high, with central values of around 50, which increase to around 200 when approaching the virial radius of the system. Beyond the virial radius, plasma-\(\beta\) is generally high, between 100 and a few times 1000. While the values in the center are low, they are still well above unity, and the magnetic field remains dynamically unimportant for the formation process of the cluster.
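For orientation, a minimal sketch of the plasma-\(\beta\) estimate behind the values quoted above, \(\beta = P_{\rm th}/P_{\rm B}\) with \(P_{\rm th} = n k_{\rm B} T\) and \(P_{\rm B} = B^{2}/8\pi\) in cgs units. The numbers in the example call are purely illustrative and are not taken from the simulation.

```julia
# Plasma beta from number density [cm^-3], temperature [K] and field strength [G] (cgs).
plasma_beta(n, T, B) = (n * 1.380649e-16 * T) / (B^2 / (8π))

plasma_beta(1e-2, 5e7, 1e-5)   # ICM-like values: n = 1e-2 cm^-3, T = 5e7 K, B = 10 μG
```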
However, we note that the field structure will obviously be important for cosmic ray injection, re-acceleration, and propagation. To that end, we developed the spectral cosmic-ray model described in Boss et al. (2023) for electrons and protons, which we will apply on the fly to a sister simulation in future work. The radial temperature profiles indicate that the ICM has a typical temperature of \(10^{8}\) K (\(\approx 10\) keV), which drops towards the center of the cluster by roughly a factor of two (\(\approx 4\) keV). The decrease of temperature in the center is in direct relation to the rather high plasma-\(\beta\) values of around 50 that we find in the simulation. The reason for the decrease in temperature is likely the absence of important feedback physics that would increase the entropy in the cluster center. We show our entropy profiles in the bottom right panel of Fig. 5, which bottom out towards the center. That in turn leads to a loss of thermal support of the cluster and a decrease of plasma-\(\beta\) towards the center. As mentioned above, the decline in the central entropy profile is supported by the absence of heating by AGN feedback. We do note that some of our simulations with a higher thermal conductivity than our adopted suppression coefficient of 0.05 show flat entropy cores around \(10^{3}\). This topic will be the subject of more detailed future studies. Moreover, we point out that the peak in the entropy profile past the virial radius marks the accretion (virial) shock of the cluster. Hereby it is interesting to point out that the volume-weighted prescription gives a more accurate position of the virial shock than the mass-weighted prescription. We ran the MHD simulation with a rather low magnetic diffusivity (\(10^{27}\) cm\({}^{2}\) s\({}^{-1}\)), and it has been shown in previous studies that a higher value for the magnetic diffusivity (\(10^{28}\) cm\({}^{2}\) s\({}^{-1}\)) can decrease the magnetic field strength by a factor of 1.5 to 2 (see Bonafede et al., 2011; Steinwandel et al., 2021). In this simulation, we chose a lower value for the diffusivity to probe an upper limit for the magnetic fields that can be expected in galaxy cluster simulations with particle-based techniques. Our future work will be centered around cosmic rays and their non-thermal emission, for which the magnetic field is an essential plasma-astrophysical property, and even at low diffusivity, our method overshoots the magnetic field strength predicted in cluster centers based on RM signatures by only a factor of around two. In Steinwandel et al. (2021), we also demonstrated evidence for a small-scale turbulent dynamo driven by ICM turbulence that is injected via merger shocks. While the process is similar in our high-resolution simulation, it is not the main aspect of this work, where we are more broadly interested in the general ICM properties and their plasma-astrophysical context. We will investigate the detailed amplification process via power spectra and magnetic tension in future work and instead focus on the general field structure in this paper first. However, for reference, we show the time evolution and the exponential build-up of magnetic energy as a function of scale factor in Fig. 6. In blue we show the magnetic energy and in red the turbulent kinetic energy in the system. The magnetic energy grows exponentially between redshift 9 and 4 and more weakly (roughly linearly) at lower redshifts, from redshift 4 to 1.5. The magnetic field energy peaks around \(10^{62}\) erg. We note that the cluster undergoes very heavy merger activity from redshift 3.7 to redshift 1.3, where it assembles the majority of its mass.
After redshift 1.5, the magnetic energy is slightly decreasing. This happens because the cluster evolves from a high-density accretion state to a lower-density virialized condition. We note that we find good agreement of our previous simulations with these fundamental predictions of dynamo theory, but we were not able to recover the observed field strengths of systems such as the Coma cluster, for which our simulations overpredict the magnetic field by a factor of 2-3. In our previous work, we extensively investigated this by changing the initial seed field, the thermal conduction prescription, and the magnetic diffusivity of our non-ideal MHD solver. While we found some indication that the initial seed field can shift the central magnetic field strength in the cluster at redshift zero, consistent with the limit of adiabatic compression, the trend of a larger central magnetic field strength in comparison to Coma remains. This is also the case for our 250X-MHD simulation, which produces central magnetic field strengths that are a factor of 3-4 larger than the central field of Coma as inferred from observations (see Feretti et al., 1995; Bonafede et al., 2009, 2010, 2013). However, from a theoretical perspective, the higher magnetic field strength is justified when considering the radial trends of \(\beta\) and the radial trends of the energy densities. First, the radial trend of \(\beta\) reveals that the thermal pressure dominates by roughly 1.5 orders of magnitude in the cluster center and by up to 2.5 orders of magnitude in the cluster outskirts, which characterizes our simulated ICM as a typical high-\(\beta\) plasma dominated by the thermal component. Furthermore, it is interesting to point out that the cluster is in rough equilibrium at redshift zero, where the total kinetic energy is in equipartition with the thermal component and the magnetic pressure is in equipartition with the turbulent kinetic energy (on the smallest scales). In Fig. 7 we show the radial pressure profiles of the cluster at redshift zero for turbulent kinetic energy (green), magnetic pressure (blue), and thermal pressure (red). The dashed lines mark the volume-weighted quantities for completeness. On small scales, turbulent kinetic energy and magnetic energy are in equipartition. This strongly suggests that at redshift zero there is a fully saturated small-scale turbulent dynamo. The magnetic pressure and the turbulent pressure drop simultaneously towards the outskirts until 1.0 R\({}_{\rm vir}\), where both flatten (mass-weighted profiles) with a ratio of P\({}_{\rm turb}\)/P\({}_{\rm B}\sim 100\). The volume-weighted profiles continue to drop. The magnetic field strength in the outskirts of the cluster fluctuates around \(5\times 10^{-8}\) G (mass-weighted). In the mass-weighted profiles beyond 2 R\({}_{\rm vir}\), we find spikes in both turbulent kinetic pressure and magnetic pressure that are present due to the in-falling sub-structure around the most massive halo in our simulation. These are apparent only in the mass-weighted prescription and vanish with the volume weighting, which naturally smooths over the finite size of halos and sub-halos in the simulation.
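A minimal sketch of how such mass- and volume-weighted profiles can be computed in post-processing (an assumed helper, not our actual analysis pipeline). Here `r`, `q` and the weights `w` are per-particle arrays and `edges` are sorted radial bin edges; empty bins return NaN.

```julia
# Weighted mean of a quantity q in radial shells; pass w = m for mass weighting
# or w = m ./ rho (particle volumes) for volume weighting.
function radial_profile(r, q, w, edges)
    nbins = length(edges) - 1
    num, den = zeros(nbins), zeros(nbins)
    for k in eachindex(r)
        b = searchsortedlast(edges, r[k])
        1 <= b <= nbins || continue
        num[b] += w[k] * q[k]
        den[b] += w[k]
    end
    return num ./ den
end
```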
We note that the turbulent pressure in the center is slightly lower than the magnetic pressure by roughly a factor of 1.2 to 1.5. The reason for this likely lies in the nature of our classification of the "turbulent kinetic" pressure, which we define as \(1/2\rho v_{\rm rms}^{2}\), where \(v_{\rm rms}\) is the random motion that remains after subtraction of the bulk motion within the kernel. That procedure is not exact, and the error bar on \(v_{\rm rms}\) is at least a factor of 2. Hence, the error bar on the turbulent pressure is at least a factor of 4. Given these error bars, we are confident in stating that magnetic pressure and turbulent pressure are in equipartition in the cluster center. In Fig. 8 we show the density-temperature phase-space diagram for our simulated system at redshift zero, mass-weighted (left) and volume-weighted (right). The cluster reaches characteristic temperatures of around \(10^{8}\) K at a density of around \(10^{-2}\) cm\({}^{-3}\). Above these densities, we observe a slight decline in temperature towards \(10^{7}\) K, likely related to the steeper entropy profile in the cluster center. In Fig. 9 we show a selection of joint 2d PDFs of several central quantities that are correlated with the magnetic field as a function of electron number density. All these PDFs are computed at redshift zero with all \(\sim 10^{9}\) gas particles in the simulation. The top left panel shows the magnetic field strength as a function of electron number density. The magenta line shows the adiabatic (flux-freezing) compression regime, where \(\mathbf{B}\) scales as n\({}_{e}^{2/3}\). The golden line shows what is typically referred to as the saturated dynamo regime, where \(\mathbf{B}\) scales as n\({}_{e}^{1/2}\). Generally, we find that the system follows the adiabatic compression limit in low-density and weakly magnetized gas. At higher magnetic field strengths (and densities) we find that the system follows the scaling of the saturated dynamo. In the upper left panel of Fig. 9 we show that this saturated dynamo regime is equivalent to gravitational collapse at constant Alfvén velocity, which is a distinct feature for strong magnetic fields over at least four orders of magnitude in density (from \(10^{-6}\) to \(10^{-2}\) cm\({}^{-3}\)). We explicitly note this here because it is often overlooked. Since the Alfvén speed scales as \(B\rho^{-1/2}\) and the saturated dynamo regime gives \(B\propto\rho^{1/2}\), the densest regions of the ICM must collapse at constant Alfvén velocity. To that end, it is worth noting that the nature of the turbulence is subsonic and sub-Alfvénic. In the left panel of the bottom row of Fig. 9 we additionally show the 2d PDF of magnetic energy, alongside the expected scalings for magnetic energy with power-law indices of 0.25 for the saturated dynamo regime and 4/9 for the adiabatic limit, for completeness. Finally, we show the 2d PDF for the quantity B/n\({}^{2/3}\) and compare to the slopes of the saturated dynamo following Kraichnan and Nagarajan (1967); Kazantsev (1968) (golden line), as well as the reconnection diffusion limit derived by Xu and Lazarian (2020) (purple line). We note that in practice it is very hard to distinguish between these two scenarios based on the slope in this diagram alone, and one would have to carry out a detailed study of the reconnection rates, which is beyond the scope of this work.
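To illustrate how such slopes can be quantified, a minimal least-squares power-law fit (again an assumed post-processing helper): the fitted exponent of \(B\) versus \(n_{\rm e}\) can be compared with 2/3 (flux freezing) and 1/2 (saturated dynamo) in a chosen density range.

```julia
# Least-squares slope of log10(B) against log10(n_e), i.e. the exponent s in B ∝ n_e^s.
function powerlaw_slope(n_e, B)
    x, y = log10.(n_e), log10.(B)
    xm, ym = sum(x) / length(x), sum(y) / length(y)
    return sum((x .- xm) .* (y .- ym)) / sum(abs2, x .- xm)
end

# e.g. restrict the fit to the strongly magnetized, high-density part of the ICM:
# sel = n_e .> 1e-4
# powerlaw_slope(n_e[sel], B[sel])
```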
### Magnetic field structure

In Fig. 10 we show the total rate of change of the magnetic field in the whole simulation domain (top left), as well as the rate of change split into shearing/turbulent motions (top center) and compressive modes (top right). In the bottom row of Fig. 10 we zoom in on a cold front that forms at low redshift in the very center of the cluster, when a sub-structure penetrating the center compresses gas in a bow shock that moves away from the cluster center. The compressive part on the right is easier to interpret, as it is simply the velocity divergence. The center panels that mark shear are harder to interpret, as we have to carry out a contraction (Frobenius norm) over the two involved second-order tensors. Hence, \(\mathbf{\hat{b}}\mathbf{\hat{b}}:\nabla\mathbf{u}\) is defined as:

\[\mathbf{\hat{b}}\mathbf{\hat{b}}:\nabla\mathbf{u}=\sum_{i=1}^{3}\sum_{j=1}^{3}b_{i}b_{j}\partial_{i}u_{j}, \tag{2}\]

where \(b_{i}\) and \(b_{j}\) are the components of \(\mathbf{\hat{b}}\), the normalized (unit) vector of the three-dimensional magnetic field. Next, we need to understand where these terms are coming from. For this, let us consider the induction equation in the form:

\[\frac{\partial\mathbf{B}}{\partial t}=\nabla\times(\mathbf{u}\times\mathbf{B})+\eta\nabla^{2}\mathbf{B}, \tag{3}\]

and drop the second (resistive) term. If we dot this equation with \(\mathbf{\hat{b}}/B\) and evaluate the first term on the right-hand side using common vector identities, as well as the definition of the Lagrangian derivative

\[d/dt=\partial/\partial t+\mathbf{u}\cdot\nabla, \tag{4}\]

we obtain the following form of the induction equation:

\[\frac{1}{B}\frac{dB}{dt}=\mathbf{\hat{b}}\mathbf{\hat{b}}:\nabla\mathbf{u}-\nabla\cdot\mathbf{u}. \tag{5}\]

The first term essentially measures the shear (projected onto the magnetic field direction), and the second term is the velocity divergence that indicates adiabatic compression or decompression, depending on the sign. In summary, this allows us to study regions in the ICM where magnetic field growth (or suppression, for that matter) is driven by shear/turbulence (first term in Eq. 5) or by the compressibility of the gas (second term in Eq. 5). Hereby, it is important to note that the second term in Eq. 5 is typically dropped, as many ICM studies are carried out under the assumption of incompressibility (e.g. Squire et al., 2023), which yields \(\nabla\cdot\mathbf{u}\approx 0\). Our simulation results from Fig. 10 indicate that, in a volume-filling interpretation, this is actually a very good assumption. The only regimes in which this is violated are the shock fronts in the ICM due to merger activity, which are traced excellently by \(\nabla\cdot\mathbf{u}\).

Figure 6: Time evolution of the magnetic energy (blue) and turbulent kinetic energy (red). The magnetic energy increases exponentially from the start of the simulation at redshift 300 to redshift 4 (scale factor 0.2), after which it increases linearly until redshift 2 and then decreases towards redshift zero. The peak of magnetic energy at redshift 2 comes from intensive gravitational contraction and merger activity at these times, where the energy fraction between magnetic and turbulent energy reaches a maximum of around 40 per cent. Towards lower redshift, the system settles towards a fraction of around 10 per cent.

Figure 7: Radial profiles of the different pressure components that support the ICM. Red is the thermal pressure, green is the small-scale turbulent kinetic pressure, and blue is the magnetic pressure. The latter is in rough equipartition with the turbulent kinetic pressure, which is in good agreement with the general predictions of dynamo theory.
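To make the decomposition in Eq. (5) concrete, the following minimal sketch evaluates both source terms for a single resolution element, assuming the local velocity-gradient tensor `gradu[i, j]` \(=\partial_{i}u_{j}\) has already been estimated (it is not part of the simulation code itself).

```julia
# Shear term b̂b̂ : ∇u (Eq. 2) and compressive term ∇·u entering Eq. (5).
function induction_source_terms(B, gradu)
    bhat = B / sqrt(sum(abs2, B))                   # unit field direction b̂ = B / |B|
    shear = 0.0
    for i in 1:3, j in 1:3
        shear += bhat[i] * bhat[j] * gradu[i, j]
    end
    divu = gradu[1, 1] + gradu[2, 2] + gradu[3, 3]
    return shear, divu                              # (1/B) dB/dt = shear - divu
end
```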
We find that the magnetic field is strongly increasing at the shock fronts due to compression. However, the bulk of the amplification in the vast majority of the volume is driven by shear, as indicated by the top center panel of Fig. 10. We note that the total rate of change in the volume is, generally speaking, low at redshift zero, and only the shocks are able to produce a significant positive rate of change of the magnetic field strength. However, these features are obviously highly transient, and their magnetic energy will be dissipated after the shocks dissolve. In Fig. 11 we show the radial and angular components of the magnetic field out to 2 R\({}_{\rm vir}\) (left) and in the innermost 500 kpc of the cluster (right). Generally, we find that the radial and angular components show field reversals on scales of \(5-10\) kpc in the central region and approach zero with increasing distance from the center. This simply means that in the cluster outskirts, the positive and negative orientations of each component average out because they are isotropized by large-scale bulk inflow and/or outflow towards (away from) the cluster center. However, the substructure in-falling onto the central regions of the cluster drives strong merger shocks towards the outskirts, which breaks this symmetry and injects turbulence that ultimately drives a small-scale turbulent dynamo that converts radial magnetic field into angular field. This is apparent from the fact that the radial and angular components show some evidence for alternating field reversals. That the field is organized on the larger scales by the structure formation process becomes apparent when we consider the 1D differential PDF of the different components of the magnetic field, which is generally symmetric. However, we do find a slight excess radial field at around \(10^{-6}\) compared to the angular components of the magnetic field.

Figure 8: Mass-weighted (left) and volume-weighted (right) phase-diagrams of density and temperature. We find that most of the mass in the ICM is located in a density regime between \(10^{-6}\) and \(10^{-2}\) cm\({}^{-3}\) and at temperatures between \(10^{7}\) and \(5\times 10^{8}\) K. We note that the low-density part of the ICM in the outskirts of the cluster is adiabatically cooled down to very low temperatures, but its mass relative to the high-density ICM is negligible. Most of the volume of the ICM is occupied by gas with densities below \(10^{-4}\) cm\({}^{-3}\).

Figure 9: Joint 2d PDFs for density-magnetic field (top left), density-magnetic energy (top right), density-magnetic field normalized to the flux-freezing limit, and density-Alfvén velocity. We find that the magnetic field and magnetic energy scale with the relations for flux-freezing collapse in the weak-field limit and a saturated dynamo in the strong-field limit, where gravitational collapse proceeds at constant Alfvén velocity. The PDF normalized to the flux-freezing limit reveals that in practice it is hard to distinguish between the saturated dynamo (-1/6 scaling) and the reconnection diffusion limit.

In Fig. 13 we show the absolute value of \(\mathbf{J}=\nabla\times\mathbf{B}\) in the left panel and its z component \(J_{z}=\partial_{\mathrm{x}}B_{\mathrm{y}}-\partial_{\mathrm{y}}B_{\mathrm{x}}\) in the right panel to further quantify the turbulent small-scale structure of the magnetic field. Specifically, \(\mathrm{J}_{z}\) indicates some evidence of magnetic reconnection in the ICM.
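A minimal sketch of the current estimate behind Fig. 13 (assuming the SPH data have been binned to a uniform two-dimensional grid of spacing `dx`, with the first array index along x and the second along y, as for the other slices shown in this paper).

```julia
# Central-difference estimate of J_z = ∂_x B_y - ∂_y B_x on a uniform grid.
function current_z(Bx, By, dx)
    Jz = zeros(size(Bx))
    nx, ny = size(Bx)
    for j in 2:ny-1, i in 2:nx-1
        dBy_dx = (By[i+1, j] - By[i-1, j]) / (2dx)
        dBx_dy = (Bx[i, j+1] - Bx[i, j-1]) / (2dx)
        Jz[i, j] = dBy_dx - dBx_dy
    end
    return Jz
end
```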
To which degree this is resolved in this simulation will be investigated in greater detail in future work. However, the small-scale structure we find resembles structures known as "plasmoids", although we note that they are likely unresolved here.

## 4 Discussion

In this section, we will discuss our results and compare them to other relevant work centered around numerical simulations of magnetic fields in galaxy clusters. We will split this section into three parts, discussing the general morphological features of the simulated ICM, the amplification of the magnetic field, and its saturation at a fraction of the turbulent kinetic energy. Finally, we will discuss the structure and the development of the small- and large-scale magnetic field.

### Consequences of resolving the mean free path of the electrons

One important feature of the presented simulation is the fact that it operates in a resolution regime where the resolution for most of the Lagrangian mass is of the order of, or smaller than, the electron mean free path. We show this very important characteristic in Fig. 2. Hereby, it is important to note that this will obviously not be the case everywhere in the simulation, and we restrict this statement to the typical ICM densities and temperatures, between \(10^{-7}\) cm\({}^{-3}\) and \(10^{-2}\) cm\({}^{-3}\), as well as \(10^{6}\) K and \(10^{8}\) K. Most of the mass of the simulation is located in this regime, and thus it is appropriate to center the discussion around it. The advantage of a Lagrangian code such as ours is that we have very good resolution in dense regions, where we find that the mean free path is generally much smaller than the resolved resolution scale. The mean free path will be large if either the temperature is high or the density is low, and thus we find that the mean free path is large in the outskirts of the cluster, where, despite the resolution being of order 100 to 500 kpc, the actual electron mean free path is still at least 2 orders of magnitude larger. Hence, the cluster outskirts are clearly a region where the pure MHD treatment might not be a good approximation anymore, and even "kinetic-aware" theories such as Braginskii-MHD (Braginskii, 1965) might not be the appropriate limit to treat that regime; a fully kinetic limit would be favorable. As of now, it is questionable whether this can be achieved. We note that there is a tail with a low electron mean free path in the outskirts, which sits at the edge of the zoom region and is cooled adiabatically as it expands in the Hubble flow (top right of the left panel of Fig. 2). The interesting region for us in this work is located at a mean free path between 1 and a few 100 kpc (right panel of Fig. 2), where the resolution is on average at least half a dex smaller than the electron mean free path. This means that our simulation is pushing the edge of MHD in that regime, and cluster simulations at our target resolution will be the ideal playground for the effects of "kinetic-aware" theories. If we were to push the resolution even further, we believe it might be necessary to adopt an MHD-kinetic hybrid approach. The question, however, is whether the effects will be strong enough to make a difference, as it has recently been pointed out by Squire et al. (2023) that MHD might be a good approximation for the ICM after all.
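A minimal sketch of the bookkeeping behind this discussion, reusing the `lambda_mfp` helper sketched in Section 3.1 (our own illustration; the regime boundaries follow the pink and turquoise lines of Fig. 2).

```julia
# Mass fractions of particles whose smoothing length h resolves the electron mean free path.
function mfp_mass_fractions(h, n_e, T, m)
    kinetic, marginal, mhd = 0.0, 0.0, 0.0
    for k in eachindex(h)
        λ = lambda_mfp(n_e[k], T[k])
        if h[k] < 0.1λ
            kinetic += m[k]      # h well below λ_MFP: a (fully) kinetic treatment becomes relevant
        elseif h[k] < λ
            marginal += m[k]     # 0.1 λ_MFP < h < λ_MFP: "kinetic-aware" MHD regime
        else
            mhd += m[k]          # h > λ_MFP: classic MHD is a safe approximation
        end
    end
    return (kinetic, marginal, mhd) ./ sum(m)
end
```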
Regardless, it will be interesting to investigate the rates of strain and pressure anisotropies in future simulations of galaxy cluster formation to "bridge the gap" to the detailed work on plasma instabilities that can occur in the ICM.

### Morphology and structure of the ICM

We extensively probed the structure and morphology of our simulated ICM with a specific focus on the simulation output at redshift zero and the present turbulent field structure. We present central ICM properties of the cluster in Fig. 3 and Fig. 4, where we show the density and temperature fields of the simulated system in the top panels of Fig. 3. At redshift zero the cluster represents a relaxed gravitational system with a massive central halo with a mass of \(\sim 2\times 10^{15}\) M\({}_{\odot}\), surrounded by a number of massive group-sized objects with masses of up to \(\sim 5\times 10^{13}\) M\({}_{\odot}\) and one smaller galaxy cluster with a mass of \(\sim 1.1\times 10^{14}\) M\({}_{\odot}\). The morphological structure within the virial radius of the most massive halo (\(\sim 3.1\) Mpc) is governed by a number of internal shocks arising from the substructure falling into the most massive halo, which appears to be the major source of turbulence in the ICM in the absence of AGN feedback. This results in a turbulent structure within the virial radius and a smoother distribution beyond, dominated by the individual peaks of the in-falling sub-structure. The turbulent small-scale structure is very apparent in Fig. 4, where we show a very thin slice of 100 kpc height that reveals strongly magnetized filamentary structures in the innermost Mpc of our simulated cluster. Despite being small compared to the virial radius of the system, these structures still appear to be correlated over a length scale of at least a few 100 kpc, and they reach field strengths of up to 20 \(\mu\)G. This is in agreement with the radial profiles of the magnetic field strength of our simulated system in the upper left panel of Fig. 5, which indicate that the magnetic field strength reaches 20 \(\mu\)G in the center of the cluster and drops to around 4 \(\mu\)G at a scale of around 0.2 R\({}_{\rm vir}\), corresponding to a physical scale of around 600 kpc. At larger scales, the field drops rapidly to a field strength of around 0.03 \(\mu\)G at the virial radius, which corresponds to a physical scale of 3100 kpc. With respect to our own earlier work this is encouraging, since the radial profiles seem to be converged in comparison to our lower-resolution runs that we put forward in Steinwandel et al. (2021). However, the discrepancy with ICM magnetic field models put forward with grid codes such as Enzo remains, and our simulations over-predict the observed magnetic field in galaxy clusters by roughly a factor of three compared to the observations of Coma (e.g. Bonafede et al., 2011, 2013). Moreover, we can also explain this based on the simulation data at hand. For instance, when we consider the radial profiles of temperature and entropy in the bottom panels of Fig. 5, we find a steep drop in the temperature and entropy in the center of the cluster. This becomes even more apparent if we consider the 2d PDF of density and temperature in Fig. 8.
The drop in temperature and entropy leads to an increase in density towards the center as well, a well-known problem for non-radiative simulations of galaxy clusters with SPH methods. The effect is actually very clearly illustrated by the resolution study in our previous paper, as presented in Fig. 4 of Steinwandel et al. (2021). Hence, it is easy to show that the increase of the field in the central part of our simulated cluster comes from the increase in density due to a decreasing entropy and temperature profile. The cluster magnetic field then just follows the increase of the density field in the adiabatic compression limit.

Figure 10: Total rate of change of the magnetic field (left), shearing/turbulent rate of change of the magnetic field (center), and compressive rate of change of the magnetic field (right). The top row shows the whole simulation domain, while the bottom row focuses on the field structure around a cold front that forms right at redshift zero through a sub-structure that is penetrating the cluster center.

Figure 11: Left: Radial profiles of the radial, toroidal, and azimuthal components of the field out to the virial radius at 2300 kpc. Right: Zoom-in of the radial profiles on the innermost 500 kpc of the simulated cluster. At large radii, the components average out, indicating a high symmetry in these parts of the cluster. In the central parts, the radial and toroidal components dominate the azimuthal component, but the trend is weak, indicating that the field structure is largely set by large-scale random motion induced by the structure formation process.

Figure 12: Left: One-dimensional PDFs of the different field components \(\rm B_{r}\), \(\rm B_{\theta}\) and \(\rm B_{\phi}\). These indicate that the field is symmetric in all its components, which means that the field is ordered by large-scale random motions, and the final field structure is decoupled from the field structure that is enforced in the early formation process by the dynamo process. Right: Same PDFs for \(\rm B_{x}\), \(\rm B_{y}\), \(\rm B_{z}\), which indicate that the field in the "void" regions retains some memory of its initial configuration, as there is a bias towards a positive \(\rm B_{x}\) component, which was the original orientation of the seed field. However, this is only true for regions that are dominated by "comoving" expansion.

Finally, we want to highlight that, given these limitations of the simulation, we still find good agreement of our simulated ICM with a high-\(\beta\) plasma, with a central value of around 50 that reaches a few hundred at the scale of the virial radius and increases to around 1000 beyond the virial radius. These values are not at all unrealistic when compared to previous assumptions of the ICM as a high-\(\beta\) plasma.

### Dynamo amplification of the magnetic field

We will briefly discuss the dynamo amplification of the field in the ICM but will postpone a more detailed spectral analysis via power spectra to future studies, when we have a larger set of simulations available at our target resolution. Similar to our lower-resolution runs that we presented in Steinwandel et al. (2021), we find that the magnetic field increases rapidly from the starting redshift 310 to around redshift 4, where we find a peak of the magnetic energy, as shown in Fig. 6. It is interesting to point out that the total magnetic energy is always lower than the turbulent kinetic energy, and at redshift zero we find a saturation value of around 10 per cent.
This is in good agreement with earlier work on dynamo theory. For instance, Schober et al. (2015) investigated the saturation level of the turbulent dynamo for different magnetic Prandtl number (Pm) regimes, \(Pm\ll 1\) and \(Pm\gg 1\), for different assumptions about the spectrum of the underlying MHD turbulence, and found values between 0.1 and 3 per cent for \(Pm\ll 1\) and between 1 and 30 per cent for \(Pm\gg 1\). Our simulation is in the latter regime and operates at magnetic Reynolds numbers of a few 100 to a few 1000, putting us well above the threshold for resolved dynamo action under the assumption of incompressible Kolmogorov turbulence, which is a fair assumption for the ICM. Thus we could expect a saturation value for the dynamo of up to a few 10 per cent. Hence the 10 per cent at redshift zero is in rather good agreement with the predictions of Schober et al. (2015), given the uncertainties in the exact nature of the turbulence. We note that the saturation value seems to be larger at higher redshift, reaching a peak value of around 42 per cent at redshift 1.5. It is additionally important to point out that the magnetic field is in rough equipartition with the turbulent kinetic energy within the virial radius of the system, confirming one fundamental prediction of dynamo theory, as shown by Fig. 7. A more detailed study of the magnetic field and related properties reveals a more complete picture of the magnetization of the ICM. In Fig. 9 we show a number of these properties in 2D mass-weighted PDFs as a function of electron number density. First, the top left panel shows the magnetic field strength itself, whose structure is important to discuss. One can split the ICM into two regimes: a high-density part that is also strongly magnetized, and a low-density part that is weakly magnetized. The dense gas follows a scaling of \(B\propto\rho^{1/2}\), which is for instance predicted by reconnection diffusion in the saturated regime of the turbulent dynamo. Hence the dense regions of the ICM undergo collapse in a saturated dynamo regime. More importantly, it is interesting to point out that this directly implies collapse at constant Alfvén velocity, which we show in the top right panel of Fig. 9. This is not really surprising given the definition of the Alfvén velocity as

\[v_{A}=\frac{|\mathbf{B}|}{\sqrt{4\pi\rho}}. \tag{6}\]

Hence, if the magnetic field is proportional to the square root of the density, the Alfvén velocity in collapsing regions must be constant. Essentially, this is directly enforced by the equipartition condition, where we assume:

\[B_{\mathrm{rms}}^{2}\propto\rho v_{\mathrm{rms}}^{2}. \tag{7}\]

This enforces \(\mathrm{B_{rms}}\propto\rho^{1/2}\), and hence collapse at constant Alfvén velocity, if the turbulent velocity remains constant as the collapse proceeds. The latter is actually nontrivial, since collapse to very dense regions would transition from the sub- to the supersonic regime under strong cooling. This is not an issue under fully ionized ICM conditions, and the turbulent velocity is roughly constant up to the highest ICM densities that we resolve. However, it has some important consequences regarding the nature of the underlying turbulence, which is subsonic with a sonic Mach number of \(\mathcal{M}_{\mathrm{s}}\approx 0.05-0.1\) and an Alfvén Mach number of \(\mathcal{M}_{\mathrm{A}}\approx 0.5-1.5\) in the high-density part of the ICM, that is, in the weakly sub- to trans-Alfvénic regime.
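A quick numerical illustration of Eqs. (6) and (7) (with purely illustrative numbers, not simulation output): if \(B\propto\rho^{1/2}\), the Alfvén speed is the same at every density.

```julia
# Alfvén speed in cgs units; B in G, rho in g cm^-3.
alfven_speed(B, rho) = B / sqrt(4π * rho)

rho0, B0 = 1e-28, 1e-6                  # reference density and field strength
for f in (1.0, 10.0, 100.0, 1000.0)
    # saturated-dynamo scaling B ∝ ρ^{1/2} keeps v_A constant (here ≈ 280 km/s)
    println(alfven_speed(B0 * sqrt(f), f * rho0) / 1e5, " km/s")
end
```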
Additionally, turbulence in our case is strongly intermittent due to its origin in merger shocks, which makes it generally harder to identify the nature of the turbulence via \(\mathcal{M}_{\mathrm{s}}\) and \(\mathcal{M}_{\mathrm{A}}\). Hence, we only quote these numbers once the system is relaxed and the dynamo saturated. Finally, it is important to point out that the Alfvén velocity, in general, is very high, with values of around 200-300 km s\({}^{-1}\). The origin of this lies in the generally low densities that govern the ICM, while at the same time the mean magnetic field remains at the \(\mu\)G level, similar to the fields observed in local spiral galaxies (see Beck, 2015, for a review on this topic). It is worth mentioning that the magnetic energy is amplified by around 20 orders of magnitude, although we need to say that the lowest magnetic fields that we achieve in the voids actually represent the adiabatically decompressed field of the starting field of around \(10^{-14}\) comoving G at redshift 310. That means that the bulk of the volume is filled with a field stronger than \(10^{-14}\) G (physical) at redshift zero. Finally, we briefly discuss the panel on the bottom right of Fig. 9, where we show the magnetic field strength normalized to the flux-freezing limit. We can clearly see that the low-density regions are actually slightly steeper than the flux-freezing limit, while the high-density regions follow very well the prediction from either Kazantsev theory (golden line) for the saturation regime of the turbulent dynamo or the prediction of Xu & Lazarian (2020) for the saturation regime of a turbulent dynamo under gravitational collapse (purple line). Despite the fact that we have very good resolution and gravitational motions are very well resolved, there is no clear trend as to which of these lines is more accurate. This just demonstrates that the situation for the turbulent dynamo in a realistic system is more complex due to the scatter of the simulation itself. Hence, neither of the scenarios can be clearly ruled out based on our simulation. We do note that if we average the distribution under gravitational contraction at higher redshift, we find a slope that is more consistent with the prediction of Xu & Lazarian (2020), yielding a fitted slope of around \(-0.1\). This is consistent with our earlier predictions in Steinwandel et al. (2021) for the lower-resolution versions of the adiabatic galaxy cluster simulation presented in this work.

### Drivers of the rate of change of the magnetic field

Instead of repeating the analysis of our older, lower-resolution runs, we want to stress that in this work we aimed for a more rigorous investigation of magnetic field amplification, which we plan to extend in future studies. In Fig. 10 we aim to disentangle the magnetic field amplification (suppression) due to shear and compression. Hereby, we focus on the whole simulation domain, which allows us to show that adiabatic compression at shock fronts in the low-redshift Universe is largely responsible for the high positive rate of change of the magnetic field, while the shear appears to have a negative impact in these compressed regions. While the former is quite intuitive, the latter is harder to disentangle, since the term \(\hat{\mathbf{b}}\hat{\mathbf{b}}:\nabla\mathbf{u}\) has nine components.
However, since the feature in the cold front itself is rather sharp, it indicates that the magnetic field direction is perpendicular to the bulk flow, which can explain the change in sign of the shearing component in the compressed region. This is actually a similar situation to the one reported by Komarov et al. (2016) in their Fig. 5. It is interesting to point out that, while we find that the compressive mode of amplification is dominant at the shocks, these represent only a very small fraction of the ICM volume in which the gas behaves compressibly. In the vast majority of the volume, the adiabatic term plays a minor role and shear dominates the amplification in a volume-weighted picture.

### Large scale magnetic fields in the ICM

Finally, and importantly, we want to discuss the implications of the fluctuations of the radial and angular field components, as shown in Fig. 11 for B\({}_{r}\) (black), B\({}_{\theta}\) (red), and B\({}_{\phi}\) (green). We plot these twice as a function of radius, once normalized to the virial radius and once on a physical scale, shown in the left- and right-hand panels of Fig. 11, respectively. The lines in the plot represent the cluster's mean field. This is important to highlight, since there is some confusion about the term mean field in the literature: some papers refer to the quantity we dubbed B\({}_{\rm rms}\) as the cluster's mean field, whereas the mean field is the actual mean of the individual components of the vector \(\mathbf{B}\). We find rather low values of around 0.1 \(\mu\)G in the ICM, where the radial and angular components fluctuate on a physical scale of around 10 kpc. This gives us a lower limit for the length scale on which magnetic fields are correlated. Additionally, we find that the mean field is essentially zero above a scale of 500 kpc, which gives us an upper limit for the magnetic field's correlation length. Comparing this length scale to the size of the magnetized structures in Fig. 4, this seems reasonable. We note that the mean radial and angular components of the field show trends to be anti-correlated with one another. This is not very surprising, since a dynamo will convert radial field into angular field and vice versa by definition. It is interesting to point out that the field is quite symmetric in all these components, which we show in Fig. 12, where we plot the one-dimensional PDFs of the radial and angular components of the field. There seems to be an excess of the radial field around 0.2 \(\mu\)G, but the difference is marginal and the lines are almost identical. What this likely means is that the structure formation process is dominant for ordering the field on Mpc scales, while the turbulent dynamo amplifies it with a maximum correlation length of a few 100 kpc.
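A minimal sketch of this distinction (assumed post-processing, e.g. for particles in a given radial shell): the mean field averages each component of \(\mathbf{B}\) directly, whereas \(B_{\rm rms}\) averages the field strength and is therefore always positive.

```julia
# Mean of a single field component (⟨B_r⟩, ⟨B_θ⟩ or ⟨B_φ⟩); ≈ 0 if reversals are symmetric.
mean_component(Bc) = sum(Bc) / length(Bc)

# Root-mean-square field strength from the Cartesian components.
B_rms(Bx, By, Bz) = sqrt(sum(Bx .^ 2 .+ By .^ 2 .+ Bz .^ 2) / length(Bx))
```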
Finally, we want to briefly discuss our findings for \(\mathbf{J}=\nabla\times\mathbf{B}\) and their consequences for magnetic reconnection in the ICM. No dynamo will work without magnetic reconnection, as the complex interplay between amplification of the field due to stretching, twisting, and folding is limited by reconnection on smaller scales, which dissipates magnetic field energy and converts it into heat. This actually sets the dynamo's growth rate and determines its saturation level. However, in cosmological simulations of realistic systems, dynamo properties are studied by various groups as referenced above, while reconnection properties are typically ignored. There are two reasons for this. First, reconnection in most MHD simulations (at least on our scales) typically relies on numerical diffusivity for the reconnection part, which is not ideal. However, our simulation assumes a constant magnetic resistivity, which allows us to make at least some very basic statements about magnetic reconnection. For instance, we can look at the absolute value of \(\mathbf{J}\) and its single components. We show an example of this in Fig. 13. While both of these reveal interesting sub-structure, it is especially apparent from \(J_{\mathrm{z}}\) that reconnection seems to appear on all scales in the ICM, as \(J_{\mathrm{z}}\) rapidly switches sign. Obviously, a more targeted study is needed to disentangle to which degree this is resolved in our simulation, despite the fact that these structures seem to closely resemble the typical shapes of "plasmoids".

Figure 13: Left: Absolute value of \(\mathbf{J}=\nabla\times\mathbf{B}\), showing detailed small-scale structure. Right: \(J_{z}=\partial_{\mathrm{x}}B_{\mathrm{y}}-\partial_{\mathrm{y}}B_{\mathrm{x}}\). We find rapid changes in sign that indicate magnetic reconnection.

## 5 Conclusion

We presented the first results of a high-resolution simulation with magnetic fields, in which we put a specific focus on studying the ICM's magnetic field structure in greater detail and compare to previous simulations, including our own, which had at least a factor of 10 coarser mass resolution and a factor of around 4 coarser spatial resolution. Our most important findings are:

1. We present a galaxy cluster simulation that resolves the Coulomb mean free path for a large fraction (\(>80\) per cent) of the Lagrangian mass of the system. This motivates future studies at our resolution with either a kinetic-aware theory (e.g. Braginskii-MHD) or a more sophisticated kinetic treatment in fully cosmological simulations of the magnetized ICM.

2. The total energy budget of the magnetic field saturates at a level of \(\sim 10\) per cent of the turbulent kinetic energy within the cluster. This is comparable with the saturation level in turbulent box simulations in the high magnetic Prandtl number regime. This is encouraging because, on the one hand, it makes the approximation of the ICM as a "turbulent box" a valid one in terms of the energetics. On the other hand, it is encouraging that we achieve the saturation level of typical dynamo simulations in the turbulence-driven regime in a more realistic simulation with a fully cosmological assembly scenario.

3. The mean magnetic field strength peaks at around redshift 1.5 at a value of 10 \(\mu\)G, while the field at redshift zero is lower by a factor of around two and saturates at a value of 5 \(\mu\)G.

4. The radial trend of the magnetic field towards the center predicts magnetic fields that are higher than the ones obtained by RM measurements of nearby clusters by a factor of two to three.

5. We investigated the origin of the large magnetic fields and find that the underlying issue is driven by the steep entropy profile that we find in the cluster center, despite the fact that our simulation is adiabatic.

6. In most regions within the cluster, turbulence is subsonic and trans-Alfvénic. However, this changes in the outskirts, where the field is weak and turbulence is mostly super-Alfvénic and subsonic, apart from the regions in which the strong internal shocks drive the ICM into a mildly supersonic regime. The highest Mach numbers we find are related to the accretion shock that is located at around 2.3 R\({}_{\rm vir}\).
7. The small-scale magnetic field is correlated on length scales from 10 kpc to a few 100 kpc.

8. The mean radial and angular components as a function of radius are anti-correlated, which demonstrates the dynamo origin of the magnetic fields in our simulation.

9. We investigated the spatial structure of the current \(\mathbf{J}\) and find evidence for magnetic reconnection on all spatial scales within the ICM, motivating further studies of magnetic reconnection in the ICM.

Finally, we note that this is the first fully cosmological MHD simulation of a massive galaxy cluster with a mass of around \(2\times 10^{15}\) M\({}_{\odot}\) with \(\sim 10^{9}\) resolution elements within the virial radius at redshift zero. This results in a mass resolution of \(4\times 10^{5}\) M\({}_{\odot}\), making this (to our knowledge) the highest-resolution galaxy cluster simulation of a system beyond \(10^{15}\) M\({}_{\odot}\) to date.

## Data Availability

The data will be made available upon reasonable request to the corresponding author.

UPS is grateful to Jonathan Squire, Lorenzo Sironi, and Aaron Tran for discussion of Fig. 10. UPS thanks Amitava Bhattacharjee, Greg L. Bryan, Blakesley B. Burkhart, Drummond B. Fielding, Rudiger Pakmor, Rachel S. Somerville, Romain Teyssier, as well as the members of CCA's Galaxy Formation and Plasma Physics groups, for insightful and useful discussions. We thank Lucy Redding-Ikkanda for help with generating Fig. 1. UPS is supported by the Simons Foundation through a Flatiron Research Fellowship (FRF) at the Center for Computational Astrophysics. The Flatiron Institute is supported by the Simons Foundation. UPS acknowledges computing time provided by the resources at the Flatiron Institute on the cluster rusty. UPS acknowledges the computing time provided by the Leibniz Rechenzentrum (LRZ) of the Bayerische Akademie der Wissenschaften on the machine SuperMUC-NG (pn72bu). UPS acknowledges the computing time provided by c2pap (pr27mi). KD, LMB and TM acknowledge support by the COMPLEX project from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program grant agreement ERC-2019-AdG 882679. LMB acknowledges support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2094 - 390783311. LMB acknowledges the computing time provided by c2pap under the project pn36ze. We use the cosmological simulation code gadget-3 (Springel, 2005; Dolag & Stasyszyn, 2009; Beck et al., 2016) to run the simulations and use the language julia (Bezanson et al., 2014) to perform the analysis, based on the packages that can be found here: [https://github.com/LudwigBoess](https://github.com/LudwigBoess)
2310.19643
Polemical Case Study of Opinion Dynamics: Patterns of Filter Bubbles in Non-Consensus, Rewire Phenomena
In this paper, we review some of the issues that have been raised by opinion dynamics theory to date. In particular, we conducted a hypothesis-based simulation using a socio-physical approach to the filter bubble phenomenon, which tends to occur under special conditions that act as barriers to consensus building: (1) distance, (2) time, (3) the existence of strong opinion clusters, and (4) places where opinions are not influenced. This paper discusses hypotheses and simulations of filter bubbles, in which opinions diverge or converge without reaching consensus under conditions where non-consensus is likely to be emphasized. This article serves as an Appendix to "Case Study On Opinion Dynamics In Dyadic Risk On Possibilities Of Trust-Distrust Model."
Yasuko Kawahata
2023-10-30T15:32:38Z
http://arxiv.org/abs/2310.19643v3
[ ###### Abstract In this paper, we will review some of the issues that have been raised by opinion dynamics theory to date. In particular, we conducted a hypothesis-based simulation using a socio-physical approach regarding the filter bubble phenomenon that tends to occur under special conditions such as (1) Distance, (2) Time, and (3) Existence of strong opinion clusters in the barriers to consensus building (4) Place where opinions are not influenced In particular, this paper discusses the hypothesis and simulations of filter bubbles, in which opinions diverge or converge without reaching consensus under conditions in which non-consensus is likely to be emphasized. This article serves as an Appendix in "Case Study On Opinion Dynamics In Dyadic Risk On Possibilities Of Trust-Distrust Model." Opinion Dynamics, Filter bubbles]Polemical Case Study of Opinion Dynamics:Patterns of Filter Bubbles in Non-Consensus, Rewire Phenomena Yasuko Kawahata] Yasuko Kawahata \({}^{\ddagger}\) Faculty of Sociology, Department of Media Sociology, Rikkyo University, 3-34-1 Nishi-Ikebukuro,Toshima-ku, Tokyo, 171-8501, JAPAN. [email protected],[email protected] ## 1 Introduction The study of opinion dynamics has a long history and has been the subject of much research, mainly in the field of sociology. Early studies assumed linearity, but models incorporating nonlinearity were also studied. Consensus formation based on local majority rule has been studied as an application of renormalization group theory in physics. Also, a theory that compares the agreement and disagreement of opinions with the direction of the magnetic moment of magnetism has been studied in the field of social physics by applying the theory of magnetic physics by Garam et al. (1982). Many mathematical theories of opinion dynamics treat opinions as discrete values of +1 and 0 or +1 and -1. In contrast, some theories consider opinions as continuous values, which can be varied through the exchange of opinions with others. The bounded-confidence model is a typical model of a theory that deals with continuous transitions of opinions. The proliferation of public networks has enabled instantaneous two-way communication that transcends temporal and spatial constraints. The vast amount of textual data on the Web facilitates quantitative analytical studies of public opinion, which could not be visualized before. In this paper, we review the issues raised by previous opinion dynamics theories. In particular, we conduct simulations based on hypotheses from a socio-physical approach on the filter bubble phenomenon, which tends to occur under special conditions such as below. 1. Distance 2. Time 3. The existence of Strong Opinion 4. Place where Opinions are not Influenced These clusters that act as barriers to consensus formation. Therefore, we are working on a theory to explain consensus formation and opinion splitting in opinion exchanges on social media such as Twitter (X). In this paper, we propose a model based on the Like Bounded Confidence Model, which represents opinions as continuous quantity values. 
However, the Bounded Confidence _m_Model assumed that people with differing opinions work by ignoring, rather than ignoring, their opinions, but in this paper, the authors' approach, especially when opinions are strong or when filter bubbles or echo chambers occur, Hypotheses and considerations are presented, and the model is designed to incorporate and represent the effects of external external pressures and phenomena that depend on the surrounding circumstances. ### Research Focus In particular, this paper addresses the filter bubble phenomenon. In particular, the above phenomenon occurs in situations where non-consensus is likely to be emphasized. In this case, the authors hypothesize that opinions tend to diverge or converge without reaching consensus. The filter bubble in this case is discussed in terms of hypotheses and simulations. And this article serves as an Appendix in "Case Study On Opinion Dynamics In Dyadic Risk On Possibilities Of Trust-Distrust Model." ## 2 Preview Works about Filter bubble Filter bubbles are defined as the individual outcomes of different processes of information retrieval, perception, and selection, and by remembering the sum total, the individual user receives only customized choices from the world of available information that fit his or her existing attitudes.At the social level, individuals tend to share a common social media bubble with like-minded friends (Boutyline and Willer, 2017 ; McPherson, Smith-Lovin, and Cook, 2001). Similarly, the definition of an echo chamber has been said that over time, communities in which Internet content confirming a particular ideology is echoed by all sides are particularly prone to a process of group radicalization and polarization (Vinokur and Burnstein, 1978, Garrett, 2009 ; Sunstein, 2001, 2009). By this, diversity can be understood as source or content diversity. Source diversity refers to both the inclusion of a large number of information sources in a news outlet and the inclusion of a variety of referents in a news article. A wide range of different areas of interest in a particular topic, as well as in the selection of viewpoints offered to news consumers throughout, yields a diversity of content (Voakes et al. 1996) is also assumed. Scholars have expressed concern about whether algorithms value diversity as an important feature of news quality (Pasquale 2015). Theoretical concepts such as Pariser's (2011) filter bubble hypothesis suggest that algorithms aim to maximize economic benefits by increasing media consumption rather than guaranteeing diversity. According to this rationale, the algorithm excludes information that appears to be of little interest to individual users while presenting more content that they are more likely to consume. For example, a user who has experienced heavy sports news consumption will likely receive more sports news at the expense of other topics (e.g., political news). ### Echo chambers One might argue that the creation of "echo chambers" is also possible in the offline world simply by consuming certain television channels or newspapers. However, an increasing number of studies have recently hypothesized that creating "echo chambers" on the Internet is easy. Echo chambers are social phenomena in which the filter bubbles of interacting individuals overlap strongly. 
The danger of a society collapsing into distinct echo chambers could be explained as a lack of consensus throughout society, and a lack of at least some shared beliefs among otherwise disagreeing people, necessary for a democratic decision-making process (Sunstein, 2001, 2009 ). However, increasingly radicalized ideological online groups may, at some point, resort to real violence or terrorism to achieve their goals (e.g., Holtz, Wagner, and Sartawi, 2015 ; Weiman, 2006). For example, the Internet is an environment of choice, offering the possibility to meet many individuals around the world and breaking down regional limitations. In conclusion, it appears that users of social media platforms and social networking sites are at risk of both "filter bubbles" and "echo chambers". Similar theoretical constructs aim to increase the likelihood of like-minded contacts ("echo chambers"; Sunstein 2009) and a limited public sphere ("spheres"; Gitlin 1998). The latter, in particular, refers to the normative fear of unintentionally missing out on a variety of information that prevents individuals from being properly informed and becoming rational democratic citizens. They report that a system of pre-selection/implicit selection of personalized information may actually lead to a reduction in the presentation and consumption of anti-attitudinal information (Beam, 2014). Furthermore, one study found that approximately 12% of Google web search results show differences between users. This can be explained by pre-selected/implicit personalization (Hannak et al., 2013). Both studies support the "filter bubble" hypothesis. Another study showed that individuals actually choose to read news items that seem consistent with their opinions (Garrett, 2009). However, the effect on avoidance of anti-attitude news items was reported to be less pronounced in this study (Garrett, 2009). Furthermore, studies by Iyengar and Hahn (2009) and Peterson etal (2018) indicate that individuals prefer to read news articles, news websites, and content that align with their political orientation. Their findings further underscore the "echo chamber" hypothesis. They diagnose a high level of uncertainty about privacy issues on the Internet. Users are usually unaware of how their data will be used. Even those who express privacy concerns may be providing sensitive information due to a lack of awareness of the issue. Ideally, users who are concerned about privacy would want to control what information is shared and with whom. In practice, however, companies have ways to get people to share, such as creating default settings on their sites or giving the impression that everyone else is also sharing information. In fact, we have become accustomed to the idea that handing over personal data is the price you pay for a free service, that extra convenience [1]. ### Move on to Case study Modeling:Fillter Bubbles In this paper, we hypothesize and simulate the spread of this "non-consensual" information, the filter bubble. In an actual research case study on the filter bubble, an exploratory study of COVID-19 misinformation on Twitter (2021) found that by July 18, 2020, the International Fact-Checking Network (IFCN), which integrates over 92 fact-checking organizations, had identified 7,623 pandemic-related IFCN has uncovered more than 7,623 unique fact-checked articles on the pandemic. But misinformation does more than contribute to the spread. Misinformation intensified fear, caused social discord, and could lead to direct damage. 
(e.g., deliberately engaging in dangerous behavior). ## 3 Approorch to Fillter Bubbles Cases by socio-physics The filter bubble hypothesized in this paper is the "filter bubble phenomenon" when a topic already existing in a certain space, in another discourse, is re-looped within a certain community, or when the case of a topic propagating to a completely unrelated community is repeated. Here, we will hypothesize the process of topic re-propagation and repetition in terms of topic re-wiring to a certain agent. ### Rewiring Process The rewiring process is a stochastic procedure where edges between nodes may be reconfigured. The process is conducted through the following steps: 1. Random selection of nodes based on opinion threshold and rewiring probability. 2. Edges are reconfigured among the selected nodes to alter the information flow in the network. This process can be represented as a transition in edge configuration from \(E\) to \(E^{\prime}\). ### Opinion Formation Opinion formation is modeled as an iterative process where each node updates its opinion based on the average opinions of its neighbors: \[o_{i}(t+1)=\frac{1}{|N_{i}|}\sum_{j\in N_{i}}o_{j}(t), \tag{1}\] where \(N_{i}\) represents the set of neighbors of node \(i\), and \(t\) indicates the iteration step. ### Distribution of distances changes after each rewiring step To express how the distribution of distances changes after each rewiring step, we use the notion of distance as a random variable. Let \(D_{ij}^{(s)}\) be the distance from node \(i\) to node \(j\) at step \(s\). The set of distances between all nodes is considered to follow the following probability distribution: \[P(D_{ij}^{(s)}=d)=\frac{\text{step $s$ distances $d$ Pairs of Node}}{\text{all Nodes}} \tag{2}\] where \(P(D_{ij}^{(s)}=d)\) is the probability that the distance between two nodes is \(d\) at step \(s\). The average value of the distance is expressed in terms of expectation, and the average distance for all node pairs at a particular rewiring step \(s\) can be obtained: \[E[D^{(s)}]=\sum_{i\neq j}D_{ij}^{(s)}\cdot P(D_{ij}^{(s)}=d) \tag{3}\] This expected value \(E[D^{(s)}]\) gives the average distance between nodes at step \(s\). As the dynamics of the network progresses, these values will show changes that reflect patterns of information transfer and connectivity within the network. ### Conditional Node Selection in Network Rewiring During the rewiring process, nodes are selectively subjected to the possibility of rewiring based on a stochastic condition influenced by their opinion values. This selection process is governed by the following probabilistic rule: \[\text{select {node} with probability }P(\text{select }|\,o_{\text{node}})=\begin{cases}1& \text{if }o_{\text{node}}\leq t,\\ 0.5&\text{if }o_{\text{node}}>t.\end{cases}\] Where: \(t\) represents a threshold value for opinions. The probability with which a node is selected is determined based on this threshold in relation to the node's opinion value \(o_{\text{node}}\). If the opinion \(o_{\text{node}}\) of a particular node is less than or equal to \(t\), then that node is selected with a probability of 1. Conversely, if \(o_{\text{node}}\) is greater than \(t\), the node is selected with a probability of 0.5. Here, \(o_{\text{node}}\) represents the opinion value of the node, and \(P(\text{select }|\,o_{\text{node}})\) is the conditional probability of selecting the node for potential rewiring given its opinion value. 
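To make the selection and update rules above concrete, the following is a minimal sketch assuming a directed networkx graph: nodes are drawn with the threshold-biased probability of the selection rule, a subset of them has its edges reconfigured (the exact reconfiguration step is not specified above, so the simple add/remove toggle used here is an assumption), and opinions are then averaged over in-neighbours as in Eq. (1). The threshold `t`, set size `n_select` and `rewire_prob` are free parameters of the sketch.

```python
import random
import networkx as nx

def select_nodes(G, opinions, t=0.5, n_select=4):
    """Threshold-biased selection: opinion <= t is always eligible,
    opinion > t only with probability 0.5 (rule of Sec. 3.4)."""
    nodes, selected = list(G.nodes), set()
    while len(selected) < n_select:
        node = random.choice(nodes)
        if opinions[node] <= t or random.random() < 0.5:
            selected.add(node)
    return list(selected)

def rewire(G, selected, rewire_prob=0.5):
    """Reconfigure edges among the selected nodes (assumed add/remove toggle)."""
    for i in selected:
        for j in selected:
            if i != j and random.random() < rewire_prob:
                if G.has_edge(i, j):
                    G.remove_edge(i, j)
                else:
                    G.add_edge(i, j)

def average_opinions(G, opinions):
    """One synchronous step of Eq. (1): average over in-neighbours (predecessors)."""
    new = dict(opinions)
    for i in G.nodes:
        preds = list(G.predecessors(i))
        if preds:
            new[i] = sum(opinions[j] for j in preds) / len(preds)
    return new

# minimal usage
random.seed(0)
G = nx.gnp_random_graph(10, 0.3, directed=True, seed=0)
opinions = {i: random.random() for i in G.nodes}
for step in range(5):
    rewire(G, select_nodes(G, opinions))
    opinions = average_opinions(G, opinions)
```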
The process aims at randomly choosing nodes, biased towards those with opinions equal to or less than 0.5. Nodes are added to the set of selected nodes until the set reaches a predetermined size, in this case, four nodes. ### Opinion Formation Process The opinion formation process in the network is an iterative procedure where the opinion of each node is influenced by the opinions of its predecessors. This is mathematically represented by the following equation and is executed iteratively for a specified number of steps or until the system reaches a steady state. For each iteration \(t\), the opinion \(o_{i}(t)\) of each node \(i\) is updated based on the opinions of its predecessor nodes. The updated opinion \(o_{i}(t+1)\) is calculated as follows: \[o_{i}(t+1)=\frac{1}{|N_{i}|}\sum_{j\in N_{i}}o_{j}(t), \tag{4}\] where: \(o_{i}(t+1)\) is the opinion of node \(i\) at iteration \(t+1\), \(N_{i}\) is the set of predecessors of node \(i\) in the network, \(|N_{i}|\) is the number of nodes in \(N_{i}\), \(o_{j}(t)\) is the opinion of node \(j\) at iteration \(t\), and node \(j\) is a member of \(N_{i}\). The process is repeated for a number of iterations, or until the opinions in the network stabilize, leading to the final opinion values for each node in the network. ### Model Output The final state of the system is represented by: 1. The adjusted opinion vector \(\mathbf{o^{\prime}}=(o^{\prime}_{1},o^{\prime}_{2},\ldots,o^{\prime}_{N})\) after the opinion formation process. 2. A visualization of the network, indicating the opinion values through node coloring. 3. A histogram representing the distribution of rewirings across nodes. The model involves parameters such as the rewiring probability and opinion dynamics that can be varied to study different scenarios of social influence and information flow within a network. Each node in the network represents an individual agent (an individual with an opinion), and the links between agents indicate the possibility of exchanging opinions. 1. Generating nodes with random opinion values 2. Add random directed link 3. Forming filter bubbles by rewiring links 4. Opinion formation process ### Cases of falling into a filter bubble Formula Opinion updating is done by averaging the opinions of a node's predecessor nodes: \[o^{(t+1)}_{i}=\frac{1}{N_{\text{neighbors}}}\sum_{j\in\text{neighbors}(i)}o^{ (t)}_{j}\] Here, \(o^{(t)}_{i}\) is the opinion of node \(i\) at time \(t\), and neighbors(\(i\)) is the set of precursor nodes of node \(i\). represents. ### Reproducing the filter bubble When the opinion value exceeds a certain threshold, the node stops updating its opinion. Because it is meant to reduce diversity of opinion and create filter bubbles: \[o^{(t+1)}_{i}=\begin{cases}\frac{1}{N_{\text{neighbors}}}\sum_{j\in\text{ neighbors}(i)}o^{(t)}_{j}&\text{if $o^{(t)}_{i}<$ threshold}\\ o^{(t)}_{i}&\text{otherwise}\end{cases}\] ### Parameter The following key parameters are used in the simulation: * Number of nodes in the network (e.g. \(N=10\), \(N=40\)) * Threshold of opinion value to stop updating opinions (e.g. threshold = 0.8) Figure 1: falling into a filter bubble during the consensus building process\(N=10\) Figure 2: falling into a filter bubble during the consensus building process\(N=40\) 1. **Tendency of Agent Rewiring** Some nodes (agents) have a large number of ingress and egress links, while other nodes have very few links. This suggests that some agents play a central role within the network. 
Darkly colored nodes are located in the center, indicating that these nodes have a high opinion value. These central agents are likely to influence other agents during the rewiring process. 2. **Consideration as a Process of Consensus Building** Many nodes are shown in shades of blue, indicating that many agents have similar opinions. This suggests that opinions are increasingly shared and influenced within the network. It is observed that some agents have different opinion values compared to other agents. This shows that there is still a diversity of opinions within the network. 3. **Consideration of the Generation Route of Filter Bubbles** Many of the nodes located at the center of the network have the same or similar colors, suggesting that there is active exchange of information and opinions between these nodes. This is a typical feature of filter bubbles, showing that agents who share the same opinions and information are strongly connected to each other. On the other hand, it is observed that nodes located at the periphery of the network have a different color from nodes at the center. This indicates that these agents may have different information sources and opinions than the central agent. From the above considerations, we can see that the formation of opinions and the propagation of information within a network are greatly influenced by the structure of the network and the relationships between agents. In particular, it is considered that the occurrence of filter bubbles and the polarization of opinions may be strongly influenced by the central agent or group of the network. This simulation aims to mimic how filter bubbles form within social networks. As discussed in this section, we can hypothesize and verify cases using a sociophysical approach that, due to rewiring processing and restrictions on opinion updating, convergence of opinions among agents and a reduction in diversity, that is, a filter bubble, can be observed. To simulate a case where the topic is a loop. We set up a conditional equation. It became imperative to modify the rewiring process of the nodes to create a cycle (loop) in the network, and also to adjust the opinion formation process so that the opinion values of the nodes are not updated under certain conditions. Modified the addition of links in the rewiring process to create a cycle in the network, i.e., the opinion formation process, to stop updating a node's opinion value when it exceeds a certain threshold value. ### Analysis of Network Results 1. **Tendency of agent rewiring** Some nodes (agents) have a large number of ingress and egress links, while other nodes have very few links. This suggests that some agents play a central role within the network. Darkly colored nodes are located in the center, indicating that these nodes have a high opinion value. These central agents are likely to influence other agents during the rewiring process. 2. **Consideration as a process of consensus building** Many nodes are shown in shades of blue, indicating that many agents have similar opinions. This suggests that opinions are increasingly shared and influenced within the network. It is observed that some agents have different opinion values compared to other agents. This shows that there is still a diversity of opinions within the network. 3. 
**Consideration of the generation route of filter bubbles** Many of the nodes located at the center of the network have the same or similar colors, suggesting that there is active exchange of information and opinions between these nodes. This is a typical feature of filter bubbles, Fig. 3: Reproducing the filter bubble showing that agents who share the same opinions and information are strongly connected to each other. On the other hand, it is observed that nodes located at the periphery of the network have a different color from nodes at the center. This indicates that these agents may have different information sources and opinions than the central agent. Multiple cases are constructed in which clusters with initially divergent opinions fall into a filter bubble as multiple clusters are provided with topics. Divide nodes into several clusters and assign different initial values of opinion to nodes in each cluster. Reconstruct connections between nodes based on certain conditions (e.g., a randomly selected node is connected to a node in another cluster). Repeat the convergence of opinions. **From Figure 4, Discussion** ## 1 Agent rewiring trends According to the code, 10 rewiring steps are performed. In each step, two nodes from each cluster are randomly selected and the selected nodes are connected to each other. The color of the graph indicates the opinion value of each node. Initially, the nodes were divided into three different clusters, with each cluster assigned a different initial value of opinion. Observing the graph, we can see edges between nodes of different colors, indicating that agents with different opinion values are connected to each other during the rewiring process. This suggests that agents may be exposed to diverse opinions. ## 2 Consideration as a process of consensus building The color gradient indicates convergence or diffusion of opinions among agents. When nodes of different clusters are connected, the opinions of the agents may be attracted to the opinions of the agents to which they are connected. From the color distribution on the graph, it appears that some agents have intermediate opinion values. This indicates that agents with many connections between different clusters may be exposed to a variety of opinions and thus form neutral opinions. ## 3 What kind of cases and opinion formations can cause this kind of phenomenon in a society? For example, experiences in international environments and multicultural communities often enrich people's thinking and values. This phenomenon is similar to the rewiring of agents and changes in opinion described above; in SNS and news media, people's opinions and perceptions can change as a result of exposure to diverse sources of information. ## 4 Consideration as a filter bubble phenomenon Clustering of colors in a graph can be considered as an example of the filter bubble phenomenon. Clusters in which certain opinions and information are concentrated suggest that agents within that group are exposed to similar information and opinions. Isolated clusters of different colors on the graph mean that each cluster exists within a different bubble of information or opinion. ## 5 Consideration of rewiring route trends Reroute rewiring serves as a pathway for agents to be exposed to new information and opinions. This can also be a factor that causes an agent's opinion to change. Observing the distribution of edges in the graph, we see several reroutes between agents with different opinions. 
This suggests that agents have a chance to be exposed to diverse opinions and information. On the other hand, if there is a dense rewiring within a particular cluster, agents within that cluster are more likely to be exposed primarily to the same information and opinions. This situation can be a factor that reinforces the filter bubble. ### Scenarios where multiple clusters of differing opinions fall into a filter bubble When constructing multiple cases where multiple clusters with initially divergent opinions fall into a filter bubble as a result of multiple clusters being provided with topics. Divide the nodes into several clusters and assign different initial values of opinion to the nodes in each cluster. Reconstruct Figure 4: Fall into a filter bubble as multiple cluster connections between nodes based on certain conditions (e.g., randomly selected nodes are connected to nodes in other clusters). The case of repeated convergence of opinions was verified. 1. **Agent rewiring trends** Although many connections exist between multiple agents, some agents appear to be rarely rewired. It is shown that agents of a particular color (opinion value) are frequently connected to agents of other colors. This is evidence of an exchange of opinions among agents with different opinions. 2. **Consideration as a process of consensus building** The central clustering of blue colors suggests that a strong consensus of opinion has developed among these agents. On the other hand, the red and green agents are dispersed and these agents are likely to have different opinion values. 3. **What is the case in society, opinion formation** This graph can be said to show strong agreement of opinion within a particular group and diversity of opinion among different groups. In society this could indicate the formation of common values and beliefs within a particular community or group and the diversity of opinions among different groups or communities. 4. **What is the case for the filter bubble phenomenon?** An area with a high concentration of blue-colored agents may indicate the presence of a filter bubble of information and opinions. It is likely that these agents are primarily exposed to information from the same or similar sources. 5. **Root tendency to rewire** We observe that some agents are rewired with many other agents. This may indicate that these agents play a central role in the exchange of information and opinions. On the other hand, there are also agents that are less rewired. This may indicate that these agents rely on limited sources of information or that they exchange few opinions with other agents. **Scenarios in which a cluster diverges the moment a topic provider with a different external force emerges** In the case of a scenario in which a cluster diverges the moment a topic provider with a different external force emerges after falling into a filter bubble. In the case of multiple cases where clusters with initially divergent opinions fall into a filter bubble when multiple clusters provide topics for discussion. Divide the nodes into several clusters and assign initial values of opinions close to the nodes in each cluster. Based on certain conditions, reconstruct the connections between nodes to fall into the filter bubble. After a certain step, we add a different topic provider as an external force and strongly change the opinion value of that node. We took the hypothesis of repeated convergence of opinions and ran the simulation. 1. 
**Agent rewiring trends** The small number of links between nodes suggests relatively little rewiring between agents. There are few indications that agents of a particular color (opinion value) are frequently connected to agents of other colors. This may indicate that the exchange of opinions is limited or takes place only within specific groups. Fig. 5: Fall into a filter bubble as multiple cluster, multiple clusters of differing opinion Fig. 6: Fall into a filter bubble as multiple cluster, multiple clusters of differing opinion 2. **Consideration as a process of consensus building** Red and green agents are dispersed, inferring that these agents are more likely to have different opinion values. On the other hand, agents that are concentrated in a particular area may suggest that there is a strong consensus of opinion within that group. 3. **What is the case in society, opinion formation** The graph can be said to indicate a strong agreement of opinion within a particular group and a diversity of opinion among different groups. In society, it may indicate the formation of common values and beliefs within a particular community or group, as well as diversity of opinion among different groups or communities. 4. **What is the case for the filter bubble phenomenon?** Areas with high concentrations of blue agents may indicate the presence of filter bubbles of information and opinion. These agents are likely to be primarily exposed to information from the same or similar sources. 5. **Root tendency to rewire** It is observed that some agents are rewired with many other agents. This may indicate that these agents play a central role in the exchange of information and opinions. On the other hand, there are some agents that are less rewired. This may indicate that these agents rely on limited sources of information or that they exchange few opinions with other agents. ### Cases in which the filter bubble is exacerbated by the strengthening From Figure 7, this cases in which a cluster that was initially connected by close opinions falls into a filter bubble when multiple clusters provide topics, and then the external cluster is removed the moment a topic provider with a different external force is found, and even if the same external force is involved over and over again, the connection of clusters that are even closer is strengthened, and the filter bubble is worsened. We hypothesize a case in which the bubble worsens. We create clusters, assign each a node with a close opinion value, and after a few steps, allow a strong external opinion provider to enter the network, and remove it from the network immediately after this external cluster joins the network. This repeated entry and removal of this outside opinion provider into the network was set up so that the existing clusters would become stronger and opinions would converge more. 1. **Tendency of agent rewiring** Due to the small number of links between agents, there appears to be little rewiring between agents with a particular opinion value (color). There is little indication of frequent connections between agents of different colors. This may indicate that the exchange of opinions is limited or takes place only within specific groups. 2. **Consideration as a process of consensus building** If agents are concentrated in one particular area, this may suggest that there is a strong consensus of opinion within that group. On the other hand, agents that are spread out are more likely to have different opinion values. 3. 
**What is the case in society, opinion formation** This graph can be said to indicate a strong consensus of opinion within a particular group and a diversity of opinion among different groups. In society, this could indicate the formation of common values and beliefs within a particular community or group and the diversity of opinions among different groups or communities. 4. **What is the case for the filter bubble phenomenon?** Areas with a high density of blue agents may indicate the presence of filter bubbles of information and opinion. These agents may be primarily exposed to information from the same or similar sources. 5. **Reroute trends in rewiring** It is observed that some agents are rewired with many other agents. This may indicate that these agents play a central role in the exchange of information and opinions. On the other hand, there are also agents that are not or less rewired. This may indicate that these agents rely on limited information sources or have little exchange of ideas with other agents. ### Cases in which the filter bubble bursts From Figure 9, after a cluster that was initially connected by close opinions falls into a filter bubble due to multiple clusters providing topics, the external cluster is removed the moment a topic provider with a different external force is introduced, Figure 7: Cases in which the filter bubble is exacerbated by the strengthening further strengthening the connection of clusters that are close to each other even if the same external force is involved over and over again. However, we will hypothesize and discuss the case in which that filter bubble bursts when more and more opposing clusters with even stronger opinions approach the filter bubbled cluster. In this case, an initial node cluster is created to form the filter bubble, At certain steps, a strong external opinion provider enters the network. This external cluster is removed immediately after joining the network, and this process is repeated to assume a pattern in which the opinions of the existing clusters become stronger. Then, when a new rebuttal cluster is created and its opinion is strengthened, the pattern is that the filter bubble bursts when the rebuttal cluster approaches the filter-bubbled cluster. 1. **Tendency of agents to rewire** Looking at the distribution of agent colors, it appears that there are few links between agents with specific opinion values. Based on color transitions, we can see that agents of certain colors are concentrated in certain areas. 2. **Consideration as a process of consensus building** There are areas where agents of a particular color are densely concentrated. This may suggest a strong consensus of opinion within that group. On the other hand, agents with a spread of colors are more likely to have different opinion values. In other words, opinions are disparate and divergent, reproducing a divergent state of affairs. 3. **What is the case in society, opinion formation** This graph can be said to show strong agreement of opinion within a particular group and diversity of opinion among different groups. Socially, it may indicate the formation of common values and beliefs within a particular community or group and the diversity of opinions among different groups or communities. Here, too, a state of disparate and divergent opinions is reproduced. 4. **What is the case for the filter bubble phenomenon?** High density areas of blue agents may indicate the presence of filter bubbles of information and opinion. 
These agents may be receiving information primarily from the same or similar sources. Although partially in conflict with the phenomenon in (4), the overall state is divergent. 5. **Consideration of root tendencies of rewiring** We observed that some agents are rewired with many other agents. This may indicate that these agents play a central role in the exchange of information and opinions. On the other hand, some agents are not rewired or not rewired very much. This may indicate that these agents rely on limited information sources or have little exchange of ideas with other agents. We can speculate that these may be the causes of the disparate and divergent opinions that are being reproduced. ### Opposing side becomes a filter bubble From Figure 9, after a cluster initially connected by close opinions falls into a filter bubble due to multiple clusters providing topics, the external cluster is removed the moment a topic provider with a different external force appears, and the connection between the clusters that are close is further strengthened even if the same external force is involved over and over again. However, when the number of rebuttal clusters with stronger opinions increases and approaches the filter bubble cluster, the filter bubble side partially encroaches on the rebuttal cluster, and we assume a case in which the rebuttal side is eventually caught up in the filter bubble side. A filter bubble is formed in the initial node cluster, and strong outside opinion providers continuously enter the Fig. 8: Cases in which the filter bubble is broken Fig. 9: Opposing side becomes a filter bubble network and are immediately removed, creating a counter-argument cluster, which strengthens its opinion. Then, when the refuting cluster approaches the filter-bubbled cluster, the case is constructed in which the partially refuting cluster is eroded to the filter-bubble side, and eventually the refuting party is also caught up in the filter-bubble side. 1. **Agent rewiring trends** Areas of relatively high density of nodes (especially in the range of 0.6 to 0.7) are observed from red to blue. This indicates that agents with these opinion values may be strongly connected to each other. On the other hand, nodes in the 0.1 to 0.3 range are dispersed throughout the graph, suggesting less rewiring among these agents. 2. **Consideration as a process of consensus building** Areas with a high density of agents of a darker color (0.6 or higher) are more likely to have formed a consensus of opinion within that group. On the other hand, areas with a mix of agents of different colors indicate that a diversity of opinions exists. 3. **What is the case within a society, opinion formation** We can say that this graph shows a strong agreement of opinions within a particular group and a diversity of opinions among different groups. Socially, it may indicate the formation of common values and beliefs within a particular community or group, and diversity of opinion among different groups or communities. 4. **What is the case for the filter bubble phenomenon?** Dense areas of dark-colored agents may indicate the presence of filter bubbles of information and opinion. These agents may be receiving information primarily from the same or similar sources. 5. **Consideration of rewiring root tendencies** We observed agents that are rerouted with many other agents. This indicates that these agents may play a central role in the exchange of information and opinions. 
On the other hand, there are also agents that are not rewired or not very rewired. This may indicate that these agents rely on limited information sources or have little exchange of ideas with other agents. ### Case in which the third opinion becomes mainstream From Case of Figure 11, after a cluster initially connected by close opinions falls into a filter bubble due to multiple clusters providing topics, the external cluster is removed the moment a topic provider with a different external force is found, further strengthening the connection of the close clusters even if the same external force is involved over and over again. However, the filter bubble side partially encroaches on the opposing clusters when more and more opposing clusters with stronger opinions approach the filter-bubbled cluster. However, a new opinion is generated here, and a new third opinion is generated from the two clusters, and we assume a case in which that cluster eventually becomes stronger. A filter bubble is formed in the initial node cluster, and outside opinion providers continuously enter the network and are removed shortly after. At that point, a counter-opinion cluster is generated, its opinion becomes stronger, and as the counter-opinion cluster approaches the filter-bubbled cluster, the counter-opinion cluster partially erodes to the filter-bubble side. Assume a case where a new opinion cluster is generated and it eventually becomes the most influential cluster. ### When two different opinions are strong From Case of Figure 11, after a cluster that was initially connected by close opinions falls into a filter bubble due to multiple clusters providing topics, the external cluster is removed the moment a topic provider with a different external force is found, further strengthening the connection of the clusters that are close even if the same external force is involved over and over again. However, assume a case where the intervention of two influential people of a certain opinion Figure 11: When two different opinions are strong Figure 10: Case in which the third opinion becomes mainstream pulls in these two different opinions at once. A filter bubble is formed in the initial node cluster, and outside opinion providers continuously enter the network, only to be removed shortly after. Then two people with strong opinion influence enter the network and pull the network's opinions in two different directions. Assume a case where some nodes remain separated from the opinions of the two people. 1. **Agent rewiring trends** It is clear that the agent in the center of the graph (node indicated as 10) is directly connected to many other agents. This indicates that this agent is very influential or plays a central role. Based on color, we see that agents of diverse opinions are connected to the central agent. This confirms that the central agent may have access to diverse sources of information and that the hypothetical case can be reproduced by the model. 2. **Consideration as a process of consensus building** The dark-colored agents (0.6 and above) are relatively evenly distributed, but the mixed colors of the agents connected to the central agent suggest that consensus may be in progress or that there is an active exchange of different opinions. 3. **What is the case in society, opinion formation** The graph may indicate a scenario where there is one central source or leader who has a direct relationship with a number of individuals or groups. 
For example, it might reflect a situation such as corporate, organizational, or community leadership. 4. **What is the case for the filter bubble phenomenon** Because the central agent is directly connected to many other agents, the information this agent receives may be diverse. However, if an external agent receives information only through the central agent, it may facilitate the formation of filter bubbles. 5. **Consideration of rewiring root tendencies**: The central agent has the most connections in the network and is likely to be the most frequently rerouted agent. It is assumed that other agents often exchange opinions and information via the central agent. ### Concentration of Central Opinions From Case of Figure 12, After a cluster that was initially closely connected by opinion falls into a filter bubble due to multiple clusters providing topics, the external cluster is removed the moment a topic provider with a different external force becomes involved, further strengthening the connection of the clusters that are close, even if the same external force is involved over and over again. However, the two most influential people of a given opinion repeatedly change their opinions. This repetition leads to frequent changes in the surrounding opinions, and the final case is that most opinions fall apart, while some are pulled by the repeated opinions of the two most influential people. 1. Agent rewiring trends The agent located in the center of the graph (node 10) is directly connected to many other agents. This suggests that this agent may be very influential or play a central role. Based on color, agents of varying opinions are connected to the central agents. This confirms that the two central agents may have access to diverse sources of information. 2. Consideration as a process of consensus building While the dark agents (0.6 and above) are relatively evenly distributed, the mixed colors of the agents connected to the central agent suggest that a consensus is in progress or that there is an active exchange of different opinions. 3. What is the case in society, opinion formation The graph may indicate a scenario in which the two central sources or leaders have direct relationships with a large number of individuals or groups. For example, it may reflect a situation such as corporate, organizational, or community leadership. 4. What is the case for the filter bubble phenomenon? Since the central agents are directly connected to many other agents, the information this agent receives could be Fig. 12: Concentration of Central Opinions diverse. However, if outside agents receive information only through the central agents, this may facilitate the formation of filter bubbles. 5. Consideration of Root Tendency of Rerouting The central agents are the most connected in the network and are likely to be the most frequently rerouted agents. Other agents are presumed to frequently exchange ideas and information via the central two agents. ## 4 Filter bubble on Propagation Probability in Network Structures We now turn to a discussion of propagation probabilities in simulations of filter bubble generation. Propagation probability serves as a quantitative measure that captures the characteristics of information flow and connectivity within a network. 
Based on the likelihood of information reaching another node from a particular node, this probability is calculated as follows \[P_{\text{propagation}}(i\to j)=\frac{\text{Number of reachable nodes from }i}{\text{Total number of nodes}} \tag{5}\] Here, \(P_{\text{propagation}}(i\to j)\) represents the propagation probability from node \(i\) to node \(j\). This probability is highly dependent on the topology of the network and the connectivity of each node. The connectivity within the network reflects the efficiency of overall information flow and the degree of interactions among nodes within the network. A high propagation probability indicates a smooth flow of information across the network, while a low probability suggests a tendency for information to remain within certain portions. The calculation of propagation probability involves a process of assessing the reachability of each node within the network. This is conducted following these steps: 1. Identify the shortest paths from each node \(i\) to all other nodes within the network. 2. Count the number of nodes reachable from each node \(i\). 3. Calculate the average number of reachable nodes across all nodes, normalizing this by the size of the network. ### Discussion on the possibility of Propagation Probability From Figure 13 and Figure 14, discussion on the possibility of Propagation Probability, 1. **Trend of Agent Rewirings** The "Distribution of Rewirings per Node" graph shows that many rewirings are performed at certain nodes. In particular, we can see that there are a great many rewirings in the vicinity of node 20. This indicates that the node plays a central role in the network or that rewiring occurs frequently due to some other factor. 2. **Consideration as a consensus building process** The "Propagation Probability per Step" graph shows that the propagation probability increases with each step of rewiring. This indicates that information propagation in the network is becoming more efficient as rewiring progresses. It can be confirmed that when each agent can receive information more easily, the possibility of smooth progress in consensus building will increase. 3. **What is the case in society and opinion formation** This indicator may indicate the influence of a leader or central figure in a society on information transmission Fig. 14: Propagation Probability per Step \(N=20\) Fig. 13: Distribution of Rewirings per Node \(N=20\) and opinion formation. If a particular agent plays a central role, we can see if that agent's opinions and information are more likely to influence other agents. 4. **What is the case for the filter bubble phenomenon?** If a central agent communicates only consistent opinions and information, other agents in the network are more likely to be influenced by those opinions and information. This could lead to the formation of a filter bubble, and parameters could be checked, keeping in mind that diverse opinions and information may be less likely to be propagated in the network. 5. **Consider the rewiring routing trends.** A closer look at the code shows that the probability of a node with an opinion value of 0.5 or higher being selected for rerouting is decreasing. This indicates that rerouting routes are adjusted based on opinion values. Also, rerouting is done primarily between the four selected nodes. This may strengthen the connections between certain agents in the network. 
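As a concrete reading of Eq. (5) and the three reachability steps listed above, a minimal networkx sketch could look as follows; counting a node as reachable only along directed paths and excluding the node itself is a convention choice, not something fixed by the text.

```python
import networkx as nx

def propagation_probability(G):
    """Average number of nodes reachable from each node, normalised by the
    network size, following Eq. (5) and the three steps above."""
    n = G.number_of_nodes()
    # shortest-path lengths include the source itself, hence the -1
    reach = [len(nx.single_source_shortest_path_length(G, i)) - 1 for i in G.nodes]
    return (sum(reach) / n) / n

# usage on a small random directed graph
G = nx.gnp_random_graph(20, 0.1, directed=True, seed=1)
print(round(propagation_probability(G), 3))
```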
Overall, the code and graph illustrate the process of information propagation, association among agents, and opinion formation within the network. It suggests that when a particular agent plays a central role, its impact on the network as a whole can be considered. ### Stubbornness Probability This approach allows for capturing the dynamics of information propagation under specific scenarios, assessing the impact of network structure on information propagation. The calculation of stubbornness probability is based on the idea that agents are considered stubborn if their opinion values exceed a certain threshold, countering the propagation trend. This is represented by the following equation: \[P_{\text{stubborn}}=1-\frac{1}{N}\sum_{i=1}^{N}\mathbf{1}_{(o_{i}>T)} \tag{6}\] where: \(P_{\text{stubborn}}\) represents the probability of stubbornness, indicating the proportion of nodes in the network that are resisting change. \(N\) is the number of nodes in the network. \(\mathbf{1}_{(o_{i}>T)}\) is the indicator function, which is 1 if the opinion \(o_{i}\) of node \(i\) exceeds the threshold \(T\), and 0 otherwise. From Figure 15 and Figure 16, discussion on the possibility of Stubbornness Probability, 1. **Agent rewiring trends** From the initial diagram of the network, many nodes are connected to a central node. In particular, node 0 has many input edges. Nodes with opinion values greater than 0.5 have a 50% chance of being rewired. This allows us to observe a tendency for certain nodes to be rewired frequently. 2. **Consideration as a process of consensus building** According to the "Stubbornness Probability Over Steps" graph, the probability of stubbornness at each step is variable. This can affect the efficiency of information propagation and consensus building within the network. As the propagation probability increases, we can observe that the process of consensus building may proceed more smoothly because each agent can receive information more easily. 3. **What is the case in society, opinion formation** When an agent plays a central role in a network, there is a greater likelihood that his/her opinions and information will influence other agents. We can envision a simulation hypothesis that can be thought of as an indicator of how Figure 16: Stubbornness Probability Over Steps \(N=20\) Figure 15: Stubbornness Probability \(N=20\) the influence of a leader or central figure in a society affects information transfer and opinion formation. 4. **What is the case for the filter bubble phenomenon?** When a central agent conveys only consistent opinions or information, other agents in the network may be influenced by that opinion or information. This may lead to the formation of a filter bubble, making it difficult for diverse opinions and information to propagate within the network. 5. **Consider the root tendency of rewiring.** It is shown that nodes with opinion values of 0.5 or higher have a decreasing probability of being selected for rerouting. This indicates that the rerouting routes are adjusted based on opinion values. In addition, the rerouting is mainly done between the four selected nodes. This confirms that connections between certain agents in the network may be strengthened. ### Propagation probability trends This calculation allows for the measurement of the tendency towards stubbornness in the entire network after each step, acting inversely to the propagation probability trends. 
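For completeness, the stubbornness probability of Eq. (6) translates almost verbatim into code; the following minimal sketch assumes the opinions are stored in a dictionary keyed by node, as in the earlier sketches.

```python
def stubbornness_probability(opinions, threshold=0.5):
    """Eq. (6): one minus the fraction of nodes whose opinion exceeds the threshold."""
    n = len(opinions)
    above = sum(1 for o in opinions.values() if o > threshold)
    return 1.0 - above / n

# example: three of five opinions exceed 0.5, so P_stubborn = 1 - 3/5 = 0.4
print(stubbornness_probability({0: 0.9, 1: 0.2, 2: 0.7, 3: 0.4, 4: 0.55}))
```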
Given a graph \(G\) with nodes \(N\) and a set of opinions \(O\) where each \(o_{n}\) corresponds to the opinion of node \(n\), and each node's opinion is influenced by its predecessors. The influence factor \(\alpha\) represents the extent to which a node is affected by the opinions of its neighboring nodes. The updated opinion of a node is calculated as follows: \[o_{n}^{\text{new}}=(1-\alpha)\cdot o_{n}+\alpha\cdot\left(\frac{\sum_{i\in \text{pred}(n)}o_{i}}{|\text{pred}(n)|}\right) \tag{7}\] where: \(o_{n}^{\text{new}}\) is the new opinion of node \(n\), \(\alpha\) is the influence factor dictating the degree of influence, \(\text{pred}(n)\) denotes the predecessors of node \(n\), \(o_{i}\) represents the opinion of the \(i\)-th predecessor node, \(|\text{pred}(n)|\) is the number of predecessors of node \(n\). This equation ensures that each node's opinion is adjusted by considering a weighted average of its own opinion and the opinions of its predecessors. ## 5 Modeling Opinion Dynamics and Network Topology Evolution in Social Networks In this study case of social networks, understanding how opinions spread and evolve is crucial. This document outlines a simulation model that combines opinion dynamics with the evolution of the network topology, specifically focusing on how these aspects influence the distribution of opinions (node density) across the network in each step. ### Model Description The model is designed to simulate a social network using directed graphs, where nodes represent individuals, and edges represent the influence between them. Opinions are numerical values assigned to each node, and the network topology evolves through a process of rewiring, influenced by the opinions of the nodes. ### Parameters The main parameters governing the model are as follows: **N**: The number of nodes in the network. **rewiring_steps**: The number of steps in the simulation during which edges may be rewired. **rewire_prob**: The probability of an edge being rewired in each step. **influence_factor**: A factor representing how much a node's opinion is influenced by its neighbors. #### 5.2.1 Network Initialization and Edge Creation The network is initialized with \(N\) nodes and a set number of edges created randomly between them. The _create_random_edges_ function handles the initial edge creation, ensuring a randomly generated network topology. \[\text{create\_random\_edges}(G,N):G(V,E)\to G(V,E^{\prime}) \tag{8}\] where \(G\) is the graph representing the network, \(V\) is the set of nodes, \(E\) and \(E^{\prime}\) are the sets of edges before and after the function execution, respectively. #### 5.2.2 Opinion Update Each node's opinion gets updated based on the opinions of its neighbors. This mechanism is represented by the _update_opinions_ function. \[\text{new\_opinions}[i]= \ (1-\text{influence\_factor})\cdot\text{opinions}[i]\] \[+ \left(\frac{\sum(\text{opinions}[j])}{|N(i)|}\right)\cdot\text{ influence\_factor} \tag{9}\] where \(i\) indexes the current node, \(j\) indexes the neighbors of node \(i\), \(N(i)\) represents the set of neighbors, and opinions represents the opinion values. ### Edge Rewiring Edges are rewired based on the _rewire_edges_ function, which depends on the nodes' opinions and the rewiring probability. \[\text{rewire\_edges}:G(V,E)\to G(V,E^{\prime\prime}),\] where \(G\) is the graph, \(V\) is the set of nodes, \(E\) and \(E^{\prime\prime}\) are the sets of edges before and after rewiring. 
where \(E^{\prime\prime}\) represents the set of edges after possible rewiring. #### 5.3.1 Opinion Distribution Calculation The distribution of opinions (node density) within the network is calculated using a histogram method, which divides the range of opinions into bins and counts the number of nodes with opinions within each bin. \[\text{calculate\_opinion\_density}(G,\text{opinions}):\\ G(V)\times\text{opinions}\rightarrow\text{density} \tag{10}\] where density is the normalized count of nodes for each opinion bin, representing the probability density of the opinions across the network. ### Calculation of Opinion Distribution Change Rate In the analysis of opinion dynamics, understanding the rate of change in opinion distribution is crucial as it provides insights into the volatility or stability of opinion formation within the network. This section describes the mathematical computation used to ascertain the change rate in opinion distribution across consecutive steps in the simulation. #### 5.4.1 Change Rate Formula The change rate of opinion distribution between two consecutive steps is calculated using the formula: \[\text{change\_rate}=\frac{\text{current\_density}-\text{previous\_density}}{ \text{previous\_density}+\epsilon} \tag{11}\] where: current_density is the opinion distribution in the current step. previous_density is the opinion distribution in the previous step. \(\epsilon\) is a very small number (e.g., \(1e-10\)) to prevent division by zero. #### 5.4.2 Parameters and Variables The change rate calculation uses the following parameters and variables: **current_density**: The array representing the density of opinions across different bins in the current step, computed as the number of nodes in each opinion bin divided by the total number of nodes. **previous_density**: The array representing the density of opinions across different bins in the previous step, computed in the same manner as current_density. **epsilon**: A small constant to prevent division by zero during the computation of the change rate. This is necessary because the previous density could be zero for some opinion bins, making the denominator zero. ### Calculation of the Probability of Unchanged Opinions The change rate indicates the relative change in the opinion distribution of the network. Positive values represent an increase in the density of certain opinions, while negative values indicate a decrease. Analyzing these change rates helps in understanding the dynamics and possibly predicting trends in opinion changes. In the network opinion dynamics simulation, one of the metrics of interest is the probability that opinions do not change between consecutive steps. This document outlines the formula used to calculate this probability and describes the parameters involved. #### Probability of Unchanged Opinions The probability of unchanged opinions, referred to as the static opinion probability, is computed at each step, considering the change rates in opinion distribution and certain probabilities defining agent behaviors. The formula for the static opinion probability is given by: \[P_{\text{static}}=(1-|R_{\text{change}}|)\cdot(1-P_{\text{prop}})\cdot(1-P_{ \text{stubborn}})\cdot(1-P_{\text{update}}) \tag{12}\] where each term is defined as follows: \(P_{\text{static}}\): The probability of an opinion remaining unchanged in the current step. 
\(R_{\text{change}}\): the rate of change of the opinion distribution from the previous step to the current step, calculated as the difference in opinion densities divided by the previous density plus a small constant (\(\varepsilon\)) to prevent division by zero. \(P_{\text{prop}}\): the probability of an opinion propagating from one agent to another; it reflects the likelihood that an agent's opinion will be adopted by others in the network. \(P_{\text{stubborn}}\): the probability associated with agents maintaining their current opinions regardless of influence from their neighbors; it represents the level of stubbornness among agents in the network. \(P_{\text{update}}\): the probability of an agent updating its opinion based on influences within the network; this parameter controls how frequently agents reconsider and potentially change their opinions.

The product in the formula represents the multiplicative effect of the various factors that can contribute to an opinion remaining unchanged. A higher rate of change (\(R_{\text{change}}\)) implies a more dynamic opinion landscape where changes are more frequent, thereby reducing the probability of static opinions. Conversely, lower propagation, stubbornness, and update probabilities contribute to maintaining the status quo, increasing the likelihood that opinions remain static during each step of the simulation.

From Figure 17 and Figure 18, we discuss the evolution of the opinion distribution over the simulation steps:

1. **Agent rewiring propensity** The rewiring probability (rewire_prob) is set at 0.5. This means that edges between nodes with opinion differences greater than 0.5 are rewired with probability 0.5; in other words, a pair of agents with a large difference of opinion has a 50% chance of having their connection rewired.
2. **Consideration as a process of consensus building** From the graph, we can see that the rate of change in opinion increases at several steps. In particular, the sharp increase or decrease of the change rate in several opinion bins suggests that convergence or dispersion of opinions may be underway. With a stubbornness_prob of 0.2, agents have a 20% chance of sticking to their opinions, which suggests that it may be difficult to reach a complete consensus.
3. **Implications for society and opinion formation** The consequences of this case are appropriate for situations where people tend to break off relations with those who hold strongly different opinions. For example, the results may be useful when considering the formation or change of opinions on topics that people feel strongly about, such as political opinions or religious beliefs.
4. **Relation to the filter bubble phenomenon** A filter bubble is a phenomenon in which individuals share and exchange information only with those who share the same opinions and ideas, thereby reinforcing certain opinions and beliefs. In this model, rewire_prob controls the rewiring of edges between agents with different opinions, suggesting the possibility of the filter bubble phenomenon.
5. **Underlying tendency of the rewiring** The code specifies that edges between nodes with opinion differences greater than 0.5 may be rewired. This rewiring tendency suggests that agents with large gaps in opinion may break connections and form new connections with agents holding close opinions, so that clusters of agents with the same opinions are more likely to form.
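A short numpy sketch of Eqs. (10)-(12) follows. It is illustrative only: the graph argument of calculate_opinion_density is omitted because only the opinion values are needed for the histogram, and the aggregation of the bin-wise change rates into a single \(|R_{\text{change}}|\) (here, the mean absolute value) is an assumption not fixed by the text.

```python
import numpy as np

def calculate_opinion_density(opinions, bins=10, value_range=(-1.0, 1.0)):
    """Eq. (10): normalised histogram of the opinions (probability density per bin)."""
    density, _ = np.histogram(list(opinions.values()), bins=bins,
                              range=value_range, density=True)
    return density

def change_rate(current_density, previous_density, epsilon=1e-10):
    """Eq. (11): relative change of the opinion distribution between steps."""
    return (current_density - previous_density) / (previous_density + epsilon)

def static_opinion_probability(rate, p_prop, p_stubborn, p_update):
    """Eq. (12): probability that opinions stay unchanged in the current step.
    The bin-wise change rates are summarised by their mean absolute value
    (an assumption; the text does not fix this aggregation)."""
    r = np.mean(np.abs(rate))
    return (1 - r) * (1 - p_prop) * (1 - p_stubborn) * (1 - p_update)

# Example with two consecutive (dummy) opinion snapshots
prev = {0: -0.8, 1: 0.1, 2: 0.4, 3: 0.9}
curr = {0: -0.5, 1: 0.2, 2: 0.3, 3: 0.8}
rate = change_rate(calculate_opinion_density(curr), calculate_opinion_density(prev))
print(static_opinion_probability(rate, p_prop=0.3, p_stubborn=0.2, p_update=0.5))
```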
### Rewire and Calculate Function

The function rewire_and_calculate contains probabilistic operations based on random numbers to decide on removing or adding edges to the nodes in the network.

Fig. 17: Change Rate of Opinion Distribution per Step, \(N=20\). Fig. 18: Probability of Unchanged Opinions per Step, \(N=20\).

1. **Edge Removal:** An edge between a node and one of its neighbors is removed based on a probability. The corresponding conditional expression is:
\[P(\text{remove\_edge})=\begin{cases}1&\text{if rand()}<p\\ 0&\text{otherwise}\end{cases} \tag{13}\]
Explanation: an edge from the current node to a randomly selected neighbor is removed with probability \(p\).
2. **Edge Addition:** An edge is added between a node and a non-neighbor node based on a probability. The corresponding conditional expression is:
\[P(\text{add\_edge})=\begin{cases}1&\text{if rand()}<q\\ 0&\text{otherwise}\end{cases} \tag{14}\]
Explanation: an edge between the current node and a randomly selected non-neighbor node is added with probability \(q\).
3. **Network Density:** The density of the network is calculated using:
\[\text{density}=\frac{\text{number of actual edges}}{\text{number of potential edges}} \tag{15}\]
Explanation: the network density gives the ratio of the actual number of edges to the number of potential edges in the graph.
4. **Propagation Probability:** The propagation probability is randomly generated in the given range:
\[\text{propagation\_probability}=\text{rand()}\times(b-a)+a \tag{16}\]
Explanation: this is a placeholder for the propagation probability; the value is generated uniformly at random between \(a\) and \(b\).

### Network Density and Propagation Probability Over Steps

The function appends the calculated density and propagation probability for each rewiring step. These values are then plotted over the steps to visualize the changes. The relevant expressions from the code are:
\[\text{densities}[i]=\text{density at step }i \tag{17}\]
\[\text{propagation\_probabilities}[i]=\text{propagation probability at step }i \tag{18}\]
Explanation: for each step in the rewiring process, the density of the network and the propagation probability are recorded and then used for plotting.

### Correlation Between Density and Propagation Probability

The scatter plot illustrates the relationship between the network density and the propagation probability; each point represents a rewiring step. The relevant expression is:
\[(x_{i},y_{i})=(\text{density at step }i,\text{propagation probability at step }i) \tag{19}\]
Explanation: each point in the scatter plot represents the density and the propagation probability of the network at a particular rewiring step.

## 6 Discussion

From Figure 19, we discuss the network density and propagation probability over the steps:

1. **Agent rewiring trends** The graph on the left shows that the density of the network is consistently decreasing over time. On the other hand, the propagation probability is very variable, but generally high, with large increases and decreases at specific steps. This can suggest that agents are more likely to break off relationships with other agents and form relationships with new agents.
2. **Consideration as a process of consensus formation** Variations in the propagation probability may indicate the process of opinion propagation and consensus formation among agents. A sudden increase or decrease in the propagation probability at a particular step may suggest that convergence or dispersion of opinions is underway.
3. **Implications for society and opinion formation** In a society, this case may apply when people tend to dissociate themselves from those with strongly differing opinions. The results could be useful and suggestive when considering the formation and change of opinions on topics that people feel strongly about, such as political opinions or religious beliefs.
4. **Relation to the filter bubble phenomenon** A filter bubble refers to a phenomenon in which information is shared only among people who share the same opinions and ideas, reinforcing certain opinions and beliefs. The relationship between the data points in the graph on the right suggests the possibility of rewiring of relationships between agents with different opinions, i.e., the filter bubble phenomenon.
5. **Underlying tendency of the rewiring** The consistent decrease in network density shown in the left graph may indicate a tendency for connections between agents with widely differing opinions to be broken and for new connections to form between agents with the same opinions. This indicates that clusters of agents with the same opinions are likely to form.

Figure 19: Network Density and Propagation Probability Over Steps and Correlation Between Density and Propagation Probability, \(N=20\).

## 7 Conclusion and Perspectives

This study presents a theory of opinion dynamics that treats each person's opinion as a continuous value rather than a discrete one. Opinions are represented as real numbers ranging from positive to negative, and trust and distrust are introduced as coefficients for each pair of persons. A mathematical model was constructed that incorporates external pressure in addition to the influence of opinion exchange within each group. By using this theory, we aim to formulate hypotheses and mathematically represent many phenomena that can occur in group societies. The filter bubble case, which is based on the non-consensus-based opinion dynamics theory in this paper, allows us to compute the dynamics of a complex system in which people have a mixture of trust and doubt. It can also account for situations in which opinions become increasingly radical because there is no upper limit to opinions. Simulations of large numbers of people are also possible. In the future, we will compare and examine whether this theory is consistent with data on speech about actual political and social issues, and consider which cases it best describes.

## Acknowledgement

This research is supported by Grant-in-Aid for Scientific Research Project FY 2019-2021, Research Project/Area No. 19K04881, "Construction of a new theory of opinion dynamics that can describe the real picture of society by introducing trust and distrust". It is with great regret that we report that the leader of this research project, Prof. Akira Ishii, passed away suddenly in the last term of the project (2021). Prof. Ishii was about to retire from Tottori University, with which he was affiliated at the time; he had just presented new foundations in international social physics, complex systems science, and opinion dynamics, and his activities after retirement were highly anticipated. It is with great regret that we had to leave the laboratory.
We would like to express our sincere gratitude to all the professors who gave us tremendous support and advice when we encountered major difficulties in the management of the laboratory at that time. First, Prof. Isamu Okada of Soka University provided valuable comments and suggestions on the formulation of the three-party opinion model in Dr. Nozomi Okano's (FY2022) doctoral dissertation. Prof. Okada also gave us specific suggestions and instructions on the mean-field approximation formula for the three-party opinion model (Equation (13)), his views on the model formula for the social connection rate in consensus building, and his analytical method. We would also like to express our sincere gratitude for his valuable comments on the simulation of time convergence and divergence in the initial conditions of the above model equation, as well as for his many words of encouragement and emotional support to our laboratory. We would also like to thank Prof. Masaru Furukawa of Tottori University, who coordinated the late Prof. Akira Ishii's laboratory until FY2022 and gave us many valuable comments as an expert in magnetized plasma and positron research. In particular, we would like to thank Prof. Hidehiro Matsumoto of the Media Science Institute, Digital Hollywood University, a co-author of our paper ("Case Study On Opinion Dynamics In Dyadic Risk On Possibilities Of Trust-Distrust Model"), for managing the laboratory and guiding us in the absence of the main researcher, and for his guidance, together with Prof. Masaru Furukawa, on the elements of the final research that were excessive or insufficient. Prof. Masaru Furukawa of Tottori University, an expert in theoretical and simulation research on the physics and mathematics of continua with a focus on magnetized plasma, also gave us valuable opinions from a new perspective. His research topics include irregular and perturbed magnetic fields, MHD wave motion and stability in non-uniform plasmas including shear flow, the boundary layer problem in magnetized plasmas, and pseudo-annealing of MHD equilibria with magnetic islands. We received many comments on our research from new perspectives and suggestions for future research. We believe that Prof. Furukawa's guidance provided us with future challenges and perspectives for this research, which stopped halfway through, and we would like to express our sincere gratitude to him. We would also like to express our sincere gratitude to M Data Corporation, Prof. Koki Uchiyama of Hotlink Corporation, Prof. Narihiko Yoshida (President of the Hit Contents Research Institute and Professor of the Digital Hollywood University Graduate School), Hidehiko Oguchi of Perspective Media, Inc. for his valuable views from a political science perspective, and Kosuke Kurokawa of M Data Corporation for his support and comments on our research environment over a long period of time. We would like to express our gratitude to Hidehiko Oguchi of Perspective Media, Inc. for his valuable views from the perspective of political science, as well as for his hints and suggestions on how to build opinion dynamics. We are also grateful to Prof. Masaru Nishikawa of Tsuda University for his expert opinion on the definition of conditions in international electoral simulations. We would also like to thank all the professors of the Faculty of Engineering, Tottori University.
We also thank Prof. Takayuki Mizuno of the National Institute of Informatics, Prof. Fujio Toriumi of the University of Tokyo, Prof. Kazutoshi Sasahara of the Tokyo Institute of Technology, Prof. Makoto Mizuno of Meiji University, Prof. Kaoru Endo of Gakushuin University, and Prof. Yuki Yasuda of Kansai University for taking over and supporting the Society for Computational Social Sciences, which the late Prof. Akira Ishii organized, and for their many concerns regarding the laboratory's operation. We would also like to thank Prof. Takuju Zen of the Kochi University of Technology and Prof. Serge Galam of the Institut d'Etudes Politiques de Paris for inviting us to write this paper and for providing many suggestions regarding our long-term research projects. We also hope to contribute to their further activities and the development of this field. In addition, we would like to express our sincere gratitude to Prof. Sasaki's research team for their heartfelt understanding, support, and advice on the content of our research, and for continuing our discussions at a time when the very survival of the research project itself was in jeopardy due to the sudden death of the project leader. We would also like to express our sincere gratitude to the bereaved family of Prof. Akira Ishii, who passed away unexpectedly, for their support and comments leading up to the writing of this report. We would like to close this paper with our best wishes for the repose of the soul of Prof. Akira Ishii, the contribution of his research results to society, the development of ongoing basic research and the connection of research results, and the understanding of this research project.
2303.01134
Error mitigation of entangled states using brainbox quantum autoencoders
Current quantum hardware is subject to various sources of noise that limit the access to multi-qubit entangled states. Quantum autoencoder circuits with a single-qubit bottleneck have shown the capability to correct errors in noisy entangled states. By introducing slightly more complex structures in the bottleneck, the so-called brainboxes, the denoising process can take place faster and for stronger noise channels. Choosing the most suitable brainbox for the bottleneck is the result of a trade-off between the noise intensity on the hardware and the training impedance. Finally, by studying Rényi entropy flow throughout the networks we demonstrate that the localization of entanglement plays a central role in denoising through learning.
Joséphine Pazem, Mohammad H. Ansari
2023-03-02T10:30:52Z
http://arxiv.org/abs/2303.01134v1
# Error mitigation of entangled states using brainbox quantum autoencoders

###### Abstract

Current quantum hardware is subject to various sources of noise that limit the access to multi-qubit entangled states. Quantum autoencoder circuits with a single-qubit bottleneck have shown the capability to correct errors in noisy entangled states. By introducing slightly more complex structures in the bottleneck, the so-called brainboxes, the denoising process can take place faster and for stronger noise channels. Choosing the most suitable brainbox for the bottleneck is the result of a trade-off between the noise intensity on the hardware and the training impedance. Finally, by studying Rényi entropy flow throughout the networks we demonstrate that the localization of entanglement plays a central role in denoising through learning.

## I Introduction

Classical machine learning methods (ML) can identify features in the statistics of data and reproduce them [1; 2]. Measuring entangled states on quantum systems can sample classical data out of complex probability distributions [3]. Recognition of statistical patterns in such data is challenging for classical methods. Therefore, quantum machine learning techniques (QML) may accelerate or enable the processing of these distributions to recognize such statistical patterns [4; 5; 6; 7]. The quantum speedups, however, can only be characterized with perfect gates, perfect states and perfect measurements, none of which is perfect in state-of-the-art devices [8; 9; 10; 11]. Noisy Intermediate-Scale Quantum (NISQ) processors [12], due to their fragility, can be reasonably controlled only at the scale of a few tens of qubits [13; 14]. In the absence of fault-tolerant processors, error mitigation requires in-depth characterization of the device and post-processing [15; 10]. Such improvements take place in the classical simulations of quantum circuits. Error mitigation of multi-qubit states is an active research topic and can be approached by increasing the coherence time of qubits [16] or making their net interaction free from unwanted crosslinks [17; 18]. These attempts will be useful when multi-qubit states are achieved with high fidelity. A crucial task towards this goal is to show the availability of quantum resources such as entanglement on a device [19; 20; 21; 22]. The power of QML can be leveraged to address the noise impinging on quantum processors. Quantum neural networks can contribute to perfecting qubit states on NISQ processors. For this purpose, training tailors the network map to withstand noise and recover the desired quantum features. They are thereby candidates to prove a quantum advantage on near-term devices [23; 24; 25; 26; 27; 28]. Autoencoders are a type of neural network that enables the compression of information into a smaller layer, the latent space, between the input and output layers, and they are often used to denoise information [29; 30; 31; 32; 33]. Quantum autoencoders (QAEs) can tackle the problem of producing ideal states using real-device noisy quantum gates [34; 35; 36; 37; 38]. Noisy devices are unable to prepare ideal entanglement. QAEs, however, can be trained on noisy data in an unsupervised fashion so that they produce ideal states as outputs. To verify this concept, an autoencoder with a single-qubit latent space is trained to reconstruct a perfect Greenberger-Horne-Zeilinger (GHZ) state [39], i.e. \((|0\rangle^{\otimes m}+|1\rangle^{\otimes m})/\sqrt{2}\), in the presence of random bit and phase flips as well as small unitary noise [38].
In this paper, we use the brainbox QAE (BB-QAE) as a generalized QAE in which a small network replaces the single-qubit latent space. These brainbox circuits differ by the number of qubits and their layouts, and they can be composed of one or many layers. The morphology of the BB-QAE and of the brainbox are shown in Figure 1. We aim at denoising GHZ states using different brainboxes and we make a close comparison between them.

Figure 1: Architecture of the brainbox quantum autoencoder with symmetric four-qubit input/output layers. The left (red) partition of the network is the encoder, where information of the input is compressed up to the brainbox bottleneck by reducing the number of qubits. The right (blue) partition is a decoder that reconstitutes the inputs on the output layer. The brainbox is represented by the set of qubit numbers per layer, from left to right, i.e. \((n_{1},\cdots,n_{K})\); for example, (1,1,1) denotes a chain of three single-qubit layers, (2) a single two-qubit layer, and (1,2) a single-qubit layer followed by a two-qubit layer.

We show that a strong bit-flip noise beyond the tolerance of a single-qubit QAE can be well tolerated by a rather different brainbox. For example, the noise intensities observed on some qubits of the IBM Eagle chip can be counteracted. Moreover, we study the entropy evolution in the neural network and show that entanglement can be rearranged in the network during the training; this is key to the network's success in tolerating strong noisy flips.

## II Training of the quantum autoencoder

A quantum autoencoder (QAE) network consists of a set of interconnected qubits in layers with a bottleneck in the middle (see Fig. 1). The first (last) layer of the network represents the input (output) register. The edges connecting qubits in adjacent layers represent a quantum map from one layer to the next. There is no connection between qubits of the same layer, meaning that they may be independent on the hardware too. The network's bottleneck is a layer with fewer qubits compared to the input and output layers. From the input layer to the bottleneck, the encoder selectively retains information from the input layer to build a good encoding in the bottleneck. With its qubits initialized in the computational ground state, the decoder recovers the inputs from the state encoded in the bottleneck. Optimization of the encoder's and decoder's maps relies on the comparison between input and evolved states. Our QAE is a dissipative quantum neural network (DQNN) organized in \(L\) layers. Each layer \(l\) contains \(N_{l}\) qubits, and each qubit in layer \(l\) is coupled to all qubits in layer \(l+1\). Thus, we univocally denote the network's topology as \((N_{1},\cdots,N_{L})\). In the middle of the symmetric structure of the QAE, we use a small sub-network instead of a single-qubit layer and call it the brainbox bottleneck (BB). It can be either mirror symmetric, like the QAE itself, or asymmetric, as depicted in Figure 1. Varying the morphology of the BBs helps to understand how the bottleneck's structure impacts the results on the output layer. The quantum map on the QAE is constructed starting from the input layer and propagates the state forwards, layer by layer, towards the output layer. The unitary \(U_{j}^{l}\) acts on all qubits in layer \(l-1\) and on the \(j\)-th qubit in layer \(l\); it changes the state of the \(j\)-th qubit in layer \(l\). Therefore the quantum map that updates the qubits in layer \(l\) reads \(\mathcal{U}^{l}\equiv\prod_{j=1}^{N_{l}}U_{j}^{l}\).
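To make this layer-to-layer dissipative map concrete, a minimal numpy sketch of one propagation step is given below: it attaches the next layer in the computational ground state, applies a unitary on the two layers, and traces out the previous layer, as in Eq. (1) below. The random unitary only stands in for the trained interlayer maps \(U_{j}^{l}\), and all dimensions are illustrative.

```python
import numpy as np

def random_unitary(dim, rng):
    """Haar-like random unitary via QR decomposition (stand-in for a trained map)."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases

def propagate_layer(rho_prev, U, n_next):
    """One feed-forward step: attach layer l in |0...0><0...0|, apply U on
    (layer l-1, layer l), then trace out layer l-1."""
    d_prev, d_next = rho_prev.shape[0], 2 ** n_next
    zero = np.zeros((d_next, d_next), dtype=complex)
    zero[0, 0] = 1.0
    rho_joint = U @ np.kron(rho_prev, zero) @ U.conj().T
    rho_joint = rho_joint.reshape(d_prev, d_next, d_prev, d_next)
    return np.einsum('ijil->jl', rho_joint)  # partial trace over layer l-1

# Example: propagate a 2-qubit state into a 1-qubit layer
rng = np.random.default_rng(0)
rho_in = np.eye(4, dtype=complex) / 4          # maximally mixed 2-qubit input
U = random_unitary(8, rng)                      # acts on 2 + 1 qubits
rho_next = propagate_layer(rho_in, U, n_next=1)
print(np.trace(rho_next).real)                  # 1.0: the map is trace preserving
```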
For example, consider there are \(N_{5}\) qubits in layer 5 and \(N_{6}\) in layer 6. The density matrix of layer 6 is initialized in the computational ground state \(|0\rangle\) and its transformation depends on the state of layer 5, i.e. \(\rho^{(6)}=\mathrm{Tr}_{(5)}\{\mathcal{U}^{(6)}\left(\rho^{(5)}\otimes|0\rangle\langle 0|^{\otimes N_{6}}\right)\mathcal{U}^{(6)\dagger}\}\). The trace isolates the state of layer \(l\), and dissipation equips the network with forgetfulness, a necessary condition for learning [40]. Therefore one can conclude that the output density matrix is generated as follows:
\[\rho^{out}=\prod_{l=2}^{L}\underset{(l-1)}{\mathrm{Tr}}\left\{\mathcal{U}^{l}\left(\rho^{l-1}\otimes|0\rangle\langle 0|^{\otimes N_{l}}\right)\mathcal{U}^{l\dagger}\right\} \tag{1}\]
with \(\rho^{l-1}\) denoting the density matrix of layer \(l-1\) and all qubits in layer \(l\) initialized in the ground state. The QAE has been trained with a (1)-BB structure to enable the reconstruction of noise-free multi-qubit entangled states [38]. The study attempts to prepare ideal GHZ states \(|\Psi_{\mathrm{in}}\rangle=(|00\cdots 0\rangle+|11\cdots 1\rangle)/\sqrt{2}\), but noisy hardware is simulated by statistically exposing each qubit to the bit-flip noise channel \(\mathcal{N}(\rho_{\mathrm{in}})\) with flip probability \(p\):
\[\mathcal{N}(\rho_{\mathrm{in}})=\mathcal{E}_{N_{\mathrm{in}}}(\cdots(\mathcal{E}_{1}(\rho_{\mathrm{in}},p),p)\cdots) \tag{2}\]
with \(\mathcal{E}_{i}(\rho_{\mathrm{in}},p)=(1-p)\,\rho_{\mathrm{in}}+\,p\,X_{i}\rho_{\mathrm{in}}X_{i}\) being the bit-flip channel for qubit \(i\) and \(X_{i}\) the flip Pauli operator. For single-qubit bottlenecks, Refs. [38] and [41] show that the noise tolerance of the (1)-QAE is low (\(p<0.3\)). We continue the analysis on BB-QAEs with larger brainbox bottlenecks. The quantum map of the BB-QAE is divided into two parts: the encoder and the decoder. In the left wing of the network, the map \(\mathcal{E}(\rho^{in})\) of the encoder is applied to the noisy inputs and hidden layers, and compresses states into the latent space, the brainbox [34; 35; 36; 37]. For a brainbox bottleneck with \(K\) layers, we denote different configurations by \((n_{1},\cdots,n_{K})\); for example, (1,1) is a linear chain of two qubits \(\circ\circ\). In the right wing of the network, the decoder map \(\mathcal{D}\) reconstructs states in their original dimension, thanks to the information encoded in the last layer of the brainbox. The output state is then \(\rho_{x}^{\mathrm{out}}=\mathcal{D}(\rho_{x}^{\mathrm{latent}})=\mathcal{D}(\mathcal{E}(\mathcal{N}_{x}(\rho_{\mathrm{GHZ}},p)))\), where \(\mathcal{N}_{x}(\rho)\) denotes a discrete noise realization \(x\) of the bit-flip channel, that is, a combination of flipped/unflipped qubits on the input layer. The aim of the quantum map is to make the output quantum state as similar as possible to the ideal target state. In other words, a successful denoising strategy on a QAE should wash out from the output state the statistical noise encoded on the input layer. This can be measured by evaluating the fidelity of the output state \(\rho^{\mathrm{out}}\) with the ideal state \(\rho_{\mathrm{GHZ}}\):
\[F_{x}(\rho_{x}^{\mathrm{out}},\rho_{\mathrm{GHZ}})=\langle\Psi_{\mathrm{GHZ}}|\,\rho_{x}^{\mathrm{out}}\,|\Psi_{\mathrm{GHZ}}\rangle=\mathrm{Tr}\left\{\rho_{\mathrm{GHZ}}\,\rho_{x}^{\mathrm{out}}\right\}. \tag{3}\]
At each training step \(n\), the average fidelity over all \(N_{\mathrm{data}}\) states \(\{\mathcal{N}_{x}(\rho_{\mathrm{GHZ}},p)\}_{x=1}^{N_{\mathrm{data}}}\) defines the objective function for the network:
\[F(n)=\frac{1}{N_{\mathrm{data}}}\sum_{x=1}^{N_{\mathrm{data}}}F_{x}\left(\rho_{x}^{\mathrm{out}}(n),\rho_{\mathrm{GHZ}}\right). \tag{4}\]
The maximization of this function instructs the network how to perform its task. First initialized at random, the interlayer unitaries \(\{U_{j}^{l}\}\) are updated layerwise and iteratively with the parameter matrix multiplication method [27; 28]:
\[U_{j}^{l}(n+\varepsilon)\gets e^{i\varepsilon K_{j}^{l}(n)}U_{j}^{l}(n), \tag{5}\]
where \(K_{j}^{l}(n)\) is the parameter matrix derived from \(F\) [38]. This update rule is inspired by gradient descent algorithms [42] and understands gradients as the derivative of \(F\) with respect to each unitary. After \(N_{\text{it}}\) updates of the quantum map, the objective function converges to 1 if the training is successful or takes smaller positive values otherwise. [...] must be improved. For this aim, we compared the results of (4,2,BB,2,4) and (6,2,BB,2,6) networks. We plot the respective thresholds in Fig. 2(b). By increasing the number of input qubits from 4 to 6 while aiming at GHZ states, the noise tolerance of a simple single-qubit bottleneck shows a large drop of 0.1, from \(p^{*}=0.3\) to 0.2. This raises concerns about the scalability of denoising: by adding more qubits to the inputs, the noise tolerance shrinks; in other words, the QAE becomes more fragile and unable to recover the ideal target state. For a given probability \(p\), the number of combinations of flipped/intact qubits in the input states grows exponentially with their size. With a small training data set, this suppresses the tolerance threshold. In the limit of an infinite-size data set, the distribution of GHZ and non-GHZ states is such that the number of GHZ states is always larger than that of non-GHZ states, except at \(p=0.5\). With a simple majority rule, the network can identify the GHZ state as the target state. As the size of the training set becomes finite, deviations from the ideal distribution alter the training. Dashed and solid lines in Fig. 3 compare the distributions of ideal states and of the next most probable noisy state for training sets with infinitely many and 200 states, respectively. In the limited data set, the ideal GHZ state occurs less often than the next most probable noisy state in the vicinity of \(p\sim 0.4\). Thus, for such a data set, QAE training can only boost the tolerance threshold up to the point where GHZ states constitute a majority of the training data. While the training data ultimately imposes an upper bound on the tolerance that can possibly be achieved, the (1)-QAE performs sub-optimally and its tolerance does not depend on the training data. For the multi-qubit BB, the scaling of the generalization error with the size of the training data set is consistent with the results in [43]. For this study, a size-200 training set already shows disparities in the improvements in tolerance due to the network topology. Fig. 2(b) shows that, in a network with 4-qubit inputs, all brainboxes beyond (1) and (1,1) perform equivalently, with higher tolerance. Adding two qubits to the input reduces the noise tolerance by 0.1, but in this case all BBs except the single-qubit bottleneck (1) still exhibit higher tolerance.
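For reference, the bit-flip channel of Eq. (2) and the GHZ fidelity of Eq. (3) reduce to a few lines of numpy. The sketch below only builds the noisy inputs and scores them against the ideal state; it does not implement the trained encoder/decoder maps or the update rule of Eq. (5), and all numerical values are illustrative.

```python
import numpy as np

def ghz_density_matrix(m):
    """Density matrix of the m-qubit GHZ state (|0...0> + |1...1>)/sqrt(2)."""
    psi = np.zeros(2 ** m)
    psi[0] = psi[-1] = 1 / np.sqrt(2)
    return np.outer(psi, psi)

def bitflip_channel(rho, p, m):
    """Eq. (2): independent bit-flip channel applied to each of the m qubits."""
    X = np.array([[0, 1], [1, 0]])
    I = np.eye(2)
    for i in range(m):
        ops = [X if k == i else I for k in range(m)]
        Xi = ops[0]
        for op in ops[1:]:
            Xi = np.kron(Xi, op)
        rho = (1 - p) * rho + p * Xi @ rho @ Xi
    return rho

def fidelity_with_ghz(rho_out, m):
    """Eq. (3): <GHZ| rho_out |GHZ> = Tr{rho_GHZ rho_out}."""
    return float(np.real(np.trace(ghz_density_matrix(m) @ rho_out)))

# Fidelity of the *noisy inputs* with the ideal GHZ state (the target of Eq. (4))
m, p = 4, 0.3
rho_noisy = bitflip_channel(ghz_density_matrix(m), p, m)
print(fidelity_with_ghz(rho_noisy, m))
```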
Another important lesson from the study is that the qubit configurations in BBs contribute to the tolerance. For example, in the case of 4-qubit inputs, a brainbox with two qubits in separate layers (1,1) yields a sub-optimal tolerance at 0.35, while stacking them in a single layer (2) saturates the data limit at 0.4.

### Training impedance

In Section III.1, we found that most of the multi-qubit brainboxes we used in the (4,2,BB,2,4) and (6,2,BB,2,6) networks maximize the achievable bit-flip noise tolerance \(p^{*}\). However, some BB topologies make the training less costly. Let us mark the step \(n(F)\) at which the training achieves a fidelity above \(F\) in the output. Consider that each BB-QAE is trained for \(N_{\text{it}}\) iterations. We define the _training impedance_ \(Z(F)=n(F)/N_{\text{it}}\). In Fig. 4 we evaluate \(Z(0.99)\) for several networks at different noise probabilities \(p\). The result indicates that the training impedance depends on the fidelity limit, the training noise probability, the brainbox, and the input qubits, i.e. \(Z=Z(F,p,\text{BB},\text{N}_{\text{in}})\). Results for (4,2,BB,2,4) networks are summarized in Fig. 4. For noise probabilities \(p\leq 0.3\), the impedance factor \(Z(0.99)\) in all BB networks remains relatively small, roughly between 0.23 and 0.30, meaning that all networks at these noise probabilities can easily find their way to fidelity above 0.99 within the first third of the training.

Figure 4: Training impedance for the optimization of (4,2,BB,2,4) networks. As the noise intensities grow, the optimization is more demanding. Some BBs turn out to be less efficient at rapidly gaining high fidelity in the output state.

Figure 3: Training data set: the distribution of 4-qubit GHZ and non-GHZ states in infinitely many samples (dashed line) versus a finite set of 200 samples (solid lines). In the infinite-sample case the distribution of GHZ states dominates for all noise probabilities \(p\), while artefacts in the finite data set prevent the dominance of GHZ states for strongly noisy channels.

Some BBs such as (2), (3), (1,2), (1,1,2), (1,2,1) are slightly slower in gaining high fidelity. However, the very same networks under harder noise of \(p\geq 0.35\) have an advantage during the training, and optimization is almost 5% faster than in other networks. The selection of a suitable brainbox is based on the trade-off between the gain in fidelity and the loss in computational speed. At low noise intensities such as \(p=0.1\), linear brainboxes (1,1), (1,1,1) and (2,1) accelerate the training compared to the single-qubit box (1). Longer brainboxes also protect the network against over-fitting (see Section III.3). Between \(p=0.2\) and 0.3, multi-qubit brainboxes cause a small computational overhead that is minimized by the linear architectures. Above the (1)-QAE's tolerance threshold, wide brainbox structures such as (2), (3), and (1,2) improve the training efficiency compared to the linear ones. Thanks to a larger number of parameters, they efficiently capture subtle patterns in the training states, as in the over-parametrized regime [44, 45]. Similar graphs for (6,2,BB,2,6) networks are shown in Appendix A.

### Cross-testing

In the previous sections, the testing data set was generated under the same noise channel as during the training of the quantum map. A generalization of this approach has been described in Ref. [41], in which the QAE is trained using a noise channel with parameter \(p\) and is tested with the same channel with a different parameter \(p^{\prime}\).
In this section, we evaluate the BB-QAEs with a generalized cross-test: the testing data originates either from the same noise channel with a different intensity, or from a different noise channel. We consider two BB-QAEs with brainboxes (1) and (2,1). Though these two brainboxes have similar impedance factors (see Fig. 4), they differ by their tolerance threshold (Fig. 2). We train them both with bit-flip noise at intensities \(p_{\text{train}}=0.05\) and \(0.3\). After the training is completed, we use the final map to test noisy input GHZ states generated by one of the following three noise channels with independent noise intensities \(p_{\text{test}}\): (1) the bit-flip channel defined in Eq. 2, (2) the depolarizing channel \(\mathcal{E}_{i}^{dep}(\rho,p_{\text{test}})=(1-3p_{\text{test}}/4)\rho+(p_{\text{test}}/4)(X_{i}\rho X_{i}+Y_{i}\rho Y_{i}+Z_{i}\rho Z_{i})\), which can add a relative phase between \(|00\cdots 0\rangle\) and \(|11\cdots 1\rangle\) of GHZ states and can rotate each qubit around an arbitrary axis, and (3) the erasure channel that, with probability \(p_{\text{test}}\), replaces the state of a single qubit in the GHZ state with a random state \(\alpha|0\rangle+\beta|1\rangle\) and otherwise leaves it unchanged [46, 47]. In the latter, since all \(\alpha\)'s and \(\beta\)'s are different for each noise realization, the map is challenged to reconstruct GHZ states starting from any possible pure quantum state. In Fig. 5, we evaluate the generalization error with the reconstruction error \(R(\{\rho_{x}^{\text{out}}\},\rho_{\text{GHZ}})=1-\frac{1}{N_{\text{test}}}\sum_{x=1}^{N_{\text{test}}}F_{x}(\rho_{x}^{\text{out}},\rho_{\text{GHZ}})\), where \(N_{\text{test}}=200\) is the number of states in the testing data set.

Figure 5: Cross-test results for two networks, (4,2,1,2,4) and (4,2,2,1,2,4), associated with the (1) and (2,1) brainbox subnetworks. Three noise channels were implemented with noise intensities \(p_{test}\): the bit-flip channel (full lines), the depolarizing channel (dashed lines) and the erasure channel (dotted lines). The (1)-QAE shows more sensitivity to noise in the test states: for the same training probability \(p_{train}\), the reconstruction error fluctuates and larger errors occur on unfamiliar noise channels. In contrast, the map optimized by the (2,1)-QAE treats all noise channels and intensities equally. The outputs of the BB-QAE lose their dependency on the noise it was trained with. In addition, the reconstruction error over the weak-noise regime is lower compared to the (1)-QAE.

For both network morphologies, training with weak noise yields almost perfect generalization to all three noise channels over a large range of probabilities. In Figure 5(a,b), the reconstruction error remains negligible. We repeat the same cross-testing procedure at the tolerance threshold of the (1)-QAE. In Figure 5(a), this network recovers from the bit-flip channel with a reconstruction error close to \(0.001\). In contrast, states affected by the erasure and depolarizing channels cannot be mapped onto the ideal GHZ state with high fidelity (higher than \(99.9\%\)). This is a sign of overfitting, since the discrete states in the former case are already represented in the training data set, whereas the two remaining noise channels add states that are new to the network. In this respect, the noise tolerance measure in Fig. 2 is misleading to the extent that the last optimized map works solely on the training states.
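The depolarizing channel and the reconstruction error used in these cross-tests can be sketched as follows (numpy, illustrative only): the erasure channel and the trained map \(\mathcal{D}(\mathcal{E}(\cdot))\) are not implemented here, and the error is evaluated directly on the raw noisy inputs rather than on denoised outputs.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)

def embed(op, i, m):
    """Embed a single-qubit operator on qubit i of an m-qubit register."""
    out = np.array([[1.0 + 0j]])
    for k in range(m):
        out = np.kron(out, op if k == i else I2)
    return out

def depolarizing(rho, p, m):
    """Single-qubit depolarizing channel applied to each of the m qubits."""
    for i in range(m):
        Xi, Yi, Zi = embed(X, i, m), embed(Y, i, m), embed(Z, i, m)
        rho = (1 - 3 * p / 4) * rho + (p / 4) * (Xi @ rho @ Xi + Yi @ rho @ Yi + Zi @ rho @ Zi)
    return rho

def reconstruction_error(states, rho_target):
    """R = 1 - (1/N_test) * sum of fidelities with the target state."""
    fids = [np.real(np.trace(rho_target @ rho)) for rho in states]
    return 1.0 - float(np.mean(fids))

# Reconstruction error of raw (un-denoised) noisy GHZ inputs at several p_test
m = 4
psi = np.zeros(2 ** m, dtype=complex)
psi[0] = psi[-1] = 2 ** -0.5
rho_ghz = np.outer(psi, psi.conj())
noisy = [depolarizing(rho_ghz.copy(), p, m) for p in (0.05, 0.2, 0.4)]
print(reconstruction_error(noisy, rho_ghz))
```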
Training the (2,1)-QAE with \(p_{\text{train}}=0.3\), i.e. below its tolerance threshold, enables the full recovery of erroneous states irrespective of the noise channel tested, at all \(p_{\text{test}}\). This is possible due to the fact that the extended network has access to the dominating fraction of ideal GHZ states, which brings advantages in the cross-tests as well. One can think of the BB structure as a magnifying glass that makes it possible to distinguish targets from noise even when they are close to one another, by creating a better encoding of the inputs in its last layer.

### Rényi entropy flow

A key property to measure in engineered quantum systems is entanglement: in contrast to their classical counterparts, quantum algorithms can generate large amounts of entanglement between parts of the system [48; 49]. Entanglement during the learning phase in a QAE changes internally across layers. It delocalizes information in the network and steers the training towards the optimal condition of having a separable output. In order to observe its contribution to the training, some measures of entanglement have been tested, such as entanglement witnesses [50] and the von Neumann entropy [51]. As for any many-body quantum system, measuring the entropy of different partitions provides a way to probe the entanglement structure. Here, we evaluate the second-order Rényi entropy since it can capture long-range entanglement [52; 53; 54] as well as dissipation mechanisms [55; 56]. The Rényi entropy can serve as a measure for probing and characterizing brainbox bottlenecks. A slow entropy growth in a layer or in a part of the network can be used to identify localization in a subset of the network [57]. For a bipartite system \(\mathcal{S}\) with subsystems A and B and total density matrix \(\rho\), the second-order Rényi entropy is \(S^{(2)}(\rho)=-\log{(\text{Tr}\{\rho^{2}\})}\). When equal to zero, it indicates that \(\mathcal{S}\) is pure and independent of any environment. Typically, the entropy of the whole BB-QAE is zero at all iterations because the system is isolated from the environment and therefore in a pure state. Moreover, the second-order Rényi entropy can be evaluated for any subsystem of \(\mathcal{S}\), e.g. A, based on the associated partial density matrix \(\rho_{A}=\text{Tr}_{B}(\rho)\): \(S_{A}^{(2)}(\rho_{A})=-\log{(\text{Tr}_{A}\{(\rho_{A})^{2}\})}\). Consequently, at each training step, in a BB-QAE with \(L\) layers, the entropy of layer \(l\) reflects the presence of entanglement between layer \(l\) and the remaining \(L-1\) layers of the network. The second-order Rényi entropy of layer \(l\) is defined as
\[S_{l}^{(2)}=-\log{(\underset{l}{\text{Tr}}\{(\rho_{l})^{2}\})}, \tag{6}\]
with the partial density matrix of layer \(l\) being \(\rho_{l}=\text{Tr}_{k\neq l}\{\rho\}\) for \(k=1,\cdots,L\), where \(\rho\) is the state of the whole BB-QAE. In particular, at each iteration, the entropy of layer \(l\) can be evaluated using Eq. (6) after applying the respective unitary \(\mathcal{U}^{l}\). During the training, we compare the evolution of the layer-wise entropy in a (1)-QAE for both weak (\(p=0.1\)) and strong (\(p=0.4\)) noise in the input GHZ states (see Fig. 10 in Appendix B). During the learning phase, entropy is redistributed within the network. In the first steps, it undergoes steep growth, especially in the last layer.
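A minimal numpy sketch of Eq. (6) for a bipartite split is given below; the layer state \(\rho_{l}\) is obtained here by tracing out the complementary qubits of a small example state, and all dimensions are illustrative rather than taken from the trained networks.

```python
import numpy as np

def partial_trace_keep_last(rho, d_rest, d_keep):
    """Trace out the first subsystem (dimension d_rest), keep the last (d_keep)."""
    rho = rho.reshape(d_rest, d_keep, d_rest, d_keep)
    return np.einsum('ijil->jl', rho)

def renyi2(rho):
    """Second-order Rényi entropy S^(2) = -log Tr{rho^2}, as in Eq. (6)."""
    return float(-np.log(np.real(np.trace(rho @ rho))))

# Example: one "layer" (the last qubit) of a 3-qubit GHZ state
psi = np.zeros(8, dtype=complex)
psi[0] = psi[-1] = 2 ** -0.5
rho = np.outer(psi, psi.conj())
rho_layer = partial_trace_keep_last(rho, d_rest=4, d_keep=2)
print(renyi2(rho))        # ~0: the full state is pure
print(renyi2(rho_layer))  # log(2): the layer is maximally entangled with the rest
```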
In the subsequent iterations, entanglement vanishes exponentially in the decoder's layers, while it is only slightly suppressed in the encoder, resulting in an entropy inversion. The entropy after optimization is compared for (1)- and (2)-QAEs below and above the tolerance threshold, at \(p=0.1\) and \(p=0.45\) respectively (Fig. 6). In a BB-QAE with a bit-flipped GHZ state on the initial layer, successful denoising not only raises the fidelity of the output states, but also improves their separability. Therefore, training inverts the entropy in the network and shifts noise from the decoder to the encoder; the bottleneck seals it away from the output layer. In contrast, failure to denoise the inputs can take two forms. In Fig. 6(a), instead of concentrating noise in the encoder, the training yields high entanglement in the last two layers, while the encoder remains almost independent. As in Fig. 6(b), the inversion of entropy can be favored by using larger BB structures. In this case, the training improves the noise concentration, but the bottleneck seal seems too porous to lock noise out of the decoder, resulting in poor denoising.

## IV Conclusion

We have presented an in-depth study of various brainbox structures for the bottleneck in a quantum autoencoder used to denoise entangled quantum states. Training a QAE with a single-qubit bottleneck was studied in Ref. [38]. This bottleneck comes with only limited tolerance against bit-flip, depolarizing, and random unitary noise channels. Scaling the input size from 4 to 6 qubits makes the training greedier in data and rapidly deteriorates the denoising performance. We identified two mechanisms behind the limitation of the noise tolerance. (1) The finite size of the training data set causes statistical deviations from the ideal noisy state distribution expected from the bit-flip channel; this imposes an upper bound on the maximum tolerance the BB-QAE can achieve, and this upper bound depends on each training data set. (2) The study of the Rényi entropy shows that the single-qubit bottleneck is unable to seal noise away from the output state, and therefore to carry out its denoising task. We compared the simple QAE with multi-qubit brainbox bottlenecks, most of which brought a significant elevation of the tolerance. When qubits are added to the input and output layers, the relative improvements are maintained: if a brainbox bottleneck can endure stronger noise than another brainbox, adding more qubits to the input state maintains the superiority of the former. Some bottlenecks show similar tolerance thresholds against noise. This raises an important question: what other features can make one brainbox more suitable than another? To address this question, we compare the training impedance between brainboxes. For this purpose, we evaluate the training impedance \(Z(0.99)\), which indicates what minimum percentage of the training process is required to achieve a fidelity above 99% in the output. The results are summarized in Fig. 4 and show that the training impedance depends not only on the bottleneck, but also on the training noise probability \(p\). Below bit-flip probabilities of \(p=0.3\), linear brainboxes such as (1,1) are favorable for more efficient training. In contrast, between \(p=0.3\) and \(p=0.4\), non-linear brainboxes such as (2) or (2,1) are the most economical to train. We evaluate the Rényi entropy of the network layers at each optimization step to show how nonlocal entanglement between layers evolves and impacts the fidelity of the outputs.
Figure 6: Layerwise Rényi entropy evaluated after applying the last step of the denoising map. Darker colors indicate larger entropy of noisy mixed states. We study a single-qubit brainbox in (a) and a double-qubit-layer brainbox in (b). At the low noise strength of \(p=0.1\), the entropy decreases toward the output layer, so that one can expect noise to be localized in the encoder and blocked away from the brainbox. However, in the case of \(p=0.45\), which is much stronger than the networks' tolerance threshold, input noise from the input layer leaks out at the bottleneck and accumulates in the decoder and output layer.

Results show that in networks below their tolerance threshold, entropy becomes localized in the encoder of the BB-QAE, so that much less noise passes through the bottleneck to the decoder. This usually leads to output states that have high fidelity with the target and that are separable from the network. Some examples were given in Fig. 6: in successful training, noise is blocked off from the bottleneck, while in unsuccessful training noise penetrates through the bottleneck. The absence of separability of the output indicates the presence of layer-to-layer stray coupling between the hidden and output layers, which ultimately prevents the fidelity from rising higher. In connection with NISQ devices, QAEs are resilient to input-layer noise and therefore they provide the potential to generate ideal entanglement on noisy gates and qubits. A QAE with a complex bottleneck and more qubits and parameters seems, in general, advantageous for denoising, because such a complex structure provides the possibility to separate the encoder and the decoder. However, a detailed analysis shows that less resource-demanding brainboxes can be found with the same performance as a complex one. Testing the network with the depolarizing and erasure channels shows that some bottlenecks keep their superiority over the whole trainable range. We expect that these differences will remain when selecting different quantum target states. One of the main obstacles to implementing QAEs on scaled-up input states is the required high connectivity of the network, which is inaccessible on current processors. An alternative is to train a map with missing connections [38].

## Acknowledgement

The authors thank Maria Schuld and Pia Doring for fruitful discussions. MA acknowledges that a part of this manuscript was motivated during support from the Intelligence Advanced Research Projects Activity (IARPA) under contract W911NF-16-0114.
2301.10640
Adaptive enrichment trial designs using joint modeling of longitudinal and time-to-event data
Adaptive enrichment allows for pre-defined patient subgroups of interest to be investigated throughout the course of a clinical trial. Many trials which measure a long-term time-to-event endpoint often also routinely collect repeated measures on biomarkers which may be predictive of the primary endpoint. Although these data may not be leveraged directly to support subgroup selection decisions and early stopping decisions, we aim to make greater use of these data to increase efficiency and improve interim decision making. In this work, we present a joint model for longitudinal and time-to-event data and two methods for creating standardised statistics based on this joint model. We can use the estimates to define enrichment rules and efficacy and futility early stopping rules for a flexible efficient clinical trial with possible enrichment. Under this framework, we show asymptotically that the familywise error rate is protected in the strong sense. To assess the results, we consider a trial for the treatment of metastatic breast cancer where repeated ctDNA measurements are available and the subgroup criteria is defined by patients' ER and HER2 status. Using simulation, we show that incorporating biomarker information leads to accurate subgroup identification and increases in power.
Abigail J. Burdon, Richard D. Baird, Thomas Jaki
2023-01-25T15:22:15Z
http://arxiv.org/abs/2301.10640v2
# Adaptive enrichment trial designs using joint modeling of longitudinal and time-to-event data ###### Abstract Adaptive enrichment allows for pre-defined patient subgroups of interest to be investigated throughout the course of a clinical trial. Many trials which measure a long-term time-to-event endpoint often also routinely collect repeated measures on biomarkers which may be predictive of the primary endpoint. Although these data may not be leveraged directly to support subgroup selection decisions and early stopping decisions, we aim to make greater use of these data to increase efficiency and improve interim decision making. In this work, we present a joint model for longitudinal and time-to-event data and two methods for creating standardised statistics based on this joint model. We can use the estimates to define enrichment rules and efficacy and futility early stopping rules for a flexible efficient clinical trial with possible enrichment. Under this framework, we show asymptotically that the familywise error rate is protected in the strong sense. To assess the results, we consider a trial for the treatment of metastatic breast cancer where repeated ctDNA measurements are available and the subgroup criteria is defined by patients' ER and HER2 status. Using simulation, we show that incorporating biomarker information leads to accurate subgroup identification and increases in power. Efficient designs, enrichment, joint modeling, longitudinal data, time-to-event data. ## 1 Introduction Adaptive enrichment clinical trials enable the efficient testing of an experimental intervention on specific patient subgroups of interest (see Burnett et al. (2020) and Pallmann et al. (2018)). Suppose that a particular subgroup of patients is identified as responding particularly well to treatment, then we can focus resources and inferences by recruiting additional patients from this benefitting subgroup. If, in this subgroup, patients respond overwhelmingly well to treatment, then there is potential to stop the trial early for efficacy demonstrating that the experimental treatment is superior to control in this subgroup. Patients who do not appear to benefit are removed from the experimental treatment with potentially harmful side effects. Further, we allow the possibility that all subgroups positively respond to treatment and the full population is selected, or the trial stops early for efficacy declaring positive trial outcomes in all subgroup populations. Finally, it may be that the treatment is futile for all patients and upon observing this scenario, we would terminate the trial at the first interim analysis as discussed by Burnett and Jennison (2021). We shall develop methods which can be applied for any trial which uses a time-to-event (TTE) outcome as the primary endpoint. In recent years, there has been an uptake in enrichment trials which consider TTE data, but this is still low compared to continuous endpoints, as reported by Ondra et al. (2016). The "threshold selection" rule is such that a subgroup is selected if its standardised test statistic is greater than some threshold boundary. Similarly to Magnusson and Turnbull (2013), we combine this with an error spending boundary to clearly predefine the rules of subgroup selection and stopping decisions before the trial commences. It is common that in trials measuring a long-term TTE endpoint, such as overall survival (OS), investigators also collect repeated measures on biomarkers which may be predictive of the primary endpoint. 
Our aim is to leverage this additional information to improve interim decision making such as subgroup selection and early stopping rules. We present a joint model for longitudinal and TTE data and base an enrichment trial design on the treatment effect parameter in the joint model. We then show, using simulation studies, that this results in higher power (using the same number of patients) as the equivalent trial which ignores the biomarker observations. Our simulation results are based on data from a study which measured OS and plasma circulating tumour DNA (ctDNA) levels (see Dawson et al. (2013)). To define subgroups, we hypothesise that patients who are HER2 negative will benefit more than patients who are HER2 positive from the experimental treatment. ## 2 Adaptive enrichment schemes for clinical trials with subgroup selection ### Set-up and notation Assume there are \(J\) mutually disjoint subgroups and \(K\) analyses throughout the trial. Let \(S_{j}\) denote the \(j^{th}\) subgroup, \(F=\cup_{j=1}^{J}S_{j}\) the full population and \(\emptyset\) the empty set which will be used when no subgroup has been selected. Throughout this report we shall consider the case where \(J=2\) and \(K=2\) for simplicity in notation and exposition. Extensions to more interim analysis and subgroups can be made following the same logic. We consider a trial design based on a statistical model where the treatment effect in subgroup \(j\) is \(\theta_{j}\). Let the prevalence of \(S_{1}\) in \(F\) be given by \(\lambda\), so that \(\theta_{F}=\lambda\theta_{1}+(1-\lambda)\theta_{2}\). We shall test the hypotheses \(H_{0,j}:\theta_{j}\leq 0\) against \(H_{A,j}:\theta_{j}>0\) for \(j=1,2,F\). At analysis \(k\), let \(\hat{\theta}_{j}^{(k)}\) be the treatment effect estimate and let \(\mathcal{I}_{j}^{(k)}=1/Var(\hat{\theta}_{j}^{(k)})\) be the information level in subgroup \(j=1,2\). In the full population, we have \(\hat{\theta}_{F}^{(k)}=\lambda\hat{\theta}_{1}^{(k)}+(1-\lambda)\hat{\theta}_ {2}^{(k)}\) and \(\mathcal{I}_{F}^{(k)}=\left(\lambda^{2}/\mathcal{I}_{1}^{(k)}+(1-\lambda)^{2} /\mathcal{I}_{2}^{(k)}\right)^{-1}\). The standardised \(Z-\)statistic at analysis \(k\) for subgroup \(j\) is given by \(Z_{j}^{(k)}=\hat{\theta}^{(k)}(\mathcal{I}_{j}^{(k)})^{1/2}\). We shall consider different analysis methods for calculating \(Z-\)statistics, including a joint modeling approach, and these methods all result in the sequence \(Z_{j}^{(1)},Z_{j}^{(2)}\) having the "canonical joint distribution" (CJD) given in Section 3.1 of Jennison and Turnbull (2000). The distribution of the standardised statistics across analyses is given by \[\begin{bmatrix}Z_{j}^{(1)}\\ Z_{j}^{(2)}\end{bmatrix}\sim N\left(\begin{bmatrix}\theta_{j}^{(1)}\sqrt{ \mathcal{I}_{j}^{(1)}}\\ \theta_{j}^{(2)}\sqrt{\mathcal{I}_{j}^{(2)}}\end{bmatrix},\begin{bmatrix}1& \sqrt{\mathcal{I}_{j}^{(1)}/\mathcal{I}_{j}^{(2)}}\\ \sqrt{\mathcal{I}_{j}^{(1)}/\mathcal{I}_{j}^{(2)}}&1\end{bmatrix}\right)\] for \[j=1,2,F \tag{1}\] The testing procedure for this adaptive enrichment trial is described in Figure 1. At analysis \(k\), let \((a_{k},b_{k})\) be an interval that splits the real line into three sections. We stop for futility if \(Z_{j}^{(k)}\) is below \(a_{k}\), stop for efficacy if \(Z_{j}^{(k)}\) is above \(b_{k}\) and otherwise continue to analysis \(k+1\). The constants \(a_{1},a_{2},b_{1}\) and \(b_{2}\) are assumed known for now and we shall discuss the calculation of these values in Section 2.3. 
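Because the standardised statistics follow the canonical joint distribution in (1), design properties can be checked quickly by direct simulation; the sketch below estimates stage-wise efficacy-stopping probabilities for a single subgroup by Monte Carlo. The effect size, information levels and boundary values are illustrative placeholders, not design values derived in this paper.

```python
import numpy as np

def cjd_mean_cov(theta, info1, info2):
    """Mean vector and covariance of (Z^(1), Z^(2)) under the canonical joint
    distribution in Eq. (1) for a single subgroup."""
    mean = np.array([theta * np.sqrt(info1), theta * np.sqrt(info2)])
    corr = np.sqrt(info1 / info2)
    cov = np.array([[1.0, corr], [corr, 1.0]])
    return mean, cov

# Monte-Carlo check of stopping probabilities for illustrative boundaries
rng = np.random.default_rng(1)
mean, cov = cjd_mean_cov(theta=0.25, info1=9.0, info2=25.0)
z = rng.multivariate_normal(mean, cov, size=200_000)
a1, b1, b2 = 0.0, 2.5, 2.0            # placeholder boundary values
stop_efficacy_1 = np.mean(z[:, 0] > b1)                                   # stop at analysis 1
reject_at_2 = np.mean((z[:, 0] > a1) & (z[:, 0] <= b1) & (z[:, 1] > b2))  # continue, reject at 2
print(stop_efficacy_1, stop_efficacy_1 + reject_at_2)
```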
### The threshold selection rule We shall use the threshold selection rule to decide which subgroup, if any, to enrich. There are a collection of rules which can be used for selection, for example Chiu et al. (2018) use the maximum test statistic and Burnett and Jennison (2021) present a Bayes optimal rule. The definition of the threshold selection rule is as follows; for some constant \(\zeta\), select all groups \(j\in\{1,2,F\}\) such that \(Z_{j}^{(1)}>\zeta\) (Figure 1). This ensures that only subgroups which have a large enough treatment effect are followed to the second analysis. The threshold selection rule leads to an efficient enrichment trial design because we can find analytical forms for the Type 1 and Type 2 errors and are therefore able to maximise power. A practical advantage is the simplicity of the rule in application. The novel aspect of this work will be to apply this rule in the joint modeling setting. To calculate \(\zeta\), we impose some restrictions which can be customised at the design stage of the trial. First set the configuration of parameters under the global null as \(\mathbf{\Theta}_{G}:\{\theta_{1}=\theta_{2}=\theta_{F}=0\}\) and the alternative as \(\mathbf{\Theta}_{A}:\{\theta_{1}=\delta,\theta_{2}=0,\theta_{F}=\lambda\delta\}\). This represents that we believe there is an important effect of treatment in one subgroup \(S_{1}\). We require that, under \(H_{A}\), it is equally likely to select the full population or no subgroup, and that the subgroup which truly responds well to treatment, is selected a high proportion of times. Let \(\mathbb{P}(W=w;\mathbf{\Theta}_{A})\) be the probability of selecting subgroup \(w\in\{1,2,F,\emptyset\}\) under \(H_{A}\), and this value can be calculated by considering the densities of \(Z_{1}^{(1)}\) and \(Z_{2}^{(1)}\) given by Equation (1). Hence, for some \(\psi\), we solve the simultaneous equations \(\mathbb{P}\left(W=S_{1};\mathbf{\Theta}_{A}\right)=\psi\) and \(\mathbb{P}\left(W=F;\mathbf{\Theta}_{A}\right)=\mathbb{P}\left(W=\emptyset;\mathbf{ \Theta}_{A}\right)\). Under this setup, with \(\psi=0.6\) and \(\delta=-0.5\), we therefore need \(\zeta=0.754\) and \(\mathcal{I}_{1}^{(1)}=9.08\). In order to calculate error rates, we need the joint distribution of the selected test statistic \(Z_{W}^{(1)}\) and the population index \(W\). For a general configuration of parameters \(\mathbf{\Theta}\), the joint probability density function is given by \[f_{Z_{W}^{(1)},W}\left(z_{w}^{(1)},w;\mathbf{\Theta}\right)=\mathbb{P}\left(W= w;\mathbf{\Theta}\right)f_{Z_{W}^{(1)}|W}\left(z_{w}^{(1)}|W=w;\mathbf{\Theta}\right) \tag{2}\] where \(f_{Z_{W}^{(1)}|W}(z_{w}^{(1)}|W=w;\mathbf{\Theta})\) describes the conditional distribution of the test statistic \(Z_{W}^{(1)}\) given that subgroup \(w\) has been selected, as in Chiu et al. (2018). We now derive explicit forms for the joint densities \(f_{Z_{W}^{(1)},W}(z_{w}^{(1)},w;\mathbf{\Theta})\) for \(w=1,2\) when the threshold rule is used for subgroup selection and in the supplementary materials, we derive \(f_{Z_{F}^{(1)},F}(z_{F}^{(1)},F;\mathbf{\Theta})\). At the first interim analysis, the test statistics are such that \(Z_{j}^{(1)}\sim N(\theta_{j}(I_{j}^{(1)})^{1/2},1)\) for \(j=1,2\) and \(Z_{1}^{(1)}\) and \(Z_{2}^{(1)}\) are independent. The conditional distribution \(f_{Z_{W}^{(1)}|W}(z_{w}^{(1)}|W=w;\mathbf{\Theta})\) is given by a truncated normal distribution bounded below by \(\zeta\). 
Hence using Equation (2), we have

\[f_{Z_{1}^{(1)},1}(z_{1}^{(1)},1;\mathbf{\Theta})=\Phi\left(\zeta-\theta_{2}\sqrt{\mathcal{I}_{2}^{(1)}}\right)\phi\left(z_{1}^{(1)}-\theta_{1}\sqrt{\mathcal{I}_{1}^{(1)}}\right)\]
\[f_{Z_{2}^{(1)},2}(z_{2}^{(1)},2;\mathbf{\Theta})=\Phi\left(\zeta-\theta_{1}\sqrt{\mathcal{I}_{1}^{(1)}}\right)\phi\left(z_{2}^{(1)}-\theta_{2}\sqrt{\mathcal{I}_{2}^{(1)}}\right)\]
\[f_{Z_{F}^{(1)},F}(z_{F}^{(1)},F;\mathbf{\Theta})=\frac{\sqrt{\mathcal{I}_{1}^{(1)}\mathcal{I}_{2}^{(1)}}}{\lambda(1-\lambda)\mathcal{I}_{F}^{(1)}}\int_{-\infty}^{\infty}\phi\left(\frac{\sqrt{\mathcal{I}_{1}^{(1)}}(u-\lambda\sqrt{\mathcal{I}_{F}^{(1)}})}{\lambda\sqrt{\mathcal{I}_{F}^{(1)}}}\right)\phi\left(\frac{\sqrt{\mathcal{I}_{2}^{(1)}}(z_{F}^{(1)}-u-(1-\lambda)\sqrt{\mathcal{I}_{F}^{(1)}})}{(1-\lambda)\sqrt{\mathcal{I}_{F}^{(1)}}}\right)du\]

where \(\phi(\cdot)\) and \(\Phi(\cdot)\) denote the probability density and cumulative distribution functions respectively of a standard normal random variable.

### Calculation of Type 1 error and power

We now consider the possible pathways of the enrichment trial. Then, given the definition of the \(Z\)-statistics, the threshold selection rule and the joint density function \(f_{Z_{W}^{(1)},W}(z_{w}^{(1)},w;\mathbf{\Theta})\), we are equipped to determine error rates for the study.

Figure 1: Flow chart for enrichment trial design including number of events observed at each stage.

The family wise error rate (FWER), denoted by \(\alpha\), is defined as the probability of rejecting one or more null hypotheses \(H_{j}\), and power is denoted by \(1-\beta\). We shall apply this method in Section 3.2 in order to create an enrichment trial using the joint model for longitudinal and TTE data. Let \(H_{G}\) be the global null hypothesis \(\theta_{1}=\theta_{2}=\theta_{F}=0\). There are many pathways which lead to rejecting \(H_{G}\). Examples include: select \(F\) and reject \(H_{0,F}\) immediately, or select \(S_{1}\) and then reject \(H_{0,1}\) at the second analysis. Considering all options, we have

\[\alpha=\sum_{w\in S}\left\{\int_{b_{1}}^{\infty}f_{Z^{(1)}_{W},W}\left(z^{(1)}_{w},w;\mathbf{\Theta}_{G}\right)dz^{(1)}_{w}+\int_{a_{1}}^{b_{1}}\int_{b_{2}}^{\infty}f_{W,2|1}\left(z^{(2)}_{w}|z^{(1)}_{w};\mathbf{\Theta}_{G}\right)dz^{(2)}_{w}dz^{(1)}_{w}\right\}. \tag{3}\]

Here, we have specified that we will only test the hypothesis corresponding to the selected subgroup, since it has the highest chance of being significant. For alternative configurations testing all hypotheses, fixed sequence testing (Westfall and Krishen [2001]) or other alpha propagation methods can be applied. As in Chiu et al. [2018], we define power as the conditional probability of rejecting \(H_{0,1}\) given that subgroup \(S_{1}\) is selected. Here, \(S_{1}\) can be arbitrarily interchanged for \(S_{2}\). This reflects the belief that a "successful" trial is one where the benefitting subgroup is selected and also reports a positive trial outcome. Type 2 error rates are calculated as

\[\beta=\int_{-\infty}^{a_{1}}f_{Z^{(1)}_{W},W}\left(z^{(1)}_{w},w;\mathbf{\Theta}_{A}\right)dz^{(1)}_{w}+\int_{a_{1}}^{b_{1}}\int_{-\infty}^{a_{2}}f_{W,2|1}\left(z^{(2)}_{w}|z^{(1)}_{w};\mathbf{\Theta}_{A}\right)dz^{(2)}_{w}dz^{(1)}_{w}. \tag{4}\]

It is now clear that the boundary points \(a_{1},a_{2},b_{1}\) and \(b_{2}\) can be calculated to satisfy prespecified requirements of FWER \(\alpha\) and power \(1-\beta\) under \(\mathbf{\Theta}_{A}\).
Further, to ensure that we have four equalities for the four boundary points, we make additional requirements that \(\alpha^{(k)}\) is the Type 1 error "spent" and \(\beta^{(k)}\) is the Type 2 error spent at analysis \(k\) where \(\alpha^{(1)}+\alpha^{(2)}=\alpha\) and \(\beta^{(1)}+\beta^{(2)}=\beta\). Then solve \[\alpha^{(1)} =\sum_{w\in S}\int_{b_{1}}^{\infty}f_{Z^{(1)}_{W},W}\left(z^{(1)} _{w},w;\mathbf{\Theta}_{G}\right)dz^{(1)}_{w}, \beta^{(1)} =\int_{-\infty}^{a_{1}}f_{Z^{(1)}_{W},W}\left(z^{(1)}_{w},w; \mathbf{\Theta}_{A}\right)dz^{(1)}_{w}\] \[\alpha^{(2)} =\sum_{w\in S}\int_{a_{1}}^{b_{1}}\int_{b_{2}}^{\infty}f_{W,2|1} \left(z^{(2)}_{w}|z^{(1)}_{w};\mathbf{\Theta}_{G}\right)dz^{(2)}_{w}dz^{(1)}_ {w}, \beta^{(2)} =\int_{a_{1}}^{b_{1}}\int_{-\infty}^{a_{2}}f_{W,2|1}\left(z^{(2)} _{w}|z^{(1)}_{w};\mathbf{\Theta}_{A}\right)dz^{(2)}_{w}dz^{(1)}_{w}.\] The decomposition of the error rates also ensures that the boundary points \(a_{1}\) and \(b_{1}\) can be calculated at the first analysis before observing the information levels at the second analysis. Hence, there may be the opportunity to stop the trial early without needing to calculate the information levels at the second analysis. This is particularly helpful in trials which use TTE endpoints because the information levels are estimated using the data. There are many options for the break-down of the error rates. For the models considered, we shall use an error spending design by Gordon Lan and DeMets [1983]. In the group sequential setting (without subgroup selection), the error spending test requires specifying the maximum information \(\mathcal{I}_{max}\) and then error is spent according to the proportion of information \(\mathcal{I}^{(k)}/\mathcal{I}_{max}\) observed at analysis \(k\). For the enrichment trial, we propose a similar structure considering \(\mathcal{I}_{max}\) to be the maximum information in the full population. Specifically, we shall use the functions \(f(t)=\min\{\alpha t^{2},\alpha\}\) and \(g(t)=\min\{\beta t^{2},\beta\}\) to determine the amount of error to spend. Then we set \(\alpha^{(1)}=f\left(\mathcal{I}^{(1)}_{F}/\mathcal{I}_{max}\right)\), \(\alpha^{(2)}=f\left(\mathcal{I}^{(2)}_{F}/\mathcal{I}_{max}\right)-f\left( \mathcal{I}^{(1)}_{F}/\mathcal{I}_{max}\right)\), \(\beta^{(1)}=g\left(\mathcal{I}^{(1)}_{F}/\mathcal{I}_{max}\right)\) and \(\beta^{(2)}=g\left(\mathcal{I}^{(2)}_{F}/\mathcal{I}_{max}\right)-g\left( \mathcal{I}^{(1)}_{F}/\mathcal{I}_{max}\right)\). We shall discuss the calculation of \(\mathcal{I}_{max}\) in the TTE (or joint modeling) setting in Section 2.5. By construction, the FWER is protected in the weak sense. That is, under \(H_{G}:\theta_{1}=\theta_{2}=\theta_{F}=0\) we have FWER \(\alpha\) exactly by Equations (3) and (4). We can show that asymptotically we also have strong control of the FWER, which is the probability of rejecting one or more true null hypotheses. We prove this by showing that FWER is maximised under the global null. **Theorem 1**.: _For global null hypothesis \(H_{G}\) and any \(\Theta\), we have_ \[\limsup_{n\rightarrow\infty}\mathbb{P}(\text{Reject at least one true }H_{j}|\mathbf{\Theta})\leq\limsup_{n\rightarrow\infty}\mathbb{P}(\text{reject at least one }H_{j}|H_{G}).\] Proof.: See the supplementary materials. 
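The quadratic spending functions above and the resulting error split are easy to compute once the information fractions are observed. A small R sketch (the values of \(\alpha\), \(\beta\) and the information levels are illustrative only):

```r
alpha <- 0.025; beta <- 0.1
f <- function(t) pmin(alpha * t^2, alpha)    # Type 1 error spending function
g <- function(t) pmin(beta  * t^2, beta)     # Type 2 error spending function

# Error spent at analyses 1 and 2, given full-population information I_F^(1), I_F^(2)
# and the design maximum information I_max.
error_spend <- function(I_F, I_max) {
  t <- I_F / I_max
  c(alpha1 = f(t[1]), alpha2 = f(t[2]) - f(t[1]),
    beta1  = g(t[1]), beta2  = g(t[2]) - g(t[1]))
}

error_spend(I_F = c(15, 42), I_max = 42)
```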
### Trials with unpredictable information increments: events based analyses

To complete the calculation of the boundary points \(a_{2}\) and \(b_{2}\) in Equations (3) and (4), it remains to find the information level at analysis \(2\) for the subgroups that have ceased to be observed. That is, suppose that \(w\in\{1,2,F\}\) is the subgroup that has been selected and the trial continues to analysis \(2\); then \(\mathcal{I}_{w}^{(2)}\) is observed. However, we also need to know \(\mathcal{I}_{j}^{(2)}\) for all \(j\neq w\), which is the information that would have been observed if subgroup \(j\) were selected. Many enrichment trial designs focus on the simple example where the outcome measure is normally distributed with known variance. Hence, if the number of patients to be recruited is prespecified, then \(\mathcal{I}_{j}^{(2)}\) can be calculated in advance of the trial. However, in trials where the primary endpoint is a TTE variable, information is estimated using the data. We find that we can accurately forward predict the information levels at future analyses when we know the number of observed events. Hence, instead of prespecifying the number of patients to recruit, we shall prespecify the number of observed events. For subgroup \(j=1,2\), let \(d_{j}^{(k)}\) be the number of events observed in subgroup \(j\) before analysis \(k\). We specify that if no early stopping occurs, then the total number of observed events in the selected subgroup is the same regardless of which subgroup has been selected, so that \(d_{1}^{(2)}=d_{2}^{(2)}=d_{F}^{(2)}=d^{(2)}\). Figure 1 identifies when the analyses are performed. Note that these values are set as design options and so will be known before commencement of the trial. We shall discuss how to choose these values in Section 2.5. Further, we relate the number of events and information so that we can predict the information level at the second analysis for the unobserved subgroups. Freedman (1982) proves that, in the context of survival analysis, the variance of the log-rank statistic under \(H_{G}\) is such that \(\mathcal{I}_{j}^{(k)}\approx d_{j}^{(k)}/4\). For analysis methods using test statistics other than the log-rank, we shall extend this idea and assume that \(\mathcal{I}_{j}^{(k)}=d_{j}^{(k)}/m_{j}\) where \(m_{j}\) is an arbitrary constant. In the supplementary materials, we show simulation evidence for these relationships. Since each \(\mathcal{I}_{j}^{(1)}\) is observed for \(j=1,2,F\), we shall use the proportionality relationship to predict the information at the second analysis for the subgroup which is no longer observed. For \(j\neq w\), where \(w\) is the subgroup that has been selected at the first interim analysis, we can predict \(\mathcal{I}_{j}^{(2)}\) using \(\mathcal{I}_{j}^{(2)}=\mathcal{I}_{j}^{(1)}d^{(2)}/d_{j}^{(1)}\).

### Trial design -- number of events

We have so far presented the calculation of the boundary points for a trial where the number of events at the interim and final analyses are known prior to commencement. We now discuss the design of the trial, in particular, determining the constants \(m_{j}\) and information levels \(\mathcal{I}_{j}^{(1)}\) for \(j=1,2,F\) and the maximum information level \(\mathcal{I}_{max}\). These in turn mean that the required numbers of events \(d_{j}^{(1)}\) for \(j=1,2,F\) and \(d^{(2)}\) can be planned. The driving design feature is that we will plan the trial to have power \(1-\beta\) under the parameterisation \(\boldsymbol{\Theta}_{A}\).
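The events-to-information bookkeeping that this implies is short; a small R sketch with illustrative values (\(m_{1}\) here is a placeholder, in practice it comes from the simulation scheme described next):

```r
m1 <- 4            # roughly 4 for a log-rank-type analysis (Freedman, 1982); placeholder
I1_required <- 9.08

# events needed in S1 at the first interim analysis, from I_j^(k) = d_j^(k) / m_j
d1_stage1 <- ceiling(I1_required * m1)

# forward prediction of second-stage information for an unselected subgroup j
predict_I2 <- function(I1_j, d1_j, d2) I1_j * d2 / d1_j
predict_I2(I1_j = 9.08, d1_j = d1_stage1, d2 = 170)   # illustrative d^(2)
```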
We now describe a simulation scheme to determine the constants \(m_{j}\) for \(j=1,2,F\):

1. Under the parameterisation \(\boldsymbol{\Theta}_{A}\), simulate a data set of \(5000\) patients.
2. Let \(t_{j,1},\ldots,t_{j,n_{j}}\) be the event times in subgroup \(j\).
3. For each \(t_{j,s}\in\{t_{j,1},\ldots,t_{j,n_{j}}\}\): right-censor all patients at time \(t_{j,s}\) and calculate \(\mathcal{I}_{j,s}^{(1)}\) based on data up to time \(t_{j,s}\).
4. Fit a linear model, without an intercept term, to the points \((t_{j,1},\mathcal{I}_{j,1}^{(1)}),\ldots,(t_{j,n_{j}},\mathcal{I}_{j,n_{j}}^{(1)})\) and use this linear model to estimate the value of \(m_{j}\).

It is now possible to calculate the required number of events at the first interim analysis. By Section 2.2, we require \(\mathcal{I}_{1}^{(1)}=9.08\), which equates to \(d_{1}^{(1)}=9.08m_{1}\) events in subgroup \(S_{1}\). Further, we find that \(m_{2}=(1-\lambda)m_{1}/\lambda\) and \(m_{F}=m_{1}/\lambda\), which equates to \(d_{2}^{(1)}=(1-\lambda)d_{1}^{(1)}/\lambda\) and \(d_{F}^{(1)}=d_{1}^{(1)}/\lambda\). The design of the trial does not require us to plan \(d_{2}^{(1)}\) and \(d_{F}^{(1)}\), but this provides us with estimates of the number of events that will be observed at the first analysis. We can also determine the timing of the final analysis at \(K=2\). Consider the sequence of information levels given by \((\mathcal{\tilde{I}}_{j}^{(1)},\mathcal{\tilde{I}}_{j}^{(2)})=\left(d_{j}^{(1)}/m_{j},m_{F}\mathcal{I}_{max}/m_{j}\right)\) for \(j\in\{1,2,F\}\). The value of \(\mathcal{I}_{max}\) is calculated such that the boundary points satisfy \(a_{K}=b_{K}\) when the information levels \(\mathcal{\tilde{I}}_{j}^{(k)}\) replace \(\mathcal{I}_{j}^{(k)}\) in Equations (3) and (4) for \(k=1,2\) and \(j=1,2,F\). This is done using an iterative search method. Then, returning to the definition of \(\mathcal{I}_{max}\), the total number of events can be found by solving \(\mathcal{I}_{F}^{(2)}=\mathcal{I}_{max}\) for \(d^{(2)}\).

## 3 Joint modeling of longitudinal and time-to-event data

### The joint model

The joint model that we consider is an adaptation of Equation (2) of Tsiatis and Davidian (2001) (who shall hereafter be referred to as "TD" for short). There are two processes in this model which represent the survival and longitudinal parts separately, and these processes are linked using random effects. Suppose that \(Z_{ji}\) is the indicator function that patient \(i\) in subgroup \(j\in\{1,2,F\}\) receives the experimental treatment, \(X_{ji}(t)\) is the true value of the biomarker at time \(t\) and \(W_{ji}(t)\) is the observed value of the biomarker. Then the longitudinal model takes the form

\[X_{ji}(t)=b_{0ji}+b_{1ji}t+b_{2j}Z_{ji}t,\qquad W_{ji}(t)=X_{ji}(t)+\epsilon_{ji}(t) \tag{5}\]

where \(\mathbf{b}_{ji}=(b_{0ji},b_{1ji})\) is a vector of patient specific random effects, \(b_{2j}\) is a fixed parameter and \(\epsilon_{ji}(t)\) is the measurement error. We make the assumptions that if the longitudinal data for patient \(i\) in subgroup \(j\) are measured at times \(v_{ji1},\dots,v_{jim_{ji}}\), then \(\epsilon_{ji}(v_{jis})|\mathbf{b}_{ji}\sim N(0,\sigma_{j}^{2})\) for \(s=1,\dots,m_{ji}\) and \(\epsilon_{ji}(v)\) and \(\epsilon_{ji}(v^{\prime})\) are independent for \(v\neq v^{\prime}\). This model differs slightly from Equation (2) of TD because of the inclusion of the treatment effect in the longitudinal model. This reflects that longitudinal observations may be affected by treatment.
We consider a random effects model where \(\mathbf{b}_{j1},\dots,\mathbf{b}_{jn_{j}}\) are independent and identically distributed with the following distribution

\[\begin{bmatrix}b_{0ji}\\ b_{1ji}\end{bmatrix}\sim N\left(\begin{bmatrix}\mu_{1j}\\ \mu_{2j}\end{bmatrix},\begin{bmatrix}\phi_{1j}&\phi_{12j}\\ \phi_{12j}&\phi_{2j}\end{bmatrix}\right). \tag{6}\]

The model for the survival endpoint is a Cox proportional hazards model. Let \(\eta_{j}\) be the treatment effect in subgroup \(j\in\{1,2,F\}\) and let \(\gamma_{j}\) be a scalar coefficient. The baseline hazard function and hazard function for subgroup \(j\) are given by

\[h_{0j}(t) =\begin{cases}c_{j}&\text{ if }t\leq 1\\ 5c_{j}/3&\text{ if }t>1\end{cases} \tag{7}\]
\[h_{ji}(t) =h_{0j}(t)\exp\{\gamma_{j}X_{ji}(t)+\eta_{j}Z_{ji}\} \tag{8}\]

We have chosen to model the baseline hazard function as piecewise constant with a single parameter for simplicity. This is motivated by the dataset, presented in Section 5.1, where we see a sharp difference in the baseline hazard at 1 year. Note that it is the true underlying trajectory \(X_{ji}(t)\) which is included as a covariate in the proportional hazards model, whereas the measurements \(W_{ji}(t)\) with added error are observed. Here, the coefficients \(b_{2j}\) and \(\eta_{j}\) both represent treatment effects, where \(\eta_{j}\) is the direct effect of treatment acting on survival and \(b_{2j}\) is the indirect effect. Together, Equations (5)-(8) define the joint model; this is the working model from which we shall simulate data in Section 5.1.

### Conditional score

In the fixed sample setting, TD present the "conditional score" method for fitting the joint model to the data. The method adapts the general theoretical work by Stefanski and Carroll (1987), who find unbiased score functions by conditioning on certain sufficient statistics. The conditional score methodology builds upon the theory of counting processes. The survival counting process is a step function jumping from 0 to 1 at the failure time for an uncensored observation. We present multi-stage adaptations of some functions presented in TD. Let \(t_{ji}^{(k)}\) be the observed event time and let \(\delta_{ji}^{(k)}\) be the observed censoring indicator for patient \(i\) in subgroup \(j\in\{1,2,F\}\) at analysis \(k\). This censoring event includes "end of study" censoring for the total follow-up time at each analysis. For the conditional score, to be included in the at-risk set at time \(t\) the patient must have at least two longitudinal observations to fit the regression model. At analysis \(k\), we define the at-risk process \(Y_{ji}^{(k)}(t)=\mathbb{I}\{t_{ji}^{(k)}\geq t,v_{ji2}\leq t\}\), the counting process \(N_{ji}^{(k)}(t)=\mathbb{I}\{t_{ji}^{(k)}\leq t,\delta_{ji}^{(k)}=1,v_{ji2}\leq t\}\) and the increment \(dN_{ji}^{(k)}(t)=\mathbb{I}\{t\leq t_{ji}^{(k)}<t+dt,\delta_{ji}^{(k)}=1,v_{ji2}\leq t\}\) for the joint model. An object of importance is the sufficient statistic. For patient \(i\) in subgroup \(j\), let \(v_{ji}(u)\) be the set of all time points for measurements of the biomarker, up to and including time \(u\). Let \(\hat{X}_{ji}(u)\) be the ordinary least squares estimate of \(X_{ji}(u)\) based on the set of measurements taken at times \(v_{ji}(u)\). That is, calculate \(\hat{b}_{0ji}(u),\hat{b}_{1ji}(u)\) and \(\hat{b}_{j2}(u)\) based on measurements taken at times \(v_{ji}(u)\), then \(\hat{X}_{ji}(u)=\hat{b}_{0ji}(u)+\hat{b}_{1ji}(u)u+\hat{b}_{j2}(u)Z_{ji}u\).
As we pass time \(v_{jim}\), a new observation \(W_{jim}\) is generated and \(\hat{X}_{ji}(u)\) is updated. This may seem strange since, at an earlier time point \(s<u\), we use only the measurements at times \(v_{ji}(s)\) in the calculation of \(\hat{X}_{ji}(s)\), even though \(v_{ji}(u)\) may be available. However, this is necessary for the martingale property to hold for the distributional results of the parameter estimates. Suppose that \(\sigma_{j}^{2}\psi_{ji}(u)\) is the variance of \(\hat{X}_{ji}(u)\) at time \(u\). TD define the sufficient statistic to be the function

\[S_{ji}^{(k)}(t,\gamma_{j},\sigma_{j}^{2})=\hat{b}_{0ji}(t)+\hat{b}_{1ji}(t)t+Z_{ji}\hat{b}_{j2}(t)t+\gamma_{j}\sigma_{j}^{2}\psi_{ji}(t)dN_{ji}^{(k)}(t)\]

which is defined for all \(t\in(v_{ji2},t_{ji}^{(k)})\) for patient \(i\) in subgroup \(j\). Further, the multi-stage, subgroup-\(j\) version of the quotient function \(E_{1}/E_{0}\) in Equation (6) of TD is the \(2\times 1\) vector given by

\[E_{j}^{(k)}(u,\gamma_{j},\eta_{j},\sigma_{j}^{2})=\frac{\sum_{i=1}^{n_{j}}\{S_{ji}^{(k)}(u,\gamma_{j},\sigma_{j}^{2}),Z_{ji}\}^{T}\exp\{\gamma_{j}S_{ji}^{(k)}(u,\gamma_{j},\sigma_{j}^{2})-\gamma_{j}^{2}\sigma_{j}^{2}\psi_{ji}(u)/2+\eta_{j}Z_{ji}\}Y_{ji}^{(k)}(u)}{\sum_{i=1}^{n_{j}}\exp\{\gamma_{j}S_{ji}^{(k)}(u,\gamma_{j},\sigma_{j}^{2})-\gamma_{j}^{2}\sigma_{j}^{2}\psi_{ji}(u)/2+\eta_{j}Z_{ji}\}Y_{ji}^{(k)}(u)}.\]

We can now define the conditional score function at analysis \(k\) for subgroup \(j\in\{1,2,F\}\), denoted \(U_{j}^{(k)}(\gamma_{j},\eta_{j},\sigma_{j}^{2})\). Let \(\tau_{k}\) be the maximum follow-up time at analysis \(k\), then

\[U_{j}^{(k)}(\gamma_{j},\eta_{j},\sigma_{j}^{2})=\int_{0}^{\tau_{k}}\sum_{i=1}^{n_{j}}\left(\{S_{ji}^{(k)}(u,\gamma_{j},\sigma_{j}^{2}),Z_{ji}\}^{T}-E_{j}^{(k)}(u,\gamma_{j},\eta_{j},\sigma_{j}^{2})\right)dN_{ji}^{(k)}(u). \tag{9}\]

Here, integration over the function \(dN_{ji}^{(k)}(u)\) ensures that the integrand is evaluated at the place where \(N_{ji}^{(k)}(t)\) jumps from 0 to 1 if \(\delta_{ji}^{(k)}=1\) and \(v_{ji2}\leq t\), and 0 otherwise. The object \(U_{j}^{(k)}(\gamma_{j},\eta_{j},\sigma_{j}^{2})\) has the same dimensionality as \(E_{j}^{(k)}(u,\gamma_{j},\eta_{j},\sigma_{j}^{2})\). Burdon et al. (2022) show that \(\mathbb{E}(U_{j}^{(k)}(\gamma_{j},\eta_{j},\sigma_{j}^{2}))=\mathbf{0}\) for each \(k=1,\ldots,K\) and \(j\in\{1,2,F\}\). Therefore, the conditional score function at analysis \(k\) is an estimating function, and setting it equal to zero defines an estimating equation. Hence, asymptotically normal parameter estimates for \(\gamma_{j}\) and \(\eta_{j}\) can be found as the root of the estimating equation. As in TD Equation (13), define the pooled estimate \(\hat{\sigma}_{j}^{(k)2}=\sum_{i=1}^{n_{j}}\mathbb{I}\{m_{ji}(k)>2\}R_{ji}(k)/\sum_{i=1}^{n_{j}}\mathbb{I}\{m_{ji}(k)>2\}(m_{ji}(k)-2),\) where \(R_{ji}(k)\) is the residual sum of squares for the least squares fit to all \(m_{ji}(k)\) observations for patient \(i\) in subgroup \(j\) available at analysis \(k\). Then, let \(\hat{\gamma}_{j}^{(k)},\hat{\eta}_{j}^{(k)}\) be the values of \(\gamma_{j}\) and \(\eta_{j}\) respectively such that \(U_{j}^{(k)}(\hat{\gamma}_{j}^{(k)},\hat{\eta}_{j}^{(k)},\hat{\sigma}_{j}^{(k)2})=\mathbf{0}\). We shall use the sandwich estimator, as in Section 2.6 of Wakefield (2013), to calculate a robust estimate for the variance of the treatment effect estimate.
Firstly, define matrices \(A_{j}^{(k)}=\partial U_{j}^{(k)}(\gamma_{j},\eta_{j},\sigma_{j}^{2})/\partial(\gamma_{j},\eta_{j})^{T}\) and \(B_{j}^{(k)}=Var(U_{j}^{(k)}(\gamma_{j},\eta_{j},\sigma_{j}^{2}))\). Burdon et al. (2022) present analytical forms for each of these \(2\times 2\) matrices, including a detailed calculation for the derivative matrix \(A_{j}^{(k)}\). In practice, \(A_{j}^{(k)}\) can be calculated numerically and \(B_{j}^{(k)}\) is found by considering the score statistic as a sum over \(n_{j}\) patients. Further, these matrices are estimated by substituting the estimates \(\hat{\gamma}_{j}^{(k)},\hat{\eta}_{j}^{(k)}\) and \(\hat{\sigma}_{j}^{(k)2}\) for \(\gamma_{j},\eta_{j}\) and \(\sigma_{j}^{2}\) respectively. Then the information for the treatment effect estimate is given by \(\mathcal{I}_{j}^{(k)}=n_{j}\left[(A_{j}^{(k)})^{-1}B_{j}^{(k)}((A_{j}^{(k)})^{-1})^{T}\right]_{22}^{-1}.\) The subscript represents that we are interested in the second parameter \(\eta_{j}\) in the vector \((\gamma_{j},\eta_{j},\sigma_{j}^{2})^{T}\). The methodology in Section 2 can now be applied and an enrichment trial performed. The null hypothesis here is \(H_{0j}:\eta_{j}\leq 0\), which can be tested by finding estimates \(\hat{\eta}_{j}^{(k)}\), information levels \(\mathcal{I}_{j}^{(k)}\) and \(Z\)-statistics using the conditional score framework. This is summarised in Table 1. Burdon et al. (2022) show that the \(Z\)-statistics do not have the canonical joint distribution in Equation (1); however, proceeding with a group sequential test that assumes this does hold is sensible since Type 1 error rates are conservative and diverge minimally from the planned significance level \(\alpha\). It may seem strange that the causal effect of treatment acting through the longitudinal data, \(b_{2j}\), is not utilised in the hypothesis test. We have found that we do not lose much power by focusing on the direct effect, \(\eta_{j}\). Under sensible choices for the parameter values, the benefits of the conditional score method outweigh the slight gain in power that could be obtained by also exploiting \(b_{2j}\), and therefore this will be the method of primary focus. This is a desirable method because the analysis is semi-parametric, so that we are not required to specify a distribution for, or estimate, the baseline hazard function \(h_{0j}(t)\). Further, we are not required to specify any distributional assumptions for the random effects, which makes the conditional score methodology robust to some model misspecifications.

### 5-year restricted mean survival time (RMST)

We have presented a joint model for longitudinal and TTE data which includes a causal treatment effect acting through the longitudinal process. So far, we have considered the conditional score method, which does not make use of the information about the treatment effect in the longitudinal data. We aim to perform a clinical trial which leverages information on both \(\eta_{j}\) and \(b_{2j}\) in the joint model. We require a single, one-dimensional test statistic that summarises the overall effect of treatment and we propose using the restricted mean survival time (RMST) to do so. Royston and Parmar (2011) define RMST as the area under the survival curve up to time \(t^{*}\). The value of \(t^{*}\) is fixed at the design stage and we shall discuss our choice below. Let \(\mathbf{\psi}_{j}\) be the \(p\times 1\) vector of all parameters in the joint model in subgroup \(j\).
Suppose that \(F_{0j}\) and \(F_{1j}\) are time-to-failure random variables for patients on the control and experimental treatment arms respectively and that \(S_{0j}(t;\mathbf{\psi}_{j})\) and \(S_{1j}(t;\mathbf{\psi}_{j})\) are the corresponding survival functions integrated over any patient specific random effects. Then the difference in RMST between treatment groups is given by

\[\Delta_{j}(t^{*};\mathbf{\psi}_{j})=\mathbb{E}[\min(F_{1j},t^{*})]-\mathbb{E}[\min(F_{0j},t^{*})]=\int_{0}^{t^{*}}\left[S_{1j}(t;\mathbf{\psi}_{j})-S_{0j}(t;\mathbf{\psi}_{j})\right]\,dt. \tag{10}\]

Most commonly, non-parametric methods are employed for estimating \(\Delta_{j}(t^{*};\mathbf{\psi}_{j})\), which include integration under the Kaplan-Meier curve and bootstrap methods to estimate the variance. Lu and Tian (2020) consider some practical design challenges for such methods. Non-parametric estimation is a popular solution when the proportional hazards assumption does not hold, since the estimator is robust to model misspecification. Our motivation for using RMST is to find a test statistic summarising the effect of two treatment effect parameters and we wish to exploit the gain in power accrued from covariate information. Hence we shall focus on the parametric estimator. We now present the exact analytical form for the difference in RMST for the joint model of Equations (5)-(8). The cumulative hazard function for patient \(i\) in subgroup \(j\) is given in two parts:

\[\text{For }t\leq 1:H_{ji}(t;\mathbf{\psi}_{j})=\frac{c_{j}\exp\{\gamma_{j}b_{0ji}+\eta_{j}Z_{ji}\}}{\gamma_{j}(b_{1ji}+b_{2j}Z_{ji})}[\exp\{\gamma_{j}t(b_{1ji}+b_{2j}Z_{ji})\}-1]\]
\[\text{For }t>1:H_{ji}(t;\mathbf{\psi}_{j})=\frac{\exp\{\gamma_{j}b_{0ji}+\eta_{j}Z_{ji}\}}{\gamma_{j}(b_{1ji}+b_{2j}Z_{ji})}[0.4c_{j}\exp\{\gamma_{j}(b_{1ji}+b_{2j}Z_{ji})\}+0.6c_{j}\exp\{\gamma_{j}t(b_{1ji}+b_{2j}Z_{ji})\}-1].\]

The control and experimental treatment survival functions integrated over the random effects are

\[S_{0j}(t;\mathbf{\psi}_{j})=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\exp\{-H_{ji}(t;\mathbf{\psi}_{j},Z_{ji}=0)\}f(b_{0ji},b_{1ji})\,db_{0ji}\,db_{1ji}\]
\[S_{1j}(t;\mathbf{\psi}_{j})=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\exp\{-H_{ji}(t;\mathbf{\psi}_{j},Z_{ji}=1)\}f(b_{0ji},b_{1ji})\,db_{0ji}\,db_{1ji}\]

where \(f(b_{0ji},b_{1ji})\) is the probability density function for the normal distribution given in Equation (6). Gauss-Hermite integration can be used to efficiently calculate the integrals over \(b_{0ji}\) and \(b_{1ji}\). The survival functions are then substituted into Equation (10) to calculate \(\Delta_{j}(t^{*};\mathbf{\psi}_{j})\). The parametric RMST estimate can be found by substituting the maximum likelihood estimate (MLE) for the vector of model parameters into the model-based definition of RMST. Rizopoulos (2012) presents the full likelihood function for the joint model, from which we can obtain the MLE \(\hat{\mathbf{\psi}}_{j}^{(k)}\) at analysis \(k\). An estimate for the treatment difference is given by \(\Delta_{j}(t^{*};\hat{\mathbf{\psi}}_{j}^{(k)})\) for \(k=1,\ldots,K\). The delta method, as in Doob (1935), can be used to calculate the variance of the parametric RMST estimate. We have that \(\hat{\mathbf{\psi}}_{j}^{(k)}\) has the same dimensionality as \(\mathbf{\psi}_{j}\), a \(p\times 1\) vector.
Let \(\Sigma_{j}^{(k)}\) be the \(p\times p\) covariance matrix of the MLE \(\hat{\mathbf{\psi}}_{j}^{(k)}\) at analysis \(k\); then we have that \(n^{1/2}(\hat{\mathbf{\psi}}_{j}^{(k)}-\mathbf{\psi}_{j})\xrightarrow{d}N(\mathbf{0},\Sigma_{j}^{(k)})\). The information level at analysis \(k\) for the difference in RMST between treatment arms is given by \(\mathcal{I}_{j}^{(k)}=n_{j}\left(\left[\partial\Delta_{j}(t^{*};\hat{\mathbf{\psi}}_{j}^{(k)})/\partial\mathbf{\psi}_{j}\right]^{T}\Sigma_{j}^{(k)}\left[\partial\Delta_{j}(t^{*};\hat{\mathbf{\psi}}_{j}^{(k)})/\partial\mathbf{\psi}_{j}\right]\right)^{-1}\), where \(\partial\Delta_{j}(t^{*};\hat{\mathbf{\psi}}_{j}^{(k)})/\partial\mathbf{\psi}_{j}\) is the \(p\times 1\) vector which is the first derivative of the function \(\Delta_{j}(t^{*};\mathbf{\psi}_{j})\) with respect to the vector \(\mathbf{\psi}_{j}\), evaluated at \(\hat{\mathbf{\psi}}_{j}^{(k)}\). In the calculation of \(\mathcal{I}_{j}^{(k)}\), a consistent estimate \(\hat{\Sigma}_{j}^{(k)}\) can be substituted in place of the covariance matrix \(\Sigma_{j}^{(k)}\). In practice, the MLE \(\hat{\mathbf{\psi}}_{j}^{(k)}\) and an estimate of the covariance matrix can be calculated using the R package JM by Rizopoulos (2012). In a later paper, Royston and Parmar (2013) extend their earlier work and place particular emphasis on the choice of truncation time \(t^{*}\). The authors suggest taking \(t^{*}\) as the value that minimises the expected sample size given the recruitment time and minimum follow-up time. We find that for all parameter values considered in Section 5.2, the required sample size decreases as \(t^{*}\) increases. Further, we shall see that each trial has recruitment lasting roughly 2 years and a final analysis at roughly 5 years, although these analysis times are events based so cannot be known exactly. To ensure that the method is robust to model misspecifications, it is important to avoid extrapolation of the RMST estimate beyond the analysis time. Hence, we have chosen to use \(t^{*}=5\). To summarise the overall effect of treatment on survival, which incorporates both treatment effects \(b_{2j}\) and \(\eta_{j}\), we test the null hypothesis \(H_{0j}:\Delta_{j}(5;\mathbf{\psi}_{j})\leq 0\). To do so, we define RMST estimates \(\Delta_{j}(5;\hat{\mathbf{\psi}}_{j}^{(k)})\), information levels \(\mathcal{I}_{j}^{(k)}\) and \(Z\)-statistics for each \(k=1,\ldots,K\) and \(j=1,2,F\). This is summarised in Table 1. We see in Section 5.2 that under some scenarios, this method is more powerful than the conditional score method.

## 4 Alternative models and their analysis methods

### Cox proportional hazards model

Methods which leverage information from biomarkers in TTE studies are yet to be established. The current best practice for adaptive designs with a TTE endpoint is to base analyses on Cox proportional hazards models. We follow this convention in order to assess the gain in power from including the longitudinal data in the analysis. To do so, we shall present a simple Cox proportional hazards model and define a hypothesis test that can be used in accordance with the threshold selection rule to perform an enrichment trial. Denote \(\tilde{h}_{0j}(t)\) as the baseline hazard function, \(\tilde{\eta}_{j}\) the treatment effect and \(Z_{ji}\) as the indicator that patient \(i\) in subgroup \(j\in\{1,2,F\}\) receives the new treatment. Then the hazard function for the survival model is given by

\[h_{ji}(t)=\tilde{h}_{0j}(t)\exp\{\tilde{\eta}_{j}Z_{ji}\}. \tag{11}\]

We note the similarities in definition between \(\tilde{h}_{0j}(t)\) and \(h_{0j}(t)\) of Section 3.1 and also \(\tilde{\eta}_{j}\) and \(\eta_{j}\); however, these objects are not exactly the same since they arise from different models. This highlights the fact that when the joint model is true, but we fit the data to the Cox proportional hazards model, then this will be a misspecified model, and vice versa. Similarly to Section 3.2, at analysis \(k\), we define the at-risk and counting processes. With the same definitions for the observed event time \(t_{ji}^{(k)}\) and censoring indicator \(\delta_{ji}^{(k)}\), these are \(\tilde{Y}_{ji}^{(k)}(t)=\mathbb{I}\{t_{ji}^{(k)}\geq t\}\) and \(d\tilde{N}_{ji}^{(k)}(t)=\mathbb{I}\{t\leq t_{ji}^{(k)}<t+dt,\delta_{ji}^{(k)}=1\}\). As in Jennison and Turnbull (1997), the function \(\tilde{E}_{j}^{(k)}(u,\tilde{\eta}_{j})\) and the score function \(\tilde{U}_{j}^{(k)}(\tilde{\eta}_{j})\) at analysis \(k\) are given by

\[\tilde{E}_{j}^{(k)}(u,\tilde{\eta}_{j})=\frac{\sum_{i=1}^{n_{j}}Z_{ji}\exp\{\tilde{\eta}_{j}Z_{ji}\}\tilde{Y}_{ji}^{(k)}(u)}{\sum_{i=1}^{n_{j}}\exp\{\tilde{\eta}_{j}Z_{ji}\}\tilde{Y}_{ji}^{(k)}(u)},\qquad\tilde{U}_{j}^{(k)}(\tilde{\eta}_{j})=\int_{0}^{\tau_{k}}\sum_{i=1}^{n_{j}}\left(Z_{ji}-\tilde{E}_{j}^{(k)}(u,\tilde{\eta}_{j})\right)d\tilde{N}_{ji}^{(k)}(u). \tag{12}\]

The function \(\tilde{U}_{j}^{(k)}(\cdot)\) is a score function and \(\tilde{\eta}_{j}\) can be estimated by solving the equation \(\tilde{U}_{j}^{(k)}(\tilde{\eta}_{j})=0\) for \(\tilde{\eta}_{j}\). Let this estimate, at analysis \(k\), be denoted by \(\hat{\eta}_{j}^{(k)}\). By standard results of Jennison and Turnbull (1997), the estimates \(\hat{\eta}_{j}^{(k)}\) follow the CJD, where the information level at analysis \(k\) is given by \(\mathcal{I}_{j}^{(k)}=n_{j}\left[\partial\tilde{U}_{j}^{(k)}(\hat{\eta}_{j}^{(k)})/\partial\tilde{\eta}_{j}\right]^{-1}\). In Equation (11), the parameter \(\tilde{\eta}_{j}\) describes the entire effect of treatment on survival. Hence, when analysing data using this model, a suitable null hypothesis is given by \(H_{0j}:\tilde{\eta}_{j}\leq 0\) and this can be tested at analysis \(k=1,\ldots,K\) by calculating a treatment effect estimate \(\hat{\eta}_{j}^{(k)}\), information level \(\mathcal{I}_{j}^{(k)}\) and \(Z\)-statistic. The resulting \(Z\)-statistics have the CJD given in Equation (1) and the methodology of Section 2 can be used to create an enrichment trial design. Some advantages of this simplified Cox proportional hazards model are that we need not specify the baseline hazard function, since the maximum partial likelihood analysis is semiparametric and requires no assumptions regarding \(\tilde{h}_{0j}(t)\). Further, there is no requirement for a minimum of two longitudinal observations in order to be included in the at-risk process.

### Cox proportional hazards model with longitudinal data as a time-varying covariate

A final option for analysis is one where the longitudinal data is included but is assumed to be free of measurement error. This requires a more sophisticated model than the simple Cox proportional hazards model of Section 4.1 and represents a trial where the longitudinal data is regarded as important enough to be considered and included. However, this is still a naive approach since the model will be misspecified in the presence of measurement error.
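As a concrete illustration of the Cox-based analyses in this section, the stage-\(k\) quantities can be obtained with the survival package in R. A minimal sketch (the data layout and variable names are assumptions, not prescribed by the text):

```r
library(survival)

# Simple Cox model of Equation (11): dat holds time, status and treatment arm Z
# for the patients in subgroup j, censored at the analysis date.
cox_stage <- function(dat) {
  fit  <- coxph(Surv(time, status) ~ Z, data = dat)
  info <- 1 / vcov(fit)["Z", "Z"]          # I_j^(k) = 1 / Var(treatment effect estimate)
  c(est = unname(coef(fit)["Z"]), info = info,
    Z   = unname(coef(fit)["Z"]) * sqrt(info))
}

# For the time-varying-covariate model of Section 4.2, the data are expanded to
# counting-process (start, stop] form with the biomarker carried forward from its
# most recent measurement, and the fit becomes
#   coxph(Surv(start, stop, status) ~ W + Z, data = dat_long)
```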
For the purpose of assessing the necessity of correctly modeling the data, we shall fit a Cox proportional hazards model to the data where the longitudinal data is treated as a time-varying covariate. In what follows, the definitions of the treatment indicator \(Z_{ji}\) and longitudinal data measurements \(W_{ji}(v_{ji1}),\ldots,W_{ji}(v_{jim_{ji}})\) remain the same as in Section 3.1 and the at-risk process \(\tilde{Y}_{ji}^{(k)}(t)\) and counting process function \(d\tilde{N}_{ji}^{(k)}(u)\) are as in Section 4.1. Let \(\dot{\gamma}_{j}\) and \(\dot{\eta}_{j}\) be longitudinal data and treatment coefficients respectively; then the hazard function is given by

\[h_{ji}(t)=\dot{h}_{0j}(t)\exp\{\dot{\gamma}_{j}W_{ji}(t)+\dot{\eta}_{j}Z_{ji}\}. \tag{13}\]

This model differs from the joint model because the assumption here is that \(W_{ji}(t)\) is a function of time that is measured without error. In reality we often have measurements \(W_{ji}(v_{ji1}),\ldots,W_{ji}(v_{jim_{ji}})\) for patient \(i\) in subgroup \(j\) that include noise around a true underlying trajectory. The function \(\dot{E}_{j}^{(k)}(u,\cdot)\) and score statistic \(\dot{U}_{j}^{(k)}(\cdot)\) for this model are given by

\[\dot{E}_{j}^{(k)}(u,\dot{\eta}_{j})=\frac{\sum_{i=1}^{n_{j}}\{W_{ji}(u),Z_{ji}\}^{T}\exp\{\dot{\gamma}_{j}W_{ji}(u)+\dot{\eta}_{j}Z_{ji}\}\tilde{Y}_{ji}^{(k)}(u)}{\sum_{i=1}^{n_{j}}\exp\{\dot{\gamma}_{j}W_{ji}(u)+\dot{\eta}_{j}Z_{ji}\}\tilde{Y}_{ji}^{(k)}(u)}\]
\[\dot{U}_{j}^{(k)}(\dot{\eta}_{j})=\int_{0}^{\tau_{k}}\sum_{i=1}^{n_{j}}\left(\left\{W_{ji}(u),Z_{ji}\right\}^{T}-\dot{E}_{j}^{(k)}(u,\dot{\eta}_{j})\right)d\tilde{N}_{ji}^{(k)}(u). \tag{14}\]

Both objects \(\dot{E}_{j}^{(k)}(u,\dot{\eta}_{j})\) and \(\dot{U}_{j}^{(k)}(\dot{\eta}_{j})\) are \(2\times 1\) dimensional vectors. To evaluate these objects, we will need to know \(W_{ji}(t_{js})\), which is the value of the time-varying covariate for patient \(i\) in subgroup \(j\), evaluated at the event time of patient \(s\) in subgroup \(j\). For this model, \(W_{ji}(\cdot)\) is a function of time and is known. For calculation purposes, for \(t>v\), we shall set \(W_{ji}(t)\) as \(W_{ji}(v)\) where \(v=\max\{v_{jim}:v_{jim}\leq t\}\). In a similar manner to Section 4.1, and as summarised in Table 1, a suitable hypothesis test based on this model is \(H_{0j}:\dot{\eta}_{j}\leq 0\). This can be tested by finding \(Z\)-statistics, which have the CJD of Equation (1), and following the enrichment trial design of Section 2. These test statistics are calculated as \(Z_{j}^{(k)}=\dot{\eta}_{j}^{(k)}\sqrt{\mathcal{I}_{j}^{(k)}}\) where \(\dot{\eta}_{j}^{(k)}\) is the value of \(\dot{\eta}_{j}\) such that \(\dot{U}_{j}^{(k)}(\dot{\eta}_{j})=0\) and the information level is given by \(\mathcal{I}_{j}^{(k)}=n_{j}\left[\partial\dot{U}_{j}^{(k)}(\dot{\eta}_{j}^{(k)})/\partial\dot{\eta}_{j}\right]^{-1}.\) Again, this analysis method is semiparametric, so that the baseline hazard function does not need to be estimated. Further, this model includes the longitudinal data; however, there are no distributional assumptions about the random effects \(\mathbf{b}_{j1},\ldots,\mathbf{b}_{jn_{j}}\). In fact, under this model, we need not specify the structure of the trajectory of the longitudinal data.

## 5 Results

### Example: A clinical trial for the treatment of metastatic breast cancer

We shall apply the joint modeling methodology to a study by Dawson et al.
(2013) which was designed to compare different biomarkers and their accuracy in monitoring tumour burden among women with metastatic breast cancer. The investigators found that circulating tumour DNA (ctDNA) was successfully detected and highly correlated with OS. As a posthoc analysis, survival curves were estimated under different quantiles of ctDNA, however this study could benefit from joint modeling analyses. Further, the HER2 status of each patient was presented and we shall define \(S_{1}\) and \(S_{2}\) as the subgroups of women whose HER2 status are negative and positive respectively. The prevalence of HER2 negative patients is found to be \(\lambda=2/3\) in this dataset which is in accordance with pivotal results of Slamon et al. (1987) who showed that HER2 status is highly correlated with OS. The authors stress that patients who are HER2 positive may be resistant to conventional therapies which confirms the suitability of the assumption under \(H_{A}\) that only patients in subgroup \(S_{1}\) will benefit from the experimental treatment. For the presented analyses, we shall assume that the true model is the joint model. Hence, the working model for data generation is given by Equations (5)-(7) and parameter values for simulation studies are informed using the metastatic breast cancer dataset. We removed patients whose ER status is negative, to retain 27 patients and following Barnett et al. (2021), measurements of ctDNA which were "not detected" were set to 1.5 (Copies/ml). The resulting dataset contains multiple treatment arms and dosing schedules, hence, we fit the model under the assumption that this dataset represents standard of care (control group). The parameter values, which shall remain fixed throughout the simulation studies are given by \[\lambda=1/3,\gamma=\gamma_{1}=\gamma_{2}=0.8,(\phi_{1},\phi_{12}, \phi_{2})=(\phi_{11},\phi_{121},\phi_{21})=(\phi_{12},\phi_{122},\phi_{22})=(2.5,1.7,5),\] \[\sigma^{2}=\sigma_{1}^{2}=\sigma_{2}^{2}=1,(\mu_{01},\mu_{11})=( \mu_{02},\mu_{12})=(4.23,1.81),c_{1}=c_{2}=0.0085. \tag{15}\] Remaining parameters which are needed to fully define \(\mathbf{\Theta}_{0}\) and \(\mathbf{\Theta}_{A}\) include \(\eta_{j}\) and \(b_{2j}\) for each \(j=1,2,F\). To represent no differences between control and treated groups under \(H_{0j}\), let \(\eta_{j}=b_{2j}=0\) for each \(j=1,2,F\). Then, as in Section 2.2, we expect HER2 negative patients to respond well to treatment and HER positive patients be unaffected by treatment which is given by \(H_{A1}:\eta_{1}=-0.5,b_{21}=-0.5\) and \(H_{A2}:\eta_{2}=b_{22}=0\). In what follows, ctDNA measurements will be observed, via a blood test, at two weeks for the first three months following entry to study and then once per month. The necessity of performing regular blood tests may be considered a draw-back of including the longitudinal data in the analysis method. We present the number of hospital visits per patient in the supplementary materials. The final object of importance which is required for data generation is the mechanism which simulates censoring times, \(y_{1},\ldots,y_{n}\). We shall simulate these according to the distribution \(y_{i}\sim Exp(5\times 10^{-5})\) and this is independent of the time-to-event outcome to reflect noninformative censoring. This results in roughly \(10\%\) of patients being lost to follow-up. 
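To make the simulation set-up concrete, the sketch below generates one subgroup's data from the working model (5)-(8) with the parameter values above. All function and variable names are illustrative; the event-time step uses numerical integration of the hazard rather than the closed-form cumulative hazard, and administrative censoring at 5 years is added purely for the sketch:

```r
library(MASS)   # mvrnorm for the random effects
set.seed(2023)

sim_subgroup <- function(n, b2, eta, cj = 0.0085, gamma = 0.8, sigma = 1,
                         mu = c(4.23, 1.81),
                         Phi = matrix(c(2.5, 1.7, 1.7, 5), 2)) {
  Z <- rbinom(n, 1, 0.5)                       # 1:1 randomisation
  b <- mvrnorm(n, mu, Phi)                     # (b0, b1) per patient
  U <- runif(n)
  event_time <- sapply(seq_len(n), function(i) {
    H <- function(t) integrate(function(s)     # cumulative hazard implied by (7)-(8)
      ifelse(s <= 1, cj, 5 * cj / 3) *
        exp(gamma * (b[i, 1] + (b[i, 2] + b2 * Z[i]) * s) + eta * Z[i]),
      lower = 0, upper = t)$value
    if (H(10) < -log(U[i])) return(Inf)        # no event within the simulation horizon
    uniroot(function(t) H(t) + log(U[i]), c(1e-6, 10))$root
  })
  C      <- pmin(rexp(n, rate = 5e-5), 5)      # drop-out plus 5-year admin censoring
  time   <- pmin(event_time, C)
  status <- as.numeric(event_time <= C)
  # ctDNA measured fortnightly for 3 months, then monthly, until the event/censoring
  visits <- c(seq(0, 0.25, by = 1 / 26), seq(0.25 + 1 / 12, 5, by = 1 / 12))
  long <- do.call(rbind, lapply(seq_len(n), function(i) {
    v <- visits[visits <= time[i]]
    data.frame(id = i, v = v, Z = Z[i],
               W = b[i, 1] + (b[i, 2] + b2 * Z[i]) * v + rnorm(length(v), 0, sigma))
  }))
  list(surv = data.frame(id = seq_len(n), time, status, Z), long = long)
}

dat <- sim_subgroup(n = 200, b2 = -0.5, eta = -0.5)   # S1 under Theta_A
```

Data simulated in this way can then be analysed with each of the methods in Table 1, for example via the JM package mentioned in Section 3.3 for the likelihood-based RMST analysis.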
### Efficiency comparison

The purpose of this comparison is to assess the gain from including the longitudinal data and to decide whether correctly modeling the measurement error is necessary. We shall focus on power as a measure of efficiency between the different methods, and we compare some other outcome measures, such as number of hospital visits and expected stopping time, in the supplementary materials. The four analysis methods are summarised in Table 1. This includes the hypothesis that is being tested and a summary description of how to calculate test statistics \(\hat{\theta}_{j}^{(k)}\) and information levels \(\mathcal{I}_{j}^{(k)}\) for subgroup \(j\in\{1,2,F\}\) at analysis \(k\). The number of events at the first analysis in subgroup \(S_{1}\), denoted \(d_{1}^{(1)}\), has been chosen to ensure that subgroup \(S_{1}\) is selected roughly \(60\%\) of the time, using the conditional score method, and the total number of events at the second analysis, \(d^{(2)}\), has been chosen to attain power of \(90\%\) as described in Section 2.5. These numbers of events are displayed in Table 2 for a range of values of \(\gamma,\sigma^{2}\) and \(\phi_{2}\). As \(\gamma\) increases, we see that both the required \(d_{1}^{(1)}\) and \(d^{(2)}\) increase. When \(\gamma=1.2\) and with a small number of events at the first interim analysis, it is not always possible to find a root to Equation (9). The consequence is that the required \(d_{1}^{(1)}\) and \(d^{(2)}\) are high to ensure that large sample properties of the estimator hold. We have not seen this problem occur for \(\gamma\leq 0.8\). Similarly, the required number of events increases with \(\sigma\). That is, as the longitudinal data become noisier, more events and hence more information are needed to achieve the required power and selection probabilities. The values of \(d_{1}^{(1)}\) and \(d^{(2)}\) are immune to changes in \(\phi_{2}\), which represents the degree of similarity between patients' longitudinal trajectories.

| Analysis method | \(H_{0j}\) | \(\hat{\theta}_{j}^{(k)}\) | \(\mathcal{I}_{j}^{(k)}\) |
| --- | --- | --- | --- |
| Conditional score | \(\eta_{j}=0\) | the value of \(\eta_{j}\) such that \(U_{j}^{(k)}(\gamma_{j},\eta_{j},\sigma_{j}^{2})=0\) | \(n_{j}\left[(A_{j}^{(k)})^{-1}B_{j}^{(k)}((A_{j}^{(k)})^{-1})^{T}\right]_{22}^{-1}\) |
| RMST | \(\Delta_{j}(5;\mathbf{\psi}_{j})=0\) | \(\Delta_{j}(5;\hat{\mathbf{\psi}}_{j}^{(k)})\), where \(\hat{\mathbf{\psi}}_{j}^{(k)}\) is the MLE | \(n_{j}\left(\left[\partial\Delta_{j}(5;\mathbf{\psi}_{j})/\partial\mathbf{\psi}_{j}\right]^{T}\Sigma_{j}^{(k)}\left[\partial\Delta_{j}(5;\mathbf{\psi}_{j})/\partial\mathbf{\psi}_{j}\right]\right)^{-1}\Big|_{\mathbf{\psi}_{j}=\hat{\mathbf{\psi}}_{j}^{(k)}}\) |
| Cox model | \(\tilde{\eta}_{j}=0\) | the value of \(\tilde{\eta}_{j}\) such that \(\tilde{U}_{j}^{(k)}(\tilde{\eta}_{j})=0\) | \(n_{j}\left[\partial\tilde{U}_{j}^{(k)}(\tilde{\eta}_{j})/\partial\tilde{\eta}_{j}\big|_{\tilde{\eta}_{j}=\hat{\eta}_{j}^{(k)}}\right]^{-1}\) |
| Cox model with biomarker | \(\dot{\eta}_{j}=0\) | the value of \(\dot{\eta}_{j}\) such that \(\dot{U}_{j}^{(k)}(\dot{\eta}_{j})=0\) | \(n_{j}\left[\partial\dot{U}_{j}^{(k)}(\dot{\eta}_{j})/\partial\dot{\eta}_{j}\big|_{\dot{\eta}_{j}=\dot{\eta}_{j}^{(k)}}\right]^{-1}\) |

Table 1: Null hypothesis, treatment effect estimate and information for each analysis method. Here \(A_{j}^{(k)}=\partial U_{j}^{(k)}(\gamma_{j},\eta_{j},\sigma_{j}^{2})/\partial(\gamma_{j},\eta_{j})^{T}\) and \(B_{j}^{(k)}=Var(U_{j}^{(k)}(\gamma_{j},\eta_{j},\sigma_{j}^{2}))\).

We now put the enrichment methodology into practice using simulation studies. For one simulation: generate a dataset of patients from the joint model; then subgroup selection and decisions about \(H_{0}\) are performed after \(d_{1}^{(1)}\) and \(d^{(2)}\) events have been observed, according to Table 2. All four methods in the summary Table 1 are performed on the same dataset and after the same number of events. This is so that differences in the trial results can be attributed to the analysis methodology and not trial design features. The simulations are repeated \(N=10^{4}\) times for each set of parameter values and then power is calculated as the proportion of simulations which select subgroup \(S_{1}\) and reject \(H_{01}\), as described in Section 2.3. Figures 2 and 3 show the power comparison between the different methods and how power is affected by the parameters \(\gamma,\sigma^{2}\) and \(\phi_{2}\). The numeric values in these figures are presented in the supplementary materials. It is clear that the conditional score method is most efficient since power is highest across nearly all parameter combinations. When \(\gamma=0\), the conditional score method may suffer from a small loss in power in comparison to other methods. This is the case where longitudinal data have no impact on the survival outcome, so including them in the analysis is futile. For \(\gamma\neq 0\), however, a gain in power between 0.18 and 0.52 is seen. The RMST methodology is efficient in the sense that power is close to the conditional score method in all cases. This method relies on finding maximum likelihood estimates and the model is overparameterised when any parameter is equal to zero. In such a case, the resulting covariance matrix is often not positive semi-definite and the analysis cannot be performed. This is the reason for missing entries in Figures 2 and 3. This method appears to have higher power for \(\phi_{2}=2.5\); however, in the supplementary materials, we show that the probability of selecting \(S_{1}\) is low in this case and hence power is subject to higher uncertainty. This method allows us to make inferences which leverage information about the treatment effect in the longitudinal data; however, there is not much power to be gained by doing so. Additional model assumptions, such as the distribution of the random effects \(\mathbf{b}_{j1},\ldots,\mathbf{b}_{jn_{j}}\) and the functional form of the baseline hazard function \(h_{0j}(t)\), are required for this method to be used. Further, there is another complication relating to the choice of the truncation time \(t^{*}\). Fitting the data to the simple Cox model is very inefficient and in the extreme cases power is below \(0.4\). The sample size that would be needed to increase power to 0.9 in such a scenario is excessive. Figure 2 shows that this simple method has power lower than the conditional score method whenever \(\gamma\neq 0\) and becomes increasingly inefficient as \(\gamma\) increases, and also shows that the efficiency of this method is not affected by \(\sigma\). Figure 3 suggests that power might decrease with \(\phi_{2}\).
Hence, it is important to include the longitudinal data in the analysis when there is a suspected correlation between the longitudinal data and the survival endpoint. The final method, where TTE outcomes are fitted to a Cox proportional hazards model with the longitudinal data as a time-varying covariate, appears to be a simple yet effective way of including longitudinal data in the analysis. The achieved power is at least 0.69 but is usually lower than the conditional score method. However, the scenarios where this method outperforms the conditional score are when \(\sigma=0\) or \(\phi_{2}=0\), that is, when the longitudinal data are free of measurement error or there are no between-patient differences in the slopes of the longitudinal trajectories. The efficiency decreases as longitudinal data increase in noise or as patient differences become larger, that is, as \(\sigma\) and \(\phi_{2}\) increase. This method has the advantage that we do not need to specify the functional form of the longitudinal data, for example that it is linear in time. Taking these advantages into account, we still believe that the most efficient and practical method is the conditional score, which includes the longitudinal data and takes into account the measurement error.

| \(\phi_{2}\) | \(\sigma\) | \(\gamma=0\) | \(\gamma=0.4\) | \(\gamma=0.8\) | \(\gamma=1.2\) |
| --- | --- | --- | --- | --- | --- |
| 5 | 0 | (41, 180) | (40, 170) | (42, 170) | (44, 175) |
| 5 | 0.5 | (41, 178) | (41, 165) | (43, 175) | (48, 190) |
| 5 | 1 | (41, 170) | (42, 180) | (49, 215) | (62, 271) |
| 5 | 1.5 | (44, 195) | (49, 190) | (60, 250) | (80, 350) |
| 0 | 1 | (41, 160) | (40, 175) | (42, 200) | (61, 266) |
| 2.5 | 1 | (41, 165) | (42, 178) | (50, 195) | (69, 302) |
| 5 | 1 | (41, 170) | (42, 180) | (49, 215) | (62, 271) |
| 7.5 | 1 | (41, 180) | (45, 190) | (50, 200) | (70, 300) |

Table 2: Design parameters, i.e. the required numbers of events \((d_{1}^{(1)},d^{(2)})\) for each combination of \(\phi_{2}\), \(\sigma\) and \(\gamma\).

## 6 Discussion

In current oncology practice and cancer clinical trials, the efficient testing of novel therapies is crucial in order to focus these on the patient subgroups most likely to benefit. Too many patients receive treatments that either do not work particularly well, are toxic, or sometimes even both. We have shown that the threshold selection rule can be combined with an error spending boundary to create an efficient enrichment trial. This is potentially suitable for any trial where the primary outcome is a TTE variable and we present a method to establish the required number of events at the design stage of the trial. The novel aspect of this work is that these methods can be applied to an endpoint which is the treatment effect in a joint model for longitudinal and TTE data. By including these routinely collected biomarker outcomes in the analysis to leverage this additional information, the enrichment trial has higher power compared to the enrichment trial where the longitudinal data is left out of the analysis. Bauer et al. (2010) show that bias is prevalent in designs with selection. In our case, selection bias occurs as the treatment effect estimate in the selected subgroup is inflated in later analyses, which could affect the trial results.

Figure 2: Power results for a study with \(10^{4}\) simulations displaying changes in parameters \(\gamma\) and \(\sigma\). All other parameters are as in (15).

However, unlike most other selection schemes, the threshold selection rule adjusts for the magnitude of the treatment effect at the design stage, so another advantage is that selection bias is incorporated into the decision making process. Further, we compared this joint modeling approach with a model which used the longitudinal data but naively assumed this was free of measurement error. Again, the joint model performed more effectively in most cases. This naive approach was slightly more efficient when the longitudinal data was truly free from measurement error, there was no correlation between the two endpoints or there was no heterogeneity between patients' biomarker trajectories. However, we believe that these situations are rare in practice and the gain in power from joint modeling outweighs this downside.

## Software

All statistical computing and analyses were performed using the software environment R version 4.0.2. Software relating to the examples in this paper is available at [https://github.com/abigailburdon/Adaptive-enrichment-with-joint-models](https://github.com/abigailburdon/Adaptive-enrichment-with-joint-models).

Figure 3: Power results for a study with \(10^{4}\) simulations displaying changes in parameters \(\gamma\) and \(\phi_{2}\). All other parameters are as in (15).

## Acknowledgements

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 965397. TJ also received funding from the UK Medical Research Council (MC_UU_00002/14). RB also acknowledges funding from Cancer Research UK and support for his early phase clinical trial work from the Cambridge NIHR Biomedical Research Centre (BRC-1215-20014) and Experimental Cancer Medicine Centre. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
2303.11327
3D Concept Learning and Reasoning from Multi-View Images
Humans are able to accurately reason in 3D by gathering multi-view observations of the surrounding world. Inspired by this insight, we introduce a new large-scale benchmark for 3D multi-view visual question answering (3DMV-VQA). This dataset is collected by an embodied agent actively moving and capturing RGB images in an environment using the Habitat simulator. In total, it consists of approximately 5k scenes, 600k images, paired with 50k questions. We evaluate various state-of-the-art models for visual reasoning on our benchmark and find that they all perform poorly. We suggest that a principled approach for 3D reasoning from multi-view images should be to infer a compact 3D representation of the world from the multi-view images, which is further grounded on open-vocabulary semantic concepts, and then to execute reasoning on these 3D representations. As the first step towards this approach, we propose a novel 3D concept learning and reasoning (3D-CLR) framework that seamlessly combines these components via neural fields, 2D pre-trained vision-language models, and neural reasoning operators. Experimental results suggest that our framework outperforms baseline models by a large margin, but the challenge remains largely unsolved. We further perform an in-depth analysis of the challenges and highlight potential future directions.
Yining Hong, Chunru Lin, Yilun Du, Zhenfang Chen, Joshua B. Tenenbaum, Chuang Gan
2023-03-20T17:59:49Z
http://arxiv.org/abs/2303.11327v1
# 3D Concept Learning and Reasoning from Multi-View Images

###### Abstract

Humans are able to accurately reason in 3D by gathering multi-view observations of the surrounding world. Inspired by this insight, we introduce a new large-scale benchmark for 3D multi-view visual question answering (3DMV-VQA). This dataset is collected by an embodied agent actively moving and capturing RGB images in an environment using the Habitat simulator. In total, it consists of approximately 5k scenes, 600k images, paired with 50k questions. We evaluate various state-of-the-art models for visual reasoning on our benchmark and find that they all perform poorly. We suggest that a principled approach for 3D reasoning from multi-view images should be to infer a compact 3D representation of the world from the multi-view images, which is further grounded on open-vocabulary semantic concepts, and then to execute reasoning on these 3D representations. As the first step towards this approach, we propose a novel 3D concept learning and reasoning (3D-CLR) framework that seamlessly combines these components via neural fields, 2D pre-trained vision-language models, and neural reasoning operators. Experimental results suggest that our framework outperforms baseline models by a large margin, but the challenge remains largely unsolved. We further perform an in-depth analysis of the challenges and highlight potential future directions.

Figure 1. An exemplar scene with multi-view images and question-answer pairs of our 3DMV-VQA dataset. 3DMV-VQA contains four question types: concept, counting, relation, comparison. Orange words denote semantic concepts; blue words denote the relations. (Example questions: "Are there any televisions?"; "How many chairs are close to the table in the room with plant on the cabinet?" -- 6; "Is there a sofa in the room with a printer?" -- Yes.)

## 1 Introduction

Visual reasoning, the ability to composite rules on internal representations to reason and answer questions about visual scenes, has been a long-standing challenge in the field of artificial intelligence and computer vision. Several datasets [23, 33, 70] have been proposed to tackle this challenge. However, they mainly focus on visual reasoning on 2D single-view images. Since 2D single-view images only cover a limited region of the whole space, such reasoning inevitably has several weaknesses, including occlusion, and failing to answer 3D-related questions about the entire scene that we are interested in. As shown in Fig. 1, it's difficult, even for humans, to count the number of chairs in a scene due to the object occlusion, and it's even harder to infer 3D relations like "closer" from a single-view 2D image. On the other hand, there's strong psychological evidence that human beings conduct visual reasoning in the underlying 3D representations [55]. Recently, there have been several works focusing on 3D visual question answering [2, 16, 63, 65]. They mainly use traditional 3D representations (_e.g._, point clouds) for visual reasoning. This is inconsistent with the way human beings perform 3D reasoning in real life. Instead of being given an entire 3D representation of the scene at once, humans will actively walk around and explore the whole environment, ingesting image observations from different views and converting them into a holistic 3D representation that assists them in understanding and reasoning about the environment. Such abilities are crucial for many embodied AI applications, such as building assistive robots.
To this end, we propose the novel task of 3D visual reasoning from multi-view images taken by active exploration of an embodied agent. Specifically, we generate a large-scale benchmark, 3DMV-VQA (3D multi-view visual question answering), that contains approximately 5k scenes and 50k question-answering pairs about these scenes. For each scene, we provide a collection of multi-view image observations. We generate this dataset by placing an embodied agent in the Habitat-Matterport environment [47], which actively explores the environment and takes pictures from different views. We also obtain scene graph annotations from the Habitat-Matterport 3D semantics dataset (HM3DSem) [62], including ground-truth locations, segmentations, semantic information of the objects, as well as relationships among the objects in the environments, for model diagnosis. To evaluate the models' 3D reasoning abilities on the entire environment, we design several 3D-related question types, including concept, counting, relation and comparison. Given this new task, the key challenges we would like to investigate include: 1) how to efficiently obtain the compact visual representation to encode crucial properties (_e.g._, semantics and relations) by integrating all incomplete observations of the environment in the process of active exploration for 3D visual reasoning? 2) How to ground the semantic concepts on these 3D representations that could be leveraged for downstream tasks, such as visual reasoning? 3) How to infer the relations among the objects, and perform step-by-step reasoning? As the first step to tackling these challenges, we propose a novel model, 3D-CLR (3D Concept Learning and Reasoning). First, to efficiently obtain a compact 3D representation from multi-view images, we use a neural-field model based on compact voxel grids [58] which is both fast to train and effective at storing scene properties in its voxel grids. As for concept learning, we observe that previous works on 3D scene understanding [1, 3] lack the diversity and scale with regard to semantic concepts due to the limited amount of paired 3D-and-language data. Although large-scale vision-language models (VLMs) have achieved impressive performances for zero-shot semantic grounding on 2D images, leveraging these pretrained models for effective open-vocabulary 3D grounding of semantic concepts remains a challenge. To address these challenges, we propose to encode the features of a pre-trained 2D vision-language model (VLM) into the compact 3D representation defined across voxel locations. Specifically, we use the CLIP-LSeg [37] model to obtain features on multi-view images, and propose an alignment loss to map the features in our 3D voxel grid to 2D pixels. By calculating the dot-product attention between the 3D per-point features and CLIP language embeddings, we can ground the semantic concepts in the 3D compact representation. Finally, to answer the questions, we introduce a set of neural reasoning operators, including filter, count, relation operators and so on, which take the 3D representations of different objects as input and output the predictions. We conduct experiments on our proposed 3DMV-VQA benchmark. Experimental results show that our proposed 3D-CLR outperforms all baseline models a lot. However, failure cases and model diagnosis show that challenges still exist concerning the grounding of small objects and the separation of close object instances. 
We provide an in-depth analysis of the challenges and discuss potential future directions. To sum up, we have the following contributions in this paper. * We propose the novel task of 3D concept learning and reasoning from multi-view images. * By having robots actively explore the embodied environments, we collect a large-scale benchmark on 3D multi-view visual question answering (3DMV-VQA). * We devise a model that incorporates a neural radiance field, 2D pretrained vision and language model, and neural reasoning operators to ground the concepts and perform 3D reasoning on the multi-view images. We illustrate that our model outperforms all baseline models. * We perform an in-depth analysis of the challenges of this new task and highlight potential future directions. ## 2 Related Work **Visual Reasoning** There have been numerous tasks focusing on learning visual concepts from natural language, including visually-grounded question answering [18, 19], text-image retrieval [60] and so on. Visual reasoning has drawn much attention recently as it requires human-like understanding of the visual scene. A wide variety of benchmarks have been created over the recent years [7, 8, 23, 27, 33, 70]. However, they mainly focus on visual reasoning from 2D single-view images, while there's strong psychological evidence that human beings perform visual reasoning on the underlying 3D representations. In this paper, we propose the novel task of visual reasoning from multi-view images, and collect a large-scale benchmark for this task. In recent years, numerous visual reasoning models have also been proposed, ranging from attention-based methods [5, 30], graph-based methods [28], to models based on large pretrained vision-language model [9, 38]. These methods model the reasoning process implicitly with neural networks. Neural-symbolic methods [6, 40, 66] explicitly perform symbolic reasoning on the objects representations and language representations. They use perception models to extract 2D masks as a first step, and then execute operators and ground concepts on these pre-segmented masks, but are limited to a set of pre-defined concepts on simple scenes. [26] proposes to use the feature vectors from occupancy networks [42] to do visual reasoning in the 3D space. However, they also use a synthetic dataset, and learn a limited set of semantic concepts from scratch. We propose to learn 3D neural field features from 2D multi-view real-world images, and incorporate a 2D VLM for open-vocabulary reasoning. **3D Reasoning** Understanding and reasoning about 3D scenes has been a long-standing challenge. Recent works focus on leveraging language to explore 3D scenes, such as object captioning [3, 4] and object localization from language [1, 17, 29]. Our work is mostly related to 3D Visual Question Answering [63, 65, 2, 65] as we both focus on answering questions and reasoning about 3D scenes. However, these works use point clouds as 3D representations, which diverts from the way human beings perform 3D reasoning. Instead of being given an entire 3D representation all at once, human beings would actively move and explore the environment, integrating multi-view information to get a compact 3D representation. Therefore, we propose 3D reasoning from multi-view images. In addition, since 3D assets paired with natural language descriptions are hard to get in real-life scenarios, previous works struggle to ground open-vocabulary concepts. 
In our work, we leverage 2D VLMs for zero-shot open-vocabulary concept grounding in the 3D space. **Embodied Reasoning** Our work is also closely related to Embodied Question Answering (EQA) [11, 68] and Interactive Question Answering (IQA) [22, 35], which also involve an embodied agent exploring the environment and answering the question. However, the reasoning mainly focuses on the outcome or the history of the navigation on 2D images and does not require a holistic 3D understanding of the environment. There are also works [12, 20, 51, 54, 69, 57] targeting instruction following in embodied environments, in which an agent is asked to perform a series of tasks based on language instructions. Different from their settings, for our benchmark an embodied agent actively explores the environment and takes multi-view images for 3D-related reasoning. **Neural Fields** Our approach utilizes neural fields to parameterize an underlying 3D compact representations of scenes for reasoning. Neural field models (_e.g.,_[43]) have gained much popularity since they can reconstruct a volumetric 3D scene representation from a set of images. Recent works [21, 24, 58, 67] have pushed it further by using classic voxel-grids to explicitly store the scene properties (_e.g._, density, color and feature) for rendering, which allows for real-time rendering and is utilized by this paper. Neural fields have also been used to represent dynamic scenes [14, 44], appearance [43, 45, 49, 53, 64], physics [34], robotics [32, 52], acoustics [39] and more general multi-modal signals [13]. There are also some works that integrate semantics or language in neural fields [31, 61]. However, they mainly focus on using language for manipulation, editing or generation. [26] leverages neural descriptor field [52] for 3D concept grounding. However, they require ground-truth occupancy values to train the neural field, which can not be applied to real-world scenes. In this paper, we propose to leverage voxel-based neural radiance field [58] to get the compact representations for 3D visual reasoning. ## 3 Dataset Generation ### Multi-View Images Our dataset includes 5k 3D scenes from the Habitat-Matterport 3D Dataset (HM3D) dataset [47], and approximately 600k images rendered from the 3D scenes. The images are rendered via Habitat [50, 59]. **Scene Generation** We build our benchmark on top of the HM3DSem dataset [62], which is a large-scale dataset of 3D real-world indoor scenes with densely annotated semantics. It consists of 142,646 object instance annotations across 216 3D spaces and 3,100 rooms within those spaces. HM3D dataset uses texture information to annotate pixel-accurate object boundaries, which provides large-scale object annotations and ensures the scale, quality, and diversity of 3D visual reasoning questions of our benchmark. To construct a benchmark that covers questions of different difficulty levels, it's crucial that we include 3D scenes of different scales in our benchmark. We start with single rooms in HM3D scenes, which has an appropriate amount of semantic concepts and relationships to base some simple questions on. To get the scale of single rooms, we calculate bounding boxes of rooms according to floor instance segmentations. We then proceed to generate bounding boxes for scenes with multiple adjacent rooms. For more complex holistic scene understanding, we also include whole-house scenes, which may contain tens of rooms. 
Overall, the 3DMV-VQA benchmark contains three levels of scenes (2000 single-room scenes, 2000 multi-room scenes and 100 whole-house scenes).

**Image Rendering** After we get the bounding box of each scene, we load the scene into the Habitat simulator. We also place a robot agent with an RGB sensor at a random initial point in the bounding box. The data is collected via exploration by the robot agent. Specifically, at each step of the data collection process, we sample a navigable point and make the agent move to that point along the shortest path. When the agent has arrived at a point, we rotate the agent by \(30^{\circ}\) along the z-axis 12 times so that the agent can observe the \(360^{\circ}\) view of the scene at that position. It can also look up and down, with a random mild angle from [\(-10^{\circ}\),\(10^{\circ}\)] along the x-axis. A picture is taken each time the agent rotates to a new orientation; in total, 12 pictures are taken at each point. While traveling between points, the robot agent takes further pictures. We also adopt a policy such that when the camera is too far from or too close to an object, so that the agent cannot see anything meaningful, we discard the bad-view images.

### Questions and Answers

We pair each scene with machine-generated questions from pre-defined templates. All questions are open-ended and can be answered with a single word (samples in Fig. 1).

**Concepts and Relationships** To generate questions and answers, we utilize the semantic annotations of HM3DSem [62] to obtain the semantic concepts and their bounding boxes, as well as the bounding boxes of the rooms. We merge semantic concepts with similar meanings (_e.g._, L-shaped sofa to sofa, desk chair or computer chair to chair). We also define 11 relationships: inside, above, below, on the top of, close, far, large, small, between, on the left, and on the right. Before generating questions, we first generate a scene graph for each scene containing all concepts and relationships.

**Question Types** We define four types of questions: concept, counting, relation and comparison.
* **Concept.** Conceptual questions query whether there is an object of a certain semantic concept in the scene, or whether there is a room containing objects of the semantic concept.
* **Counting.** Counting-related questions ask how many instances of a semantic concept are in the scene, or how many rooms contain objects of the semantic concept.
* **Relation.** Relational questions ask about the 11 relationships and their compositions. Based on the number of relations in a question, we have one-hop to three-hop questions for the relation type.
* **Comparison.** The comparison question type focuses on the comparison of two objects, two semantic concepts or two rooms. It can be combined with the relational concepts to compare two objects (_e.g.,_ larger, closer to, more to the left, _etc._). It also compares the number of instances of two semantic concepts, or the number of objects of certain concepts in different rooms.

**Bias Control.** Similar to previous visual reasoning benchmarks [26, 33], we use machine-generated questions since the generation process is fully controllable, so that we can avoid dataset bias. Questions are generated from pre-defined templates and transformed into natural language questions with associated semantic concepts and relationships from the scene. We manually define 41 templates for question generation. We use depth-first search to generate questions. 
We perform bias control based on three perspectives: template counts, answer counts, and concept counts. For selecting templates, we sort the templates each time we generate a question to ensure a balanced question distribution. We force a flat answer distribution for each template by rejection sampling. Specifically, once we generate a question and an answer, if the number of the questions having the same answer and template is significantly larger than other answers, we discard it and continue searching. Once we find an answer that fits in the ideal answer distribution, we stop the depth-first searching for this question. We also force a flat concept distribution for each template using the same method. In addition to controlling the number of concepts mentioned in the templates, we also control the number of relation tuples consisting of the same concept sets. ## 4 Method Fig. 2 illustrates an overview of our framework. Specifically, our framework consists of three steps. First, we learn a 3D compact representation from multi-view images using neural field. And then we propose to leverage pre-trained 2D vision-and-language model to ground concepts on 3D space. This is achieved by 1) generating 2D pixel features using CLIP-LSeg; 2) aligning the features of 3D voxel grid and 2D pixel features from CLIP-LSeg [37]; 3) dot-product attention between the 3D features and CLIP language features [37]. Finally, to perform visual reasoning, we propose neural reasoning operators, which execute the question step by step on the 3D compact representation and outputs a final answer. For example, we use Filter operators to ground semantic concepts on the 3D representation, Get_Instance to get all instances of a semantic class, and Count_Relation to count how many pairs of the two semantic classes have the queried relation. ### Learning 3D Compact Scene Representations Neural radiance fields [43] are capable of learning a 3D representation that can reconstruct a volumetric 3D scene representation from a set of images. Voxel-based methods [21, 24, 58, 67] speed up the learning process by explicitly storing the scene properties (_e.g._, density, color and feature) in its voxel grids. We leverage Direct Voxel Grid Optimization (DVGO) [58] as our backbone for 3D compact representation for its fast speed. DVGO stores the learned density and color properties in its grid cells. The rendering of multi-view images is by interpolating through the voxel grids to get the density and color for each sampled point along each sampled ray, and integrating the colors based on the rendering alpha weights calculated from densities according to quadrature rule [41]. The model is trained by minimizing the L2 loss between the rendered multi-view images and the ground-truth multi-view images. By extracting the density voxel grid, we can get the 3D compact representation (_e.g._, By visualizing points with density greater than 0.5, we can get the 3D representation as shown in Fig. 2 I. ) ### 3D Semantic Concept Grounding Once we extract the 3D compact representation of the scene, we need to ground the semantic concepts for reasoning from language. Recent work from [26] has proposed to ground concepts from paired 3D assets and question-answers. Though promising results have been achieved on synthetic data, it is not feasible for open-vocabulary 3D reasoning in real-world data, since it is hard to collect large-scale 3D vision-and-language paired data. 
To address this challenge, our idea is to leverage pre-trained 2D vision and language models [46, 48] for 3D concept grounding in real-world scenes. But how can we map 2D concepts into 3D neural field representations? Note that 3D compact representations can be learned from 2D multi-view images and that each 2D pixel actually corresponds to several 3D points along its ray. Therefore, it is possible to derive 3D features from 2D per-pixel features. Inspired by this, we first add a feature voxel grid to DVGO, in addition to density and color, to represent 3D features. We then apply CLIP-LSeg [37] to learn per-pixel 2D features, which can be attended to by CLIP concept embeddings. We use an alignment loss to align the 3D features with the 2D features so that we can perform concept grounding on the 3D representations.

Figure 2: An overview of our 3D-CLR framework. First, we learn a 3D compact scene representation from multi-view images using neural fields (I). Second, we use the CLIP-LSeg model to get per-pixel 2D features (II). We utilize a 3D-2D alignment loss to assign features to the 3D compact representation (III). By calculating the dot-product attention between the 3D per-point features and CLIP language embeddings, we can get the concept grounding in 3D (IV). Finally, the reasoning process is performed via a set of neural reasoning operators, such as Filter, Get_Instance and Count_Relation (V). Relation operators are learned via relation networks.

**2D Feature Extraction.** To get per-pixel features that can be attended by concept embeddings, we use the features from language-driven semantic segmentation (CLIP-LSeg) [37], which learns 2D per-pixel features from a pre-trained vision-language model (_i.e._, [46]). Specifically, it uses the text encoder from CLIP, trains an image encoder to produce an embedding vector for each pixel, and calculates word-pixel correlation scores by dot-product. By outputting the semantic class with the maximum score for each pixel, CLIP-LSeg is able to perform zero-shot 2D semantic segmentation.

**3D-2D Alignment.** In addition to density and color, we also store a 512-dim feature in each grid cell of the compact representation. To align the 3D per-point features with the 2D per-pixel features, we calculate an L1 loss between each pixel and each 3D point sampled on the ray of that pixel. The overall loss along a ray is the weighted sum of all pixel-point alignment losses, with the same weights as the rendering weights: \(\mathcal{L}_{\text{feature}}=\sum_{i=1}^{K}w_{i}\|\mathbf{f}_{i}-F(\mathbf{r})\|\), where \(\mathbf{r}\) is a ray corresponding to a 2D pixel, \(F(\mathbf{r})\) is the 2D feature from CLIP-LSeg, \(K\) is the total number of sampled points along the ray, \(\mathbf{f}_{i}\) is the feature of point \(i\) obtained by interpolating the feature voxel grid, and \(w_{i}\) is the rendering weight.

**Concept Grounding through Attention.** Since our feature voxel grid is learned from CLIP-LSeg, by calculating the dot-product attention \(\langle\mathbf{f},\mathbf{v}\rangle\) between a per-point 3D feature \(\mathbf{f}\) and the CLIP concept embedding \(\mathbf{v}\), we obtain zero-shot, view-independent concept grounding and semantic segmentation in the 3D representation, as presented in Fig. 2 IV.
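To make the alignment and grounding steps above concrete, the following is a minimal PyTorch-style sketch of the per-ray rendering weights, the feature alignment loss \(\mathcal{L}_{\text{feature}}\), and the dot-product attention against concept embeddings. The tensor names, shapes, and the exact alpha-compositing form are illustrative assumptions rather than the actual 3D-CLR implementation.

```python
import torch

def ray_weights(sigmas, deltas):
    # Alpha-compositing (quadrature) weights w_i = T_i * (1 - exp(-sigma_i * delta_i)),
    # where T_i is the transmittance accumulated along the ray before sample i.
    alphas = 1.0 - torch.exp(-sigmas * deltas)                      # (K,)
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alphas + 1e-10])[:-1], dim=0)
    return trans * alphas                                           # (K,)

def alignment_loss(point_feats, pixel_feat, weights):
    # L_feature = sum_i w_i * || f_i - F(r) ||_1 for a single ray.
    return (weights * (point_feats - pixel_feat).abs().sum(dim=-1)).sum()

def ground_concepts(voxel_feats, concept_embeds):
    # Dot-product attention <f, v> between per-point 3D features and CLIP
    # concept embeddings; the argmax gives a per-point concept label.
    scores = voxel_feats @ concept_embeds.t()                       # (P, C)
    return scores.argmax(dim=-1)

# Toy usage with random tensors: K samples per ray, 512-dim features,
# P voxel points, C concepts.
K, D, P, C = 64, 512, 1000, 20
w = ray_weights(torch.rand(K), torch.full((K,), 0.01))
loss = alignment_loss(torch.randn(K, D), torch.randn(D), w)
labels = ground_concepts(torch.randn(P, D), torch.randn(C, D))
```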
### Neural Reasoning Operators

Finally, we use the grounded semantic concepts for 3D reasoning from language. We first transform each question into a sequence of operators that can be executed on the 3D representation, using an LSTM-based semantic parser [66]. Following [26, 40], we further devise a set of operators which can be executed on the 3D representation. Please refer to the **Appendix** for a full list of operators.

**Filter Operators.** We filter all the grid cells associated with a certain semantic concept.

**Get_Instance Operators.** We implement this by utilizing DBSCAN [15], an unsupervised algorithm which assigns clusters to a set of points. Specifically, given a set of points in 3D space, it groups together points that are closely packed, yielding an instance segmentation.

**Relation Operators.** We cannot directly execute a relation on the 3D representation, as relations are not grounded. Thus, we represent each relation using a distinct neural module (which is practical as the vocabulary of relations is limited [36]). We first concatenate the voxel grid representations of all the referred objects and feed them into the relation network. The relation network consists of three 3D convolutional layers followed by three 3D deconvolutional layers, and it outputs a score indicating whether the objects have the relationship or not. Since vanilla 3D CNNs are very slow, we use Sparse Convolution [10] instead. Based on the relations asked about in the questions, different relation modules are chosen.

## 5 Experiments

### Experimental Setup

**Evaluation Metric.** We report the visual question answering accuracy on the proposed 3DMV-VQA dataset w.r.t. the four types of questions. The train/val/test split is 7:1:2.

**Implementation Details.** For the 3D compact representations, we adopt the same architecture as DVGO, except that we skip the coarse reconstruction phase and directly train the fine reconstruction phase. After that, we freeze the density voxel grid and color voxel grid and optimize the feature voxel grid only. The feature grid has a world size of 100 and a feature dimension of 512. We train the compact representations for 100,000 iterations and the 3D features for another 20,000 iterations. For LSeg, we use the official demo model, which has the ViT-L/16 image encoder and CLIP's ViT-B/32 text encoder. We follow the official script for inference and use multi-scale inference. For DBSCAN, we use an epsilon value of 1.5, a minimum-samples value of 2, and the L1 (Manhattan) distance as the clustering metric. For the relation networks, each relation is encoded into a three-layer sparse 3D convolution network with hidden size 64. The output is then fed into a one-layer linear network to produce a score, which is normalized by a sigmoid function. We train the relation networks with a cross-entropy loss, using the one-hop relational questions with "yes/no" answers.

### Baselines

Our baselines range from vanilla neural networks, attention-based methods, methods fine-tuned from large-scale VLMs, and graph-based methods, to neural-symbolic methods.

* **LSTM**. The question is converted to word embeddings which are input into a word-level LSTM [25]. The last LSTM hidden state is fed into a multi-layer perceptron (MLP) that outputs a distribution over answers. This method is able to model question-conditional bias since it uses no image information.
* **CNN+LSTM**. The question is encoded by the final hidden states of an LSTM. We use a ResNet-50 to extract frame-level features of the images and average them over the time dimension. The features are fed to an MLP to predict the final answer.
This is a simple baseline that examines how vanilla neural networks perform on 3DMV-VQA.
* **3D-Feature+LSTM**. We use the 3D features obtained from the 3D-2D alignment, downsample the voxel grids using a 3D-CNN, concatenate them with language features from an LSTM, and feed the result to an MLP.
* **MAC** [30]. MAC utilizes a Memory, Attention and Composition cell to perform an iterative reasoning process. Like CNN+LSTM, we use average pooling over the multi-view images as the feature map.
* **MAC(V)**. We treat the multi-view images along a trajectory as a video. We modify the MAC model by applying a temporal attention unit across the video frames to generate a latent encoding for the video.
* **NS-VQA** [66]. This is a 2D version of our 3D-CLR model. We use CLIP-LSeg to ground 2D semantic concepts from multi-view images, and the relation network also takes the 2D features as input. We execute the operators on each image and max-pool over the answers to get our final predictions.
* **ALPRO** [38]. ALPRO is a video-and-language pre-training framework. A transformer model is pretrained on large webly-sourced video-text pairs and can be used for downstream tasks like video question answering.
* **LGCN** [28]. LGCN represents the contents of the video as a location-aware graph by incorporating the location information of an object into the graph construction.

### Experimental Results

**Result Analysis.** We summarize the performance of the baseline models for each question type in Table 1. All models are trained on the training set until convergence, tuned on the validation set, and evaluated on the test set. We provide a detailed analysis below. First, to examine the language bias of the dataset, we find that the performance of LSTM is only slightly higher than the random and frequency baselines, and all other baselines outperform LSTM by a clear margin. This suggests that there is little language bias in our dataset. Second, we observe that encoding temporal information in MAC (_i.e.,_ MAC(V)) is better than average-pooling the features, especially for counting and relation. This suggests that average-pooling the features may cause the model to lose information from the multi-view images, while attention over the multi-view images helps boost 3D reasoning performance. Third, we find that fine-tuning a large-scale pretrained model (_i.e.,_ ALPRO) yields relatively high accuracy on concept-related questions, but for counting it is only slightly higher than the random baseline, suggesting that pretraining on large-scale video-language datasets may improve the model's perception ability but does not equip it to tackle more difficult reasoning types such as counting. Next, we find that LGCN performs poorly on the relational questions, indicating that building a location-aware graph over 2D objects still does not equip the model with 3D location reasoning abilities. Last but not least, we find that 3D-based baselines are better than their 2D counterparts. 3D-Feature+LSTM performs better on the 3D-related questions, such as counting and relation, than most of the image-based baselines. Compared with 3D-CLR, NS-VQA performs well on the conceptual questions; however, it underperforms 3D-CLR by a large margin on counting and relation, suggesting that these two question types require a holistic understanding of the entire 3D scene. Our 3D-CLR outperforms the other baselines by a large margin, but its accuracy is still far from satisfactory. 
From the accuracy on the conceptual questions, we can see that it can only ground approximately 66% of the semantic concepts. This indicates that our 3DMV-VQA dataset is indeed very challenging.

**Qualitative Examples.** In Fig. 3, we show four qualitative examples. From the examples, we show that our 3D-CLR can infer an accurate 3D representation from multi-view images, as well as ground semantic concepts on the 3D representations to obtain semantic segmentations of the entire scene. Our 3D-CLR can also learn 3D relationships such as "close", "largest", "on top of" and so on. However, 3D-CLR also fails on some questions. For the third scene in the qualitative examples, it fails to ground the concepts "mouse" and "printer". Also, it sometimes cannot accurately count the instances. We give detailed discussions below.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline Methods & Concept & Counting & Relation & Comparison & Overall \\ \hline Q-type (rand.) & 49.4 & 10.7 & 21.6 & 49.2 & 26.4 \\ Q-type (freq.) & 50.8 & 11.3 & 23.9 & 50.3 & 28.2 \\ LSTM & 53.4 & 15.3 & 24.0 & 55.2 & 29.8 \\ \hline CNN+LSTM & 57.8 & 22.1 & 35.2 & 59.7 & 37.8 \\ MAC & 62.4 & 19.7 & 47.8 & 62.3 & 46.7 \\ MAC(V) & 60.0 & 24.6 & 51.6 & 65.9 & 50.0 \\ NS-VQA & 59.8 & 21.5 & 33.4 & 61.6 & 38.0 \\ ALPRO & 65.8 & 12.7 & 42.2 & 68.2 & 43.3 \\ LGCN & 56.2 & 19.5 & 35.5 & 66.7 & 39.1 \\ 3D-Feature+LSTM & 61.2 & 22.4 & 49.9 & 61.3 & 48.2 \\ \hline 3D-CLR (Ours) & **66.1** & **41.3** & **57.6** & **72.3** & **57.7** \\ \hline \hline \end{tabular} \end{table} Table 1: Question-answering accuracy of 3D visual reasoning baselines on different question types.

### Discussions

We perform an in-depth analysis to understand the challenges of this dataset. We leverage the modular design of our 3D-CLR, replacing individual components of the framework with ground-truth annotations for model diagnosis. The result is shown in Fig. 4. 3D-CLR w/ Semantic denotes our model with ground-truth semantic concepts from the HM3DSem annotations. 3D-CLR w/ Instance denotes that we additionally have ground-truth instance segmentations of the semantic concepts. From Fig. 3 and Fig. 4, we summarize several key challenges of our benchmark:

**Very close object instances** From Fig. 4, we can see that even with ground-truth semantic labeling of the 3D points, 3D-CLR still has unsatisfying results on counting questions. This suggests that the instance segmentations provided by DBSCAN are not accurate enough. From the top two qualitative examples in Fig. 3, we can also see that if two chairs contact each other, DBSCAN will not tell them apart and thus has poor performance on counting. One crucial future direction is to improve unsupervised instance segmentation for very close object instances.

**Grounding small objects** Fig. 4 suggests that 3D-CLR fails to ground a large portion of the semantic concepts, which hinders the performance. From the last example in Fig. 3, we can see that 3D-CLR fails to ground small objects like "computer mouse". Further examination indicates two possible reasons: 1) CLIP-LSeg fails to assign the right features to objects with limited pixels; 2) the resolution of the feature voxel grid is not high enough, and therefore small objects cannot be represented in the compact representation. An interesting future direction would be learning exploration policies that enable the agents to get closer to uncertain objects that cannot be grounded. 
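As a concrete illustration of the instance-counting failure mode discussed above, the sketch below clusters the voxel coordinates grounded to a single concept with DBSCAN and counts the clusters. The hyperparameters mirror those in Section 5.1, while the toy point clouds are our own construction.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def count_instances(points, eps=1.5, min_samples=2):
    """Cluster the voxel coordinates grounded to one concept and count the
    clusters; `points` is an (N, 3) array of voxel-grid coordinates."""
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="manhattan").fit_predict(points)
    return len(set(labels) - {-1})   # label -1 marks noise points

# Two chairs whose point clouds touch collapse into a single cluster,
# reproducing the counting failure mode discussed above.
rng = np.random.default_rng(0)
chair_a = rng.random((50, 3)) * 2.0
chair_b = chair_a + np.array([2.0, 0.0, 0.0])   # adjacent, nearly touching
print(count_instances(np.vstack([chair_a, chair_b])))  # likely prints 1, not 2
```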
**Ambiguity on 3D relations** Even with ground-truth semantic and instance segmentations, the performance of the relation network still needs to be improved. We find that most of the failure cases are related to the "inside" relation. From the segmentations in Fig. 3, we can see that 3D-CLR is unable to ground the objects in the cabinets. A potential solution could be joint depth and segmentation prediction.

Figure 3: Qualitative examples of our 3D-CLR. We can see that 3D-CLR can ground most of the concepts and answer most questions correctly. However, it still fails sometimes, mainly because it cannot separate close object instances and ground small objects.

Figure 4: Model diagnosis of our 3D-CLR.

## 6 Conclusion

In this paper, we introduce the novel task of 3D reasoning from multi-view images. By placing an embodied robot that actively explores indoor environments, we collect a large-scale benchmark named 3DMV-VQA. We also propose a new 3D-CLR model that incorporates a neural field, a 2D VLM, as well as reasoning operators for this task, and illustrate its effectiveness. Finally, we perform an in-depth analysis to understand the challenges of this dataset and point out potential future directions. We hope that 3DMV-VQA can be used to push the frontiers of 3D reasoning.

Acknowledgements. This work was supported by the MIT-IBM Watson AI Lab, DARPA MCS, DSO grant DSOCO21072, and gift funding from MERL, Cisco, Sony, and Amazon. We would also like to thank the computation support from AiMOS, a server cluster for the IBM Research AI Hardware Center.
2310.09450
Non-intrusive Enforcement of Decentralized Stability Protocol for IBRs in AC Microgrids
This paper presents decentralized, passivity-based stability protocol for inverter-based resources (IBRs) in AC microgrids and a non-intrusive approach that enforces the protocol. By "non-intrusive" we mean that the approach does not require reprogramming IBRs' controllers to enforce the stability protocol. Implementing the approach only requires very minimal information of IBR dynamics, and sharing such information with the non-IBR-manufacturer parties does not cause any concerns on intellectual property privacy. Enforcing the protocol allows for plug-and-play operation of IBRs, while maintaining microgrid stability. The proposed method is tested by simulating two networked microgrids with tie lines and two IBRs modeled in the electromagnetic transient (EMT) time scale. Simulations show that oscillations with increasing amplitudes can occur, when two stable AC microgrids are networked. Simulations also suggest that the proposed approach can mitigate such a system-level symptom by changing less than 2 percent of energy produced by IBRs.
Tong Huang
2023-10-13T23:57:22Z
http://arxiv.org/abs/2310.09450v3
# Non-intrusive Enforcement of Decentralized Stability Protocol for IBRs in AC Microgrids ###### Abstract This paper presents decentralized, passivity-based stability protocol for inverter-based resources (IBRs) in AC microgrids and a non-intrusive approach that enforces the protocol. By "non-intrusive" we mean that the approach does not require reprogramming IBRs' controllers to enforce the stability protocol. Implementing the approach only requires very minimal information of IBR dynamics, and sharing such information with the non-IBR-manufacturer parties does not cause any concerns on intellectual property privacy. Enforcing the protocol allows for plug-and-play operation of IBRs, while maintaining microgrid stability. The proposed method is tested by simulating two networked microgrids with tie lines and two IBRs modeled in the electromagnetic transient (EMT) time scale. Simulations show that oscillations with increasing amplitudes can occur, when two stable AC microgrids are networked. Simulations also suggest that the proposed approach can mitigate such a system-level symptom by changing \(<2\%\) of energy produced by IBRs. Microgrid stability, inverter-based resource (IBR), integration of distributed energy resources (DERs), resilient control, electromagnetic transient (EMT) ## I Introduction As many countries are decarbonizing their energy infrastructure, a growing number of Inverter-based Resources (IBRs), e.g., energy storage, rooftop solar panels, and electric vehicle charging stations, are emerging in power distribution grids [1]. However, integrating large-scale IBRs will pose unprecedented challenges to distribution grid management, since today's distribution grids are not designed for hosting tens of thousands of IBRs, and distribution system operators (DSOs) generally cannot directly control IBRs at grid edges. With the concept of microgrids [2], a large amount of IBRs in a distribution grid can be managed via a "divide-and-conquer" strategy: the distribution grid can be divided into several networked microgrids, and each microgrid manages its own generation and loads [3]. With such an architecture, the management complexity for DSOs is significantly reduced, as the DSOs only need to coordinate several microgrids, instead of controlling massive IBRs in a centralized manner [4]. A microgrid has three operational modes: a grid-connected mode [2], an islanded mode [2], and a hybrid mode [5]. Under normal conditions, a microgrid can enter the grid-connected mode where the loads in the microgrid can be balanced by the energy from both local generation and the host distribution system. When the host distribution grid fails to deliver energy, a microgrid can either balance its load autonomously by its local generation (i.e., the islanded mode), or network with its neighboring microgrids and balance loads collaboratively (i.e., the hybrid mode) [5]. One key challenge of operating microgrids in the islanded or hybrid mode is how to ensure the microgrid stability [6]. Compared with large-scale transmission systems whose dynamics are governed by thousands of giant rotating machines, the microgrids powered by IBRs are more sensitive to disturbances that include connection or disconnection of IBRs, renewable fluctuations and line faults, due to lack of physical inertia in generation resources and the small scale of the microgrids. As a result, the disturbances may compromise the quality of electricity services by incurring sustained oscillations or even instability. 
Exacerbating the challenge, today's IBR manufacturers tune their IBRs at a device level without much consideration of the system-level performance of networked IBRs. However, the non-manufacturer parties (NMPs), e.g., DSOs, microgrid operators (\(\mu\)GOs), and IBR owners, who are concerned with the security of networked IBRs, typically do not know the detailed control schemes of IBRs and cannot reprogram the IBRs' controllers. This is because the manufacturers are reluctant to share their detailed control schemes with the NMPs due to concerns on intellectual property (IP) privacy. Without consideration of system-level performance, IBRs might fight with each other, causing undesirable oscillations or instability. Such incidents have occurred in transmission systems, e.g., the sub-synchronous control interactions (SSCI) in Texas [7] and oscillations in high-voltage DC systems that contain multiple converters [8]. In the context of microgrids, it is possible that networking two stable microgrids leads to oscillations with increasing amplitudes (as shall be shown in Section V). Therefore, as more and more IBRs are emerging at grid edges, it is imperative to develop technologies that certify the system-level stability of networked IBRs.

Existing approaches to stability certification for electrical energy systems can be classified into two categories: centralized and decentralized approaches. In the _centralized_ approaches, system operators (SOs) are assumed to be able to collect dynamical models of key components in the systems, and they _assess_ the system stability by performing time-domain simulations [9], by conducting small-signal analysis [10], or by searching for system behavior-summary functions, e.g., Lyapunov functions [4, 11] and energy functions [12, 13]. The drawbacks of these centralized approaches are listed as follows:

* IBR manufacturers can only share a "black-box" model with SOs for simulation purposes, due to concerns on IP privacy. Consequently, detailed IBR models are not available for performing analytical stability assessment [4, 11, 12, 13].
* Some approaches [9, 14] are computationally intractable when addressing high-order systems. For an IBR-rich microgrid, wide-ranging behaviors of interest lie in the EMT time scale, and they are described by high-order dynamics.
* Most approaches [11, 12, 13] cannot provide SOs with actionable guidance for enforcing system stability. Beyond stability analysis, controls enforcing stability are much needed.

The _decentralized_ approaches address the drawbacks of the centralized approaches by developing decentralized stability protocols for IBRs. The decentralized stability protocol entails conditions that each IBR needs to satisfy to ensure the stability of its host system. "Decentralized" is in the sense that these conditions are only related to the local information of the IBR dynamics of interest. Passivity theory is a common tool for designing such protocols. For example, reference [15] introduces the concept of self-disciplined stabilization in the context of DC microgrids; the stability protocol for each IBR is the passivity of the single-input-single-output (SISO) transfer function of the IBR. Reference [16] proposes a distributed, passivity-like stability protocol based on low-order nodal dynamics and power flow equations. Reference [17] develops a stability protocol for conventional generators in transmission systems based on the passivity-shortage framework. 
Reference [18] learns a neural network-structured storage function for each IBR and leverages the storage function as stability protocol to certify microgrid stability. Reference [19] presents the passivity-based stability protocol for IBRs to assess small-signal stability of both fast and slow behaviors of IBR interconnections. However, the existing decentralized approaches have the following limitations: * In references [15, 16, 17] and [19], the protocol is enforced in an _intrusive_ manner, i.e., one has to reprogram the controllers of generation resources to enforce the protocol. This is undesirable for both NMPs and IBR manufacturers due to the following reasons. The IBR controllers are typically packaged into the inverters and cannot be reprogrammed by the NMPs, for protecting IP privacy and reducing IBRs' vulnerability to cyberattacks. The control schemes of commercial inverters are typically deliberately designed and extensively tested by IBR manufacturers for achieving certain functions, such as voltage and current regulation. Hence, the IBR manufacturers might be reluctant to completely abandon or radically change their mature control schemes for enforcing the stability protocol [19]. Besides, since many IBRs have been installed in the grid, it is costly or even infeasible to reprogram the controllers of these existing IBRs. * The complexity of dynamics of IBR-dominated, AC microgrids is ignored by [15, 16, 17, 18]. For example, reference [15] only considers the SISO dynamics of converter interfaces in DC microgrids, while the IBR's dynamics in an AC microgrid can have multiple inputs and outputs. References [16, 17, 18] only address the slow dynamics of generation units but ignores the interactions among network dynamics and fast IBR controllers in the EMT time scale. Modelling full-order network dynamics is necessary in an IBR-rich microgrid, as some inverters may have high-frequency dynamics [20]. * References [18] and [19] only address stability assessment in a distributed manner without providing guidance of how to stabilize an unstable microgrid. This paper introduces a _first-of-its-kind_, _non-intrusive_, and _decentralized_ approach to enforcing stability protocol of IBRs in AC microgrids. The stability protocol is designed based on the passivity theory and can be enforced by a novel power-electronic (PE) interface in a decentralized, non-intrusive manner. The contribution of this paper is summarized as follows: * The approach enforces the stability protocol in a non-intrusive fashion, i.e., it does not require reprogramming IBR controllers. This allows the NMPs to enforce the protocol that enables plug-and-play operation of IBRs. * Designing the PE interface only needs a scalar that encapsulates input-output dynamics of an IBR, and does not require the detailed control schemes of the IBR. Exposing such a scalar to NMPs will not cause any concerns of IP privacy for IBR manufacturers, as the detailed IBR control schemes cannot be inferred only based on the scalar. We also present an algorithm for IBR manufacturers to compute the scalar. * The approach can address the high-order dynamics due to the tight interaction among voltage and current controllers, and network dynamics in the EMT time scale. 
The rest of this paper is organized as follows: Section II mathematically describes the dynamics of an IBR-dominated microgrid; Section III presents the decentralized stability protocol; Section IV introduces the interface that aims to enforce the stability protocol; Section V tests the performance of the interface; and Section VI summarizes this paper.

## II Microgrid Dynamics

This section considers an AC microgrid with \(N\) IBRs. We first describe the nodal and network dynamics of the microgrid. The microgrid dynamics are then organized into a feedback architecture that lends itself to developing the stability protocol.

### _Dynamics of Grid-forming IBRs_

Figure 1 presents the cyber-physical architecture of the \(n\)-th IBR, \(n=1,2,\ldots,N\) [20]. The IBR includes a DC voltage source, an inverter, a resistor-inductor-capacitor (RLC) low-pass filter, and an inverter controller.

#### II-A1 RLC filter

The inverter connects to the rest of the microgrid via an RLC filter whose dynamics are [20]
\[L_{\text{fn}}\dot{i}_{\text{ldn}}=-r_{\text{fn}}i_{\text{ldn}}+L_{\text{fn}}\omega_{0}i_{\text{lqn}}+v_{\text{idn}}-v_{\text{odn}} \tag{1a}\]
\[L_{\text{fn}}\dot{i}_{\text{lqn}}=-r_{\text{fn}}i_{\text{lqn}}-L_{\text{fn}}\omega_{0}i_{\text{ldn}}+v_{\text{iqn}}-v_{\text{oqn}} \tag{1b}\]
\[C_{\text{fn}}\dot{v}_{\text{odn}}=C_{\text{fn}}\omega_{0}v_{\text{oqn}}+i_{\text{ldn}}+i_{\text{odn}} \tag{1c}\]
\[C_{\text{fn}}\dot{v}_{\text{oqn}}=-C_{\text{fn}}\omega_{0}v_{\text{odn}}+i_{\text{lqn}}+i_{\text{oqn}} \tag{1d}\]
where \(i_{\text{ldn}}\) and \(i_{\text{odn}}\) (\(i_{\text{lqn}}\) and \(i_{\text{oqn}}\)) are the direct (quadrature) components of the currents \(\mathbf{i}_{\text{ln}}\) and \(\mathbf{i}_{\text{on}}\) annotated in Figure 1; \(v_{\text{idn}}\) and \(v_{\text{odn}}\) (\(v_{\text{iqn}}\) and \(v_{\text{oqn}}\)) are the direct (quadrature) components of the voltages \(\mathbf{v}_{\text{in}}\) and \(\mathbf{v}_{\text{on}}\); the resistance \(r_{\text{fn}}\), inductance \(L_{\text{fn}}\), and capacitance \(C_{\text{fn}}\) of the RLC circuit are labeled in Figure 1; and \(\omega_{0}\) is the nominal frequency (i.e., 377 or 314 rad/s). Note that the positive reference direction of \(\mathbf{i}_{\text{on}}\) is _pointing into_ the IBR.

#### II-A2 Power controller

A power controller contains a power calculator, a power filter, and a droop controller. The power calculator computes the instantaneous real power \(\tilde{p}_{n}\) and reactive power \(\tilde{q}_{n}\) _injected into_ the rest of the microgrid, based on IBR \(n\)'s terminal voltages (\(v_{\text{odn}}\) and \(v_{\text{oqn}}\)) and currents (\(i_{\text{odn}}\) and \(i_{\text{oqn}}\)) in the direct-quadrature (d-q) reference frame of IBR \(n\). With the positive reference directions assigned to \(\mathbf{v}_{\text{on}}\) and \(\mathbf{i}_{\text{on}}\) in Figure 1, \(\tilde{p}_{n}\) and \(\tilde{q}_{n}\) are computed by [20]
\[\tilde{p}_{n}=-(v_{\text{odn}}i_{\text{odn}}+v_{\text{oqn}}i_{\text{oqn}}) \tag{2a}\]
\[\tilde{q}_{n}=-(v_{\text{oqn}}i_{\text{odn}}-v_{\text{odn}}i_{\text{oqn}}). \tag{2b}\]
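As a quick numerical check of (2) and of the adopted sign convention (positive \(\mathbf{i}_{\text{on}}\) flows into the IBR), consider the following sketch; the numerical values are arbitrary.

```python
def instantaneous_power(v_od, v_oq, i_od, i_oq):
    # Eq. (2): power injected into the rest of the microgrid; the minus signs
    # come from i_on being referenced as flowing INTO the IBR.
    p = -(v_od * i_od + v_oq * i_oq)
    q = -(v_oq * i_od - v_od * i_oq)
    return p, q

# Terminal voltage aligned with the d-axis and current actually flowing out of
# the IBR (hence negative in this reference direction) -> positive injection.
p_tilde, q_tilde = instantaneous_power(v_od=311.0, v_oq=0.0, i_od=-10.0, i_oq=0.0)
print(p_tilde, q_tilde)   # 3110.0 -0.0
```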
The instantaneous real and reactive power feed the power filter, i.e., a digital low-pass filter, whose dynamics are described by
\[\dot{P}_{n}=-\omega_{\text{cn}}P_{n}+\omega_{\text{cn}}\tilde{p}_{n} \tag{3a}\]
\[\dot{Q}_{n}=-\omega_{\text{cn}}Q_{n}+\omega_{\text{cn}}\tilde{q}_{n} \tag{3b}\]
where \(\omega_{\text{cn}}\) is the cut-off frequency; and \(P_{n}\) and \(Q_{n}\) are the real and reactive power filtered by the power filter. The droop controller takes \(P_{n}\) and \(Q_{n}\) as inputs, and it specifies the frequency \(\omega_{n}\), phase angle \(\delta_{n}\), and voltage setpoints \(v^{*}_{\text{odn}}\) and \(v^{*}_{\text{oqn}}\) via
\[\dot{\delta}_{n}=\omega_{n}-\omega_{0},\quad\omega_{n}=\omega_{\text{sn}}-\alpha_{n}P_{n} \tag{4a}\]
\[v^{*}_{\text{odn}}=V_{0n}-\beta_{n}Q_{n},\quad v^{*}_{\text{oqn}}=0 \tag{4b}\]
where \(\omega_{\text{sn}}\) is set by a secondary controller; \(V_{0n}\) is a voltage setpoint; and \(\alpha_{n}\) and \(\beta_{n}\) are droop control parameters.

#### II-A3 Voltage and current controllers

The dynamics of the voltage and current controllers are governed by
\[\dot{\phi}_{\text{dn}}=-v_{\text{odn}}+v^{*}_{\text{odn}},\quad\dot{\phi}_{\text{qn}}=-v_{\text{oqn}}+v^{*}_{\text{oqn}}, \tag{5a}\]
\[\dot{\gamma}_{\text{dn}}=-i_{\text{ldn}}+i^{*}_{\text{ldn}},\quad\dot{\gamma}_{\text{qn}}=-i_{\text{lqn}}+i^{*}_{\text{lqn}}, \tag{5b}\]
\[i^{*}_{\text{ldn}}=K_{\text{pvn}}(v^{*}_{\text{odn}}-v_{\text{odn}})-F_{n}i_{\text{odn}}-\omega_{0}C_{\text{fn}}v_{\text{oqn}}+K_{\text{ivn}}\phi_{\text{dn}} \tag{5c}\]
\[i^{*}_{\text{lqn}}=K_{\text{pvn}}(v^{*}_{\text{oqn}}-v_{\text{oqn}})-F_{n}i_{\text{oqn}}+\omega_{0}C_{\text{fn}}v_{\text{odn}}+K_{\text{ivn}}\phi_{\text{qn}} \tag{5d}\]
\[v^{*}_{\text{idn}}=K_{\text{pcn}}(i^{*}_{\text{ldn}}-i_{\text{ldn}})-\omega_{0}L_{\text{fn}}i_{\text{lqn}}+K_{\text{icn}}\gamma_{\text{dn}} \tag{5e}\]
\[v^{*}_{\text{iqn}}=K_{\text{pcn}}(i^{*}_{\text{lqn}}-i_{\text{lqn}})+\omega_{0}L_{\text{fn}}i_{\text{ldn}}+K_{\text{icn}}\gamma_{\text{qn}} \tag{5f}\]
where \(\phi_{\text{dn}}\) and \(\phi_{\text{qn}}\) (\(\gamma_{\text{dn}}\) and \(\gamma_{\text{qn}}\)) are state variables of the voltage (current) controller; \(i^{*}_{\text{ldn}}\) and \(i^{*}_{\text{lqn}}\) are setpoints of the current controller provided by the voltage controller; and \(K_{\text{pvn}}\), \(F_{n}\), \(K_{\text{ivn}}\), \(K_{\text{pcn}}\), and \(K_{\text{icn}}\) are control parameters.

#### II-A4 Time scale separation

The state variables of dynamics (1), (3), (4), and (5) include \(\delta_{n}\), \(P_{n}\), \(Q_{n}\), \(\phi_{\text{dn}}\), \(\phi_{\text{qn}}\), \(\gamma_{\text{dn}}\), \(\gamma_{\text{qn}}\), \(i_{\text{ldn}}\), \(i_{\text{lqn}}\), \(v_{\text{odn}}\), and \(v_{\text{oqn}}\). Define \(\mathcal{S}_{n}^{\text{s}}=\{\delta_{n},P_{n},Q_{n}\}\) and \(\mathcal{S}_{n}^{\text{f}}=\{\phi_{\text{dn}},\phi_{\text{qn}},\gamma_{\text{dn}},\gamma_{\text{qn}},i_{\text{ldn}},i_{\text{lqn}},v_{\text{odn}},v_{\text{oqn}}\}\). Next we show that the states in \(\mathcal{S}_{n}^{\text{f}}\) can be stabilized much faster than those in \(\mathcal{S}_{n}^{\text{s}}\) by simulating a grid-connected IBR with a representative parameter setting [20]. The simulation details are reported in Appendix A. In the simulation, the load changes at time \(t=0.5\) s; Figure 2 visualizes the state variables \(P_{1}\) and \(\phi_{\text{d1}}\). 
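The stabilization times compared next can be extracted from such simulated trajectories with a helper of the following form; the 2% settling band and the example first-order response are our own illustrative choices, not the paper's definitions.

```python
import numpy as np

def settling_time(t, x, band=0.02):
    """Return the first time after which x stays within `band` (relative) of
    its final value; t and x are 1-D arrays from a simulated trajectory."""
    x_final = x[-1]
    tol = band * max(abs(x_final), 1e-9)
    outside = np.abs(x - x_final) > tol
    if not outside.any():
        return t[0]
    last_violation = np.where(outside)[0][-1]
    return t[min(last_violation + 1, len(t) - 1)]

# Example: a first-order response qualitatively similar to the filtered power P_1.
t = np.linspace(0.5, 0.8, 3001)
P1 = 1.0 - np.exp(-(t - 0.5) / 0.04)        # 40 ms time constant
print(settling_time(t, P1))                 # roughly 0.5 + 4 * 0.04 s
```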
It can be observed that it takes more than \(0.15\) s to stabilize \(P_{1}\), while \(\phi_{\text{d1}}\) is stabilized around \(0.006\) s after the disturbance occurs. Figure 3 presents the stabilization times of key variables of the IBR. Figure 3 suggests that \(\omega_{1}\), \(P_{1}\), and \(Q_{1}\) are stabilized much more slowly than the states in \(\mathcal{S}_{n}^{\text{f}}\). A similar observation is also reported in [21]. A very large body of literature (see [4] and the references therein) studies the slow dynamics defined by the states in \(\mathcal{S}_{n}^{\text{s}}\) by assuming that the fast states in \(\mathcal{S}_{n}^{\text{f}}\) are stabilized quickly. This paper examines the interaction among the fast states in \(\mathcal{S}_{n}^{\text{f}}\) by treating the states in \(\mathcal{S}_{n}^{\text{s}}\) as constants.

Fig. 1: Cyber and physical architecture of IBR \(n\). Fig. 2: Time-domain evolution of normalized \(P_{1}\) and \(\phi_{\text{d1}}\). Fig. 3: Stabilization time of key variables. Fig. 4: (a) Reference frame transformation; and (b) an input-output perspective of IBR dynamics.

With such an assumption, the IBR dynamics can be described by:
\[\dot{\mathbf{x}}_{n}=A_{n}\mathbf{x}_{n}+B_{n}\mathbf{i}_{\text{odq}n}+B_{n}^{\prime}v_{\text{odn}}^{*} \tag{6a}\]
\[\mathbf{v}_{\text{odq}n}=C_{n}\mathbf{x}_{n} \tag{6b}\]
where \(\mathbf{x}_{n}=[\phi_{\text{dn}},\phi_{\text{qn}},\gamma_{\text{dn}},\gamma_{\text{qn}},i_{\text{ldn}},i_{\text{lqn}},v_{\text{odn}},v_{\text{oqn}}]^{\top}\); \(\mathbf{i}_{\text{odq}n}=[i_{\text{odn}},i_{\text{oqn}}]^{\top}\); \(\mathbf{v}_{\text{odq}n}=[v_{\text{odn}},v_{\text{oqn}}]^{\top}\); and the matrices \(A_{n}\), \(B_{n}\), \(B_{n}^{\prime}\) and \(C_{n}\) are derived from (1) and (5). Since the dynamics (6) are linear, the following equations also hold:
\[\Delta\dot{\mathbf{x}}_{n}=A_{n}\Delta\mathbf{x}_{n}+B_{n}\Delta\mathbf{i}_{\text{odq}n} \tag{7a}\]
\[\Delta\mathbf{v}_{\text{odq}n}=C_{n}\Delta\mathbf{x}_{n} \tag{7b}\]
where the "\(\Delta\)" variables are the deviations from their steady states. The input-output relationship of the dynamics of IBR \(n\) is shown in the central block of Figure 4-(a). The input \(\Delta\mathbf{i}_{\text{odq}n}\) and output \(\Delta\mathbf{v}_{\text{odq}n}\) are represented in the direct-quadrature (d-q) reference frame of the \(n\)-th IBR, and they interact with the rest of the microgrid in a common reference frame (i.e., the D-Q frame). Next, we present the reference frame transformation that converts variables in the d-q frame to the D-Q frame.

#### II-A5 Reference frame transformation

In Figure 4-(a), the output \(\Delta\mathbf{v}_{\text{oDQ}n}:=[\Delta v_{\text{oD}n},\Delta v_{\text{oQ}n}]^{\top}\) is obtained by \(\Delta\mathbf{v}_{\text{oDQ}n}=T_{n}\Delta\mathbf{v}_{\text{odq}n}\), where
\[T_{n}=\begin{bmatrix}\cos\delta_{n}&-\sin\delta_{n}\\ \sin\delta_{n}&\cos\delta_{n}\end{bmatrix}. \tag{8}\]
Note that \(\delta_{n}\) is assumed to be a constant, since it changes much more slowly than the states \(\mathbf{x}_{n}\) in the time scale of interest. Similarly, the relationship between \(\Delta\mathbf{i}_{\text{oDQ}n}:=[\Delta i_{\text{oD}n},\Delta i_{\text{oQ}n}]^{\top}\) and \(\Delta\mathbf{i}_{\text{odq}n}\) is described by \(\Delta\mathbf{i}_{\text{odq}n}=T_{n}^{-1}\Delta\mathbf{i}_{\text{oDQ}n}\). With the above definitions, IBR \(n\) can be viewed as a dynamic system that is driven by \(\Delta\mathbf{i}_{\text{oDQ}n}\) while outputting \(\Delta\mathbf{v}_{\text{oDQ}n}\), as shown in Figure 4-(b). 
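A minimal numerical sketch of the reference frame transformation (8), using an arbitrary angle \(\delta_{n}\), is given below; it also illustrates that \(T_{n}^{-1}=T_{n}^{\top}\).

```python
import numpy as np

def dq_to_DQ(delta_n):
    # Rotation matrix T_n of Eq. (8); since it is orthogonal, T_n^{-1} = T_n^T.
    return np.array([[np.cos(delta_n), -np.sin(delta_n)],
                     [np.sin(delta_n),  np.cos(delta_n)]])

delta = 0.3                                   # rad, treated as a constant here
T = dq_to_DQ(delta)
dv_odq = np.array([1.0, -0.5])                # deviation in the local d-q frame
dv_oDQ = T @ dv_odq                           # corresponds to T_n * delta v_odqn
di_odq = T.T @ np.array([0.2, 0.1])           # corresponds to T_n^{-1} * delta i_oDQn
print(np.allclose(np.linalg.inv(T), T.T))     # True: the inverse is the transpose
print(dv_oDQ, di_odq)
```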
### _Dynamics of Microgrid Network_

Assume that the microgrid with \(N\) IBRs is three-phase balanced and hosts constant-impedance loads. By the Kron reduction technique, the microgrid network can be reduced to a network with \(N+1\) nodes and \(M\) branches. One of the \(N+1\) nodes is the neutral/reference point of the microgrid. Let the set \(\mathcal{N}:=\{0,1,2,\ldots,N\}\) collect the nodal indices of the Kron-reduced network, where "0" denotes the nodal index of the neutral point. Let the set \(\mathcal{M}:=\{1,2,\ldots,M\}\) collect the branch indices of the reduced network. Another way to represent branch \(m\) is to use a pair \((i,j)_{m}\), where \(i,j\in\mathcal{N}\) correspond to the nodes at the two terminals of branch \(m\). Supposing that \(i<j\), we define the positive direction assigned to branch \(m\) to be from node \(i\) to node \(j\). The \(M\) branches in the Kron-reduced network can be divided into two categories. Let \(\mathcal{E}_{1}\) collect the branches connecting to the neutral point via an IBR, while the set \(\mathcal{E}_{2}\) collects the rest of the branches. The dynamics of the branches in \(\mathcal{E}_{1}\) are governed by the equations presented in Section II-A, whereas the dynamic behaviors of the branches in \(\mathcal{E}_{2}\) are modeled by RL circuits with resistance \(r_{\text{bm}}\) and inductance \(L_{\text{bm}}\):
\[L_{\text{bm}}\dot{i}_{\text{bD}m}=-r_{\text{bm}}i_{\text{bD}m}+\omega_{0}L_{\text{bm}}i_{\text{bQ}m}+v_{\text{bD}m} \tag{9a}\]
\[L_{\text{bm}}\dot{i}_{\text{bQ}m}=-r_{\text{bm}}i_{\text{bQ}m}-\omega_{0}L_{\text{bm}}i_{\text{bD}m}+v_{\text{bQ}m} \tag{9b}\]
where \(m\in\mathcal{E}_{2}\); the subscript "b" reminds readers that the corresponding variables are used to describe branches without IBRs; the subscripts "D" and "Q" indicate that the corresponding variables are in the common reference frame (the D-Q frame); \(v_{\text{bD}m}\) and \(v_{\text{bQ}m}\) are the bus voltage differences of branch \((i,j)_{m}\) in the D- and Q-axis, i.e., \(v_{\text{bD}m}=v_{\text{D}i}-v_{\text{D}j}\) and \(v_{\text{bQ}m}=v_{\text{Q}i}-v_{\text{Q}j}\). To characterize the relationship between the branch currents \(i_{\text{bm}}\) for \(m\in\mathcal{M}\), we introduce a _reduced incidence matrix_ \(C^{\prime}\in\mathbb{R}^{N\times M}\) whose entries are \(c^{\prime}_{n,m}\) with \(n\in\mathcal{N}\backslash\{0\}\) and \(m\in\mathcal{M}\). Each entry \(c^{\prime}_{n,m}\) of matrix \(C^{\prime}\) is defined as follows: \(c^{\prime}_{n,m}=1\) if branch \(m\) is incident at node \(n\) and the reference direction of branch \(m\) is away from node \(n\); \(c^{\prime}_{n,m}=-1\) if branch \(m\) is incident at node \(n\) and the reference direction of branch \(m\) is toward node \(n\); and \(c^{\prime}_{n,m}=0\) if branch \(m\) is not incident at node \(n\). With the reference directions defined before, one can assign indices to nodes and branches such that the reduced incidence matrix \(C^{\prime}\) has the following structure [22]
\[C^{\prime}=\begin{bmatrix}C_{0}&-I_{N}\end{bmatrix} \tag{10}\]
where \(C_{0}\) is the first \(M-N\) columns of matrix \(C^{\prime}\), and \(I_{N}\) is an \(N\)-dimensional identity matrix. Next, we present the compact form of Kirchhoff's Current Law (KCL) with the incidence matrix \(C^{\prime}\). Let \(M^{\prime}\) be \(M-N\). 
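Before writing the KCL compactly, note that \(C^{\prime}\) can be assembled mechanically from the branch list. The sketch below builds \(C^{\prime}\) for a small illustrative network of our own (two IBR nodes plus the neutral), with the branch ordering chosen so that the block structure (10) appears.

```python
import numpy as np

def reduced_incidence(num_nodes, branches):
    """branches: list of (i, j) pairs with i < j; node 0 is the neutral point.
    Returns C' with one row per non-neutral node and one column per branch."""
    C = np.zeros((num_nodes - 1, len(branches)))
    for m, (i, j) in enumerate(branches):
        if i != 0:
            C[i - 1, m] = 1.0    # reference direction of branch m leaves node i
        if j != 0:
            C[j - 1, m] = -1.0   # reference direction of branch m enters node j
    return C

# Toy network: two IBR nodes (1, 2) plus the neutral node 0.  Ordering the
# inter-node branch first and the IBR branches (0,1), (0,2) last yields the
# block structure C' = [C_0  -I_N] of (10).
branches = [(1, 2), (0, 1), (0, 2)]
print(reduced_incidence(3, branches))
# [[ 1. -1.  0.]
#  [-1.  0. -1.]]
```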
The KCL of the microgrid network in terms of direct/quadrature current leads to \[C^{\prime}\mathbf{i}_{\text{D}}=\mathbf{0};\quad C^{\prime}\mathbf{i}_{\text{Q} }=\mathbf{0} \tag{11}\] where \(\mathbf{i}_{\text{D}}=[\mathbf{i}_{\text{bD}1}^{\top},\mathbf{i}_{\text{bD}}^{ \top}]^{\top}\) with \(\mathbf{i}_{\text{bD}}=[i_{\text{kD}1},\ldots,i_{\text{bD}M^{\prime}}]^{\top}\), \(\mathbf{i}_{\text{dD}}=[i_{\text{aD}1},\ldots,i_{\text{sD}1}]^{\top}\); and \(\mathbf{i}_{\text{Q}}=[\mathbf{i}_{\text{bQ}1}^{\top},\mathbf{i}_{\text{bQ}}^{ \top}]^{\top}\) with \(\mathbf{i}_{\text{Q}}=[\mathbf{i}_{\text{bQ}1},\ldots,i_{\text{bQ}N}]^{\top}\), \(\mathbf{i}_{\text{dQ}}=[i_{\text{sQ}1},\ldots,i_{\text{sQ}N}]^{\top}\). Plugging (10) into (11) leads to \[\mathbf{i}_{\text{bD}}=C_{0}\mathbf{i}_{\text{bD}};\quad\mathbf{i}_{\text{dQ}}=C_ {0}\mathbf{i}_{\text{bQ}}. \tag{12}\] Moreover, the relationship between the voltages across branches and the nodal voltages can be described by \[\mathbf{v}_{\text{D}}=C^{\prime\top}\mathbf{v}_{\text{oD}};\quad\mathbf{v}_{ \text{Q}}=C^{\prime\top}\mathbf{v}_{\text{oQ}} \tag{13}\] In (13), \(\mathbf{v}_{\text{D}}=[\mathbf{v}_{\text{bD}}^{\top},\mathbf{v}_{\text{oD}}^{ \top}]^{\top}\) and \(\mathbf{v}_{\text{Q}}=[\mathbf{v}_{\text{bD}1}^{\top},\mathbf{v}_{\text{oD}}^{ \top}]^{\top}\), where the voltages across branches \(\mathbf{v}_{\text{bD}}=[v_{\text{bD}1},\ldots,v_{\text{bD}M^{\prime}}]^{\top}\); \(\mathbf{v}_{\text{bQ}}=[v_{\text{bQ}1},\ldots,v_{\text{bD}M^{\prime}}]^{\top}\); and nodal voltages \(\mathbf{v}_{\text{bD}}=[v_{\text{oD}1},\ldots,v_{\text{oD}M^{\prime}}]^{\top}\), and \( The branch dynamics (9) can be organized into \[L\dot{\mathbf{i}}_{\text{bDQ}}=-R\mathbf{i}_{\text{bDQ}}+W\mathbf{i }_{\text{bDQ}}+C^{\top}\mathbf{v}_{\text{oDQ}} \tag{16a}\] \[\mathbf{i}_{\text{bDQ}}=C\mathbf{i}_{\text{bDQ}} \tag{16b}\] where \(L=\text{diag}(L_{1},\ldots,L_{M^{\prime}},L_{1},\ldots,L_{M^{\prime}})\); \(R=\text{diag}(r_{1},\ldots,r_{M^{\prime}},r_{1},\ldots,r_{M^{\prime}})\); \(C=\begin{bmatrix}C_{0}&C_{0}\end{bmatrix}\); and \[W=\begin{bmatrix}0_{M^{\prime}\times M^{\prime}}&\omega_{0}I_{M^{\prime}}\\ -\omega_{0}I_{M^{\prime}}&0_{M^{\prime}\times M^{\prime}}\end{bmatrix}.\] Since (17) is linear, the following equations also hold: \[L\Delta\dot{\mathbf{i}}_{\text{bDQ}}=-R\Delta\dot{\mathbf{i}}_{ \text{bDQ}}+W\Delta\dot{\mathbf{i}}_{\text{bDQ}}+C^{\top}\Delta\mathbf{v}_{ \text{oDQ}} \tag{17a}\] \[\Delta\mathbf{i}_{\text{bDQ}}=C\Delta\dot{\mathbf{i}}_{\text{bDQ}} \tag{17b}\] where the "\(\Delta\)" variables are the deviations of the original variables from their steady states. ### _A Feedback Perspective of Microgrid Dynamics_ The interaction between the IBRs and the microgrid network can be interpreted from a feedback perspective shown in Figure 6. The IBR dynamics \(\mathcal{F}_{n}\) (\(n\)= 1, 2,..., N) constitute the feed-forward loop \(\mathcal{F}\), whereas the feedback loop \(\mathcal{B}\) results from the network dynamics (17). The input of \(\mathcal{F}\) is \(\Delta\dot{\mathbf{i}}_{\text{bDQ}}\) defined by \(\Delta\dot{\mathbf{i}}_{\text{oDQ}}=-\Delta\dot{\mathbf{i}}_{\text{bDQ}}\) where the negative sigh results from the reference directions of \(i_{on}\) and \(i_{sn}\) defined before: recall that the positive reference direction of \(i_{on}\) points into the IBR \(n\), while the positive reference direction of \(i_{sn}\) points into the network. The output of \(\mathcal{F}\) is \(\Delta\mathbf{v}_{\text{oDQ}}\) which drives the network dynamics (17). 
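A minimal sketch of how the stacked network model (16)-(17) can be assembled and integrated numerically; the per-branch parameters and the constant voltage deviation are placeholders, and the matrix \(C\) is implemented block-diagonally so that the D- and Q-axis relations in (12) are applied separately.

```python
import numpy as np

omega0 = 2 * np.pi * 60            # nominal angular frequency (a 60 Hz system is assumed)
M_p = 2                            # number of branches without IBRs (M' in the text)
Lb = np.array([1e-3, 1.5e-3])      # placeholder branch inductances (H)
rb = np.array([0.2, 0.3])          # placeholder branch resistances (ohm)
C0 = np.array([[1.0, 0.0],         # placeholder C_0 block of the reduced incidence matrix
               [-1.0, 1.0],
               [0.0, -1.0]])

L = np.diag(np.r_[Lb, Lb])                         # diag(L_1..L_M', L_1..L_M')
R = np.diag(np.r_[rb, rb])
W = np.block([[np.zeros((M_p, M_p)),  omega0 * np.eye(M_p)],
              [-omega0 * np.eye(M_p), np.zeros((M_p, M_p))]])
C = np.block([[C0, np.zeros_like(C0)],             # stacked D/Q version of (12)
              [np.zeros_like(C0), C0]])

# Forward-Euler integration of (17a): L d(Di_bDQ)/dt = -R Di_bDQ + W Di_bDQ + C^T Dv_oDQ
dt, steps = 1e-5, 2000
di_bDQ = np.zeros(2 * M_p)
dv_oDQ = 0.01 * np.ones(C.shape[0])                # constant voltage deviation (illustrative)
for _ in range(steps):
    di_bDQ = di_bDQ + dt * np.linalg.solve(L, -R @ di_bDQ + W @ di_bDQ + C.T @ dv_oDQ)
di_sDQ = C @ di_bDQ                                # output of the feedback block, (17b)
print(di_sDQ)
```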
With Figure 6, the dynamics of the microgrid with \(N\) IBRs can be interpreted as follows. At time step \(k\), current \(\Delta\dot{\mathbf{i}}_{\text{bDQ}}[k]\) for \(n=1,2,\ldots,N\) drives the dynamics of system \(\mathcal{F}\) which updates the internal state variables \(\Delta\mathbf{x}_{n}[k+1]\) and outputs voltage \(\Delta v_{\text{oDQ}n}[k]\). The voltages \(\Delta\mathbf{v}_{\text{oDQ}n}[k]\) further drive the dynamics of the microgrid network to update the internal state variables of the network and produces \(\Delta\dot{\mathbf{i}}_{\text{bDQ}}[k+1]\). The updated currents \(\Delta\dot{\mathbf{i}}_{\text{bDQ}}[k+1]\) drives the dynamics of the IBRs, and the process described above repeats. Such a feedback perspective lends itself to introducing the transient stability protocol based on the passivity theory. ## III Decentralized Stability Protocol This section aims to answer the question of what condition each IBR should satisfy such that they can establish a stable microgrid. We term the condition the decentralized stability protocol. This section first introduces some definitions in the control theory. Then we present a lemma that provides guidance to design the protocol. Finally, the the protocol is formally described and justified. ### _Stability of Interconnected Systems_ The closed-loop dynamics of Figure 6 can be described by \[\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x}) \tag{18}\] where vector \(\mathbf{x}\) collects the IBR states in \(\Delta\mathbf{x}_{n}\) for \(n=1,\ldots,N\), and the network states in \(\Delta\dot{\mathbf{i}}_{\text{bDQ}}\); and function \(\mathbf{f}(\cdot)\) defines the evolution of \(\mathbf{x}\) in terms of time. Recall that the equilibrium point of (18) is the origin \(\mathbf{o}\). The asymptotic stability of \(\mathbf{o}\) is rigorously described by the following definition: **Definition 1**.: _(Asymptotic stability [23]) The equilibrium point \(\mathbf{o}\) of the system (18) is asymptotically stable, if_ \[\forall\epsilon>0,\exists\rho>0,\|\mathbf{x}(0)\|<\rho\implies\|\mathbf{x}(t )\|<\epsilon,\forall t,\] _and if for some \(\rho>0\),_ \[\|\mathbf{x}(0)\|<\rho\implies\lim_{t\to\infty}\mathbf{x}(t)=\mathbf{o}.\] For a system \(\mathcal{H}\) with input \(\mathbf{u}\in\mathbb{R}^{d}\) and output \(\mathbf{y}\in\mathbb{R}^{d}\), the next two definitions examine the input-output properties of \(\mathcal{H}\): **Definition 2**.: _(OFP [24]) The system \(\mathcal{H}:\mathbf{u}\to\mathbf{y}\) is output feedback passive (OFP), if for all square integrable \(\mathbf{u}(t)\) and some \(\sigma>0\),_ \[\int_{0}^{t}\mathbf{u}(\tau)^{\top}\mathbf{y}(\tau)d\tau-\sigma\int_{0}^{t} \mathbf{y}(\tau)^{\top}\mathbf{y}(\tau)d\tau\geq 0, \tag{19}\] _with a zero initial condition. 
Moreover, \(\sigma\) is called the passivity index._ **Definition 3**.: _(\(\mathcal{L}_{2}\) Gain [24]) The system \(\mathcal{H}:\mathbf{u}\to\mathbf{y}\) has finite \(\mathcal{L}_{2}\) gain \(\gamma>0\) if for all square integrable \(\mathbf{u}\)_ \[\int_{0}^{t}\mathbf{y}(\tau)^{\top}\mathbf{y}(\tau)d\tau\leq\gamma\int_{0}^{t }\mathbf{u}(\tau)^{\top}\mathbf{u}(\tau)d\tau, \tag{20}\] _with a zero initial condition._ The link between asymptotic stability and the output feedback passivity is established by the following lemma [24]: **Lemma 1**.: _(Corollary 1 in [24]) The equilibrium point \(\mathbf{o}\) of the closed-loop system in Figure 6 is asymptotically stable, if both subsystems \(\mathcal{F}\) and \(\mathcal{B}\) are output feedback passive._ Lemma 1 guides one to design a decentralized protocol for each IBR to ensure system-level stability. Subsection III-B examines the OFP property of the feedback loop \(\mathcal{B}\) in Figure 6. Subsection III-C introduces the protocol that ensures the OFP property of the feed-forward loop \(\mathcal{F}\). Fig. 6: A feedback perspective of microgrid dynamics ### _Passivity of Microgrid Networks_ The OFP property of the network dynamics (17) in the DQ frame is established by the following theorem: **Theorem 2**.: _(Network Passivity Index) The microgrid network dynamics (17) is OFP with input \(\Delta\mathbf{v}_{\text{oDQ}}\) and output \(\Delta\mathbf{i}_{\text{uDQ}}\), if matrix \(C^{\top}C\) has at least one positive eigenvalue._ Proof.: By definition, \[\int_{0}^{t}\Delta\mathbf{i}_{\text{uDQ}}^{\top}\Delta\mathbf{v }_{\text{oDQ}}d\tau=\int_{0}^{t}\Delta\mathbf{i}_{\text{bDQ}}^{\top}C^{\top} \Delta\mathbf{v}_{\text{oDQ}}d\tau\] \[=\int_{0}^{t}\Delta\mathbf{i}_{\text{bDQ}}^{\top}\left(L\Delta \mathbf{i}_{\text{bDQ}}+R\Delta\mathbf{i}_{\text{bDQ}}-W\Delta\mathbf{i}_{ \text{bDQ}}\right)d\tau\] \[=\int_{0}^{t}\left(\Delta\mathbf{i}_{\text{bDQ}}^{\top}L\Delta \mathbf{i}_{\text{bDQ}}+\Delta\mathbf{i}_{\text{bDQ}}^{\top}R\Delta\mathbf{i }_{\text{bDQ}}-\Delta\mathbf{i}_{\text{bDQ}}^{\top}W\Delta\mathbf{i}_{\text{ bDQ}}\right)d\tau\] Note that \(W=-W^{\top}\) and \(\Delta\mathbf{i}_{\text{bDQ}}^{\top}W\Delta\mathbf{i}_{\text{bDQ}}\) is a scalar. Then, \[\Delta\mathbf{i}_{\text{bDQ}}^{\top}W\Delta\mathbf{i}_{\text{bDQ}} =\left(\Delta\mathbf{i}_{\text{bDQ}}^{\top}W\Delta\mathbf{i}_{ \text{bDQ}}\right)^{\top} \tag{21}\] \[=-\Delta\mathbf{i}_{\text{bDQ}}^{\top}W\Delta\mathbf{i}_{\text{ bDQ}}.\] Equation (21) leads to \(2\Delta\mathbf{i}_{\text{bDQ}}^{\top}W\Delta\mathbf{i}_{\text{bDQ}}=0\), implying \[\Delta\mathbf{i}_{\text{bDQ}}^{\top}W\Delta\mathbf{i}_{\text{bDQ}}=0. \tag{22}\] Based on (21) and (22), \[\int_{0}^{t}\Delta\mathbf{i}_{\text{bDQ}}^{\top}\Delta\mathbf{v }_{\text{oDQ}}d\tau=V(t)-V_{0}+\int_{0}^{t}\Delta\mathbf{i}_{\text{bDQ}}^{ \top}R\Delta\mathbf{i}_{\text{bDQ}}d\tau\] where \(V(t):=0.5\Delta\mathbf{i}_{\text{bDQ}}^{\top}(t)L\Delta\mathbf{i}_{\text{bDQ }}(t)\) and \(V_{0}:=0.5\Delta\mathbf{i}_{\text{bDQ}}^{\top}(0)L\Delta\mathbf{i}_{\text{bDQ }}(0)\). 
As matrices \(L\succ 0\), \[\int_{0}^{t}\Delta\mathbf{i}_{\text{bDQ}}^{\top}\Delta\mathbf{v }_{\text{oDQ}}d\tau\geq-V_{0}+\int_{0}^{t}\Delta\mathbf{i}_{\text{bDQ}}^{\top} R\Delta\mathbf{i}_{\text{bDQ}}d\tau \tag{23}\] \[\geq-V_{0}+\lambda_{\text{Rmin}}\int_{0}^{t}\Delta\mathbf{i}_{ \text{bDQ}}^{\top}\Delta\mathbf{i}_{\text{bDQ}}d\tau\] \[\geq-V_{0}+\frac{\lambda_{\text{Rmin}}}{\lambda_{\text{Cmax}}} \int_{0}^{t}\Delta\mathbf{i}_{\text{bDQ}}^{\top}\Delta\mathbf{i}_{\text{bDQ }}d\tau\] where \(\lambda_{\text{Rmin}}\) is the minimal eigenvalue of \(R\); \(\lambda_{\text{Cmax}}\) is the maximal eigenvalue of \(C^{\top}C\); and \(\lambda_{\text{Cmax}}>0\) as \(C^{\top}C\succeq 0\). The third line of (23) is due to the fact that \[\Delta\mathbf{i}_{\text{uDQ}}^{\top}\Delta\mathbf{i}_{\text{uDQ}} =\Delta\mathbf{i}_{\text{bDQ}}^{\top}C^{\top}C\Delta\mathbf{i}_{ \text{bDQ}} \tag{24}\] \[\leq\lambda_{\text{Cmax}}\Delta\mathbf{i}_{\text{bDQ}}^{\top} \Delta\mathbf{i}_{\text{bDQ}}.\] The inequality (19) is evaluated with a zero initial condition. By setting \(\Delta\mathbf{i}_{\text{bDQ}}(0)=0\), it follows that \(V_{0}=0\) and dynamics (17) is OFP with passivity index \(\lambda_{\text{Rmin}}/\lambda_{\text{Cmax}}\). _Remark:_ The proof of Theorem 2 reveals that the passivity index of an RL network depends not only on the minimal branch resistance, but also on the branches' connectivity. ### _IBR-level Stability Protocol_ Theorem 2 suggests that the feedback loop \(\mathcal{B}\) in Figure 6 is OFP. According to Lemma 1, the system-level asymptotic stability can be established, if the feed-forward loop \(\mathcal{F}\) is OFP. This observation inspires us to design the following IBR-level stability protocol that leads to the microgrid-level stability: **Protocol 1:**_For \(n=1,2,\ldots,N\), the dynamics of IBR \(n\) with input \(\Delta\mathbf{i}_{\text{udqn}}\) and output \(\Delta\mathbf{v}_{\text{odqn}}\) is OFP._ The "P(assive)" in Protocol 1 should not be confused with the "passive element" defined in the circuit theory [25]. In the circuit theory, the passive element is an element that is "not capable of generating energy" [25]. However, whether an OFP component in the sense of Definition 2 is capable of generating energy or not depends on the definition of its inputs and outputs. If an IBR follows Protocol 1, it does not mean that the IBR cannot produce energy that powers its host microgrid, and it essentially means that the IBR cannot produce energy that leads disturbances to be sustained or amplified. Section V shows an example that a IBR follows Protocol 1 but produces energy. Next we show following Protocol 1 leads to asymptotic stability. 
**Theorem 3**.: _The equilibrium point of the closed-loop system in Figure 6 is asymptotically stable if Protocol 1 is followed._ Proof.: Protocol 1 requires each IBR to be OFP, i.e., there exist \(\sigma_{n}>0\) such that, for \(n=1,2,\ldots,N\), \[\int_{0}^{t}\Delta\mathbf{i}_{\text{uDQ}}^{\top}\Delta\mathbf{v }_{\text{odqn}}d\tau-\sigma_{n}\int_{0}^{t}\Delta\mathbf{v}_{\text{odqn}}^{ \top}\Delta\mathbf{v}_{\text{odqn}}d\tau\geq 0 \tag{25}\] According to Figure 4-(a), \(\Delta\mathbf{i}_{\text{uDQ}}=T_{n}^{-1}\Delta\mathbf{i}_{\text{uDQ}n}\) and \(\Delta\mathbf{v}_{\text{odqn}}=T_{n}^{-1}\Delta\mathbf{v}_{\text{oDQ}n}\), then \[\int_{0}^{t}\Delta\mathbf{i}_{\text{uDQ}n}^{\top}(T_{n}^{-1})^{ \top}T_{n}^{-1}\Delta\mathbf{v}_{\text{uDQ}n}d\tau- \tag{26}\] \[\sigma_{n}\int_{0}^{t}\Delta\mathbf{v}_{\text{oDQ}n}^{\top}(T_{n}^ {-1})^{\top}T_{n}^{-1}\Delta\mathbf{v}_{\text{oDQ}n}d\tau\geq 0\] Note that \((T_{n}^{-1})^{\top}T_{n}^{-1}=I\). This leads to \[\int_{0}^{t}\Delta\mathbf{i}_{\text{uDQ}n}^{\top}\Delta\mathbf{v }_{\text{uDQ}n}d\tau-\sigma_{n}\int_{0}^{t}\Delta\mathbf{v}_{\text{uDQ}n}^{\top} \Delta\mathbf{v}_{\text{uDQ}n}d\tau\geq 0 \tag{27}\] Define \(\underline{\sigma}:=\min_{n}\sigma_{n}\). It follows that \[\int_{0}^{t}\Delta\mathbf{i}_{\text{uDQ}n}^{\top}\Delta\mathbf{v }_{\text{uDQ}n}d\tau-\underline{\sigma}\int_{0}^{t}\Delta\mathbf{v}_{\text{uDQ}n}^{ \top}\Delta\mathbf{v}_{\text{uDQ}n}d\tau\geq 0 \tag{28}\] for \(n=1,2,\ldots,N\). By summing up the \(N\) inequalities in (28), we have \[\sum_{n=1}^{N}\int_{0}^{t}\Delta\mathbf{i}_{\text{uDQ}n}^{\top}\Delta\mathbf{v }_{\text{uDQ}n}d\tau-\underline{\sigma}\sum_{n=1}^{N}\int_{0}^{t}\Delta \mathbf{v}_{\text{uDQ}n}^{\top}\Delta\mathbf{v}_{\text{uDQ}n}d\tau\geq 0.\] Since \(N\) is finite, the finite summation and integration operators can be interchanged, i.e., \[\int_{0}^{t}\sum_{n=1}^{N}\Delta\mathbf{i}_{\text{uDQ}n}^{\top}\Delta\mathbf{v}_{ \text{uDQ}n}d\tau-\underline{\sigma}\int_{0}^{t}\Delta\mathbf{v}_{\text{uDQ}n}^{ \top}\Delta\mathbf{v}_{\text{uDQ}n}d\tau\geq 0.\] Note that \(\sum_{n=1}^{N}\Delta\mathbf{i}_{\text{uDQ}n}^{\top}\Delta\mathbf{v}_{\text{uQD}n}= \Delta\mathbf{i}_{\text{uDQ}}^{\top}\Delta\mathbf{v}_{\text{uDQ}}\) and \(\sum_{n=1}^{N}\Delta\mathbf{v}_{\text{uDQ}n}^{\top}\Delta\mathbf{v}_{\text{uDQ}n}= \Delta\mathbf{v}_{\text{uDQ}}^{\top}\Delta\mathbf{v}_{\text{uDQ}n}\). This leads to \[\int_{0}^{t} As Protocol \(1\) is not straight-forward to implement for IBR manufacturers, _how do they enforce protocol 1?_ This is answered in the next section. ## IV Non-intrusive Protocol Enforcement In this section, we first illustrate the basic idea of enforcing Protocol 1. Then we conceptualize the architecture of an interface that enforces Protocol 1 in a non-intrusive way. We also define the information needed to design the interface. ### _Basic Idea of Protocol Enforcement_ Protocol 1 at IBR \(n\) can be enforced by the scheme shown in Figure 7 where \(\alpha_{n},\beta_{n}\), and \(\kappa_{n}\) are tunable parameters; and \(I\) is an identity matrix. 
The next lemma guides one to tune \(\alpha_{n}\), \(\beta_{n}\), and \(\kappa_{n}\) to follow Protocol 1: **Lemma 4**.: _(Theorem 4 in [26]) The closed-loop system in Figure 7 with input \(\Delta\mathbf{i}^{\prime}_{\text{ad}\text{rn}}\) and output \(\Delta\mathbf{v}^{\prime}_{\text{ad}\text{rn}}\) is OFP with \(\sigma_{n}=0.5(\frac{1}{\beta_{n}}+\frac{\alpha_{n}}{\kappa_{n}})>0\), if_ \[\beta_{n}\geq\kappa_{n}\gamma_{n}>0,\quad\kappa_{n}>\alpha_{n}\beta_{n}>0, \tag{29}\] _where \(\gamma_{n}\) is the \(\mathcal{L}_{2}\) gain of the IBR \(n\) with input \(\Delta\mathbf{i}_{\text{ad}\text{rn}}\) and output \(\Delta\mathbf{v}_{\text{ad}\text{rn}}\) in Figure 7._ Suppose that an IBR manufacturer provides an \(\mathcal{L}_{2}\) gain \(\gamma_{n}\), the NMPs can leverage Algorithm 1 to find \(\alpha_{n}\), \(\beta_{n}\), and \(\kappa_{n}\). It is easy to verify that the \(\alpha_{n}\), \(\beta_{n}\), and \(\kappa_{n}\) returned by Algorithm 1 satisfy constraints (29). As a result, the closed-loop system shown in Figure 7 follows Protocol 1. The remaining question is: _how does the IBR manufacturer compute \(\gamma_{n}\)?_ ### \(\mathcal{L}_{2}\) _Gain for IBRs_ Algorithm 2 can be leveraged by IBR manufacturers to obtain \(\gamma_{n}\), and it is designed based on the following lemma: **Lemma 5**.: _[_23_]_ _Assume that the real part of every eigenvalue of matrix \(A_{n}\) in (7) is strictly negative. Let \(G_{n}(s)=C_{n}(sI-A_{n})^{-1}B_{n}\). Then, the \(\mathcal{L}_{2}\) gain of dynamics (7) is \(\sup_{\omega\in\mathbb{R}}\left\|G_{n}(\mathrm{j}\omega)\right\|_{2}\)._ ``` 1:input:\(\gamma_{n}\) 2:\(\bar{\beta}\gets 0.5\); \(\bar{\sigma}\gets 0.5\); flag\(\gets 0\) 3:while\(k_{1}=1,2,\ldots,K\)do 4: Pick an arbitrary \(\sigma_{0}>0\) and \(\sigma_{0}\neq\bar{\sigma}\) 5:while\(k_{2}=1,2,\ldots,K\)do 6: Pick an arbitrary \(\beta_{0}>0\) and \(\beta_{0}\neq\bar{\beta}\) 7:\(\kappa_{0}\leftarrow\beta_{0}/(1.05\gamma_{n})\); \(\alpha_{0}\gets 2\sigma_{n}\beta_{0}/\gamma_{n}-1/\gamma_{n}\) 8:if\(\kappa_{0}>\alpha_{0}\beta_{0}\wedge\alpha_{0}>0\)then 9:\(\alpha_{n}\leftarrow\alpha_{0}\); \(\beta_{n}\leftarrow\beta_{0}\); \(\kappa_{n}\gets 0\); flag\(\gets 1\) 10:break 11:else\(\bar{\beta}\leftarrow\beta_{0}\) 12:endif 13:endwhile 14:if flag\(==1\)thenbreak 15:else\(\bar{\sigma}\leftarrow\sigma_{0}\) 16:endif 17:endwhile 18:return\(\alpha_{n}\), \(\beta_{n}\), \(\kappa_{n}\) ``` **Algorithm 1** Non-manufacture Parties' Algorithm In Lemma 5, \(\left\|\cdot\right\|_{2}\) is the \(\mathcal{L}_{2}\) norm; \(\mathrm{j}=\sqrt{-1}\); and \(\sup_{\omega\in\mathbb{R}}\left\|G_{n}(\mathrm{j}\omega)\right\|_{2}\) is the \(H_{\infty}\) norm of \(G_{n}(\mathrm{j}\omega)\)[23] which can be obtained by standard procedures, e.g., the "hinfnorm" function in MATLAB, given matrices \(A_{n}\), \(B_{n}\), and \(C_{n}\). Lemma 5 requires a stable matrix \(A_{n}\). This is not a big assumption, as IBR control designers typically perform small-signal analysis to ensure device-level stability. ### _Architecture of Protocol Enforcement Interfaces (PEI)_ This subsection conceptualizes an interface that enforce Protocol 1. The physical layer of the interface is shown in Figure 8. The interface comprises a three-phase, controlled voltage source, and a three-phase controlled current source. 
The voltage \(\Delta\mathbf{v}_{\text{abcn}}:=[\Delta v_{\text{nn}},\Delta v_{\text{bn}}, \Delta v_{\text{cn}}]^{\top}\) of the voltage source and the current \(\Delta\mathbf{i}_{\text{abcn}}:=[\Delta i_{\text{nn}},\Delta i_{\text{bn}}, \Delta i_{\text{cn}}]^{\top}\) of the current source are determined by the terminal voltage measurement \(\mathbf{v}_{\text{abcn}}:=[v_{\text{nn}},v_{\text{bn}},v_{\text{cn}}]^{\top}\) and current measurement \(\mathbf{i}_{\text{abcn}}:=[i_{\text{nn}},i_{\text{bn}},i_{\text{cn}}]^{\top}\) of the IBR \(n\). This paper focuses on the control law that establishes the link between \(\{\mathbf{v}_{\text{abcn}},\mathbf{i}_{\text{abcn}}\}\) and \(\{\Delta\mathbf{v}_{\text{abcn}},\Delta\mathbf{i}_{\text{abcn}}\}\); the internal design of the controlled voltage and current sources is out of the scope of this paper. Figure 9 presents the cyber layer of the interface. In Figure 9, the three-phase variables \(\mathbf{v}_{\text{abcn}}\) and \(\mathbf{i}_{\text{abcn}}\) are first transformed into the d-q frame by the Park transforma Fig. 8: Physical layer of the protocol enforcement interface Fig. 7: Basic idea of enforcing the Stability Protocol tion: \([v_{\text{od}},v_{\text{eq}},v_{\text{oo}}]^{\top}=T_{n}^{\prime}[v_{\text{a}},v_{ \text{b}},v_{\text{c}}]^{\top}\); and \([i_{\text{od}},i_{\text{oq}},i_{\text{oq}}]^{\top}=T_{n}^{\prime}[i_{\text{a}}, i_{\text{b}},i_{\text{c}}]^{\top}\) where [27] In the above equation, \(\theta_{n}=\omega_{n}t+\delta_{n}\), and \(\theta_{n}\) can be obtained locally by a phase-locked loop [28]. Second, the deviation vectors \(\Delta\mathbf{v}_{\text{odq}n}\) and \(\Delta\mathbf{i}_{\text{odq}n}\) are obtained by subtracting the steady-state values \(\hat{\mathbf{v}}_{\text{odq}n}\) and \(\hat{\mathbf{i}}_{\text{odq}n}\) from \(\mathbf{v}_{\text{odq}n}\) and \(\mathbf{i}_{\text{odq}n}\). Third, \(\Delta\mathbf{v}_{\text{odq}n}^{\prime\prime}\) and \(\Delta\mathbf{i}_{\text{odq}n}^{\prime\prime}\) are computed by \[\Delta\mathbf{v}_{\text{odq}n}^{\prime\prime} =(I-\kappa_{n}I)\Delta\mathbf{v}_{\text{odq}n}-\beta_{n}I\Delta \mathbf{i}_{\text{odq}n} \tag{30a}\] \[\Delta\mathbf{i}_{\text{odq}n}^{\prime\prime} =-\alpha_{n}I\Delta\mathbf{v}_{\text{odq}n}. \tag{30b}\] Finally, the vectors in the d-q frame \(\Delta\mathbf{v}_{\text{odq}n}^{\prime\prime}\) and \(\Delta\mathbf{i}_{\text{odq}n}^{\prime\prime}\) are transformed to the three-phase frame. Equation (30) is justified by transforming Figure 8 in the three-phase frame to the d-q frame. Figure 10 presents the circuit in the d-q frame. According to Figure 7, we have \[\Delta\mathbf{v}_{\text{odq}n}^{\prime}=\kappa_{n}I\Delta\mathbf{ v}_{\text{odq}n}+\beta_{n}I\Delta\mathbf{i}_{\text{odq}n} \tag{31a}\] \[\Delta\mathbf{i}_{\text{odq}n}^{\prime}=\alpha_{n}I\Delta\mathbf{ v}_{\text{odq}n}+\Delta\mathbf{i}_{\text{odq}n}. \tag{31b}\] In Figure 10, based on the Kirchhoff's circuit laws, we have \[\Delta\mathbf{v}_{\text{odq}n}^{\prime}=\Delta\mathbf{v}_{\text{ odq}n}-\Delta\mathbf{v}_{\text{odq}n}^{\prime\prime} \tag{32a}\] \[\Delta\mathbf{i}_{\text{odq}n}^{\prime}=\Delta\mathbf{i}_{\text{ odq}n}-\Delta\mathbf{i}_{\text{odq}n}^{\prime\prime}. \tag{32b}\] Plugging (32) into (31) leads to (30). It is worth noting that designing the interface shown in Figures 8 and 9 only requires an IBR manufacturer to provide the \(\mathcal{L}_{2}\) gains of their IBRs which can be easily obtained via Algorithm 2 by the manufacturer. The interface design does not need the information of detailed IBR control. 
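To make the division of labour concrete, the sketch below first computes the manufacturer-side \(\mathcal{L}_{2}\) gain of Lemma 5 by a frequency sweep of \(\|G_{n}(\mathrm{j}\omega)\|_{2}\) (a plain-numpy stand-in for the "hinfnorm" route), then picks one admissible \((\alpha_{n},\beta_{n},\kappa_{n})\) satisfying (29), and finally evaluates the interface law (30). The two-state model and all numerical values are placeholders, not the benchmark IBR parameters.

```python
import numpy as np

# --- Manufacturer side: L2 gain gamma_n of (7) via Lemma 5 (A_n assumed stable) ---
A = np.array([[-50.0,  10.0],      # placeholder 2-state surrogate of (7); a real IBR model
              [-10.0, -50.0]])     # has 8 states, but the computation is identical
B = np.eye(2)
Cm = np.eye(2)

def l2_gain(A, B, C, w_grid):
    """Approximate sup_w ||C (jwI - A)^{-1} B||_2 on a frequency grid."""
    gains = []
    for w in w_grid:
        G = C @ np.linalg.solve(1j * w * np.eye(A.shape[0]) - A, B)
        gains.append(np.linalg.norm(G, 2))      # largest singular value
    return max(gains)

gamma_n = l2_gain(A, B, Cm, np.logspace(-1, 5, 2000))

# --- NMP side: one admissible choice of (alpha_n, beta_n, kappa_n) satisfying (29) ---
beta_n  = 0.5
kappa_n = beta_n / (1.05 * gamma_n)             # ensures beta_n >= kappa_n * gamma_n > 0
alpha_n = 0.5 * kappa_n / beta_n                # ensures kappa_n > alpha_n * beta_n > 0
assert beta_n >= kappa_n * gamma_n > 0 and kappa_n > alpha_n * beta_n > 0

# --- PEI control law (30) applied to measured deviations (placeholder values) ---
I2 = np.eye(2)
dv_odq = np.array([0.02, -0.01])
di_odq = np.array([0.05,  0.00])
dv_pp = (I2 - kappa_n * I2) @ dv_odq - beta_n * I2 @ di_odq   # Eq. (30a)
di_pp = -alpha_n * I2 @ dv_odq                                # Eq. (30b)
print(gamma_n, dv_pp, di_pp)
```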
While the IBR manufacturer may be reluctant to share such information with the NMPs due to privacy concerns on intellectual properties, revealing the \(\mathcal{L}_{2}\) of the IBRs does not lead to such privacy issues, as it is impossible to infer the detailed control design of an IBR merely based on the \(\mathcal{L}_{2}\) gains of the IBR. ## V Case Study This section tests the effectiveness of the PEIs by simulating the two networked microgrids shown in Figure 11. ### _Motivating Example_ The test system in Figure 11 contains two microgrids. All control parameters of IBR \(1\) can be found in [20]. For IBR \(2\), \(k_{\text{w2}}=78\), and the rest of parameters are from [20]. The per-phase impedances of Loads \(1\) and \(2\) are \(25\Omega\) and \(20\Omega\), respectively. Before time \(t=0.4\)s, Microgrids 1 and 2 are in the islanded mode. At \(t=0.4\)s, the two small microgrids are networked via the tie line and they enter the hybrid mode. Figures 12 visualizes the three-phase terminal currents at both IBRs, i.e., \(\mathbf{i}_{\text{abc1}}\) and \(\mathbf{i}_{\text{abc2}}\), from \(0.2\)s to \(1\)s. In Figures 12, it can be observed that the magnitudes of \(\mathbf{i}_{\text{abc1}}\) and \(\mathbf{i}_{\text{abc2}}\) are constant before the two microgrids are networked, i.e., \(t<0.4\)s. This suggests the two microgrids in the islanded mode are stable. However, after the two microgrids are networked, i.e., \(t>0.4\)s, the magnitudes of \(\mathbf{i}_{\text{abc1}}\) and \(\mathbf{i}_{\text{abc2}}\) keep oscillating. Figure 13 examines the three-phase currents \(\mathbf{i}_{\text{abc1}}\) and \(\mathbf{i}_{\text{abc2}}\) in the d-q frame: before \(t=0.4\)s, both \(\mathbf{i}_{\text{odq1}}\) and \(\mathbf{i}_{\text{odq2}}\) can be stabilized at their nominal values. However, after the switch is closed at \(t=0.4\)s, both \(\mathbf{i}_{\text{odq1}}\) and \(\mathbf{i}_{\text{odq2}}\) keep oscillating with increasing amplitudes, suggesting that the two networked microgrids become unstable. i.e., \(\alpha_{n}\), \(\beta_{n}\), and \(\kappa_{n}\), via Algorithm 1. It is worth noting that the manufacturer does not need to share the detailed model of their IBRs with the NMPs to enable them to design the PEI. The \(\mathcal{L}_{2}\) gain \(\gamma_{n}\) obtained from Algorithm 2 and the interface parameters \(\alpha_{n}\), \(\beta_{n}\), and \(\kappa_{n}\) computed by Algorithm 1 are listed in Table I. It can be easily verified that condition (29) is satisfied. Figures 14 and 15 show the performance of PEIs. It can be observed that after the two microgrids are networked at \(t=0.4\) s, the three-phase current magnitudes are constant after some transients. Figure 16 visualizes the d-q components \(\mathbf{i}_{\text{odd2}}\) and \(\mathbf{i}_{\text{odd2}}\): the PEIs can stabilize the currents at constant values after the two IBRs are networked, while both \(\mathbf{i}_{\text{odd1}}\) and \(\mathbf{i}_{\text{odd2}}\) would keep oscillating with increasing amplitudes if no PEI is installed (shown in Figure 13). #### V-A2 Impact of PEIs _Do the PEIs consume significant amount of energy to stabilize the microgrids?_ We answer this question by comparing the energy consumed by the interfaces with the energy produced by the IBRs. 
For \(n=1,2\), denote by \(P_{n}\), \(P_{\text{cn}}\), and \(P_{\text{v}}\) the real power _produced_ by IBR \(n\), the real power _consumed_ by the three-phase, shunt current source in the PEI at IBR \(n\), and the real power _consumed_ by the three-phase, series voltage source in the PEI at IBR \(n\), respectively. Denote by \(E_{n}\), \(E_{\text{cn}}\), and \(E_{\text{v}n}\) the energy produced by IBR \(n\), the energy consumed by the three-phase current source in the PEI at IBR \(n\), and the energy consumed by the three phase voltage source in the PEI at IBR \(n\), over a period. Figure 17 visualize \(P_{n}\), \(P_{\text{cn}}\), and \(P_{\text{v}n}\). In Figure 17-(a), it can be observed that the real power used for stabilizing the microgrids, i.e., \(P_{\text{c1}}\) and \(P_{\text{v1}}\), is much less than \(P_{1}\). By integrating \(P_{1}\), \(P_{\text{c1}}\), and \(P_{\text{v1}}\) over a period, \(E_{1}\), \(E_{\text{c1}}\), and \(E_{\text{v1}}\) over the period, can be computed. Table II presents \(E_{1}\), \(E_{\text{c1}}\), and \(E_{\text{v1}}\) over the transient process (i.e., the process from \(0.4\)s to \(0.7\)s) and the steady state (i.e., the process from \(0.7\)s to \(1\)s). Let \(E_{\text{in}}=E_{\text{c1}}+E_{\text{v1}}\) for \(n=1,2\). It can be seen that the PEI at IBR 1 only takes a very small amount of energy, i.e., \(1.66\%\) of total energy produced by IBR 1 during the transients, to stabilize the microgrids. In the steady state, the energy consumed by the PEI is only \(0.51\%\) of the total energy produced by the IBR \(1\). Similarly, Figure 17-(b) shows that after the two IBR, the absolute value of real power consumed by the interface at IBR \(2\) is much smaller than the real power produced by IBR \(2\). The values of \(E_{2}\), \(E_{\text{c2}}\) and \(E_{\text{v2}}\) over the transient process (\(0.4\)s - \(0.7\)s) and the steady state (\(0.7\)s - \(1\)s) are reported in Table II. It can be seen that the protocol enforcement interface at IBR \(2\) actually produces energy to stabilize the system, as \(E_{\text{c2}}\) and \(E_{\text{v2}}\) are negative in Table II. Compared with the energy produced by IBR \(2\), the energy produced by IBR \(2\) for the stabilization purpose is very small, i.e., \(0.91\%\) of \(E_{2}\) during the transients and \(0.61\%\) of \(E_{2}\) during the steady state. #### V-A3 The role of \(\sigma_{n}\) Figure 18 visualizes the response of \(i_{\text{odd1}}\) under the same disturbance in Section V-A with different \(\sigma_{1}\). In the simulation, \(\sigma_{1}=\sigma_{2}\). It can be observed that all responses are stabilized with \(\sigma_{1}\) listed in Figure 18, but larger Fig. 14: (a) Time-domain evolution of instantaneous currents (curr.) \(\mathbf{i}_{\text{odd1}}\) at IBR 1 with the passivisation interface; (b) Zoomed-in version of \(\mathbf{i}_{\text{odd1}}\) during the transients (the upper panel) and the steady state (the lower panel). Fig. 15: (a) Time-domain evolution of instantaneous currents (curr.) \(\mathbf{i}_{\text{odd2}}\) at IBR 2 with the passivisation interface; (b) Zoomed-in version of \(\mathbf{i}_{\text{odd2}}\) during the transients (the upper panel) and the steady state (the lower panel). \(\sigma_{1}\) leads to larger overshooting. Table III shows the energy consumed by the PEIs at IBRs \(1\) and \(2\). 
Table III and Figure 18 suggest that it is not wise to choose a large \(\sigma_{n}\) (e.g., \(\sigma_{n}=30\)), as a large \(\sigma_{n}\) leads to both large overshooting and large energy consumption/generation of the PEIs. ## VI Conclusion This paper introduces a passivity-based stability protocol for IBRs in AC microgrids. The protocol is enforced by a novel interface at the grid edge in a decentralized, non-intrusive manner. The proposed method is tested by simulating two networked microgrids with benchmark parameters. Simulations show that growing oscillations can occur when two stable AC microgrids are networked, and they also suggest that the proposed interface can mitigate such a system-level symptom while exchanging less than \(2\%\) of the energy produced by its host IBR. Future work will address the nonlinearity resulting from constant power loads and investigate the power-electronics implementation of the protocol enforcement interface.
2308.13642
The Potential of Quantum Techniques for Stock Price Prediction
We explored the potential applications of various Quantum Algorithms for stock price prediction by conducting a series of experimental simulations using both Classical as well as Quantum Hardware. Firstly, we extracted various stock price indicators, such as Moving Averages (MA), Average True Range (ATR), and Aroon, to gain insights into market trends and stock price movements. Next, we employed Quantum Annealing (QA) for feature selection and Principal Component Analysis (PCA) for dimensionality reduction. Further, we transformed the stock price prediction task into a classification problem. We trained the Quantum Support Vector Machine (QSVM) to predict price movements (whether up or down), contrasted its performance with classical models, and analyzed its accuracy on datasets formulated using Quantum Annealing and PCA individually. We focused on stock price prediction and binary classification of stock prices for four different companies, namely Apple, Visa, Johnson & Johnson, and Honeywell, using their real-time raw stock price data. We compared various Quantum Computing techniques with their classical counterparts in terms of accuracy and F-score of the prediction model. Through these experimental simulations, we shed light on the potential advantages and limitations of Quantum Algorithms in stock price prediction and contribute to the growing body of knowledge at the intersection of Quantum Computing and Finance.
Naman S, Gaurang B, Neel S, Aswath Babu H
2023-08-25T19:26:41Z
http://arxiv.org/abs/2308.13642v1
# The Potential of Quantum Techniques for Stock Price Prediction ###### Abstract We explored the potential applications of various quantum Algorithms for stock price prediction by conducting a series of experimental simulations using both Classical as well as Quantum Hardware. Firstly, we extracted various stock price indicators, such as Moving Averages (MA), Average True Range (ATR), and Aroon, to gain insights into market trends and stock price movements. Next, we employed Quantum Annealing (QA) for feature selection and Principal Component Analysis (PCA) for dimensionality reduction. Further, we transformed the stock price prediction task essentially into a classification problem. We trained the Quantum Support Vector Machine (QSVM) to predict price movements (whether up or down) and contrasted its performance with classical models and analysed their accuracy on dataset formulated using Quantum Annealing and PCA individually. We focused on stock price prediction and binary classification of stock prices for four different companies, namely Apple, Visa, Johnson and Jonson and Honeywell. We primarily used the real-time stock data of the raw stock prices of these companies. We compared various Quantum Computing techniques with their classical counterparts in terms of accuracy and F-score of the prediction model. Through these experimental simulations, we shed light on the potential advantages and limitations of Quantum Algorithms in stock price prediction and contribute to the growing body of knowledge at the intersection of Quantum Computing and Finance. Computing Systems (Quantum); QSVM, Quantum Annealing, Feature Selection, Machine Learning ## I Introduction The financial markets have been a focal point of intensive research for the past few decades, with a constant pursuit of accurate stock price predictions among investors, traders, and analysts. Quantum Computing realized by harnessing quantum properties has emerged as a potentially revolutionary tool in this domain, wherein complex financial challenges can be addressed with greater efficiency and precision in comparison to classical computing methods. This emerging technology has grabbed significant attention, with the research community and industry experts exploring its applications and potential impact on stock price prediction. Conventional stock price prediction approaches purely rely on statistical models, machine learning algorithms, and time-series analysis. However, these methods encounter limitations when handling vast historical financial data, intricate market dynamics, and rapidly changing conditions. Quantum Computing offers a distinct computational paradigm, capitalizing on Quantum Mechanics principles like Superposition and Entanglement to explore an exponentially larger solution space compared to Classical Computing. Quantum Algorithms have shown the potential for exponentially faster problem-solving, making them intriguing candidates for financial analysis, including stock price prediction [1, 2, 3]. Feature selection is a crucial aspect of building accurate Machine Learning Models. Quantum Algorithms like Quantum Annealing have demonstrated their efficiency in feature selection and dimensional reduction tasks. By identifying relevant features more effectively from extensive financial data sets, Quantum Computing can enhance input data quality, resulting in improved stock price forecasting accuracy. Additionally, Quantum Computing can address optimization challenges in financial modeling. 
Stock price prediction often requires finding optimal weights for influencing variables, and Quantum Optimization Algorithms like the Quantum Approximate Optimization Algorithm (QAOA) show promise in handling these tasks more efficiently [4, 5, 6]. Quantum Neural Networks (QNN) amalgamate the principles of Quantum Mechanics with the architecture of Neural Networks, harnessing the potential for exponential computational speedup and enhanced problem-solving capabilities [7, 8, 9]. The applications of QNNs span diverse domains, showcasing their potential to revolutionize industries. Financial modeling benefits from QNNs' ability to process intricate data relationships, enhancing portfolio optimization, risk assessment, and option pricing. Earlier it has been researched on a comparative analysis of forecasting stock prices using Long Short-Term Memory (LSTM) and Quantum Long Short-Term Memory (QLSTM) models [10, 11]. LSTM, a prevalent Recurrent Neural Network (RNN) architecture, excels at capturing sequential data's temporal patterns, making it apt for predicting time series like stock prices. In contrast, QLSTM leverages Quantum Computing's distinctive traits to potentially enhance predictive accuracy even further [12, 13, 14, 15]. QLSTM represents an innovative advancement in the realm of quantum machine learning, blending the power of quantum computing with the capabilities of LSTM networks. However, Quantum Computing faces challenges, including noise and errors with current quantum hardware, which may limit result accuracy. Identifying specific financial analysis tasks where Quantum Computing excels remains an ongoing research area. This study delves into the potential applications of Quantum Computing in stock price prediction and explores its benefits and limitations to contribute to the growing knowledge in this emerging field. We explored the use of Quantum Computing in stock price prediction, examining various Quantum Algorithms and their potential benefits in financial analysis. We conducted experiments using real-world financial data and compared the performance of quantum approaches with classical methods to evaluate the feasibility and implications of integrating Quantum Computing into the domain of stock market forecasting. Through this investigation, we aim to contribute to the growing body of knowledge on the practical applications of Quantum Computing in finance and its potential impact on stock price prediction methodologies. ## II Background ### _Stock Market Indicators_ The stock market is a complicated and dynamic domain influenced by a multitude of factors. To navigate this intricate landscape and make informed investment decisions, market participants heavily rely on a diverse range of indicators. These indicators are essential tools that aid in analyzing market trends, identifying potential opportunities, and assessing risk levels. Various types of indicators exist, which include trend-following indicators like Moving Averages, that provide insights into price trends over time. Oscillators, such as the Relative Strength Index (RSI), offer signals of overbought or oversold conditions. Volatility indicators, such as the Bollinger Bands, assist in measuring market volatility and further recognize potential price movements. Additionally, volume-based indicators, such as On-Balance-Volume (OBV), give valuable information on the strength of price trends based on trading volume. 
In this study, we explored and evaluated the performance of these different indicators in the context of stock market analysis and their potential contributions to stock price prediction and decision-making processes. Understanding the significance and implications of these indicators is crucial for developing effective investment strategies and improving financial decision-making. These indicators are just a few examples of the many tools available to stock market participants. Traders and investors often use overlapping indicators to gain a comprehensive understanding of the dynamics of the market to plan their trading strategies. However, it is essential to note that no indicator is foolproof, and using multiple indicators can help mitigate risks and enhance decision-making in the highly varying and uncertain stock market environment. ### _Quantum Annealing_ Quantum Annealing (QA) based on Quantum Computing techniques, has emerged as a promising approach for feature selection in various domains. As an optimization technique, QA aims to find the optimal feature subset that maximizes or minimizes a given objective function. In the context of feature selection, QA explores the energy landscape of the feature space, seeking the most relevant features that contribute significantly to the predictive power of a model. Unlike classical feature selection methods, QA can efficiently explore a vast number of possible feature combinations simultaneously, potentially leading to more accurate and efficient feature selection [16, 17, 18]. ### _Quantum Support Vector Classifier_ Quantum Support Vector Machine (QSVM) is a revolutionary extension of the classical Support Vector Machine (SVM) algorithm, that includes the power of quantum computing, in turn, the additional ability to address complex classification problems [19]. With the growing interest in quantum machine learning, QSVM has garnered significant attention for its potential to outperform classical SVM in certain scenarios. Having leveraged using Quantum Computing methods, QSVM facilitates the process of figuring out optimal hyper-planes in high-dimensional feature spaces, leading to more accurate and efficient binary classification tasks [20]. ## III Quantum Enhanced Pipeline ### _Raw Data_ We have collected real-time raw data for the stock prices of four companies namely Honeywell (HON), Johnson and Johnson (JNJ), Apple(AAPL) and Visa (VISA) using the fylinance library. Here raw data includes the closing price, Highest and Lowest price for the stock for each given day, from 25 December 2020 to 25 December 2022 extracted through yfinance API. As these companies belong to different domains, which allows us to explore the model's performance on different data patterns. ### _Feature Extraction_ In the stock market, various indicators are used to analyze and interpret market trends, price movements, and overall market sentiment. These indicators provide valuable insights to traders and investors for making informed decisions. For the available historical data of the stocks, we computed the following indicators, which were employed for undertaking Feature Extraction: 1. Moving Averages (MA): Describe the average price of a security over a specified period, wherein price fluctuations are smoothed out and trends are highlighted. Among two kinds of MA, one is Simple Moving Average (SMA) and the other is Exponential Moving Average (EMA). 2. 
Relative Strength Index (RSI): Measures the speed with which change of price movements occurs, and that indicates overbought or oversold conditions. RSI values range from 0 to 100, wherein the readings above 70 are treated as overbought and anything below 30 is oversold. 3. Moving Average Convergence Divergence (MACD): It is a trend-following momentum indicator that compares two moving averages of a security's price. It provides signals when the two moving averages converge or diverge. 4. Stochastic Oscillator: This momentum indicator compares a closing of security's closing price to its price range over a specific period. It identifies potential reversals or trend continuations. 5. Average True Range (ATR): Measures market volatility by calculating the average range between daily high and low prices over a specific period. 6. Aroon Indicators: Consist of two lines, Aroon Up and Aroon Down, which measure the time elapsed since the highest and lowest prices, respectively, within a given period. These indicators help traders identify the strength and direction of a trend. Aroon Up reaching 100 indicates a new high, while Aroon Down reaching 100 indicates a new low, suggesting a strong uptrend or downtrend. Conversely, when Aroon Up or Aroon Down falls towards 0, it suggests weakening trend momentum or potential trend reversal. ### _Dimensional Reduction_ The current Quantum Hardware technology is in the developing stage, and it is highly sensitive, in turn, more prone to noise-induced errors with regard to a few of several Quantum bits _i.e., Qubits_ utilized in the computing process. The Quantum advantage of handling exponentially large data with a limited number of Qubits comes with a cost of difficulty in realizing an ideal set of noise-free Qubits. Whether it is real hardware or simulation on classical hardware, one has to overcome this limitation. Hence, to overcome such limitation a minimal number of quality Qubits were involved, as a result, with the constraint of limited quality Qubits, the computing process has to be spot on. quality Qubits can not be wasted on useless parts of huge raw data, that can be ensured by doing Dimensional Reduction. Techniques like Principal Component Analysis (PCA) and Feature selection using Quantum Annealing are employed to compress high-dimensional data and extract essential features, wherein relevant information is retained that are needed for training the prediction model. Thereby mitigating misuse of available Quantum Hardware resources so that Quantum Algorithms can operate efficiently within hardware limitations, making Quantum Computation more practical for real-world applications. #### Ii-C1 Principle Component Analysis This is a widely used technique for dimensional reduction in data analysis. PCA transforms the original features of high-dimensional data sets into a new set of uncorrelated variables called principal components. These components are ordered based on the amount of variance they exhibit in the data. Here, the first component captures the highest variance, and the subsequent variances were captured by later ones. Since the data with high variance dictates its nature, the subset of such top principal components is selected as the PCA process. This effectively reduces the dimensionality of the data set while ensuring the most relevant information. In addition to increasing the speed of the computation, PCA offers simplification of data visualization. 
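A sketch of the classical part of this pipeline: pulling raw daily prices with the yfinance API, computing a few of the indicators listed above with pandas, and reducing the resulting feature set with scikit-learn's PCA. Window lengths, the RSI/ATR variants, and the three-component choice are illustrative defaults rather than the exact settings used in the experiments.

```python
import yfinance as yf
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Raw data: two years of daily prices for one of the four tickers studied here.
df = yf.download("AAPL", start="2020-12-25", end="2022-12-25")
df.columns = df.columns.get_level_values(0)      # flatten columns if yfinance returns a MultiIndex

feat = pd.DataFrame(index=df.index)
feat["SMA_20"] = df["Close"].rolling(20).mean()                  # simple moving average
feat["EMA_20"] = df["Close"].ewm(span=20, adjust=False).mean()   # exponential moving average

delta = df["Close"].diff()                                       # RSI, simple 14-day version
gain = delta.clip(lower=0).rolling(14).mean()
loss = (-delta.clip(upper=0)).rolling(14).mean()
feat["RSI_14"] = 100 - 100 / (1 + gain / loss)

tr = pd.concat([df["High"] - df["Low"],
                (df["High"] - df["Close"].shift()).abs(),
                (df["Low"] - df["Close"].shift()).abs()], axis=1).max(axis=1)
feat["ATR_14"] = tr.rolling(14).mean()                           # average true range

n = 25                                                           # Aroon Up / Aroon Down
feat["AroonUp"] = df["High"].rolling(n + 1).apply(lambda x: 100 * x.argmax() / n)
feat["AroonDown"] = df["Low"].rolling(n + 1).apply(lambda x: 100 * x.argmin() / n)

feat = feat.dropna()

# Dimensionality reduction: keep the top 3 principal components (cf. the PCA3 dataset).
X = StandardScaler().fit_transform(feat)
pca3 = PCA(n_components=3).fit_transform(X)
print(pca3.shape)
```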
#### Ii-C2 Quantum Annealing for Feature Selection The quantum version of Feature Selection is formulated as a combinatorial optimization problem known as a Quadratic Unconstrained Binary Optimization (QUBO) run on D-Wave Quantum Annealing Computers. In this method, a feature that is selected or not is represented in the form of binary variables, which are constrained to take the value "1" if the feature is selected, otherwise "0". The relevance of each feature is determined by its correlation with the target variable. As QUBO can even find solutions without enforcing the constraints, that empowers Quantum Computers over Classical counterparts. The QUBO formulation seeks to find the binary variable assignment that optimizes the objective function, which corresponds to the subset of features that best contributes to the prediction or classification task. The objective function of the QUBO is designed to maximize the relevance of selected features while minimizing redundancies and irrelevant features. The Optimization Problem of QUBO Feature Selection is described below: \[\text{Minimize}\quad Q(\mathbf{x})=x^{T}qx=\sum_{i=1}^{N}\sum_{j=i+1}^{N}q_{ij} x_{i}x_{j},\] where \(\mathbf{x}\) is vector of binary variables \(x_{i}(\text{or}\,x_{j})=\{0,1\}\), and given real-valued upper triangular matrix \(\mathbf{q}\in\mathbf{R}^{N\times N}\) Fig. 1: Workflow depicting Quantum Assisted Pipeline for Stock Price Prediction whose entries \(q_{ij}\) are weights. Implementing Quantum Annealing Algorithms on D-Wave Quantum Annealer to undertake the QUBO for feature selection, the most relevant features required for the given stock market indicators can be identified [21, 22]. ### _Prediction Model_ Comparison of stock prices on subsequent days, say on day T and day T+1, a binary classification task can be designed so that it allows predicting future stock prices. Such prediction model has to be trained first using all the extractable features. The simplest binary classification task can be defined in terms of Binary function 'Change' as below: \[\text{Change}=\begin{cases}1,&\text{if }\text{Price}_{\text{T}}<\text{Price}_{ \text{T+1}}\\ 0,&\text{otherwise}\end{cases}\] Pertaining to training classical machine learning models, SVM, Decision Tree, Random Forest, KNN, Logistic Regression, Naive Bayes, Gradient Boosting, and XGBoost can be employed. To evaluate the impact of dimensionality reduction techniques on prediction accuracy, use of PCA and Feature Selection methods can be fruitful. PCA reduces the dimensionality of the feature space by transforming the original variables into a set of uncorrelated principal components, whereas Feature Selection aims to select the most relevant and informative features from the data set. We prepared data sets with 3, 5, and 8 features from the original data set. These new low-dimension data sets are named PCA3, PCA5, PCA8, QA3, QA5, and QA8. We utilized a diverse set of machine learning models to conduct a comprehensive comparison between the performances of PCA and Feature Selection in stock price prediction. By employing multiple models and carefully evaluating their outcomes, we aimed towards providing a thorough analysis of the effectiveness of these dimensionality reduction techniques in improving the accuracy of our predictive models. The Quantum Support Vector Classifier (QSVC) is an innovative extension of scikit-learn's sklearn.svm.SVC classifier, introducing the concept of quantum kernels. 
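A minimal sketch of this QUBO, under the common assumption that the linear terms reward correlation of each feature with the target while the quadratic terms penalize redundancy between feature pairs; the penalty weight and the toy data are assumptions, and exhaustive enumeration stands in for the D-Wave annealer on this small instance (on hardware the same matrix \(\mathbf{q}\) would be handed to the sampler).

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                  # toy feature matrix (6 candidate indicators)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(float)

N = X.shape[1]
relevance = np.array([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in range(N)])
redundancy = np.abs(np.corrcoef(X, rowvar=False))

lam = 0.5                                      # assumed redundancy penalty weight
q = np.zeros((N, N))                           # upper-triangular QUBO matrix
for i in range(N):
    q[i, i] = -relevance[i]                    # reward selecting relevant features
    for j in range(i + 1, N):
        q[i, j] = lam * redundancy[i, j]       # penalize selecting redundant pairs

def qubo_energy(x):
    return x @ q @ x                           # Q(x) = x^T q x

best = min((np.array(bits) for bits in product([0, 1], repeat=N)), key=qubo_energy)
print("selected features:", np.nonzero(best)[0])
```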
By integrating quantum kernels, QSVC enhances classification performance through quantum feature mapping. One such feature map is the ZZ Feature Map, which transforms classical input data into quantum states by leveraging ZZ couplings between Qubits. In QSVC, the ZZ Feature Map is applied to encode the dataset into a quantum state, exploiting quantum entanglement and correlations among qubits to capture complex relationships within the data. This enables QSVC to process and classify data in a quantum-enhanced manner, offering the potential for improved performance in challenging classification tasks. This combination of QSVC and the ZZFFeature Map exemplifies the fusion of quantum computing techniques and classical machine learning, promising advancements in solving complex real-world problems. We trained Quantum Support Vector Classifier (QSVC) models using the datasets with reduced dimensionality. The Qiskit SDK offers 4 different entanglement schemes for the ZZ-Feature Map [23]. We trained the QSVM model using all four entanglement schemes namely 'linear', 'circular', 'full' and 'pairwise' and compared the accuracy achieved in each case. This allows us to explore the effect of feature maps on the accuracy of the QSVM model. ## IV Results and Discussion In this study, we looked into the applications of Quantum Computing in the financial domain, with a focus on stock price prediction and binary classification of decreasing or increasing stock prices for four different companies, namely, Apple, Visa, Johnson & Jonson and Honeywell. We compared various Quantum Computing techniques with their classical counterparts in terms of accuracy (_or_, average accuracy) and F-score of the prediction model (see Fig.2). The F-score is a metric used to evaluate the performance of a given Machine Learning Model, wherein it is evaluated by combining precision and recall. The results from our experiments provided valuable insights into the performance of Quantum Computing methods for financial analysis (refer Tables I-V). Feature selection using quantum annealing demonstrated its ability to extract the most relevant features from financial data more effectively than PCA, presenting Quantum Annealing as a promising approach for feature selection tasks in finance. This finding is significant, as the identification of relevant features from a wide variety of stock market indicators plays a critical role in the accuracy of prediction models as well as in the decision-making processes. However, in the binary classification task, we found that the quantum support vector machine (QSVM) did not exhibit a significant advantage over the classical support vector machine (SVM) for the given data sets. This outcome suggests that the quantum advantage might not be evident in all financial analysis scenarios, and further investigation is required to explore other potential applications of QSVM in finance. We implemented the QSVC model for data sets with 8 features, however, these models failed to train due to the insufficient computational power, which is required for simulating the quantum circuits of the feature maps. This further emphasises the importance of using dimensionality reduction for Quantum Machine Learning pipelines (refer Fig.1). The data-set's diversity from four distinct domain companies allowed us to explore the applicability of quantum techniques across various sectors, and identify potential domain-specific trends. 
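For reference, a minimal sketch of the QSVC training step described in Section III, assuming the qiskit-machine-learning API (ZZFeatureMap, FidelityQuantumKernel, QSVC) of recent releases; the synthetic three-feature data stands in for a PCA-3 or QA-3 dataset and the labels for the binary "Change" target, so the accuracies it prints are not those reported in the tables below.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel
from qiskit_machine_learning.algorithms import QSVC

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))                     # stands in for a 3-feature (PCA-3 / QA-3) dataset
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # stands in for the binary "Change" label

X = MinMaxScaler(feature_range=(0, np.pi)).fit_transform(X)   # scale features to rotation angles
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# One QSVC per entanglement scheme of the ZZ feature map.
for ent in ["linear", "circular", "full", "pairwise"]:
    fmap = ZZFeatureMap(feature_dimension=3, reps=2, entanglement=ent)
    kernel = FidelityQuantumKernel(feature_map=fmap)
    clf = QSVC(quantum_kernel=kernel)
    clf.fit(X_tr, y_tr)
    print(ent, "test accuracy:", clf.score(X_te, y_te))
```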
This holistic analysis provides valuable information for investors and financial analysts seeking to leverage Quantum Computing to enhance their decision-making. ## V Conclusion In conclusion, our investigation on the application of Quantum Computing in the financial domain has provided valuable insights with regards to its potential and limitations. Quantum Annealing exhibited promise in the feature selection task, surpassing PCA in extracting relevant patterns from financial data. However, the quantum advantage was not evident in binary classification, as QSVM did not outperform classical SVM by a significant margin. Nevertheless, the diverse dataset enabled Figure 2: Plots for Accuracy \begin{table} \begin{tabular}{||c|c|c|c|c||} \hline **Model** & **Entanglement Scheme** & **Dimensionality Reduction** & **Accuracy** & **F-Score** \\ \hline \hline Gradient Boosting & None & None & 53.68\% & 56\% \\ \hline Naive Bayes & None & PCA-3 & 55.79\% & 59.61\% \\ \hline Decision Tree & None & PCA-5 & 56.84\% & 58.58\% \\ \hline KNN & None & PCA-8 & 53.68\% & 56.86\% \\ \hline Random Forest & None & Quantum Annealing-3 & 50.52\% & 50.52\% \\ \hline XG Boost & None & Quantum Annealing-5 & 56.84\% & 57.73\% \\ \hline Decision Tree & None & Quantum Annealing-8 & 56.84\% & 53.93\% \\ \hline Quantum SVM & Linear & Quantum Annealing-3 & 58.94\% & 63.55\% \\ \hline Quantum SVM & Pairwise & Quantum Annealing-3 & 58.94\% & 63.55\% \\ \hline Quantum SVM & Linear & PCA-3 & 51.58\% & 55.77\% \\ \hline Quantum SVM & Pairwise & PCA-3 & 51.58\% & 55.77\% \\ \hline Quantum SVM & Linear & PCA-5 & 58.94\% & 58.94\% \\ \hline Quantum SVM & Pairwise & PCA-5 & 58.94\% & 58.94\% \\ \hline Quantum SVM & Circular & Quantum Annealing-5 & **60\%** & **56.82\%** \\ \hline \end{tabular} \end{table} TABLE II: Best Models for Honeywell dataset \begin{table} \begin{tabular}{||c|c|c|c||} \hline **Model** & **Dimensionality Reduction** & **AVG Accuracy** & **AVG F-Score** \\ \hline \hline Classical Machine Learning & None & 56.58\% & 58.76\% \\ \hline Classical Machine Learning & PCA-3 & 54.21\% & 56.81\% \\ \hline Classical Machine Learning & PCA-5 & 55\% & 58.83\% \\ \hline Classical Machine Learning & PCA-8 & 53.68\% & 57.71\% \\ \hline Classical Machine Learning & Quantum Annealing-3 & 55.26\% & 52.81\% \\ \hline Classical Machine Learning & Quantum Annealing-5 & **60.26\%** & **62.24\%** \\ \hline Classical Machine Learning & Quantum Annealing-8 & 55.79\% & 57.22\% \\ \hline QSVM & PCA-3 & 53.94\% & 58.08\% \\ \hline QSVM & PCA-5 & 56.58\% & 59.79\% \\ \hline QSVM & Quantum Annealing-3 & 53.94\% & 54.04\% \\ \hline QSVM & Quantum Annealing-5 & 55.78\% & 57.45\% \\ \hline \end{tabular} \end{table} TABLE I: Average Accuracy \begin{table} \begin{tabular}{||c|c|c|c|c||} \hline **Model** & **Entanglement Scheme** & **Dimensionality Reduction** & **Accuracy** & **F-Score** \\ \hline \hline Logistic Regression & None & None & 55.79\% & 57.15\% \\ \hline KNN & None & PCA-3 & 51.58\% & 54.90\% \\ \hline XG Boost & None & PCA-5 & 55.78\% & 55.31\% \\ \hline SVM & None & PCA-8 & 57.89\% & 59.18\% \\ \hline Decision Tree & None & Quantum Annealing-3 & 54.73\% & 49.41\% \\ \hline SVM & None & Quantum Annealing-5 & **58.94\%** & **61.38\%** \\ \hline SVM & None & Quantum Annealing-8 & 56.84\% & 60.19\% \\ \hline Logistic Regression & None & Quantum Annealing-8 & 56.84\% & 60.19\% \\ \hline Quantum SVM & Linear & Quantum Annealing-3 & 51.58\% & 48.89\% \\ \hline Quantum SVM & Pairwise & Quantum Annealing-3 & 51.58\% & 48.89\% \\ \hline 
Quantum SVM & Linear & PCA-3 & 51.58\% & 53.06\% \\ \hline Quantum SVM & Pairwise & PCA-3 & 51.58\% & 53.06\% \\ \hline Quantum SVM & Full & PCA-5 & 53.68\% & 55.10\% \\ \hline Quantum SVM & Circular & Quantum Annealing-5 & 58.94\% & 58.06\% \\ \hline \end{tabular} \end{table} TABLE III: Best Models for Johnson & Johnson dataset \begin{table} \begin{tabular}{||c|c|c|c|c||} \hline **Model** & **Entanglement Scheme** & **Dimensionality Reduction** & **Accuracy** & **F-Score** \\ \hline \hline XG Boost & None & None & 51.58\% & 58.18\% \\ \hline SVM & None & PCA-3 & 51.58\% & 58.18\% \\ \hline SVM & None & PCA-5 & 53.68\% & 60.71\% \\ \hline SVM & None & PCA-8 & 51.58\% & 57.40\% \\ \hline Decision Tree & None & Quantum Annealing-3 & 57.89\% & 62.26\% \\ \hline K- Nearest Neighbours & None & Quantum Annealing-5 & **62.10\%** & **68.96\%** \\ \hline XGBoost & None & Quantum Annealing-8 & 52.61\% & 57.94\% \\ \hline Quantum SVM & Linear & Quantum Annealing-3 & 52.63\% & 55.45\% \\ \hline Quantum SVM & Full & PCA-3 & 55.78\% & 61.81\% \\ \hline Quantum SVM & Circular & Quantum Annealing-5 & 56.84\% & 61.68\% \\ \hline Quantum SVM & Circular & Quantum Annealing-5 & 56.84\% & 61.68\% \\ \hline Quantum SVM & Linear & PCA-5 & 57.89\% & 65.51\% \\ \hline Quantum SVM & Full & PCA-5 & 57.89\% & 65.51\% \\ \hline \end{tabular} \end{table} TABLE I: Average Accuracy us to explore quantum techniques' applicability across various sectors, identifying domain-specific trends. While Quantum Computing shows promise, its full potential in financial analysis requires further research and development. Overall, this study contributes to the understanding of quantum computing's role in the finance domain and provides a foundation for future investigations in quantum-enhanced financial analysis and decision-making.
2308.02997
Heavy Flavour Spectroscopy
The discovery of hadronic states beyond the conventional two-quark meson and three-quark baryon picture in the last two decades is one of the most amazing accomplishments in fundamental physics research. We review the experimental progress on the study of the exotic states (also known as the XYZ particles) beyond the conventional quark model. We give a general review and then focus on the lineshape measurement of the X(3872), observation of new decay modes of the Y(4230) and new vector charmoniumlike states Y(4500) and Y(4790), evidence for the neutral isospin partners of the charged charmoniumlike $Z_{cs}$ states, discoveries of the tetraquark state candidates with four different flavours or two-pairs of charm-anticharm quarks and the pentaquark states.
Chang-Zheng Yuan
2023-08-06T02:58:04Z
http://arxiv.org/abs/2308.02997v1
# Heavy Flavour Spectroscopy ###### Abstract The discovery of hadronic states beyond the conventional two-quark meson and three-quark baryon picture in the last two decades is one of the most amazing accomplishments in fundamental physics research. We review the experimental progress on the study of the exotic states (also known as the XYZ particles) beyond the conventional quark model. We give a general review and then focus on the lineshape measurement of the \(X(3872)\), observation of new decay modes of the \(Y(4230)\) and new vector charmoniumlike states \(Y(4500)\) and \(Y(4790)\), evidence for the neutral isospin partners of the charged charmoniumlike \(Z_{cs}\) states, discoveries of the tetraquark state candidates with four different flavours or two-pairs of charm-anticharm quarks and the pentaquark states. CC-BY-4.0 licence _Introduction:_ Hadron spectroscopy is a field of frequent discoveries and surprises, and the theoretical difficulties in understanding the strong interaction in the color-confinement regime make the field even more fascinating. The tremendous data collected by the BaBar, Belle, BESIII, LHCb, and other experiments and improved theoretical tools developed to analyze the experimental data result in rapid progress of the field. In the conventional quark model, mesons are composed of one quark and one antiquark, while baryons are composed of three quarks. However, many quarkoniumlike states were discovered at two \(B\)-factories BaBar and Belle [1] in the first decade of the 21st century. Whereas some of these are good candidates of quarkonium states, many other states have exotic properties, which may indicate that exotic states, such as multi-quark state, hadronic molecule, or hybrid, have been observed [2]. BaBar and Belle experiments finished their data taking in 2008 and 2010, respectively, and the data are still used for various physics analyses. BESIII [3] and LHCb [4] experiments started data taking and contributed to the study of exotic hadrons since 2008. Most of the discoveries of the such states were made at these four experiments. Figure 1 shows the history of the discovery of some of the new hadrons, started from the observation of the \(X(3872)\) in 2003 [5]. In this article, we present recent experimental progress and focus on those states with exotic properties, including the \(X(3872)\), \(Y(4260)\), \(Z_{c}(3900)\), \(P_{c}\) and their siblings. _The lineshape of the \(X(3872)\):_ The \(X(3872)\) was observed in 2003 by the Belle experiment [5], and confirmed very soon by the CDF [6] and \(D0\)[7] experiments in \(p\bar{p}\) collision. The mass of the \(X(3872)\) has been measured as \(3871.65\pm 0.06\) MeV [8], which is lower than the mass threshold of \(\bar{D}^{0}D^{*0}\), \(3871.69\pm 0.11\) MeV, by \(0.04\pm 0.12\) MeV, to be compared with the bounding energy of the deuteron of 2.2 MeV. The width measurements are less precise and model dependent since the \(X(3872)\) is very narrow and the mass resolution of the experiments is usually much larger than the intrinsic width. Fitting the \(\pi^{+}\pi^{-}J/\psi\) invariant mass distribution with a Breit-Wigner (BW) function, LHCb reported a width of about 1 MeV (the mass resolution is 2.4-3.0 MeV); and the fit with a Flatte function with constraints from other measurements yields a FWHM of 0.22 MeV which depends strongly on the \(X(3872)\to\bar{D}^{0}D^{*0}\) coupling [9, 10]. 
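For orientation, the two lineshape parameterizations referred to here have the following schematic forms (a sketch only, not the exact formulae of Refs. [9, 10, 12], which include resolution and channel-specific refinements):
\[
|A_{\rm BW}(m)|^{2}\;\propto\;\frac{1}{\left(m^{2}-M^{2}\right)^{2}+M^{2}\Gamma^{2}},
\qquad
A_{\rm Flatte}(m)\;\propto\;\frac{1}{M^{2}-m^{2}-i\left[g_{1}\,\rho_{1}(m)+g_{2}\,\rho_{2}(m)\right]},
\]
where \(\rho_{1,2}(m)\) are the phase-space factors of the open channels (for the \(X(3872)\), e.g. \(\pi^{+}\pi^{-}J/\psi\) and \(\bar{D}^{0}D^{*0}\)) and \(g_{1,2}\) the corresponding couplings. Far from thresholds the Flatte form behaves like a Breit-Wigner, but it is strongly distorted near an open threshold, which is why the FWHM extracted from such a fit depends on the \(X(3872)\to\bar{D}^{0}D^{*0}\) coupling.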
Although the statistics are low at the BESIII experiment, the high efficiencies of reconstructing all the \(X(3872)\) decay modes and the very good mass resolution in the \(\bar{D}^{0}D^{*0}\) mode (\(<1\) MeV) make it possible to measure the lineshape of the \(X(3872)\) state. BESIII determined the pole locations of the \(X(3872)\) based on a simultaneous fit to the data samples of \(X(3872)\to D^{0}\bar{D}^{0}\pi^{0}\) and \(X(3872)\to\pi^{+}\pi^{-}J/\psi\), with the \(X(3872)\) produced in the \(e^{+}e^{-}\to\gamma X(3872)\) process [11].

Figure 1: Discovery of some heavy exotic states from experiments.

The parameterization of the \(X(3872)\) lineshape, with the effect of the \(D^{*0}\) width taken into account, is developed in Ref. [12]. The fit results and the lineshape of the \(X(3872)\) are shown in Fig. 2. The lineshape parameters are determined to be \(g=(0.16\pm 0.10^{+1.12}_{-0.11})\), \(\Gamma_{0}=(2.67\pm 1.77^{+8.01}_{-0.82})\) MeV and \(M_{X}=(3871.63\pm 0.13^{+0.06}_{-0.05})\) MeV. Here \(g\) denotes the effective coupling constant of the \(X(3872)\) to neutral and charged \(D^{*}\bar{D}\); the constant \(\Gamma_{0}\) represents all the channels except \(D^{*}\bar{D}\), and is separated into three parts: \(\Gamma_{0}=\Gamma_{\pi^{+}\pi^{-}J/\psi}+\Gamma_{\rm known}+\Gamma_{\rm unknown}\); and \(M_{X}\) is the mass of the \(X(3872)\). The FWHM of the lineshape is determined to be \((0.44^{+0.13}_{-0.35}{}^{+0.38}_{-0.25})\) MeV. Two poles are found on the first and second Riemann sheets corresponding to the \(D^{*0}\bar{D}^{0}\) branch cut. The pole location on the first sheet is much closer to the \(D^{*0}\bar{D}^{0}\) threshold than the other, and is determined to be \((7.04\pm 0.15^{+0.07}_{-0.08})\) MeV above the \(D^{0}\bar{D}^{0}\pi^{0}\) threshold with an imaginary part \((-0.19\pm 0.08^{+0.14}_{-0.19})\) MeV. Belle measured the \(X(3872)\) lineshape with \(B\to X(3872)K\to D^{0}\bar{D}^{*0}K\) [13]. The peak near the threshold in the \(D^{0}\bar{D}^{*0}\) invariant mass spectrum is fitted using a relativistic BW function. Belle determined a mass of \((3873.71^{+0.56}_{-0.50}\pm 0.13)\) MeV and a width of \((5.2^{+2.2}_{-1.5}\pm 0.4)\) MeV. The peak is also studied using a Flatte lineshape, and the lower limit on the \(D\bar{D}^{*}\) coupling constant \(g\) is determined to be 0.075 at 95% credibility. A coupled-channel analysis of the data used in this analysis and those in the \(X(3872)\to\pi^{+}\pi^{-}J/\psi\) decay is highly recommended to obtain reliable information about the \(X(3872)\) lineshape. _Observation of new \(\psi(4230)\) decays and new vector states \(Y(4500)\) and \(Y(4790)\):_ The \(Y\) states were discovered in initial state radiation processes at the \(B\)-factory experiments, and they have \(J^{PC}=1^{--}\), so these states can also be produced directly in \(e^{+}e^{-}\) annihilation experiments like BESIII. Much improved measurements of the \(Y(4260)\) [14], \(Y(4360)\), \(Y(4660)\) [15] and so on have been achieved, their new decay modes have been discovered, and new vector states have been observed. The most precise measurements of the \(Y(4260)\) are from the BESIII experiment [16; 17]. By performing a high-luminosity energy scan in the vicinity of the \(Y(4260)\), BESIII found that the peak of the \(Y(4260)\) is much lower than in previous measurements (so it is now named the \(\psi(4230)\)), that its width is narrow, and that there is a high-mass shoulder at about 4.32 GeV if fitted with a BW function.
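To illustrate how such structures are extracted from an energy-dependent cross section, a toy fit of an incoherent sum of two Breit-Wigner peaks is sketched below. The data are synthetic and the parameter values are our own illustrative choices; real analyses additionally include interference between amplitudes, efficiency, and resolution effects.

```
import numpy as np
from scipy.optimize import curve_fit

def breit_wigner(E, M, G, A):
    # Non-relativistic Breit-Wigner peak with mass M, width G and amplitude A.
    return A * (G / 2) ** 2 / ((E - M) ** 2 + (G / 2) ** 2)

def two_peaks(E, M1, G1, A1, M2, G2, A2):
    # Incoherent sum of two resonances; interference is neglected for simplicity.
    return breit_wigner(E, M1, G1, A1) + breit_wigner(E, M2, G2, A2)

rng = np.random.default_rng(1)
E = np.linspace(4.10, 4.50, 60)                        # toy c.m. energies in GeV
truth = two_peaks(E, 4.22, 0.05, 60.0, 4.32, 0.10, 25.0)
xsec = truth + rng.normal(scale=2.0, size=E.size)      # pseudo cross-section points

popt, pcov = curve_fit(two_peaks, E, xsec, p0=[4.22, 0.05, 50.0, 4.32, 0.10, 20.0])
print("fitted masses (GeV):", popt[0], popt[3])
print("fitted widths (GeV):", popt[1], popt[4])
```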
Figure 2: The fit to the \(D^{0}\bar{D}^{0}\pi^{0}\) (left) and \(\pi^{+}\pi^{-}J/\psi\) (middle) invariant mass distributions. Data are taken from Ref. [11]. The \(X(3872)\) lineshape at the best estimation is shown in the right panel. The vertical dashed line indicates the position of the \(D^{*0}\bar{D}^{0}\) threshold.

Since then, more new decay modes of the \(\psi(4230)\) have been observed, including \(\pi^{+}\pi^{-}h_{c}\) [18], \(\pi^{+}\pi^{-}\psi(2S)\) [19], \(\omega\chi_{c0}\) [20], \(\pi\bar{D}D^{*}+c.c.\) [21], \(\pi\bar{D}^{*}D^{*}\) [22], and \(K\bar{K}J/\psi\) [23; 24]. The cross sections of \(e^{+}e^{-}\to K^{+}K^{-}J/\psi\) at center-of-mass energies from 4.127 to 4.600 GeV are measured based on 15.6 fb\({}^{-1}\) of data collected by the BESIII experiment [23]. Two resonant structures are observed in the line shape of the cross sections. The mass and width of the first structure are measured to be \((4225.3\pm 2.3\pm 21.5)\) MeV and \((72.9\pm 6.1\pm 30.8)\) MeV, respectively; they are consistent with those of the established \(\psi(4230)\). The second structure is observed for the first time with a statistical significance greater than \(8\sigma\), denoted as \(Y(4500)\). Its mass and width are determined to be \(4484.7\pm 13.3\pm 24.1\) MeV and \(111.1\pm 30.1\pm 15.2\) MeV, respectively. This state is confirmed in \(e^{+}e^{-}\to\pi\bar{D}^{*}D^{*}\) reported in Ref. [22]. With the world's largest \(e^{+}e^{-}\) scan data sample between 4.226 and 4.95 GeV accumulated by BESIII, the Born cross sections of \(e^{+}e^{-}\to D_{s}^{*+}D_{s}^{*-}\) are measured precisely [25]. Besides two enhancements in the energy-dependent cross sections at around 4.2 and 4.45 GeV that may come from the \(\psi(4160)\) or \(\psi(4230)\) and the \(\psi(4415)\), respectively, a third resonance structure (\(Y(4790)\)) is observed at around 4.7\(\sim\)4.8 GeV with a statistical significance greater than 6.1\(\sigma\). Due to the limited number of data points around 4.79 GeV, the fitted mass of the third structure varies from 4786 to 4793 MeV and the width from 27 to 60 MeV. This could be the same state observed in \(e^{+}e^{-}\to K^{0}_{S}K^{0}_{S}J/\psi\) with a statistical significance of 4.0\(\sigma\) [24]. In the charmonium energy region between 3 and 5 GeV, we have now identified 6 well-known \(\psi\) peaks (\(J/\psi\), \(\psi(2S)\), \(\psi(3770)\), \(\psi(4040)\), \(\psi(4160)\), and \(\psi(4415)\)) and 9 new \(Y\) structures (\(Y(4230)\), \(Y(4320)\), \(Y(4360)\), \(Y(4390)\), \(Y(4500)\), \(Y(4630)\), \(Y(4660)\), \(Y(4710)\), and \(Y(4790)\)). They are all vector states, and they cannot all be charmonium states. While more experimental effort is needed to resolve the origins of these states, theoretical effort is also necessary to identify whether vector charmonium hybrids and/or tetraquark states have already been observed. _Neutral partners of charged charmoniumlike \(Z_{cs}\) states:_ The charged charmoniumlike state \(Z_{c}(3900)\) discovered in \(\pi J/\psi\) by the BESIII [26] and Belle [27] experiments, the \(Z_{c}(4020)\) discovered in \(\pi h_{c}\) by the BESIII [28] experiment, and the \(Z_{c}(4430)\) discovered in \(\pi\psi(2S)\) by the Belle [29] experiment are all states with minimal quark content \(c\bar{c}u\bar{d}\). Recent studies search for states with one of the four quarks replaced by a different quark, for example the \(Z_{cs}\) states with quark content \(c\bar{c}u\bar{s}\).
BESIII announced observation of a near-threshold structure \(Z_{cs}(3985)\) in the \(K^{+}\) recoill-mass spectra in \(e^{+}e^{-}\to K^{+}(D_{s}^{-}D^{*0}+D_{s}^{*-}D^{0})\)[30] with a mass of 3983 MeV and a width of about 10 MeV; and LHCb reported two resonances decaying into \(K^{\pm}J/\psi\), the \(Z_{cs}(4000)\) with a mass of 4003 MeV and a width of about 131 MeV, and the \(Z_{cs}(4220)\) with a mass of 4216 MeV and a width of about 233 MeV [31]. The widths of the \(Z_{cs}(3985)\) and \(Z_{cs}(4000)\) are quite different, maybe one of them is the strange partner of the \(Z_{c}(3900)\) with the \(d\) quark replaced with an \(s\) quark. Both BESIII [32] and LHCb [33] reported evidence for the neutral partners of the \(Z_{cs}\) states at around 4 GeV with quark content \(c\bar{c}u\bar{s}\). These indicate that these states form isospin doublets. The \(Z_{c}(3900)\) (\(Z_{c}(4020)\)) and \(Z_{cs}\) states may form multiplets shown in Fig. 3, the missing states can be searched for with the existing or future data samples. _New tetraquark states from LHC experiments:_ LHCb experiment observed two new resonances with four different flavors with mass of \(2908\pm 11\pm 20\) MeV and width of \(136\pm 23\pm 13\) MeV, which decay to \(D_{s}^{+}\pi^{+}\) and \(D_{s}^{+}\pi^{-}\), respectively, from a combined amplitude analysis for the decays \(B^{0}\to\overline{D}^{0}D_{s}^{+}\pi^{-}\) and \(B^{+}\to D^{-}D_{s}^{+}\pi^{+}\), which are related by isospin symmetry. The former state indicates the first observation of a doubly charged open-charm tetraquark state with minimal quark content \(c\bar{s}u\bar{d}\), and the latter state is a neutral tetraquark composed of \(c\bar{s}\bar{u}d\) quarks (\(T_{cs}\)). Both states are found to have spin-parity \(0^{+}\), and their resonant parameters are consistent with each other, which suggests that they belong to an isospin triplet [34]. Tetraquark states \(T_{cs}\) with four different flavors (\(cs\bar{u}\bar{d}\)) have been search for at LHCb and evidence (\(3.9\sigma\)) for two states (\(X_{0}(2900)\) and \(X_{1}(2900)\)) in \(D^{-}K^{+}\) system were reported from a PWA of \(B^{+}\to D^{+}D^{-}K^{+}\) events by the LHCb experiment [35]. They are good candidates for the flavour partners of the \(T_{cs}\) states, and more flavour partners with other quark contents and spin-parities are expected. The LHCb, ATLAS, and CMS experiments reported observation of states decay to two charmonium states [36, 37, 38]. The \(X(6900)\) is observed in all these three experiments, and a new structure (\(X(6600)\)), with a significance above \(5\sigma\), and evidence for another new structure (\(X(7300)\)), with a local significance of \(4.1\sigma\), are found at CMS. The masses, widths, and significances are obtained in model-dependent ways without considering possible interference between the resonances. These are good candidates for tetraquark states with two pairs of charm-anticharm quarks. _Pentaquark states:_ In the decay of \(\Lambda_{b}\to J/\psi pK^{-}\) analyzed by the LHCb experiments, there are three very narrow peaks in the invariant mass distribution of \(J/\psi p\)[39]. In a simple fit to the invariant mass spectrum, the resonance parameters of the \(P_{c}(4312)^{+}\), \(P_{c}(4440)^{+}\), and \(P_{c}(4457)^{+}\) are determined. 
They are all narrow, and the \(P_{c}(4312)^{+}\) state peaks right below the \(\Sigma_{c}^{+}\bar{D}^{0}\) threshold, the \(P_{c}(4457)^{+}\) state peaks right below the \(\Sigma_{c}^{+}\bar{D}^{*0}\) threshold, while the \(P_{c}(4440)^{+}\) state peaks about 20 MeV below it. Being so close to the thresholds, they are very good candidates for the molecules of a charmed baryon and an anti-charmed meson, and more similar states close to the other baryon-meson thresholds are expected. In a following amplitude analysis of \(B\to J/\psi\Lambda\bar{p}\) decays, a narrow resonance in the \(J/\psi\Lambda\) system, consistent with a pentaquark candidate with strangeness, is observed with high significance [40]. The mass and the width of this new state are measured to be \(4338.2\pm 0.7\pm 0.4\) MeV and \(7.0\pm 1.2\pm 1.3\) MeV, respectively. It is very close to the \(\Xi_{c}^{+}D^{-}\) threshold of 4337.4 MeV. Evidence for states at around \(\Xi_{c}^{0}\bar{D}^{*0}\) threshold are reported in Ref. [41]. _Summary and Perspectives:_ Many states with exotic properties were observed in the past two decades. Some of them are quite close to the thresholds of two heavy objects, either two heavy flavor mesons or one heavy flavor meson and one heavy flavor baryon, like the \(X(3872)\) (\(\bar{D}^{0}D^{*0}\)), \(Y(4220)\) (\(D_{s}^{*+}D_{s}^{*-}\) or \(\bar{D}D_{1}\)), \(Z_{c}(3900)^{+}\) (\(\bar{D}^{0}D^{*+}\)), \(Z_{c}(4020)^{+}\) (\(\bar{D}^{*0}D^{*+}\)), \(Z_{c}^{+}\) (\(\bar{D}^{0}D_{s}^{*+}\)), \(P_{c}(4312)^{+}\) (\(\Sigma_{c}^{+}\bar{D}^{0}\)), \(P_{c}(4440)^{+}\) and \(P_{c}(4457)^{+}\) (\(\Sigma_{c}^{+}\bar{D}^{*0}\)); and some other states are not close to such thresholds, such as the \(Y(4360)\), \(Y(4660)\), \(Z_{c}(4430)^{+}\), and \(Z_{cs}(4220)^{+}\). These may suggest that we did observe the hadronic molecules close to thresholds and we also observed hadronic states with some other quark configurations like compact tetraquark states and so on. It is expected that more results will be produced by the Belle II, BESIII, LHCb, and other experiments. Figure 3: The possible multiplets of the \(Z_{c}(3900)\) and \(Z_{c}(4020)\). _Acknowledgments:_ We thank the organizers for the invitation to give a talk at the conference in such a beautiful city. This work is supported in part by National Key Research and Development Program of China (No. 2020YFA0406300), and National Natural Science Foundation of China (NSFC, Nos. 11961141012 and 11835012).
2306.17188
Decentralized Healthcare Systems with Federated Learning and Blockchain
Artificial intelligence (AI) and deep learning techniques have gained significant attraction in recent years, owing to their remarkable capability of achieving high performance across a broad range of applications. However, a crucial challenge in training such models is the acquisition of vast amounts of data, which is often limited in fields like healthcare. In this domain, medical data is typically scattered across various sources such as hospitals, clinics, and wearable devices. The aggregated data collected from multiple sources in the healthcare domain is sufficient for training advanced deep learning models. However, these sources are frequently hesitant to share such data due to privacy considerations. To address this challenge, researchers have proposed the integration of blockchain and federated learning to develop a system that facilitates the secure sharing of medical records. This work provides a succinct review of the current state of the art in the use of blockchain and federated learning in the decentralized healthcare domain.
Abdulrezzak Zekiye, Öznur Özkasap
2023-06-24T21:10:24Z
http://arxiv.org/abs/2306.17188v1
# Decentralized Healthcare Systems with Federated Learning and Blockchain ###### Abstract Artificial intelligence (AI) and deep learning techniques have gained significant attraction in recent years, owing to their remarkable capability of achieving high performance across a broad range of applications. However, a crucial challenge in training such models is the acquisition of vast amounts of data, which is often limited in fields like healthcare. In this domain, medical data is typically scattered across various sources such as hospitals, clinics, and wearable devices. The aggregated data collected from multiple sources in the healthcare domain is sufficient for training advanced deep learning models. However, these sources are frequently hesitant to share such data due to privacy considerations. To address this challenge, researchers have proposed the integration of blockchain and federated learning to develop a system that facilitates the secure sharing of medical records. This work provides a succinct review of the current state of the art in the use of blockchain and federated learning in the decentralized healthcare domain. Keywords:federated learning, blockchain, healthcare ## 1 Introduction The growing collection of data has raised serious concerns regarding privacy, particularly for medical data, which can be obtained from healthcare providers and wearable devices. Such data encompasses a wide range of information, including treatments, drugs, prescriptions, tests, images, and vital signs. Nevertheless, medical data is often dispersed across multiple entities, and the availability of large-scale medical datasets is limited. For instance, researchers in [1] trained a machine learning model with only 303 records. To tackle these challenges, the combination of Federated Learning (FL) and Blockchain (BC) technology has been proposed to enable data sharing while preserving privacy. By means of federated learning, entities train a global machine learning model and then share the parameters of the model instead of raw data. Blockchain, generally, can be used to make the system decentralized, transparent, immutable, and self-controlled by using smart contracts. There exist reviews about using blockchain in the medical fields as given in [4] and [5]. The usage of federated learning for healthcare is discussed in [6]. To our knowledge, there is no review of the usage of both techniques in an integrated manner in the field of medical systems. The contribution of this paper is providing a review of state-of-the-art works utilizing both federated learning and blockchain in the medical field. In the rest of this extended abstract, we present some of the latest research that used both federated learning and blockchain in the medical field with a graphical representation of blockchain and federated learning-enabled systems. ## 2 State-of-the-art: Blockchain and Federated Learning Empowered Healthcare Solutions This section presents an overview of the representative recent studies that employed blockchain and federated learning in the healthcare domain. The type of used blockchain, the type of aggregation performed in federated learning, the proposed solution and the application types are summarized in Table 1. A graphical representation of the integration of blockchain and federated learning in healthcare systems is presented in Figure 1. The diagram displays entities, including hospitals, clinics, and wearable devices, as the primary sources of data. 
The federated learning component trains local models on local data, either through on-site processing or by sending the data to nodes with sufficient computational power. Model aggregation can be performed either through a centralized server or through a decentralized mechanism facilitated by a blockchain-based smart contract. Researchers in [7] utilized blockchain and federated learning to build a secure and provenance framework to be used in the Internet of Health Things (IoHT). They tested the built framework with COVID-19 detection scenarios. Federated learning has been used to train data locally instead of sharing it. Differential privacy was applied on top of the federated learning to increase privacy protection. The role of blockchain in their framework is managing the federated learning process in a decentralized way. Specifically, the blockchain had a role in aggregating the global model, handling clients' reputations and the quality of their contributions, and storing the local and global models in Interplanetary File System (IPFS). In addition, the blockchain in this solution allows users to track the decision path, i.e. the used model and dataset. Figure 1: A general representation of blockchain and federated learning-enabled systems Using federated learning has a problem when dealing with non-independent identically distributed (non-IID) data. The reason is that the data are different at each federated agent and there is a risk of training each local model using imbalanced data. Researchers in [8] implemented a two-step solution to tackle this problem. The first step is building a uniformly distributed subset of the data by receiving raw data from the participants. The composed dataset is then sent to the participant to include in their training along with their local dataset. This way, the problem of having imbalanced data is less. The blockchain here is used to share the composed dataset. The second step is having each participant training the machine learning model using their own private data along with the shared dataset in the first step. In [9], a blockchain and federated learning-based platform was built to preserve privacy and distribute the learning process between different clinics. The aggregation of local models has been done by centralized nodes using Secure Multiparty Computation (SMPC) technique. The nodes encrypt the aggregated model to increase privacy and send the encrypted model to the leader, which is the entity that initialized the platform. The leader decrypts the received models, does aggregation, then sends the global model to the Software Defined Networks (SDNs) if further training is needed. Simulation using a Breast Cancer Dataset has been done to test the proposed system. The work presented in [10] includes an approach to construct large medical datasets from diverse data sources. Blockchain technology is leveraged to regulate access to off-chain nodes. Additionally, federated learning is employed with modifications to train local models on locally stored data, thereby maintaining privacy through decentralized training. A blockchain and federated learning middleware that is responsible for deciding whether to give an entity access to the patients' data or not was built in [11]. 
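Before discussing [11] further, it is worth making the aggregation step described above concrete. The following is a minimal FedAvg-style sketch in which each data holder trains locally and a coordinator (a central server, or in the blockchain-based variants a smart contract acting on models stored off-chain, e.g. in IPFS) averages the parameters weighted by local sample counts. All function and variable names are our own illustration and do not correspond to any of the reviewed systems.

```
import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=5):
    """One client's local round: logistic regression trained by gradient descent."""
    w = global_weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w, len(y)

def aggregate(updates):
    """FedAvg: weight each client's parameters by its number of local samples."""
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

rng = np.random.default_rng(0)
d = 5
global_w = np.zeros(d)
# Three hypothetical data holders (e.g., hospitals) with private datasets.
clients = [(rng.normal(size=(n, d)), rng.integers(0, 2, size=n)) for n in (120, 80, 200)]

for _ in range(10):
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = aggregate(updates)  # in the blockchain variants, contract logic performs this step
print(global_w)
```

Only the parameter vectors (or references to them) leave the clients, which is the privacy argument made throughout this section; the blockchain then adds provenance, access control, and decentralized coordination on top of this loop.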
The role of federated learning in their proposed solution is predicting the risk level depending on the patient's condition and the information of the entity requesting access to the data. The trained AI model gives three different levels of risk: critical, serious, and stable. According to those levels, the decision whether or not to grant access is taken. Blockchain was used to make the system decentralized, i.e., replacing the centralized server for aggregating the global model. The global and local models are stored on the blockchain as well.

\begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline **Ref** & **Blockchain Type** & **Aggregation** & **Solution** & **Application Type** \\ \hline [7] & Permissioned & Decentralized & Blockchain-managed Federated Learning & Security and Provenance in IoHT \\ \hline [8] & Permissioned & Centralized & Blockchain-based two-stage platform & FL for Non-IID Data in IoMT \\ \hline [9] & Public & Centralized & Blockchain \& federated learning with encryption & Privacy-preserving clinician collaboration \\ \hline [10] & Public & Not Mentioned & Blockchain as computing architecture & Building large medical datasets \\ \hline [11] & Permissioned & Decentralized & Risk-based authorization middleware & Access control to IoMT data \\ \hline \end{tabular} \end{table} Table 1: Summary and comparison of the state-of-the-art solutions

## 3 Conclusion
Combining blockchain and federated learning has emerged as a promising solution for sharing medical data while preserving privacy. The decentralized nature of blockchain and the ability of federated learning to train models locally while sharing only model parameters make this approach well-suited for healthcare. The reviewed papers used both permissionless (public) and permissioned blockchains and aggregated local models either through a centralized server or through decentralized smart contracts. Blockchain played a crucial role in decentralizing the system, managing privacy, and authorizing access to required data. Remarkable results have been achieved by combining federated learning and blockchain techniques to classify data located at different nodes. According to [7], a success rate of 85% in classifying COVID-19 cases has been achieved using this approach. Furthermore, [8] tested their system with three different datasets and demonstrated that sharing a small amount of raw data while conducting federated learning can increase accuracy when dealing with non-IID data. In another study, [9] trained a deep learning model on the Breast Cancer Wisconsin dataset by distributing it over several nodes, achieving an F1-score of approximately 99%. Finally, researchers in [11] compared their platform with a centralized approach and demonstrated that training a model on small datasets dispersed across different nodes can achieve results similar to those of a centralized solution with a large dataset.

## 4 Acknowledgment
This work was supported in part by TUBITAK 2247-A Award 121C338.
2305.07813
Fast robust location and scatter estimation: a depth-based method
The minimum covariance determinant (MCD) estimator is ubiquitous in multivariate analysis, the critical step of which is to select a subset of a given size with the lowest sample covariance determinant. The concentration step (C-step) is a common tool for subset-seeking; however, it becomes computationally demanding for high-dimensional data. To alleviate the challenge, we propose a depth-based algorithm, termed as \texttt{FDB}, which replaces the optimal subset with the trimmed region induced by statistical depth. We show that the depth-based region is consistent with the MCD-based subset under a specific class of depth notions, for instance, the projection depth. With the two suggested depths, the \texttt{FDB} estimator is not only computationally more efficient but also reaches the same level of robustness as the MCD estimator. Extensive simulation studies are conducted to assess the empirical performance of our estimators. We also validate the computational efficiency and robustness of our estimators under several typical tasks such as principal component analysis, linear discriminant analysis, image denoise and outlier detection on real-life datasets. A R package \textit{FDB} and potential extensions are available in the Supplementary Materials.
Maoyu Zhang, Yan Song, Wenlin Dai
2023-05-13T01:54:32Z
http://arxiv.org/abs/2305.07813v1
# Fast robust location and scatter estimation: a depth-based method ###### Abstract The minimum covariance determinant (MCD) estimator is ubiquitous in multivariate analysis, the critical step of which is to select a subset of a given size with the lowest sample covariance determinant. The concentration step (C-step) is a common tool for subset-seeking; however, it becomes computationally demanding for high-dimensional data. To alleviate the challenge, we propose a depth-based algorithm, termed as FDB, which replaces the optimal subset with the trimmed region induced by statistical depth. We show that the depth-based region is consistent with the MCD-based subset under a specific class of depth notions, for instance, the projection depth. With the two suggested depths, the FDB estimator is not only computationally more efficient but also reaches the same level of robustness as the MCD estimator. Extensive simulation studies are conducted to assess the empirical performance of our estimators. We also validate the computational efficiency and robustness of our estimators under several typical tasks such as principal component analysis, linear discriminant analysis, image denoise and outlier detection on real-life datasets. A R package _FDB_ and potential extensions are available in the Supplementary Materials. _Keywords:_ Computationally efficient; High-dimensional data; Outliers; Robustness; Statistical depth. Introduction The Minimum Covariance Determinant (MCD) estimator (Rousseeuw, 1984) is among the first affine equivariant and highly robust estimators of multivariate location and scatter. Specifically, for a collection of multivariate data, MCD seeks a subset of samples that leads to a sample covariance matrix with the minimum determinant out of all the candidate sets of a specific size. The location and scatter estimators are then defined as the average and a scaled covariance matrix of these samples, respectively. Butler et al. (1993) and Cator and Lopuhaa (2012) established the consistency and asymptotic normality of the MCD estimator. MCD has been applied in various fields such as quality control, medicine, finance, image analysis, and chemistry (Hubert et al., 2008, 2018). Estimating the covariance matrix is the cornerstone of many multivariate statistical methods, so MCD has also been used to develop robust and computationally efficient multivariate techniques, such as principal component analysis (Croux and Haesbroeck, 2000; Hubert et al., 2005b), factor analysis (Pison et al., 2003), classification (Hubert and Van Driessen, 2004), clustering (Hardin and Rocke, 2004), multivariate regression (Rousseeuw et al., 2004b), and others (Hubert et al., 2008). To cater to its broad applications, extensive effort has been made to improve the computational efficiency of the approximation algorithm. For example, Rousseeuw and Driessen (1999) propose the first computationally efficient algorithm, termed FASTMCD; (Hubert et al., 2012) suggest an improved version of FASTMCD, termed DetMCD; De Ketalere et al. (2020) accelerates DetMCD by refinement of the calculation steps and parallel computation. Furthermore, Boudt et al. (2020) generalizes the MCD to high-dimensional cases as the minimum regularized covariance determinant (MRCD). 
Other variants include the orthogonalized Gnanadesikan-Kettenring (Maronna and Zamar, 2002), the minimum (regularized) weighted covariance determinant (Roelant et al., 2009; Kalina and Tichavsky, 2021), and kernel MRCD for non-elliptical data (Schreurs et al., 2021). Practically, the MCD-type algorithms are limited by two factors. First, the computational complexity of the concentration step (C-step), which is critical for such algorithms, is \(O(np^{2}+p^{3})\), and this severely limits the scalability of the algorithm for massive high-dimensional data. Second, the approximation to the true MCD subset becomes less accurate due to the curse of dimensionality. We note that the asymptotic trimmed region induced by a class of statistical depth shares the same form with the asymptotic MCD subset when the data are elliptically symmetric distributed (Butler et al., 1993; Zuo and Serfling, 2000b). This motivates us to investigate the possibility of finding the MCD subset directly by utilizing statistical depth. Statistical depth was first considered for ranking multivariate data from the center outward (Mahalanobis, 1936; Tukey, 1975; Oja, 1983; Liu, 1990; Zuo and Serfling, 2000a; Vardi and Zhang, 2000). Usually, a statistical depth is an increasing function of the centrality of observations, taking values in \([0,1]\). Motivated by the connection mentioned above, we propose a fast depth-based algorithm, denoted as FDB, which approximates the MCD subset with a depth-induced trimmed region. Specifically, we investigate FDB based on two representative depth notions, projection depth and \(L_{2}\) depth, and denote the estimators as, \(\texttt{FDB}_{\text{pro}}\) and \(\texttt{FDB}_{\text{L}_{2}}\), respectively. Four main advantages of the proposed algorithm are worth mentioning: 1) Asymptotically, \(\texttt{FDB}_{\text{pro}}\) leads to a trimmed region equivalent to the MCD subset for elliptically symmetric distributions. 2) Both \(\texttt{FDB}_{\text{pro}}\) and \(\texttt{FDB}_{\text{L}_{2}}\) achieve the same level of robustness as the MCD estimator. 3) Empirically, FDB reveals comparable or even better performance than the MCD estimator regarding estimation accuracy. 4) Furthermore, the computational efficiency is dramatically improved by using FDB, especially for high-dimensional cases. The rest of the paper is organized as follows. Section 2 reviews the MCD estimator and some related theoretical properties. Section 3 introduces the idea of FDB estimators and demonstrates the theoretical equivalence between the MCD subsets and the depth-trimmed regions. Section 4 investigates the invariance, robustness, and computational complexity of the proposed FDB estimators. In Section 5, we conduct extensive simulation studies to assess the performance of the FDB algorithm and compare it with the existing ones regarding estimation accuracy and computational efficiency. Section 6 applies the proposed methods to several real applications through typical multivariate analysis tasks, including principal component analysis, linear discriminant analysis, image denoise, and outlier detection. We end the paper with some discussion in Section 7. Proofs of the theoretical results and additional simulation results are provided in the Supplementary Material. ## 2 MCD Estimators In this section, we review the theoretical property of the MCD estimator as well as three widely utilized approximation algorithms. 
Let \(\mathbf{x}\in\mathbb{R}^{p}\) be a random variable from an elliptically symmetric distribution, denoted as \(\text{ES}(f;\mathbf{\mu},\mathbf{\Sigma})\), whose density is of the form \[g(x)=c|\mathbf{\Sigma}|^{-1/2}f\left((\mathbf{x}-\mathbf{\mu})^{T}\mathbf{\Sigma}^{-1}(\mathbf{x}-\mathbf{\mu})\right),\] where \(\mathbf{\Sigma}\) is a symmetric positive definite matrix, and the function \(f:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\) is assumed to be non-increasing so that \(g(x)\) is unimodal. Considering random samples \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\) independently generated from the above distribution, MCD aims to solve the following optimization problem \[\hat{H}_{\alpha_{n},\text{MCD}}=\underset{H\in\mathcal{H}_{h}}{\text{argmin}}\left(\det\mathbf{\hat{\Sigma}}(\mathbf{x}_{H})\right), \tag{1}\] where \(H\) is an index set of \(h\) observations (with \(\lfloor(n+p+1)/2\rfloor\leqslant h\leqslant n\), where \(\lfloor a\rfloor\) means the largest integer smaller than or equal to \(a\)), \(\alpha_{n}=h/n\), and \(\mathcal{H}_{h}\) is the collection of all such sets. Observations with indices in \(\hat{H}_{\alpha_{n},\text{MCD}}\) constitute the final MCD subset, termed \(\hat{E}_{\alpha_{n},\text{MCD}}\). Define \(\Delta(A,B)=A\cup B-A\cap B\) as the difference between sets \(A\) and \(B\). The convergence property of \(\hat{E}_{\alpha_{n},\text{MCD}}\) is revisited in Lemma 1, with the proof provided in Section S1.1 of the Supplementary Material.

**Lemma 1**: _Assume that the random samples \(\mathbf{x}_{i}\sim ES(f;\mathbf{\mu},\mathbf{\Sigma}),i=1,\ldots,n\). Then for \(\alpha>0\), we have_ \[\mathbb{P}\left(\Delta\left(\hat{E}_{\alpha_{n},\text{MCD}},E_{\alpha}\right)\right)\to 0\] _for any sequence \(\alpha_{n}\rightarrow\alpha\) as \(n\rightarrow\infty\), where \(E_{\alpha}=\left\{\mathbf{x}\in\mathbb{R}^{p}\mid(\mathbf{x}-\mathbf{\mu})^{T}\mathbf{\Sigma}^{-1}(\mathbf{x}-\mathbf{\mu})\leq r_{\alpha}^{2}\right\}\) with \(\mathbb{P}(E_{\alpha})=\alpha\)._

Given an \(n\times p\) data matrix \(\mathbf{X}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})^{\prime}\), with its estimated center \(\hat{\mathbf{\mu}}\) and scatter matrix \(\hat{\mathbf{\Sigma}}\), we denote by \(\mathcal{D}(\mathbf{x}_{i};\hat{\mathbf{\mu}},\hat{\mathbf{\Sigma}})=\sqrt{(\mathbf{x}_{i}-\hat{\mathbf{\mu}})^{T}\hat{\mathbf{\Sigma}}^{-1}(\mathbf{x}_{i}-\hat{\mathbf{\mu}})}\) the Mahalanobis distance of \(\mathbf{x}_{i}\). The C-step described in Algorithm 1 is crucial for MCD-type algorithms.

```
Require: initial subset \(H_{old}\) or the estimates (\(\hat{\mathbf{\mu}}_{\text{old}},\hat{\mathbf{\Sigma}}_{\text{old}}\)), subset size \(h\).
Ensure: \(H_{new}\) or (\(\hat{\mathbf{\mu}}_{\text{new}},\hat{\mathbf{\Sigma}}_{\text{new}}\))
1: Compute the distances \(d_{i,\text{old}}=\mathcal{D}\left(\mathbf{x}_{i};\hat{\mathbf{\mu}}_{\text{old}},\hat{\mathbf{\Sigma}}_{\text{old}}\right)\) for \(i=1,\ldots,n\).
2: Sort these distances, yielding a permutation \(\pi\) for which \(d_{\pi(1),\text{old}}\leqslant d_{\pi(2),\text{old}}\leqslant\ldots\leqslant d_{\pi(n),\text{old}}\), and set \(H_{new}=\{\pi(1),\pi(2),\ldots,\pi(h)\}\).
3: Update \(\hat{\mathbf{\mu}}\) and \(\hat{\mathbf{\Sigma}}\) as \[\hat{\mathbf{\mu}}_{\text{new}}=\frac{1}{h}\sum_{i\in H_{new}}\mathbf{x}_{i}\quad\text{and}\quad\hat{\mathbf{\Sigma}}_{\text{new}}=\frac{1}{h-1}\sum_{i\in H_{new}}\left(\mathbf{x}_{i}-\hat{\mathbf{\mu}}_{\text{new}}\right)\left(\mathbf{x}_{i}-\hat{\mathbf{\mu}}_{\text{new}}\right)^{T}.\]
```
**Algorithm 1** The C-step

Rousseeuw and Driessen (1999) proposed the first computationally feasible algorithm, termed FASTMCD, for approximating the MCD subset.
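To make Algorithm 1 concrete, a minimal numpy sketch of one C-step, together with a crude random-restart iteration of it, is given below. The function names are ours, and actual FASTMCD/DetMCD implementations add carefully chosen initial estimates and many further refinements.

```
import numpy as np

def c_step(X, subset, h):
    """One concentration step (Algorithm 1): estimate (mu, Sigma) on the current subset,
    then keep the h observations with the smallest Mahalanobis distances."""
    mu = X[subset].mean(axis=0)
    Sigma = np.cov(X[subset], rowvar=False)
    d2 = np.einsum('ij,jk,ik->i', X - mu, np.linalg.inv(Sigma), X - mu)
    return np.argsort(d2)[:h]

def crude_mcd(X, h, n_init=10, max_iter=50, seed=0):
    """Naive MCD approximation: random initial h-subsets, C-steps until the subset
    stabilizes, keep the solution with the smallest covariance determinant."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    best_det, best = np.inf, None
    for _ in range(n_init):
        subset = rng.choice(n, size=h, replace=False)
        for _ in range(max_iter):
            new_subset = c_step(X, subset, h)
            if set(new_subset) == set(subset):
                break
            subset = new_subset
        mu = X[subset].mean(axis=0)
        Sigma = np.cov(X[subset], rowvar=False)
        det = np.linalg.det(Sigma)
        if det < best_det:
            best_det, best = det, (subset, mu, Sigma)
    return best
```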
Specifically, they randomly constructed several initial subsets and applied two C-steps for each subset, yielding the ten subsets with the lowest determinant. Then, they took C-step iteratively for these ten subsets until the determinant sequence converged and eventually chose the MCD subset as the one leading to the smallest determinant. Given this, the computational efficiency of FASTMCD is thus roughly proportional to the number of the initial subsets. Hubert et al. (2012) proposed an alternative algorithm DetMCD, which replaces random initial subsets (of which there could be many) in FASTMCD with six well-designed deterministic estimators of \((\mathbf{\mu},\mathbf{\Sigma})\), and also involves the C-step in a similar way. Denote the estimates \(\left(\hat{\mathbf{\mu}},\hat{\mathbf{\Sigma}}\right)\) as the location and scatter matrix estimates of the \(h\)-subset for which the determinant of the sample covariance matrix is as small as possible. Further, an additional reweighted step is employed in both algorithms to improve the efficiency of the estimators. To be more specific, the estimators are renewed as trimmed estimates for location and scatter, \[\hat{\mathbf{\mu}}_{\text{re}}=\frac{1}{\sum_{i=1}^{n}W_{i}}\sum_{i=1}^{n}W_{i}\mathbf{ x}_{i}\quad\text{and}\quad\hat{\mathbf{\Sigma}}_{\text{re}}=\frac{1}{\sum_{i=1}^{n}W_{i }-1}\sum_{i=1}^{n}W_{i}\left(\mathbf{x}_{i}-\hat{\mathbf{\mu}}_{re}\right)\left(\mathbf{x}_ {i}-\hat{\mathbf{\mu}}_{re}\right)^{T}, \tag{2}\] where \(W_{i}=1\) when \(\mathcal{D}\left(\mathbf{x}_{i};\hat{\mathbf{\mu}},c_{0}\hat{\mathbf{\Sigma}}\right)\leq \sqrt{\chi_{p,0.975}^{2}}\) and \(0\) otherwise, \(\chi_{p,\alpha}^{2}\) is the \(\alpha\)-quantile of the \(\chi_{p}^{2}\) distribution and \(c_{0}=\text{med}_{i}\,\mathcal{D}^{2}\left(\mathbf{x}_{i},\hat{\mathbf{\mu}},\hat{\bm {\Sigma}}\right)/\chi_{p,0.5}^{2}\). ## 3 A Depth-based Alternative The idea behind the premier step of MCD-type algorithms is to construct outlier-free subsets as the initial values for the C-step. This motivates us to approach such a purpose using statistical depth, a popular tool for robust multivariate data analysis. More importantly, we find the equivalence between the eventual MCD subset and the depth-trimmed region, which avoids the implementation of the iterative C-steps and hence improves the computational efficiency dramatically. In general, a statistical depth notion is a function \(D:(\mathbf{x},P)\mapsto[0,1]\), for \(\mathbf{x}\in\mathbb{R}^{p}\) and \(P\) from some class \(\mathcal{P}\) of \(p\)-variate probability distributions, that provides a center-outward order for a collection of data. Taking into account the robustness as well as the computational efficiency, to be discussed later, we specifically consider the following two depth notions for the proposed method. **Projection depth** (Zuo and Serfling, 2000a): \[D_{\mathrm{Proj}}(\boldsymbol{x};P)=\left(1+\sup_{\|\boldsymbol{u}^{\prime}\|=1 }\frac{|\boldsymbol{u}^{\prime}\boldsymbol{x}-\mathrm{med}\left(\boldsymbol{u} ^{\prime}\boldsymbol{y}\right)|}{\mathrm{MAD}\left(\boldsymbol{u}^{\prime} \boldsymbol{y}\right)}\right)^{-1},\] where \(P\) is the distribution of \(\boldsymbol{y}\), \(\mathrm{med}(V)\) denotes the median of a univariate random variable \(V\), and \(\mathrm{MAD}(V)=\mathrm{med}(|V-\mathrm{med}(V)|)\) its median absolute deviation from the median. Practically, one may choose a finite number of random directions to approximate the projection depth values. 
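A minimal sketch of this random-direction approximation follows (our own illustrative implementation; the number of directions \(k\) plays the role discussed later in Section 4.3):

```
import numpy as np

def projection_depth(X, k=1000, seed=0):
    """Approximate projection depth of each row of X using k random unit directions:
    D(x) = 1 / (1 + max_u |u'x - med(u'X)| / MAD(u'X))."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    U = rng.normal(size=(k, p))
    U /= np.linalg.norm(U, axis=1, keepdims=True)     # random directions on the unit sphere
    proj = X @ U.T                                    # (n, k) projected samples
    med = np.median(proj, axis=0)
    mad = np.maximum(np.median(np.abs(proj - med), axis=0), 1e-12)
    out = np.max(np.abs(proj - med) / mad, axis=1)    # Stahel-Donoho outlyingness over directions
    return 1.0 / (1.0 + out)
```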
\(\mathbb{L}_{2}\)**depth** (Zuo and Serfling, 2000a): \[D_{\mathbb{L}_{2}}(\boldsymbol{x};P)=\left(1+E\left[\|\boldsymbol{y}- \boldsymbol{x}\|_{2}\right]\right)^{-1},\] where \(\|\cdot\|_{2}\) is the \(L^{2}\) norm. Consider random samples \(\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{n}\) independently generated from the elliptically symmetric (ES) distribution. For a given depth function \(D(\cdot;\mathrm{ES})\) and for \(\alpha>0\), we call \[E_{\alpha,\mathrm{depth}}=\left\{\boldsymbol{x}\in\mathbb{R}^{p}\mid D( \boldsymbol{x};\mathrm{ES})\geq D_{\alpha}\right\},\] and the sample version is \[\hat{E}_{\alpha_{n},\mathrm{depth}}=\left\{x\in\mathbb{R}^{p}\mid\hat{D}_{n}( \boldsymbol{x};\mathrm{ES})\geq D_{\alpha_{n}}\right\},\] the corresponding \(\alpha\)-trimmed region with \(\mathbb{P}(E_{\alpha,\mathrm{depth}})=\alpha\). **Lemma 2** (Zuo and Serfling (2000b)): _Assume that the random samples \(\boldsymbol{x}_{i}\sim\mathrm{ES}(f;\boldsymbol{\mu},\boldsymbol{\Sigma})\), \(i=1,\ldots,n\). Then for the projection depth, the depth trimmed region (subset) \(E_{\alpha_{n},\mathrm{depth}}\) sat _isfies_ \[\mathbb{P}\left(\Delta\left(\hat{E}_{\alpha_{n},\text{depth}},E_{\alpha}\right) \right)\to 0\text{ as }n\rightarrow\infty,\] _for any sequence \(\alpha_{n}\rightarrow\alpha>0\) as \(n\rightarrow\infty\)._ Other depth notions could also lead to the same conclusion except for the projection depth. We omit those notions since they are either not robust enough or computationally demanding. Also note that the result does not necessarily hold for the \(L_{2}\) depth, although \(\text{FDB}_{\text{L}_{2}}\) indeed provides satisfactory results in the simulation. Combining Lemmas 1 and 2, it is straightforward that the two subsets are asymptotic equivalent. We state this result formally in the following theorem. **Theorem 1**: _Assume that the random samples \(\mathbf{x}_{i}\sim\mathrm{ES}(f;\mathbf{\mu},\mathbf{\Sigma})\), \(i=1,\ldots,n\). Under the conditions of Lemmas 1 and 2, we have_ \[\mathbb{P}(\Delta(E_{\alpha_{n},\text{depth}},E_{\alpha_{n},\text{MCD}})) \to 0,\text{ as }n\rightarrow\infty.\] Proof of Theorem 1 is provided in Section S1.2 of the Supplementary Material. Motivated by the above result, we propose to approximate the eventual MCD subset with the depth-trimmed region and avoid the iterative implementation of the C-step. We provide two toy examples in Figure 1 to illustrate such a coincidence. Specifically, we generate data from bivariate normal distributions with unit variance and correlation coefficients 0 and 0.5 for Figure 1(a) and 1(b), respectively. We consider \(n=4000\) and \(h=3000\). In both cases, the two subsets match quite well, such that the proportions of the common elements are no less than 97%, which well supports the high effectiveness of the proposed method. For data of low dimensions, both MCD subsets and depth-based trimmed regions can be computed efficiently, and the result matches well, as shown in Figure 1. However, for high-dimensional data, the MCD algorithms are severely challenged by the cubically increased computational complexity, and hence the approximation will be less efficient. To alleviate this challenge, we consider replacing the MCD subsets with the trimmed regions induced by some computationally efficient depth notions. By doing so, we may not only reduce the computational time significantly but also attain comparable or better robust estimation for both the location and scatter matrix, especially in high-dimensional cases. 
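The \(L_{2}\) depth is even simpler to evaluate; a sketch of its sample version, whose pairwise-distance cost scales as \(O(n^{2}p)\), is:

```
import numpy as np

def l2_depth(X):
    """Sample L2 depth: D(x_i) = 1 / (1 + mean_j ||x_j - x_i||),
    with the average taken over all sample points (the point itself contributes zero)."""
    diff = X[:, None, :] - X[None, :, :]      # (n, n, p) pairwise differences
    dist = np.linalg.norm(diff, axis=2)       # (n, n) pairwise Euclidean distances
    return 1.0 / (1.0 + dist.mean(axis=1))
```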
In what follows, we introduce the FDB algorithm. ``` 0:\(\mathbf{x_{i}},i=1,\ldots,n\), subset size \(h\), selected depth notion. 0:\(\mathbf{\mu}_{\text{FDB}}\), \(\hat{\mathbf{\Sigma}}_{\text{FDB}}\). 1: Calculate the depth value for each observation \(\mathbf{x}_{i}\), denoted as \(\text{GD}(i)\). 2: Sort these values, yielding a permutation \(\pi\) of \(1,\ldots,n\), for which \(\text{GD}(\pi(1))\geq,\ldots,\geq\text{GD}(\pi(n))\), and a set \(H_{sub}=\{\pi(1),\ldots,\pi(h)\}\). 3: Get the location and the scatter matrix estimates as \[\hat{\mathbf{\mu}}_{\text{raw}}=\frac{1}{h}\sum_{i\in H_{sub}}\mathbf{x}_{i}\quad \text{and}\quad\hat{\mathbf{\Sigma}}_{\text{raw}}=\frac{c_{1}}{h}\sum_{i\in H_{sub }}\left(\mathbf{x}_{i}-\mathbf{\mu}_{\text{raw}}\right)\left(\mathbf{x}_{i}-\mathbf{\mu}_{ \text{raw}}\right)^{T},\] where \(c_{1}=\underset{i}{\text{med}}\mathcal{D}^{2}\left(\mathbf{x}_{i},\hat{\mathbf{\mu}}_ {\text{raw}},\hat{\mathbf{\Sigma}}_{0}\right)/\chi_{p,0.5}^{2}\) with \(\hat{\mathbf{\Sigma}}_{0}=\frac{1}{h}\sum_{i\in H_{sub}}\left(\mathbf{x}_{i}-\mathbf{\mu}_ {\text{raw}}\right)\left(\mathbf{x}_{i}-\mathbf{\mu}_{\text{raw}}\right)^{T}\). 4: Apply the reweighted step (2) to the raw estimates, yielding the final FDB estimates, \(\hat{\mathbf{\mu}}_{\text{FDB}}\) and \(\hat{\mathbf{\Sigma}}_{\text{FDB}}\). Figure 1: The subsets induced by the projection depth and DetMCD. The black points represent the samples that are outside both subsets; blue points indicate samples in DetMCD subset only; red points indicate samples in depth-based subset only; purple points are their intersection. Plots (a) and (b) are for independent and correlated data, respectively. Algorithm 2 considers the case of \(h>p\), which is the condition to guarantee the invertibility of estimated matrix (Rousseeuw and Van Zomeren, 1990). All algorithms for the original MCD require that the dimension \(p\) be lower than \(h\) to obtain an invertible covariance matrix. It is recommended that \(n>5p\) in practice (Rousseeuw and Driessen, 1999). According to Lemmas 1 and 2, for data from an elliptically symmetric distribution, MCD and FDB algorithms both approximate the optimal subset, though from different perspectives. MCD approaches the solution by combining well selected (or random) initial subset and the iterative implementation of the C-step, which could be computationally demanding for high dimensional data. In contrast, FDB relies on ordering the data from the center outward, and hence its computational complexity is mainly determined by the cost of assigning depth values to each sample. The idea of incorporating depth (outlyingness) to construct MCD estimators has been considered in the literature. For example, the Stahel-Donoho outlyingness (Donoho, 1982), equivalent to the projection depth, is applied to determine an \(h\)-subset consisting of the \(h\) points with the lowest outlyingness, and the corresponding sample mean and covariance matrix are used as one initial value for the C-step (Hubert et al., 2005; Schreurs et al., 2021). Debruyne and Hubert (2009) studied the influence function and asymptotic relative efficiency of the estimators obtained directly based on such a subset (without the reweighted step). For the first time, we establish the equivalence of the two subsets, which indicates that the depth-based subset is a reasonable approximation to the optimal subset rather than just one option of the initial value for the C-step. 
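Putting the pieces together, Algorithm 2 followed by the reweighting step (2) can be sketched as below. This is illustrative only: the helper names are ours, the median-based consistency factors are taken over all observations (one plausible reading of the text), and the released R package FDB should be preferred in practice.

```
import numpy as np
from scipy.stats import chi2

def mahal2(X, mu, Sigma):
    """Squared Mahalanobis distances of the rows of X."""
    Xi = X - mu
    return np.einsum('ij,jk,ik->i', Xi, np.linalg.inv(Sigma), Xi)

def fdb_estimate(X, h, depth_fn):
    """FDB sketch: keep the h deepest observations (Algorithm 2), form raw estimates
    with a median-based consistency factor, then apply the reweighting step (2)."""
    n, p = X.shape
    keep = np.argsort(depth_fn(X))[::-1][:h]            # indices of the h deepest points
    mu_raw = X[keep].mean(axis=0)
    S0 = (X[keep] - mu_raw).T @ (X[keep] - mu_raw) / h
    c1 = np.median(mahal2(X, mu_raw, S0)) / chi2.ppf(0.5, p)
    Sigma_raw = c1 * S0
    # Reweighting: c0 is close to 1 here because Sigma_raw is already median-calibrated.
    c0 = np.median(mahal2(X, mu_raw, Sigma_raw)) / chi2.ppf(0.5, p)
    W = mahal2(X, mu_raw, c0 * Sigma_raw) <= chi2.ppf(0.975, p)
    mu_re = X[W].mean(axis=0)
    Sigma_re = (X[W] - mu_re).T @ (X[W] - mu_re) / (W.sum() - 1)
    return mu_re, Sigma_re
```

With the depth sketches above, `fdb_estimate(X, h, projection_depth)` or `fdb_estimate(X, h, l2_depth)` gives the two variants discussed in the text.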
Properties of FDB This section focuses on the properties of the FDB estimators. Specifically, We discuss three types of properties, that are of main interest for such methods (Maronna and Zamar, 2002; Hubert et al., 2012), invariance, robustness and computational complexity. We show that the proposed estimators are quite satisfactory in these aspects. ### Invariant Properties **Affine equivariance** makes the analysis independent of any affine transformation of the data. For any nonsingular \(p\times p\) matrix \(\mathbf{A}\) and \(p\times 1\) vector \(\mathbf{v}\), the estimators \(\hat{\mathbf{\mu}}\) and \(\hat{\mathbf{\Sigma}}\) are affine equivariant if they satisfy \[\hat{\mathbf{\mu}}\left(\mathbf{X}\mathbf{A}+\mathbf{1}_{n}\mathbf{v}^{T}\right)=\hat{\mathbf{\mu}}( \mathbf{X})\mathbf{A}+\mathbf{v}\text{ and }\hat{\mathbf{\Sigma}}\left(\mathbf{X}\mathbf{A}+\mathbf{1}_{n}\mathbf{v}^{T}\right)=\mathbf{A}^{T} \hat{\mathbf{\Sigma}}(\mathbf{X})\mathbf{A},\] where \(\mathbf{1}_{n}=(1,1,\ldots,1)^{T}\). The projection depth has been shown affine equivariant (Zuo and Serfling, 2000a; Zuo, 2006), that is the depth value does not vary through affine transformation for any sample, and hence the indexes of samples forming the trimmed region remain the same. Consequently, FDB\({}_{\text{pro}}\) is obviously affine equivariant. For FDB\({}_{\text{L}_{2}}\), a similar property holds for rigid transformation (Mosler and Mozharovskyi, 2022), which is a bit more restrictive than the affine transformation. For high dimensional situations, the affine equivariance may be less important under nonstandard data contamination such as componentwise outliers (Alqallaf et al., 2009). **Permutation invariance** provides an effective way to guaranteeing the robustness of analysis to the perturbation of observations. An estimator \(T(\cdot)\) is said to be permutation invariant if \(T(\mathbf{P}\mathbf{X})=T(\mathbf{X})\) for any permutation matrix \(\mathbf{P}\). Permutation invariance holds for both projection depth and \(L_{2}\) depth since they do not involve any random subsets, and hence the depth values remain the same through permutation and so will the \(h\)-subset. ### Robustness Robustness is the property of main interest when outlier contamination of the data is suspected. As aforementioned, the MCD estimator is highly robust that it achieves the highest possible asymptotic breakdown point, about \(1/2\), with \(h\approx n/2\). The robustness of FDB is determined by the property of the employed depth notion. According to Zuo (2006) and Lopuhaa and Rousseeuw (1991), the breakdown points of the trimmed regions induced by the projection depth and \(L_{2}\) depth are both \(1/2\), with \(\alpha\approx 0.5\). That is, \(\texttt{FDB}_{\text{pro}}\) and \(\texttt{FDB}_{\text{L}_{2}}\) both have a breakdown point as high as that of the MCD estimator. Another indicator is the influence function, which captures the local robustness of estimators. Zuo (2006) and Niinimaa and Oja (1995) showed the influence functions of depth regions induced by projection depth and \(L_{2}\) depth are both bounded. Therefore, \(\texttt{FDB}_{\text{pro}}\) and \(\texttt{FDB}_{\text{L}_{2}}\) are highly robust locally as well as globally. The robustness discussed above is from the theoretical aspect. Practically, both FDB and MCD approximate the theoretically optimal subsets, and their empirical performances do not necessarily match the theoretical results under all the scenarios. 
This means that the subset selected by the two methods under finite samples may still contain outliers. To show this point, we provide an example in Figure 2, where we generate \(n=4000\) samples from a 40-dimensional normal distribution with standard normal marginal distributions and a correlation coefficient of 0.5. We consider two levels of contamination, \(10\%\) and \(40\%\), and two types of outliers, point and cluster (see for details in Section 5.1), respectively. For the first column with \(10\%\) outliers, we set \(h=\lfloor 75\%n\rfloor\); for the second column with \(40\%\) outliers, \(h=\lfloor 50\%n\rfloor\). \(\texttt{FDB}_{\text{pro}}\) performs perfectly for the first three cases and fails for the last scenario; in contrast, DetMCD is only satisfactory for the first case but fails for the rest. This indicates that for high-dimensional data, \(\texttt{FDB}_{\text{pro}}\) provides more reliable approximations to the optimal subset. ### Computational complexity The computational complexity for finding the \(h\)-subset by MCDs is \(O(\psi(np^{2}+p^{3}))\). Specifically, for each C-step, it requires computing the covariance matrix and Mahalanobis distances, with complexities \(O(np^{2})\) and \(O(p^{3}+np^{2})\) respectively, and \(\psi\) depends on the number of initial estimates and the times of C-step iteration. For FASTMCD, the number of initial estimates defaults to 500; for DetMCD, it defaults to 6; and for RT-DetMCD, it is further reduced to 2. However, these efforts only reduced the value of \(\phi\) but the order term still remains the same. Figure 2: The subsets induced by \(\mathsf{FDB}_{\text{pro}}\) and DetMCD. Dots denote normal sample and crosses denote outliers. Blue ones form the intersection of two subsets; orange (purple) ones indicate the samples unique to the \(\mathsf{FDB}_{\text{pro}}\) (DetMCD) subset; blue ones are samples dropped by both subsets. In contrast, the computational complexity of \(\mathsf{FDB}_{\text{pro}}\) for finding the subset is \(O(knp)\) with \(k\) as the number of projection directions. According to our numerical experiments, The performance of \(\mathsf{FDB}_{\text{pro}}\) is quite stable when the number of random directions is set around 1000; see Figure S2 of the Supplement. Hence, \(\mathsf{FDB}_{\text{pro}}\) leads to a significant improvement over the MCD estimators. We remark that it is possible to further reduce the number of projection directions according to some elaborate generative algorithms (Dyckerhoff et al., 2021). However, these algorithms may instead lengthen the total computational time due to the tedious procedure for searching "better" directions, and hence we stick to selecting the directions randomly. For the case of ultra-high dimensional data, we suggest an adaptive rule, \(k=\max(1000,10p)\). As for \(\mathsf{FDB}_{\text{L}_{2}}\), the computational complexity is \(O(n^{2}p)\), which scales linearly with the dimension of data. To show the improvement, we provide some numerical results for computation time in Figure 3. Notably, the speed of DetMCD is also influenced by the way of constructing the initial estimates. Specifically, \(Q_{n}\)(Rousseeuw and Croux, 1993) is applied to construct initial estimates for DetMCD, which is computationally demanding. To improve speed, Hubert et al. (2012) suggested substituting \(Q_{n}\) with the \(\tau\)-scale of Yohai and Zamar (1988) Figure 3: The average computation time (seconds) for different settings over 20 replicates. 
To show the improvement, we provide some numerical results on computation time in Figure 3. Notably, the speed of DetMCD is also influenced by the way its initial estimates are constructed. Specifically, \(Q_{n}\) (Rousseeuw and Croux, 1993) is applied to construct initial estimates for DetMCD, which is computationally demanding. To improve speed, Hubert et al. (2012) suggested substituting \(Q_{n}\) with the \(\tau\)-scale of Yohai and Zamar (1988) when \(n>1000\).

Figure 3: The average computation time (seconds) for different settings over 20 replicates. (a): \(\log(t)\) versus \(\log(p)\) with \(n=1000\); (b): \(t\) versus \(n\) with \(p=200\).

We follow this suggestion by using the \(Q_{n}\) estimator for DetMCD in Figure 3(a), and the \(\tau\)-estimator in Figure 3(b), respectively. For \(\mathsf{FDB}_{\text{pro}}\), we let \(k=1000\). All experiments are run using the R-package ddalpha for DetMCD on an Intel(R) Xeon(R) processor at 3.10GHz with 192 GB of memory. FDBs show significant improvement over DetMCD under all the settings. Specifically, in Figure 3(a), the line of DetMCD is steeper than those of the other two methods, which matches well with the different orders of the dimension \(p\) in their theoretical computational complexities discussed above. In Figure 3(b), both DetMCD and \(\mathsf{FDB}_{\text{pro}}\) reveal linear trends with increasing sample size, while \(\mathsf{FDB}_{\text{L}_{2}}\) shows a quadratic trend, though its computation time is the smallest when \(n<4000\).

## 5 Simulations

We conduct extensive simulations with data from symmetric distributions to assess the performance of our proposed algorithms, \(\mathsf{FDB}_{\text{pro}}\) and \(\mathsf{FDB}_{L_{2}}\), and make a comparison with DetMCD (Hubert et al., 2012). Besides, we also provide some exploration of the scenarios of asymmetric distributions in Section S3.1 of the Supplement. To evaluate the estimation results, we use the following five measures (the smaller the better).

* An error measure of the location estimator, given by \(e_{\mu}=\|\hat{\mathbf{\mu}}-\mathbf{\mu_{0}}\|\), where \(\mathbf{\mu_{0}}\) denotes the true location vector.
* An error measure of the scatter estimator, defined as the logarithm of the condition number of \(\hat{\mathbf{\Sigma}}\mathbf{\Sigma}^{-1}\), \(e_{\Sigma}=\log_{10}(\text{cond}(\hat{\mathbf{\Sigma}}\mathbf{\Sigma}^{-1}))\).
* The mean squared error (MSE) of \(\mathbf{\Sigma}\), \(\text{MSE}=\frac{1}{Sp^{2}}\sum_{s=1}^{S}||\hat{\mathbf{\Sigma}}-\mathbf{\Sigma}||_{F}^{2}\), where \(S\) is the number of replications.
* The Kullback-Leibler (KL) divergence between \(\hat{\mathbf{\Sigma}}\) and \(\mathbf{\Sigma}\), \(\text{KL}\left(\hat{\mathbf{\Sigma}},\mathbf{\Sigma}\right)=\text{trace}\left(\hat{\mathbf{\Sigma}}\mathbf{\Sigma}^{-1}\right)-\log\left(\det\left(\hat{\mathbf{\Sigma}}\mathbf{\Sigma}^{-1}\right)\right)-p\), which is identical to the KL divergence between the two Gaussian distributions with the same mean.
* The computation time \(t\) (in seconds) of the whole procedure, including the optimal subset pursuit and the reweighting step.
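The first four of these measures can be computed for a single replicate as in the minimal sketch below; averaging the per-replicate values over the \(S\) runs gives the numbers reported in the tables. The function name and argument order are illustrative.

```python
import numpy as np

def evaluation_measures(mu_hat, Sigma_hat, mu0, Sigma0):
    """Per-replicate e_mu, e_Sigma, MSE contribution, and KL divergence."""
    p = len(mu0)
    M = Sigma_hat @ np.linalg.inv(Sigma0)                     # \hat{Sigma} Sigma^{-1}
    e_mu = np.linalg.norm(mu_hat - mu0)                       # location error
    e_Sigma = np.log10(np.linalg.cond(M))                     # log10 condition number
    mse = np.linalg.norm(Sigma_hat - Sigma0, 'fro') ** 2 / p ** 2
    kl = np.trace(M) - np.log(np.linalg.det(M)) - p           # Gaussian KL (equal means)
    return e_mu, e_Sigma, mse, kl
```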
### Estimation performance

In this subsection, we generate the bulk of non-outlying samples as \(\mathbf{x}_{i}=\mathbf{G}\mathbf{y}_{i}\), where \(\mathbf{y}_{i}\) are from \(N_{p}(\mathbf{0},\mathbf{I})\) and \(\mathbf{G}\) is a \(p\times p\) matrix with unit diagonal elements and off-diagonal elements equal to \(0.75\). The number of outliers is \(m=\lfloor n\varepsilon\rfloor\), where \(\varepsilon\) denotes the level of contamination. Four contamination types are considered: point, random, cluster, and radial outliers. **Point outliers** are obtained by generating \(\mathbf{y}_{i}\sim N_{p}\left(r\mathbf{a}\sqrt{p},0.01^{2}\mathbf{I}\right)\), where \(\mathbf{a}\) is a unit vector generated orthogonal to \(\mathbf{a_{0}}=(1,1,\ldots,1)^{T}\). **Random outliers** are obtained by generating \(\mathbf{y}_{i}\sim N_{p}\left(\mathbf{\mu}_{ir},\mathbf{I}\right)\), where \(\mathbf{\mu}_{ir}=rp^{1/4}\mathbf{\nu}/\|\mathbf{\nu}\|\) with \(\mathbf{\nu}\) a random vector from \(N_{p}(\mathbf{0},\mathbf{I})\). **Cluster outliers** are obtained by generating \(\mathbf{y}_{i}\sim N_{p}\left(rp^{-1/4}\mathbf{a_{0}},\mathbf{I}\right)\), where \(r\) is a constant. **Radial outliers** are obtained by generating \(\mathbf{y}_{i}\sim N_{p}(\mathbf{0},5\mathbf{I})\). Except for the random outliers, the other three types have been considered by Hubert et al. (2012).

Different contamination levels are considered, namely \(\varepsilon=0\%,10\%,\) and \(40\%\). Let \(h\) be \(\lfloor 0.75n\rfloor\) when \(\varepsilon=0\%\) or \(10\%\), and \(\lfloor 0.5n\rfloor\) when \(\varepsilon=40\%\), for each method under investigation. The number of directions for the projection depth is set as \(k=\max(1000,10p)\), as suggested in Section 4.3. For each method, we compute the reweighted location vectors \(\hat{\mathbf{\mu}}_{\mathbf{X}}\) and the reweighted scatter matrices \(\hat{\Sigma}_{\mathbf{X}}\). The corresponding estimators for the data set \(\mathbf{Y}\) are obtained as \(\hat{\mathbf{\mu}}_{\mathbf{Y}}=\mathbf{G}^{-1}\hat{\mathbf{\mu}}_{\mathbf{X}}\) and \(\hat{\Sigma}_{\mathbf{Y}}=\mathbf{G}^{-1}\hat{\Sigma}_{\mathbf{X}}\mathbf{G}^{-1}\), which are compared to the true values using the aforementioned measures. In this part, the true covariance matrix of \(\mathbf{Y}\) is \(\mathbf{\Sigma}=\mathbf{I}_{p}\).

We first provide a full picture of the performance of each method by conducting simulation studies under a broad range of settings. To be more specific, we generated 1000 data sets with different types of contamination under a broad range of \(r\) and \(p\), and computed the average \(e_{\Sigma}\) for the three methods. The results are illustrated in Figures 4 and 5. The other measures, \(e_{\mu}\), MSE, and KL divergence, all reveal patterns similar to \(e_{\Sigma}\) and hence are omitted.

Figure 4: The average \(e_{\Sigma}\) for different levels of abnormality (\(r\)), with \(n=400\) and \(p=40\); the first row is for \(\varepsilon=0.1\) and \(\alpha=0.75\), and the second row is for \(\varepsilon=0.4\) and \(\alpha=0.5\).

Figure 5: The average \(e_{\Sigma}\) for different \(p\), with \(n=2000\); the first row is for \(\varepsilon=0.1\) and \(\alpha=0.75\); the second row is for \(\varepsilon=0.4\) and \(\alpha=0.5\); the first two columns are for \(r=2\), and the last two columns are for \(r=20\), respectively. The case of random contamination is omitted since all methods perform similarly.

As shown in Figure 4, \(\texttt{FDB}_{\text{pro}}\) and \(\texttt{FDB}_{\text{L}_{2}}\) outperform DetMCD across all values of \(r\) for both random and cluster outliers, under either low or high contamination levels. For the case of point contamination, \(\texttt{FDB}_{\text{pro}}\) still holds the upper hand, with \(\texttt{FDB}_{\text{L}_{2}}\) slightly worse than DetMCD, especially for small values of \(r\). Figure 5 shows that the FDBs are among the best estimators for all the settings. However, in general, DetMCD's performance deviates more seriously as the dimension \(p\) increases, suggesting that it is less suitable for high-dimensional data. Overall, \(\texttt{FDB}_{\text{pro}}\) provides the best performance among the three options. Both \(\texttt{FDB}_{\text{L}_{2}}\) and DetMCD produce large \(e_{\Sigma}\) under point contamination since their induced subsets may contain outliers, which is consistent with the results in Figure 2. Between \(\texttt{FDB}_{\text{L}_{2}}\) and DetMCD, the latter is better for point contamination while the former is better (or even the best) for cluster contamination.
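The four contamination mechanisms defined at the beginning of this subsection can be reproduced with a short generator like the one below; the seeding, the orthogonalization used to build \(\mathbf{a}\), and the placement of outliers in the last rows are illustrative choices. The generated matrix would then be mapped through \(\mathbf{G}\) as \(\mathbf{x}_{i}=\mathbf{G}\mathbf{y}_{i}\).

```python
import numpy as np

def contaminate(n, p, eps, kind, r, rng=None):
    """Generate y_1..y_n from N_p(0, I) and replace the last floor(n*eps)
    rows with point / random / cluster / radial outliers (Section 5.1)."""
    rng = np.random.default_rng(rng)
    Y = rng.standard_normal((n, p))
    m = int(np.floor(n * eps))
    if m == 0:
        return Y
    if kind == 'point':                                  # N_p(r * sqrt(p) * a, 0.01^2 I)
        a0 = np.ones(p)
        a = rng.standard_normal(p)
        a -= (a @ a0) / (a0 @ a0) * a0                   # make a orthogonal to a0
        a /= np.linalg.norm(a)
        Y[-m:] = r * np.sqrt(p) * a + 0.01 * rng.standard_normal((m, p))
    elif kind == 'random':                               # N_p(mu_ir, I), mu_ir on a sphere
        nu = rng.standard_normal((m, p))
        mu = r * p ** 0.25 * nu / np.linalg.norm(nu, axis=1, keepdims=True)
        Y[-m:] = mu + rng.standard_normal((m, p))
    elif kind == 'cluster':                              # N_p(r * p^{-1/4} * a0, I)
        Y[-m:] = r * p ** -0.25 * np.ones(p) + rng.standard_normal((m, p))
    elif kind == 'radial':                               # N_p(0, 5 I)
        Y[-m:] = np.sqrt(5.0) * rng.standard_normal((m, p))
    return Y
```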
Next, we provide more detailed numerical outputs for typical simulation settings. Following Hubert et al. (2012), we consider three options, A: \(n=200\) and \(p=5\), B: \(n=400\) and \(p=40\), and C: \(n=2000\) and \(p=200\), representing low, moderate, and high dimensions, respectively. Other settings remain the same as those in Figure 4, except that \(r\) is fixed at 5 for point, random, and cluster outliers. We report the average measures over 1000 runs in Tables 1, 2 and 3, corresponding to 0% (clean data), 10% and 40% contamination, respectively.

For clean data (Table 1), the three estimators are comparable for low-dimensional cases; FDBs achieve slightly smaller values of \(e_{\mu}\), \(e_{\Sigma}\) and KL when the data dimension is moderate or high. More importantly, the running time is reduced relative to DetMCD in all settings, with the relative computational efficiency, defined as \(t_{\text{MCD}}/t_{\text{FDB}}\), ranging between 2 and 10 for \(\mathsf{FDB}_{\text{pro}}\), and between 3 and 27 for \(\mathsf{FDB}_{\text{L}_{2}}\). A similar comparison of computation time holds when contamination is present, and hence it is omitted in the remaining tables.

When the amount of contamination is 10% (Table 2), the performances of the three methods are all satisfactory under settings with random, cluster or radial contamination, and the comparison is similar to that for clean data. For point contamination, \(\mathsf{FDB}_{\text{L}_{2}}\) performs the worst in each of the three settings, in that it generates the largest values for the four measures of estimation accuracy; DetMCD becomes problematic for moderate- and high-dimensional data (settings B and C), which indicates its deficiency in such cases; in contrast, \(\mathsf{FDB}_{\text{pro}}\) remains robust under all three settings and provides values of the measures quite close to the corresponding ones for the clean data in Table 1.

When the amount of contamination increases to 40% (Table 3), the three methods all work well and are comparable under settings with random or radial contamination. However, none of them is satisfactory when the contamination takes the form of point outliers; this weak performance was also observed for both DetMCD and FASTMCD under similar settings in Hubert et al. (2012).
For cluster contamination, DetMCD leads to larger estimation errors in each cell, especially when the dimension of data is moderate or \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \cline{3-8} \multicolumn{1}{c}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\(e_{\mu}\)} & \multicolumn{1}{c|}{\(e_{\Sigma}\)} & \multicolumn{1}{c|}{MSE} & \multicolumn{1}{c|}{KL} & \multicolumn{1}{c|}{t} \\ \hline & DetMCD & 0.158 (0.050) & 0.238 (0.051) & 0.008 (0.003) & 0.119 (0.046) & 0.022 \\ A & \(\mathsf{FDB}_{\text{L}_{2}}\) & 0.157 (0.051) & 0.245 (0.054) & 0.008 (0.003) & 0.122 (0.049) & 0.003 \\ & \(\mathsf{FDB}_{\text{pro}}\) & 0.157 (0.050) & 0.232 (0.049) & 0.007 (0.003) & 0.112 (0.044) & 0.011 \\ \hline & DetMCD & 0.344 (0.040) & 0.615 (0.029) & 0.003 (0.000) & 2.771 (0.179) & 0.388 \\ B & \(\mathsf{FDB}_{\text{L}_{2}}\) & 0.329 (0.038) & 0.571 (0.025) & 0.003 (0.000) & 2.404 (0.135) & 0.014 \\ & \(\mathsf{FDB}_{\text{pro}}\) & 0.331 (0.038) & 0.573 (0.025) & 0.003 (0.000) & 2.435 (0.137) & 0.039 \\ \hline & DetMCD & 0.355 (0.016) & 0.656 (0.011) & 6e-4 (0.000) & 14.202 (0.249) & 3.876 \\ C & \(\mathsf{FDB}_{\text{L}_{2}}\) & 0.331 (0.016) & 0.596 (0.008) & 5e-4 (0.000) & 11.796 (0.166) & 1.043 \\ & \(\mathsf{FDB}_{\text{pro}}\) & 0.334 (0.016) & 0.598 (0.008) & 5e-4 (0.000) & 11.876 (0.177) & 1.109 \\ \hline \end{tabular} \end{table} Table 1: Simulation results for clean data. high; however, both \(\mathsf{FDB}_{\mathsf{L}_{2}}\) and \(\mathsf{FDB}_{\text{pro}}\) instead remains very stable across all three settings. Additional results for \(r=2\) and \(20\) are provided in Tables S1-S4 of the Supplementary Material, from which we may draw similar conclusions for the comparison of the three methods. \begin{table} \begin{tabular}{|c|c|c c c|c c c|c c c|c c c|} \cline{2-13} \multicolumn{1}{c|}{} & \multicolumn{3}{|c|}{Point} & \multicolumn{3}{|c|}{Random} & \multicolumn{3}{|c|}{Cluster} & \multicolumn{3}{|c|}{Radial} \\ \cline{2-13} \multicolumn{1}{c|}{} & \multicolumn{2}{|c}{Det} & \(\mathsf{FDB}_{\mathsf{L}_{2}}\) & \(\mathsf{FDB}_{\text{pro}}\) & \multicolumn{1}{c|}{Det} & \(\mathsf{FDB}_{\mathsf{L}_{2}}\) & \(\mathsf{FDB}_{\text{pro}}\) & \multicolumn{1}{c|}{Det} & \(\mathsf{FDB}_{\mathsf{L}_{2}}\) & \(\mathsf{FDB}_{\text{pro}}\) & \multicolumn{1}{c|}{Det} & \(\mathsf{FDB}_{\mathsf{L}_{2}}\) & \(\mathsf{FDB}_{\text{pro}}\) \\ \hline \multirow{4}{*}{A} & \multirow{2}{*}{\(e_{\mu}\)} & 0.166 & 1.188 & 0.165 & 0.164 & 0.166 & 0.164 & 0.163 & 0.163 & 0.163 & 0.165 & 0.171 & 0.163 \\ & & (0.054) & (0.071) & (0.054) & (0.054) & (0.056) & (0.054) & (0.053) & (0.053) & (0.053) & (0.051) & (0.053) & (0.051) \\ \cline{2-13} & \multirow{2}{*}{\(e_{\Sigma}\)} & 0.236 & 1.333 & 0.236 & 0.228 & 0.236 & 0.230 & 0.228 & 0.227 & 0.229 & 0.233 & 0.251 & 0.233 \\ & & (0.052) & (0.070) & (0.052) & (0.045) & (0.047) & (0.046) & (0.045) & (0.045) & (0.045) & (0.049) & (0.054) & (0.048) \\ \cline{2-13} & \multirow{2}{*}{MSE} & 0.008 & 5.526 & 0.008 & 0.008 & 0.008 & 0.008 & 0.007 & 0.007 & 0.007 & 0.008 & 0.009 & 0.008 \\ & & (0.003) & (0.268) & (0.003) & (0.003) & (0.003) & (0.003) & (0.003) & (0.003) & (0.003) & (0.003) & (0.003) & (0.003) & (0.003) \\ \cline{2-13} & \multirow{2}{*}{KL} & 0.107 & 9.375 & 0.112 & 0.101 & 0.112 & 0.105 & 0.101 & 0.101 & 0.102 & 0.103 & 0.117 & 0.104 \\ & & (0.041) & (0.314) & (0.043) & (0.038) & (0.042) & (0.039) & (0.038) & (0.038) & (0.038) & (0.038) & (0.046) & (0.037) \\ \hline \multirow{4}{*}{B} & \multirow{2}{*}{\(e_{\mu}\)} & 2.403 & 3.456 & 0.339 & 0.347 & 0.339 & 0.340 & 
0.349 & 0.340 & 0.341 & 0.347 & 0.337 & 0.338 \\ & & (1.597) & (0.67) & (0.037) & (0.040) & (0.038) & (0.039) & (0.037) & (0.036) & (0.038) & (0.039) & (0.037) & (0.037) \\ \cline{2-13} & \multirow{2}{*}{\(e_{\Sigma}\)} & 1.787 & 2.420 & 0.591 & 0.622 & 0.595 & 0.596 & 0.616 & 0.588 & 0.592 & 0.616 & 0.593 & 0.594 \\ & & (0.911) & (0.028) & (0.026) & (0.031) & (0.026) & (0.027) & (0.026) & (0.025) & (0.024) & (0.027) & (0.026) & (0.026) \\ \cline{2-13} & \multirow{2}{*}{MSE} & 4.049 & 5.912 & 0.003 & 0.003 & 0.003 & 0.003 & 0.003 & 0.003 & 0.003 & 0.003 & 0.003 & 0.003 \\ & & (3.145) & (0.147) & (0.000) & (0.000) & (0.000) & (0.000) & (0.000) & (0.000) & (0.000) & (0.000) & (0.000) & (0.000) & (0.000) \\ \cline{2-13} & \multirow{2}{*}{KL} & 64.051 & 95.728 & 2.591 & 2.789 & 2.591 & 2.597 & 2.781 & 2.558 & 2.591 & 2.801 & 2.574 & 2.587 \\ & & (47.593) & (1.304) & (0.131) & (0.172) & (0.141) & (0.138) & (0.168) & (0.144) & (0.140) & (0.162) & (0.147) & (0.142) \\ \hline \multirow{4}{*}{C} & \multirow{2}{*}{\(e_{\mu}\)} & 7.824 & 8.056 & 0.348 & 0.355 & 0.340 & 0.341 & 0.359 & 0.344 & 0.346 & 0.362 & 0.345 & 0.347 \\ & & (2.507) & (0.082) & (0.018) & (0.015) & (0.015) & (0.015) & (0.015) & (0.017) & (0.016) & (0.017) & (0.018) & (0.018) & (0.018) \\ \cline{2-13} & \multirow{2}{*}{\(e_{\Sigma}\)} & 2.965 & 3.151 & 0.618 & 0.654 & 0.617 & 0.618 & 0.654 & 0.617 & 0.618 & 0.653 & 0.613 & 0.616 \\ & & (0.773) & (0.012) & (0.009) & (0.010) & (0.009) & (0.009) & (0.010) & (0.008) & (0.008) & (0.012) & (0.010) & (0.009) \\ \cline{2-13} & \multirow{2}{*}{MSE} & 6.488 & 6.366 & 6e-04 & 6e-04 & 6e-04 & 6e-04 & 6e-04 & 6e-04 & 6e-04 & 6e-04 & 6e-04 & 6e-04 & 6e-04 & 6e-04 \\ & & (2.185) & (0.103) & (8e-06) & (7e-06) & (7e-06) & (7e-06) & (8e-06) & (8e-06) & (8e-06) & (8e-06) & (8e-06) & (8e-06) & (8e-06) \\ \cline{2-13} & \multirow{2}{*}{KL} & 495.2 & 513.9 & 12.61 & 14.08 & 12.648 & 12.92 & 14.02 & 12.57 & 12.65 & 14.06 & 12.58 & 12.65 \\ \cline{2-13} & \multirow{2}{*}{(161.4)} & 495.2 & 513.9 & 12.61 & 14.08 & 12.648 & 12.92 & ### Robustness assessment In addition, we assess the tolerance of each method to the core-set size \(h=\lfloor\alpha n\rfloor\). Specifically, we generate data from Setting B with 40% contamination, which is high enough for most practical implementations, and we set \(r=100\) for point, random and cluster outliers. 
Meanwhile, we consider \(\alpha\) ranging from 0.5 to 0.6, which is the highest value that possibly \begin{table} \begin{tabular}{|c|c|c c|c c|c c|c c|c c|} \cline{2-13} \multicolumn{1}{c|}{} & \multicolumn{3}{|c|}{Point} & \multicolumn{3}{|c|}{Random} & \multicolumn{3}{|c|}{Cluster} & \multicolumn{3}{|c|}{Radial} \\ \cline{2-13} \multicolumn{1}{c|}{} & \multicolumn{1}{c}{Det} & \multicolumn{1}{c}{\(\text{FDB}_{\text{L}_{2}}\)} & \multicolumn{1}{c|}{\(\text{FDB}_{\text{pro}}\)} & \multicolumn{1}{c|}{Det} & \multicolumn{1}{c}{\(\text{FDB}_{\text{L}_{2}}\)} & \multicolumn{1}{c|}{\(\text{FDB}_{\text{pro}}\)} & \multicolumn{1}{c|}{Det} & \multicolumn{1}{c}{\(\text{FDB}_{\text{L}_{2}}\)} & \multicolumn{1}{c}{\(\text{FDB}_{\text{pro}}\)} \\ \hline \multirow{4}{*}{A} & \multirow{2}{*}{\(e_{\Delta}\)} & 7.226 & 6.946 & 6.335 & 0.193 & 0.280 & 0.191 & 0.604 & 0.179 & 0.240 & 0.216 & 0.233 & 0.215 \\ & & (0.982) & (0.354) & (0.376) & (0.066) & (0.100) & (0.066) & (0.215) & (0.061) & (0.106) & (0.072) & (0.071) & (0.072) \\ \cline{2-13} & \multirow{2}{*}{\(e_{\Delta}\)} & 2.989 & 2.981 & 2.563 & 0.279 & 0.623 & 0.278 & 0.421 & 0.263 & 0.284 & 0.299 & 0.447 & 0.291 \\ & & (0.290) & (0.208) & (0.184) & (0.062) & (0.126) & (0.062) & (0.386) & (0.058) & (0.170) & (0.058) & (0.081) & (0.053) \\ \cline{2-13} & \multirow{2}{*}{MSE} & 31.34 & 33.45 & 36.43 & 0.011 & 0.474 & 0.011 & 1.037 & 0.009 & 0.071 & 0.041 & 0.120 & 0.045 \\ & & (3.592) & (2.941) & (2.329) & (0.006) & (0.360) & (0.005) & (2.017) & (0.004) & (0.111) & (0.019) & (0.054) & (0.020) \\ \cline{2-13} & \multirow{2}{*}{KL} & 29.66 & 29.46 & 29.91 & 0.142 & 2.363 & 0.141 & 1.673 & 0.130 & 0.349 & 0.365 & 0.963 & 0.400 \\ & & (0.918) & (0.894) & (0.795) & (0.060) & (1.453) & (0.058) & (3.523) & (0.053) & (0.483) & (0.142) & (0.324) & (0.152) \\ \hline \multirow{4}{*}{B} & \multirow{2}{*}{\(e_{\Delta}\)} & 22.98 & 25.30 & 25.10 & 0.408 & 0.408 & 0.407 & 4.796 & 0.411 & 0.409 & 0.412 & 0.413 & 0.410 \\ & & (0.079) & (0.041) & (1.138) & (0.047) & (0.048) & (0.048) & (0.371) & (0.045) & (0.045) & (0.044) & (0.046) & (0.045) \\ \cline{2-13} & \multirow{2}{*}{\(e_{\Delta}\)} & 4.373 & 6.161 & 6.003 & 0.732 & 0.729 & 0.724 & 2.000 & 0.718 & 0.720 & 0.737 & 0.739 & 0.726 \\ & & (0.112) & (0.151) & (0.470) & (0.033) & (0.036) & (0.035) & (0.093) & ( 0.031) & (0.032) & (0.037) & (0.036) & (0.033) \\ \cline{2-13} & \multirow{2}{*}{MSE} & 24.51 & 15.98 & 16.55 & 0.004 & 0.004 & 0.004 & 0.875 & 0.004 & 0.004 & 0.004 & 0.004 & 0.004 \\ & & (0.568) & (0.366) & (3.196) & (0.000) & (0.000) & (0.000) & (0.074) & (0.000) & (0.000) & (0.000) & (0.000) & (0.000) \\ \cline{2-13} & \multirow{2}{*}{KL} & 242.1 & 229.1 & 231.1 & 3.868 & 3.815 & 3.796 & 36.654 & 3.731 & 3.747 & 3.877 & 3.890 & 3.799 \\ & & (2.489) & (2.716) & (5.105) & (0.217) & (0.217) & (0.216) & (2.466) & (0.201) & (0.210) & (0.218) & (0.236) & (0.204) \\ \hline \multirow{4}{*}{C} & \multirow{2}{*}{\(e_{\Delta}\)} & 51.43 & 56.56 & 55.64 & 0.420 & 0.409 & 0.413 & 6.345 & 0.418 & 0.417 & 0.421 & 0.410 & 0.415 \\ & & (0.016) & (0.012) & (3.242) & (0.020) & (0.020) & (0.021) & (0.161) & (0.019) & (0.020) & (0.021) & (0.020) & (0.021) \\ \cline{2-13} & \multirow{2}{*}{\(e_{\Delta}\)} & 5.061 & 6.996 & 6.756 & 0.769 & 0.745 & 0.753 & 2.317 & 0.757 & 0.758 & 0.769 & 0.747 & 0.760 \\ & & (0.039) & (0.027) & (0.674) & (0.012) & (0.011) & (0.011) & (0.009) & (0.012) & (0.011) & (0.011) & (0.009) & (0.011) \\ \cline{2-13} & \multirow{2}{*}{MSE} & 24.57 & 16.01 & 17.33 & 9e-04 & 8e-04 & 4e-04 & 0.155 & 8e-04 & 
8e-04 & 9e-04 & 8e-04 & 8e-04 \\ & & (0.085) & (0.055) & (4.426) & (1e-05) & (1e-05) & (1e-05) & (0.004) & (1e-05) & (1e-05) & (1e-05) & (9e-06) & (1e-05) \\ \cline{2-13} & \multirow{2}{*}{KL} & 1204 & 1151 & 1140 & 19.20 & 18.09 & 18.45 & 88.81 & 18.63 & 18.66 & 19.14 & 18.12 & 18.70 \\ & & (2.094) & (2.722) & (31.32) & (0.238) & (0.199) & (0.224) & (0.975) & (0.233) & (0.237) & (0.240) & (0.198) \\ \hline \end{tabular} \end{table} Table 3: Simulation results for 40% contamination.

produces a clean subset. The average \(e_{\Sigma}\) from 1000 replications is reported in Figure 6. \(\mathsf{FDB}_{\text{pro}}\) remains robust for different values of \(\alpha\) under all the investigated settings; \(\mathsf{FDB}_{\text{L}_{2}}\) is satisfactory for random, cluster and radial contamination, while its estimation error grows substantially with \(\alpha\) for point contamination; DetMCD generates stable estimation when \(\alpha=0.5\) and \(0.55\) but becomes deficient when \(\alpha\) rises to \(0.6\) in each plot of Figure 6. The other three measures show similar patterns, as illustrated in Figure S1 of the Supplementary Material. In conclusion, \(\mathsf{FDB}_{\text{pro}}\) shows the strongest tolerance to the core-set size when the proportion of outliers in the data is very high.

To sum up, FDBs improve the computational efficiency significantly, which is the main motivation of this work, and \(\mathsf{FDB}_{\text{L}_{2}}\) generally achieves the highest computational efficiency. Besides, we find, somewhat surprisingly, that \(\mathsf{FDB}_{\text{pro}}\) shows superiority in terms of both estimation accuracy and robustness, especially in high-dimensional cases. One may safely substitute DetMCD with \(\mathsf{FDB}_{\text{pro}}\) in practical implementations.

## 6 Real data examples

In this section, we apply the FDB methods to four real datasets of various dimensions and sizes. The resulting robust multivariate location and scatter estimates are evaluated via some typical tasks in multivariate analysis, such as outlier detection, linear discriminant analysis (LDA), principal component analysis (PCA), and image denoising. The same three methods from Section 5 are utilized. The computations are performed in R (R Core Team 2021) on a laptop with a 10-core processor and 32GB of memory.

### Robust PCA for forged bank notes data

The first dataset is the forged Swiss bank notes data (Milo, 1990), which is also used in Hubert et al. (2012). The data are of size \(n=100\) and dimension \(p=6\), denoted as \(\mathbf{X}\in\mathbb{R}^{n\times p}\). Since this dataset includes outliers and highly correlated variables (Rousseeuw et al., 2004; Willems et al., 2009), we first employ the proposed algorithms and DetMCD to obtain robust estimates \((\hat{\mathbf{\mu}},\hat{\mathbf{\Sigma}})\) and then conduct robust PCA based on these estimates. The classical PCA obtained from the sample location and scatter is also shown for comparison. We use the first two principal components \(\mathbf{P}\in\mathbb{R}^{p\times 2}\), which explain over 80% of the total variance. The projections of the data onto the 2-dimensional PCA subspace, \(\mathbf{T}=\{t_{ik}\}=(\mathbf{X}-\mathbf{1}_{n}\hat{\mathbf{\mu}}^{T})\mathbf{P}\), are shown in Fig. 7(e)-7(h). The score distance (SD) and orthogonal distance (OD) represent the robust distance of a sample within the two-dimensional PCA subspace and the orthogonal distance of a sample to the PCA subspace, respectively. For each sample \(\mathbf{x}_{i}\), we have \(\mathbf{SD}_{i}=\sum_{k=1}^{2}t_{ik}^{2}/\lambda_{k}\), where \(\lambda_{1}\) and \(\lambda_{2}\) are the first two eigenvalues of \(\hat{\mathbf{\Sigma}}\), and \(\mathbf{OD}_{i}=\sum_{j=1}^{p}e_{i,j}^{2}\), where \(\mathbf{X}-\mathbf{T}\mathbf{P}^{T}=\{e_{i,j}\}\).
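A minimal sketch of these diagnostics, computed from any pair of robust estimates \((\hat{\mathbf{\mu}},\hat{\mathbf{\Sigma}})\), is given below. The eigenvalue ordering and the use of the squared forms follow the formulas above, while the function name and the centering of the residuals are assumptions about the intended convention.

```python
import numpy as np

def pca_diagnostics(X, mu_hat, Sigma_hat, n_comp=2):
    """Scores, score distances and orthogonal distances w.r.t. the PCA
    subspace spanned by the leading eigenvectors of a (robust) scatter."""
    evals, evecs = np.linalg.eigh(Sigma_hat)
    order = np.argsort(evals)[::-1]                     # eigenvalues in descending order
    lam = evals[order[:n_comp]]
    P = evecs[:, order[:n_comp]]
    Xc = X - mu_hat                                     # centered data
    T = Xc @ P                                          # scores t_{ik}
    sd = np.sum(T ** 2 / lam, axis=1)                   # score distance SD_i (squared form)
    E = Xc - T @ P.T                                    # residuals off the subspace
    od = np.sum(E ** 2, axis=1)                         # orthogonal distance OD_i (squared form)
    return T, sd, od
```

Thresholding `sd` and `od` with the two cutoff values discussed next yields the four sample categories displayed in Figure 7.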
Figure 7(a)-7(d) illustrate the diagnostic plots. Using two cutoff values in each diagnostic plot, we categorize the samples into four types and assign different colors to them in Fig. 7(e)-7(h). Regular samples gather in the bottom-left region of the diagnostic plots, with both score distances and orthogonal distances relatively small; they form the main body of the data cloud. Good leverage samples are close to the PCA subspace but far from the regular samples, e.g., samples 13 and 23 in the bottom-right region of Fig. 7(a)-7(c), and the green points in Fig. 7(e)-7(g). Orthogonal outliers are far from the PCA subspace but are not distinguishable by only observing their projections. With larger orthogonal distances but smaller score distances, they are located in the top-left region of the diagnostic plots, e.g., samples 11, 62 and 67 in Fig. 7(a)-7(c), and the red points in Fig. 7(e)-7(g). For bad leverage samples, both the score and orthogonal distances are large. They lie in the top-right region of the diagnostic plots and are represented as orange points in the projection plots.

Figure 7: Diagnostic plots of the forged bank notes using robust PCAs based on (a) \(\mathsf{FDB}_{\mathrm{L}_{2}}\), (b) \(\mathsf{FDB}_{\mathrm{pro}}\) and (c) DetMCD and classical PCA based on (d) Sample. Projection of data on 2-dimensional subspace and diagnostic results obtained by (e) \(\mathsf{FDB}_{\mathrm{L}_{2}}\), (f) \(\mathsf{FDB}_{\mathrm{pro}}\), and (g) DetMCD and (h) classical PCA. Purple points in (c) and (g) represent samples that may be misclassified by using DetMCD.

In general, the three robust methods lead to comparable results and all significantly improve upon classical PCA, which agrees with Theorem 1. However, DetMCD may identify some regular points as outliers according to its specific cutoff value for the orthogonal distance; see the purple points in the third column of Figure 7.

### Outlier detection for phoneme data

In the second example, we detect outliers in a phoneme dataset, which comes from the speech recognition database TIMIT and has been discussed in Hastie et al. (2009). The data include 1050 speech frames, 1000 of which are "ao" and 50 of which are "iy". Each data frame has been transformed to a log-periodogram of length 256. First, we reduce the dimension using smoothing splines. For each sample, we replace the original variables \(\mathbf{x}\in\mathbb{R}^{256}\) with 50-dimensional variables \(\tilde{\mathbf{x}}=\mathbf{N}^{T}\mathbf{x}\), where \(\mathbf{N}\in\mathbb{R}^{256\times 50}\) is the basis matrix of natural cubic splines. We use 50 basis functions with knots uniformly placed over \(1,\ldots,256\). To this end, we are dealing with data of \(n=1050\) and \(p=50\). We take "ao" and "iy" to be the regular cases and the outliers, respectively, and perform outlier detection. To be more specific, we calculate the robust Mahalanobis distances based on FDBs and DetMCD, and the classical Mahalanobis distances based on Sample. Then, we treat the cases with the top 50 largest distances as outliers, and the remaining ones as regular cases.
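The distance-based flagging rule used here can be sketched as below with a plug-in pair of (robust or classical) estimates; the top-\(m\) rule applied to the phoneme data is shown, and the chi-square cutoff is included as a common alternative rather than the choice made in the paper.

```python
import numpy as np
from scipy.stats import chi2

def flag_outliers(X, mu_hat, Sigma_hat, n_flag=50, use_cutoff=False):
    """Squared Mahalanobis distances w.r.t. (mu_hat, Sigma_hat); flag either
    the n_flag largest distances or all distances above a chi-square cutoff."""
    d2 = np.einsum('ij,jk,ik->i', X - mu_hat, np.linalg.inv(Sigma_hat), X - mu_hat)
    if use_cutoff:
        return d2, d2 > chi2.ppf(0.975, df=X.shape[1])   # assumed cutoff variant
    flags = np.zeros(len(X), dtype=bool)
    flags[np.argsort(d2)[::-1][:n_flag]] = True          # top-n_flag largest distances
    return d2, flags
```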
Table 4 records the number of "iy" that are flagged as outliers, the area under the ROC curve (AUC), and the average computational time (second) of 100 replicates for these methods. Again, the proposed methods and DetMCD perform similarly and they all outperform Sample with higher AUCs. In addition, the proposed methods reveal advantages in computation time. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Method & \(\mathsf{FDB}_{\text{L}_{2}}\) & \(\mathsf{FDB}_{\text{pro}}\) & DetMCD & Sample \\ \hline Number & 49 & 49 & 49 & 13 \\ \hline AUC & 0.9895 & 0.9895 & 0.9895 & 0.6115 \\ \hline Time & 0.5390 & 0.5225 & 1.7911 & 0.2865 \\ \hline \end{tabular} \end{table} Table 4: Performance of various methods in outlier detection for Phoneme data. ### Denoise for MNIST data The Modified National Institute of Standards and Technology (MNIST) database is widely used for training various image processing systems. It contains a large set of handwritten images representing the digits zero through nine. Each digit is stored as a gray-scale image with a size \(p=28\times 28\). Training data \(\mathbf{X}\) and testing data \(\mathbf{X}_{t}\) of \(n=10000\) are randomly selected from MNIST. Noises \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0}_{p},60^{2}\mathbf{I}_{p})\) are generated and added to 20% of the images in the training set and all images in the testing set. Our task is to denoise the testing images, which has been considered in Schreurs et al. (2021). The detailed process is similar to the robust PCA in Section 6.1. First, we apply \(\texttt{FDB}_{\text{L}_{2}}\), \(\texttt{FDB}_{\text{pro}}\), DetMCD, and Sample to training images and obtain the estimated location and scatter. \(\texttt{FDB}_{\text{pro}}\) with \(k=1000\), denoted as \(\texttt{FDB}_{\text{pro}1000}\), is also used for better comparison. Here all methods have comparable computational time. Then, we calculate the eigenvectors of the robust estimated scatter. Next, we project the testing data to the subspace spanned by the first \(K\) eigenvectors and then transform the projected data, i.e., scores, back to the original space. We denote the reconstructed data as \(\hat{\mathbf{X}}_{t}\). Fig. 8 illustrates \(\hat{\mathbf{X}}_{t}\) obtained by various methods with \(K=75\). DetMCD is removed since it returns an error of high condition numbers and hence is not applicable to this example. We can see that the denoised images for the proposed methods are more clear than those for the Sample, which verifies the influence of adding noise on evaluating scatter and the efficiency of the proposed methods. As in Schreurs et al. (2021), we also calculate the mean absolute error MAE = \(\sum_{i=1}^{n}\sum_{j=1}^{p}|x_{t(ij)}-\hat{x}_{t(ij)}|/(np)\) between the original and denoised images. Table 5 shows that the MAE for Sample is obviously larger than those for proposed methods. ### Outlier detection for Musk data Musk data is commonly used in high-dimensional classification and outlier detection problems (Aggarwal and Sathe, 2015; Porwal and Mukund, 2017). 
\begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline \(K\) & 15 & 30 & 45 & 60 & 75 \\ \hline \(\texttt{FDB}_{\text{L}_{2}}\) & 23.04525 & 19.27832 & 17.52109 & 16.63028 & 16.26773 \\ \hline \(\texttt{FDB}_{\text{pro}}\) & 23.06536 & 19.27085 & 17.51270 & 16.62653 & 16.26147 \\ \hline \(\texttt{FDB}_{\text{pro1000}}\) & 23.09654 & 19.30584 & 17.54031 & 16.65916 & 16.29452 \\ \hline Sample & 24.58077 & 20.60175 & 18.92803 & 18.16014 & 17.95447 \\ \hline \end{tabular} \end{table} Table 5: MAEs for proposed robust methods and classical method with various \(K\)s.

Figure 8: The first and second columns show images before and after adding noise, respectively. The third to the sixth columns show denoised images obtained by \(\texttt{FDB}_{\text{L}_{2}}\), \(\texttt{FDB}_{\text{pro}}\), \(\texttt{FDB}_{\text{pro1000}}\), and Sample with \(K=75\), respectively.

The original data, which can be found in UCI, include 6598 samples, divided into musk and non-musk classes. Each sample has \(p=166\) features characterizing the molecule structure. Here we use a preprocessed musk dataset of \(n=3062\), which consists of 2965 non-musk samples as inliers and 97 musk samples as outliers. Our task is to detect the outliers with the proposed methods, DetMCD and Sample. As shown in Fig. 9, our proposed methods and DetMCD can exactly pick out the outliers, which are represented as red points, whereas Sample would take many regular samples as outliers. Furthermore, the computation times of \(\mathtt{FDB}_{\mathrm{L}_{2}}\), \(\mathtt{FDB}_{\mathrm{pro}}\), and DetMCD are 13.547, 12.884, and 53.391 seconds, respectively. The proposed methods are computationally more efficient than DetMCD.

Figure 9: (a), (b), and (c) are robust distances obtained by \(\mathtt{FDB}_{\mathrm{L}_{2}}\), \(\mathtt{FDB}_{\mathrm{pro}}\), and DetMCD, respectively. (d) Mahalanobis distance obtained by Sample.

## 7 Discussion

MCD-type methods suffer from high computational complexity due to the iteratively used C-step. To tackle this issue, we directly approximate the MCD subset with the trimmed subset induced by statistical depth. Two depth notions, the projection depth and the \(L_{2}\) depth, are recommended due to their high computational efficiency and robustness. In addition, we establish the equivalence between the desired MCD subset and the trimmed subset induced by the projection depth. Bypassing the iteration of the C-step, we manage to reduce the computational complexity from \(O(\psi np^{2}+\psi p^{3})\) to \(O(knp)\) and \(O(n^{2}p)\) with
The \(\texttt{FDB}\) algorithm may benefit other applications that directly or indirectly rely on robust covariance matrix estimation, such as robust linear regression (Coakley and Hettmansperger, 1993), regression with continuous and categorical regressors (Hubert and Rousseeuw, 1997), MCD-regression (Rousseeuw et al., 2004b), multivariate least trimmed squares estimation (Agullo et al., 2008), and robust errors-in-variables regression (Fekri and Ruiz-Gazen, 2004). The present study primarily focuses on data from elliptical symmetric distributions with enough samples, which may be violated in practical scenarios (Schreurs et al., 2021). In Section S3.1 of the Supplementary Material, we evaluate the effectiveness of our estimators when dealing with skewed distributions. In addition, we did preliminary work to extend the proposed algorithm to the scenario of "small \(n\), large \(p\)" and evaluate our idea with a real-world dataset in Section S3.2 of the Supplementary Material. Our preliminary exploration shows promising results for both cases. Further investigation is needed to address more general scenarios. One possible solution is to adapt depth notions applicable to more general distributions to obtain the \(h\)-subset and then apply the kernel trick to map the subset to a feature space, where outlier detection can be conducted. The computational time is significantly reduced for high-dimensional scenarios (\(n>p\)); however, the estimation accuracy still desires further improvement. Rather than a shrinkage estimator, it is also of interest to extend the MCD framework by considering a low-rank and sparse estimator to alleviate the curse of dimensionality. ## Supplementary materials * We provide an R-package named FDB and R codes of the FDB algorithm proposed in this paper. * The file of supplement involves proofs of theoretical results, additional simulation results as well as preliminary explorations of potential extensions. ## Acknowledgement We are very grateful to three anonymous referees, an associate editor, and the Editor for their valuable comments that have greatly improved the manuscript. The first two authors contribute equally to the paper.
2304.08809
SViTT: Temporal Learning of Sparse Video-Text Transformers
Do video-text transformers learn to model temporal relationships across frames? Despite their immense capacity and the abundance of multimodal training data, recent work has revealed the strong tendency of video-text models towards frame-based spatial representations, while temporal reasoning remains largely unsolved. In this work, we identify several key challenges in temporal learning of video-text transformers: the spatiotemporal trade-off from limited network size; the curse of dimensionality for multi-frame modeling; and the diminishing returns of semantic information by extending clip length. Guided by these findings, we propose SViTT, a sparse video-text architecture that performs multi-frame reasoning with significantly lower cost than naive transformers with dense attention. Analogous to graph-based networks, SViTT employs two forms of sparsity: edge sparsity that limits the query-key communications between tokens in self-attention, and node sparsity that discards uninformative visual tokens. Trained with a curriculum which increases model sparsity with the clip length, SViTT outperforms dense transformer baselines on multiple video-text retrieval and question answering benchmarks, with a fraction of computational cost. Project page: http://svcl.ucsd.edu/projects/svitt.
Yi Li, Kyle Min, Subarna Tripathi, Nuno Vasconcelos
2023-04-18T08:17:58Z
http://arxiv.org/abs/2304.08809v1
# SViTT: Temporal Learning of Sparse Video-Text Transformers ###### Abstract Do video-text transformers learn to model temporal relationships across frames? Despite their immense capacity and the abundance of multimodal training data, recent work has revealed the strong tendency of video-text models towards frame-based spatial representations, while temporal reasoning remains largely unsolved. In this work, we identify several key challenges in temporal learning of video-text transformers: the spatiotemporal trade-off from limited network size; the curse of dimensionality for multi-frame modeling; and the diminishing returns of semantic information by extending clip length. Guided by these findings, we propose **SViTT**, a sparse video-text architecture that performs multi-frame reasoning with significantly lower cost than naive transformers with dense attention. Analogous to graph-based networks, **SViTT** employs two forms of sparsity: edge sparsity that limits the query-key communications between tokens in self-attention, and node sparsity that discards uninformative visual tokens. Trained with a curriculum which increases model sparsity with the clip length, **SViTT** outperforms dense transformer baselines on multiple video-text retrieval and question answering benchmarks, with a fraction of computational cost. Project page: [http://svcl.ucsd.edu/projects/svitt](http://svcl.ucsd.edu/projects/svitt). ## 1 Introduction With the rapid development of deep neural networks for computer vision and natural language processing, there has been growing interest in learning correspondences across the visual and text modalities. A variety of vision-language pretraining frameworks have been proposed [12, 32, 38, 25] for learning high-quality cross-modal representations with weak supervision. Recently, progress on visual transformers (ViT) [5, 16, 35] has enabled seamless integration of both modalities into a unified attention model, leading to image-text transformer architectures that achieve state-the-art performance on vision-language benchmarks [1, 51, 30]. Progress has also occurred in _video_-language pretraining by leveraging image-text models for improved frame-based reasoning [4, 9, 18]. Spatial modeling has the advantage of efficient (linear) scaling to long duration videos. Perhaps due to this, single-frame models have proven surprisingly effective at video-text tasks, matching or exceeding prior arts with complex temporal components [27, 9]. However, spatial modeling creates a bias towards static appearance and overlooks the importance of temporal reasoning in videos. This suggests the question: Are temporal dynamics not worth modeling in the video-language domain? Upon a closer investigation, we identify a few key challenges to incorporating multi-frame reasoning in video-language models. First, limited model size implies a trade-off between spatial and temporal learning (a classic example being 2D/3D convolutions in video CNNs [53]). For any given dataset, optimal performance requires a careful bal Figure 1: We propose **SViTT**, a _sparse_ video-text transformer for efficient modeling of temporal relationships across video frames. **Top**: Semantic information for video-text reasoning is highly localized in the spatiotemporal volume, making dense modeling inefficient and prone to contextual noises. **Bottom**: **SViTT** pursues _edge_ sparsity by limiting query-key pairs in self-attention, and _node_ sparsity by pruning redundant tokens from visual sequence. ance between the two. 
Second, long-term video models typically have larger model sizes and are more prone to over-fitting. Hence, for longer term video models, it becomes more important to carefully allocate parameters and control model growth. Finally, even if extending the clip length improves the results, it is subject to diminishing returns since the amount of information provided by a video clip does not grow linearly with its sampling rate. If the model size is not controlled, the computational increase may not justify the gains in accuracy. This is critical for transformer-based architectures, since self-attention mechanisms have a quadratic memory and time cost with respect to input length. In summary, model complexity should be adjusted adaptively, depending on the input videos, to achieve the best trade-off between spatial representation, temporal representation, overfitting potential, and complexity. Since existing video-text models lack this ability, they either attain a suboptimal balance between spatial and temporal modeling, or do not learn meaningful temporal representations at all. Motivated by these findings, we argue that video-text models should learn to allocate modeling resources to the video data. We hypothesize that, rather than uniformly extending the model to longer clips, the allocation of these resources to the relevant spatiotemporal locations of the video is crucial for efficient learning from long clips. For transformer models, this allocation is naturally performed by pruning redundant attention connections. We then propose to accomplish these goals by exploring transformer sparsification techniques. This motivates the introduction of a _Sparse Video-Text Transformer_ (**SViTT**) inspired by graph models. As illustrated in Fig. 1, **SViTT** treats video tokens as graph vertices, and self-attention patterns as edges that connect them. We design **SViTT** to pursue sparsity for both: _edge_ sparsity aims at reducing query-key pairs in attention module while maintaining its global reasoning capability; _node_ sparsity reduces to identifying informative tokens (e.g., corresponding to moving objects or person in the foreground) and pruning background feature embeddings. To address the diminishing returns for longer input clips, we propose to train **SViTT** with _temporal sparse expansion_, a curriculum learning strategy that increases clip length and model sparsity, in sync, at each training stage. **SViTT** is evaluated on diverse video-text benchmarks from video retrieval to question answering, comparing to prior arts and our own dense modeling baselines. First, we perform a series of ablation studies to understand the benefit of sparse modeling in transformers. Interestingly, we find that both nodes (tokens) and edges (attention) can be pruned drastically at inference, with a small impact on test performance. In fact, token selection using cross-modal attention improves retrieval results by 1% without re-training. We next perform full pre-training with the sparse models and evaluate their downstream performance. We observe that **SViTT** scales well to longer input clips where the accuracy of dense transformers drops due to optimization difficulties. On all video-text benchmarks, **SViTT** reports comparable or better performance than their dense counterparts with lower computational cost, outperforming prior arts including those trained with additional image-text corpora. 
The key contributions of this work are: 1) a video-text architecture **SViTT** that unifies edge and node sparsity; 2) a sparse expansion curriculum for training **SViTT** on long video clips; and 3) empirical results that demonstrate its temporal modeling efficacy on video-language tasks. ## 2 Related Work **Video-language pretraining.** Vision-language pretraining has been widely adopted for various video-text downstream tasks. VideoBERT [52] was an early effort, using video-text pretraining for action classification and video captioning. Recently, the massive-scale instructional video dataset HowTo100M [43] has motivated many approaches to video-text pretraining [4, 19, 27, 64, 63, 42]. Frozen [4] proposed to pretrain a space-time transformer on a combination of video and image data to enable zero-shot text-to-video retrieval. ATP [9] and Singularity [27] showed strong performance using image-based models, highlighting the importance of spatial modeling for video-language tasks. In this work, we pursue an alternative route of efficient _temporal_ modeling across multiple video frames. **Sparse transformers.** The self-attention of naive transformers [55, 16] has quadratic complexity making them inefficient for modeling long sequences. Different forms of sparse attention have been studied to improve text [7, 13, 62], image [15, 54, 35], and video modeling [3, 8, 36], although the sparse patterns are typically predetermined and do not adapt to the input semantics. Several works have also considered speeding up vision transformers by adaptively reducing the number of input tokens [11, 33, 41, 47, 48, 60]. For example, DynamicViT [47] proposed to drop visual tokens with a dedicated module that identifies and prunes less informative ones. TokenLearner [48] introduces a learnable module to adaptively generate a small subset of tokens from input frames. EViT [33] progressively reduces the number of tokens based on their attention scores, fusing inattentive tokens into a new token to preserve input information. Unlike prior works that focus on visual modeling on images or videos alone, we study the sparsity of video-language transformers which can benefit from cross-modal attention. ## 3 Exploiting Sparsity in Video Transformers In this section, we formulate video transformers as graph models (Sec. 3.1) and present a set of approaches towards sparse video modeling, exploiting the redundancy of edges (Sec. 3.2) and nodes (Sec. 3.3). We combine these into a unified sparse framework for video-text learning (Sec. 3.4). ### Video Transformers are Graph Models Visual transformers [16] are deep neural networks that model images or videos as sequences of local pixel patches, through a combination of patchwise feature transformation and self-attention. Inspired by transformer architectures for language models [55], video transformers encode input clips into a sequence of spatiotemporal patches, flattened and linearly projected to a \(d\)-dimensional embedding space: \[\mathbf{Z}^{(0)}=\left(\mathbf{z}^{(0)}_{\text{cls}},\mathbf{z}^{(0)}_{1}, \ldots,\mathbf{z}^{(0)}_{N}\right)=f^{\text{tok}}(\mathbf{x}_{1:T})\in\mathbb{ R}^{(N+1)\times d} \tag{1}\] where \(N=T^{\prime}H^{\prime}W^{\prime}\) is the volume of the 3D patch grid, and \(\mathbf{z}^{(0)}_{\text{cls}}\) denotes a special class token responsible for instance-level prediction. 
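As a concrete reference, here is a minimal PyTorch-style sketch of a tokenizer \(f^{\text{tok}}\) of the form in (1): a 3D convolution with stride equal to the patch size produces the \(T^{\prime}H^{\prime}W^{\prime}\) patch embeddings, and a learnable [cls] token is prepended (positional embeddings are omitted). The module name, patch size, and embedding dimension are illustrative assumptions, not the actual SViTT code.

```python
import torch
import torch.nn as nn

class VideoTokenizer(nn.Module):
    """x: (B, C, T, H, W) -> Z0: (B, 1 + T'H'W', d), with a prepended [cls] token."""
    def __init__(self, dim=768, in_chans=3, patch=(2, 16, 16)):
        super().__init__()
        self.proj = nn.Conv3d(in_chans, dim, kernel_size=patch, stride=patch)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, x):
        z = self.proj(x)                          # (B, d, T', H', W') patch embeddings
        z = z.flatten(2).transpose(1, 2)          # (B, N, d) with N = T'H'W'
        cls = self.cls_token.expand(z.size(0), -1, -1)
        return torch.cat([cls, z], dim=1)         # (B, N + 1, d)
```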
The tokenized sequence is processed by a cascade of transformer blocks \(f^{(l)}\) \[\mathbf{Z}^{(l)}=f^{(l)}(\mathbf{Z}^{(l-1)}),\quad l=1,\ldots,L \tag{2}\] each of which computes the self-attention \(\mathcal{A}\) between input tokens, followed by a feed-forward network \(\mathcal{F}\):1 Footnote 1: Attention heads, residual connections and normalization terms are omitted for brevity, although we use the conventional implementation [55]. \[f^{(l)}(\mathbf{Z})=\mathcal{F}(\mathcal{A}(\mathbf{ZW}^{T}_{K},\mathbf{ZW}^ {T}_{Q},\mathbf{ZW}^{T}_{V})), \tag{3}\] \[\mathcal{A}(\mathbf{Q},\mathbf{K},\mathbf{V})=\sigma(\mathbf{Q}\mathbf{K}^{T })\mathbf{V} \tag{4}\] We interpret the transformer architecture as a special case of _graph_ networks [6], with _nodes_ representing tokenized video patches and _edges_ connecting pairs of tokens for which self-attention is computed. Specifically, consider a directed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) defined by the vertices \(\mathcal{V}=\{\mathbf{z}_{1},\ldots,\mathbf{z}_{N}\}\) corresponding to spatiotemporal video patches, and edges \(\mathcal{E}\subseteq\{1,\ldots,N\}^{2}\) connecting pairs of nodes. The self-attention of (4) can be generalized such that node \(\mathbf{z}_{i}\) attends to \(\mathbf{z}_{j}\) only if \((i,j)\in\mathcal{E}\), \[\mathcal{A}_{\mathcal{E}}(\mathbf{q}_{i},\mathbf{K},\mathbf{V})=\sum_{j:(i,j )\in\mathcal{E}}\mathbf{a}_{ij}\mathbf{v}_{j}, \tag{5}\] \[\mathbf{a}_{ij}=\frac{e^{\langle\mathbf{q}_{i},\mathbf{k}_{j}\rangle}}{\sum_{ j:(i,j)\in\mathcal{E}}e^{\langle\mathbf{q}_{i},\mathbf{k}_{j}\rangle}} \tag{6}\] Under this interpretation, the transformer architecture with full attention resembles a _complete_ graph, where \(\mathcal{E}\) includes every pair of vertices. This dense attention mechanism endows (5) with quadratic memory and time complexity w.r.t. sequence length \(N\), making naive transformers notoriously inefficient to train and expensive to deploy, especially for longer video clips. We argue that due to the inherent sparsity of information in video data, a large portion of the graph can be pruned dynamically without significant performance loss, leading to a _sparse_ graph model of significantly lower cost for training and inference. Figure 2: **Model Architecture.****SViTT** improves modeling efficiency of conventional video-text transformers through two key components: _node_ sparsity and _edge_ sparsity. **Edge sparsification**\(\mathcal{A}_{\mathcal{E}}\) computes sparse self-attention of input visual sequence \(\mathbf{z}\), where each query token attends to a small subset of key and value tokens, with connectivity \(\mathcal{E}\) specified by global, local, and random attention. **Node sparsification**\(\mathcal{S}_{\mathcal{V}}\) uses global attention scores from \(\mathcal{A}_{\mathcal{E}}\) to prune uninformative tokens, removing them from the computational graph of subsequent layers; \(\mathcal{S}_{\mathcal{M}}\) uses text-to-video attention in the multimodal encoder to further reduce the length of visual sequence. ### Edge Sparsity: Local & Random Attention Prior natural language processing models, such as BigBird [62], have explored the idea of restricting the number of key-value pairs each query token attends to, which reduced the number of edges in \(\mathcal{E}\). We use a similar procedure to create a video transformer with _edge sparsity_, utilizing a combination of _local_, _random_ and _global_ attention. 
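Before detailing the specific patterns, a minimal sketch of the edge-restricted attention of (5)-(6) may be helpful: a boolean mask over query-key pairs plays the role of \(\mathcal{E}\), and pairs outside the mask are excluded from the softmax. The local, random, and global patterns described below simply populate this mask; the tensor shapes, the single-head form, and the scaling factor are illustrative assumptions rather than the exact SViTT implementation.

```python
import torch

def sparse_attention(q, k, v, edge_mask):
    """q, k, v: (B, N, d); edge_mask: (N, N) bool, True where (i, j) is an edge in E.
    Computes A_E of Eq. (5): each query attends only over its allowed keys.
    Assumes every query has at least one allowed key (e.g. itself)."""
    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5      # (B, N, N) attention logits
    scores = scores.masked_fill(~edge_mask, float('-inf'))    # drop query-key pairs not in E
    attn = scores.softmax(dim=-1)                             # Eq. (6), normalized over edges only
    return attn @ v
```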
Local attention.Regional tokens \(\{\mathbf{z}_{i}\}_{i=1}^{N}\) are first chunked into \(N_{b}=\lceil N/G\rceil\) contiguous blocks of size \(G^{2}\). Tokens of one block \(k\) attend to a local neighborhood of \(K_{l}\) blocks \(\{k-\Delta,\ldots,k+\Delta\}\), where \(\Delta=(K_{l}-1)/2\) is the maximum range of local attention, \[(k,k^{\prime})\in\mathcal{E},\quad\forall k,k^{\prime}:|k^{\prime}-k|\leq\Delta \tag{7}\] This preserves the modeling of interactions between local features (objects, people, textures) that does not require long-range attention. In the case of \(K_{l}=1\), local reduces to diagonal attention, where each block only attends to itself. Random attention.Beyond _local_ attention, each block also attends to \(K_{r}\) other blocks sampled randomly from the input sequence, \[(k,k^{\prime})\in\mathcal{E},\quad\forall k^{\prime}\in\mathcal{N}(k) \tag{8}\] where \(\mathcal{N}(k)\) is a random subset of \(\{k^{\prime}\mid|k^{\prime}-k|>\Delta\}\) of size \(K_{r}\). This enables the transformer to model long-range visual relationships while avoiding the quadratic cost. Global attention.Class token \(\mathbf{z}_{\text{cls}}\) always attends to/from _regional_ tokens \(\mathbf{z}_{i}\), i.e. the link between \(\mathbf{q}_{\text{cls}}\) and \(\mathbf{k}_{i}\), as well as \(\mathbf{q}_{i}\) and \(\mathbf{k}_{\text{cls}}\), are always retained: \[(\text{cls},i),(i,\text{cls})\in\mathcal{E},\quad i=1,\ldots,N \tag{9}\] This enables \(\mathbf{z}_{\text{cls}}\) to capture global video context, even when the rest of tokens do not attend globally. In summary, we retain all attention connections for local-to-global and global-to-local vertices, and limit local-to-local edges to \((K_{l}+K_{r})G\) per token. Edge sparsity has a _linear_ asymptotic cost of \(O((K_{l}+K_{r})GN)\), a significant improvement over dense attention at \(O(N^{2})\). However, this approach has two critical limitations of this strategy. First, the sparse patterns are predetermined and do not adapt dynamically to the input sequence. Second, the connections that remain in \(\mathcal{E}\) are not determined by the video semantics. In result, connections between pairs of tokens of low semantic affinity (low attention values) may be preserved, impairing the efficiency of the sparse attention mechanism. To enable more aggressive sparsification, we introduce a second mechanism, which is dynamic, guided by video semantics, and applied to the _nodes_ of the graph. ### Node Sparsity: Dynamic Token Pruning A large percentage of the tokens of a video transformer corresponds to _contextual_ regions, which contain little temporal dynamics and are only weakly related to the prediction target (e.g. background content uninformative of the activities of subjects in the foreground). While edge sparsification improves the efficiency of self-attention, it lacks both the flexibility and the semantic sensitivity to account for the uneven distribution of information across video patches. To introduce these properties, we propose a node sparsification strategy, based on the dynamic pruning of tokens. We leverage a combination of observations. First, video semantics are summarized by the class tokens \(\mathbf{z}_{\text{cls}}\), which contain a global representation of the information of interest for classification. 
Second the global-to-local edges survive the edge sparsification, through (9), making the attention weights from the \(\mathbf{z}_{\text{cls}}\) to all regional tokens \(\mathbf{z}_{i}\), \[\mathbf{a}(\mathbf{z}_{i})=\frac{e^{\langle\mathbf{q}_{\text{cls}},\mathbf{k}_ {i}\rangle}}{\sum_{j=1}^{N}e^{\langle\mathbf{q}_{\text{cls}},\mathbf{k}_{j} \rangle}},\quad i=1,\ldots,N \tag{10}\] available for node sparsification. Since the class token is used for video-level predictions, \(\mathbf{a}(\mathbf{z}_{i})\) quantifies the contribution of feature \(\mathbf{z}_{i}\) to the main task. This implies that nodes \(\mathbf{z}_{i}\) of low \(\mathbf{a}(\mathbf{z}_{i})\) are not informative and can be ignored [33]. Node pruning then reduces to keeping the \(N^{\prime}=\lceil qN\rceil\) tokens of largest class attention \[\mathcal{S}_{\mathcal{V}}(\mathbf{Z};q)=\{\mathbf{z}_{i}\mid i\in\mathrm{ topk}(\mathbf{a}(\mathbf{Z}),\lceil qN\rceil)\} \tag{11}\] where hyperparameter \(q\) denotes _keep rate_ and \(\mathrm{topk}(\mathbf{L},m)\) selects the \(m\) largest entries of vector \(\mathbf{L}\). This procedure is repeated multiple times throughout the video encoder (keep rate \(q^{(l)}\) at layer \(l\)), progressively reducing the length of input sequences. Importantly, the pruning procedure is dynamic and ensures that semantically uninformative vertices in the attention graph \(\mathcal{G}\) are removed along with all edges they are associated with. Cross-modal sparsity.Node sparsification can be naturally extended to video-language learning. For this, we propose to extend the token selection mechanism discussed above to a _cross-modal_ setting, where video and text tokens \(\mathbf{Z}_{v},\mathbf{Z}_{t}\) are modeled jointly in a multimodal encoder. In this case, we replace the query \(\mathbf{q}_{\text{cls}}\) of (10) with the class token of text sequence \(\mathbf{q}_{\text{cls}}^{(t)}\), obtaining cross-modal attention \[\mathbf{a}_{m}(\mathbf{z}_{i}^{(v)})=\frac{e^{\langle\mathbf{q}_{\text{cls}}^{( t)},\mathbf{k}_{i}^{(v)}\rangle}}{\sum_{j=1}^{N}e^{\langle\mathbf{q}_{\text{cls}}^{( t)},\mathbf{k}_{j}^{(v)}\rangle}},\quad i=1,\ldots,N \tag{12}\] and subsequently the token sparsification function \[\mathcal{S}_{\mathcal{M}}(\mathbf{Z}_{v};q)=\{\mathbf{z}_{i}^{(v)}\mid i\in \mathrm{topk}(\mathbf{a}_{m}(\mathbf{Z}_{v}),\lceil qN\rceil)\} \tag{13}\] Multimodal node sparsity \(\mathcal{S}_{\mathcal{M}}\) is applied on top of visual sparsity \(\mathcal{S}_{\mathcal{V}}\). We expect cross-modal sparsification to create additional room for sparsity over standalone visual modeling. While the visual encoder can identify salient actors and objects from background patches from the input clip, only the text semantics provide direct guidance for the video-text model to focus on regions relevant to the task of interest. With node sparsity, the compute cost of subsequent layers is improved to \(O(q^{2}N^{2})\) using dense attention or \(O(q(K_{l}+K_{r})GN)\) with edge sparsity, and the reduction accumulates with multiple sparse layers. ### Hybrid Sparse Transformers We propose to combine _edge_ and _node_ sparsity into a unified framework, **SViTT**, as illustrated in Fig. 2. **SViTT** is built on top of existing video-language transformer architectures that combine separate video and text encoders with a cross-modal transformer. 
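A minimal sketch of this node-sparsification step is given below: tokens are scored by the attention they receive from a [cls] query, either the visual class token as in (10)-(11) or the text class token as in (12)-(13), and only the top \(\lceil qN\rceil\) are kept. The single-head form, the shapes, and the handling of the class token (assumed to be kept outside this function) are illustrative assumptions.

```python
import math
import torch

def prune_tokens(tokens, q_cls, keys, keep_rate):
    """tokens, keys: (B, N, d); q_cls: (B, d). Keep the ceil(keep_rate * N)
    tokens that receive the largest [cls]-to-token attention."""
    B, N, d = tokens.shape
    logits = (keys @ q_cls.unsqueeze(-1)).squeeze(-1) / d ** 0.5   # (B, N) scores <q_cls, k_i>
    scores = logits.softmax(dim=-1)                                # a(z_i) of Eq. (10) / (12)
    n_keep = math.ceil(keep_rate * N)
    idx = scores.topk(n_keep, dim=-1).indices                      # indices of retained tokens
    return torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, d))
```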
The visual encoder blocks perform sparse self-attention in the following steps: * _edge sparsification_: given the random graph \(\mathcal{E}\) of (7)-(9) compute attention weights \(\mathcal{A}_{\mathcal{E}}\), using (5); * _node sparsification:_ select the nodes \(\mathcal{S}_{\mathcal{V}}\) to retain, using (10)-(11) with keep rate \(q<1\)). After video and text embeddings \(\mathbf{z}_{v},\mathbf{z}_{t}\) are derived from their respective encoders, a multimodal transformer is applied to aggregate features across modalities. It follows the design of the text encoder, except for a cross-attention module that is applied after each self-attention block, where text queries \(\mathbf{q}_{t}\) attends to key-value pairs \(\mathbf{k}_{v},\mathbf{v}_{v}\) extracted by the video encoder. The text-to-video attention of (12)-(13) is then used to select the nodes \(\mathcal{S}_{\mathcal{M}}\) to retain, further reducing the number of video tokens for subsequent layers. ## 4 Temporal Sparse Expansion In this section, we introduce a new training strategy for **SViTT**. We motivate for progressive model training with increasing clip length and sparsity in Sec. 4.1, and detail our pretraining procedure in Sec. 4.2. ### Sparsity vs. Clip Length The key insight behind the design of **SViTT** is the _diminishing return_ of clip length. In general, a \(2\times\) longer sequence does not contain twice the semantic information about the video, due to the redundancy of adjacent frames. This leads to a lower percentage of informative patches with denser frame sampling. Due to this redundancy, it is possible to use higher sparsity for models with longer clips. This can be implemented by reducing the keep rate \(q\) of node sparsification, and the percentage of key/value blocks to attend to in edge sparsification (\(K_{l}/N_{b},K_{r}/N_{b}\)). For the latter, since the total number of blocks \(N_{b}=\lceil N/G\rceil\) increases with number of frames \(T\) (assuming a fixed block size \(G\)), it suffices to keep the parameters \(K_{l},K_{r}\) constant. ### Temporal Expansion Pretraining video-text transformers on long clips is time-consuming and leads to suboptimal models. Instead, we follow a learning strategy similar to Frozen [4], where the model is initially pretrained with shorter clips, and the number of frames increases as training progresses. However, when expanding the clip length, we increase the model sparsity to simultaneously 1) account for the redundancy of information, and 2) limit the growth of computational cost. Fig. 3 depicts the expansion process proposed for video-text training. In the initial training stage \(j=1\), a dense video-text model is pretrained on clip length \(T_{1}\). Denoting the sparsity hyperparameters of **SViTT** at stage \(j\) by \(S_{j}=(q_{j},K_{l}^{(j)},K_{r}^{(j)})\), we create a learning curriculum with progressively increasing clip length and sparsity, by enforcing the constraints \[T_{1} <T_{2}<\ldots; \tag{14}\] \[q_{1} >q_{2}>\ldots;\] (15) \[\frac{K_{l}^{(1)}+K_{r}^{(1)}}{T_{1}} >\frac{K_{l}^{(2)}+K_{r}^{(2)}}{T_{2}}>\ldots \tag{16}\] In practice, we use a decreasing token keep rate (15) and keep local and random attention block numbers fixed, i.e. \(K_{l}^{(j)}=K_{l}\) and \(K_{r}^{(j)}=K_{r}\), to satisfy (16). ## 5 Results In this section we present experimental results of **SViTT** on vision-language modeling. We briefly introduce the experimental setup in Sec. 
5.1, and perform several ablation studies on the design choices involving model sparsification and training in Sec. 5.2. We then demonstrate the performance of **SViTT** on various vision-language tasks in Sec. 5.3 and include additional qualitative analysis. ### Experimental Setup **Architecture.** Our implementation of the video-text transformer is based on _Singularity_ [27]. The model has a two-tower structure, with separate encoders for vision and language. The video encoder \(f_{v}\) is a 12-layer BEiT-B [5] initialized with ImageNet weights and inflated for video inputs. This differs from [27], which embeds each frame independently and applies late temporal fusion on extracted features. Figure 3: **Temporal Sparse Expansion.** We propose a multi-stage curriculum for training **SViTT**. At each stage, the node and edge sparsity \(S\) of the video-text transformer increases with clip length \(T\). The text encoder \(f_{t}\) is a pretrained BERT [14] model, whose last 3 layers are modified to implement the multimodal encoder \(f_{m}\), with _cross-modal_ attention based on visual tokens as key-value pairs, which we sparsify as described in Sec. 3.3. **Pre-training.** **SViTT** is pre-trained on 2.5M video-text pairs from the WebVid dataset [4]. Since our goal is to investigate how to improve the effectiveness of the _temporal_ learning of the video modality, we do not train with additional image-text corpora as done in [4, 18, 27]. The [cls] tokens of the video and text encoders are first linearly projected to a joint embedding space, producing feature vectors \(\mathbf{z}_{v}=f_{v}(\mathbf{x}_{v})\) and \(\mathbf{z}_{t}=f_{t}(\mathbf{x}_{t})\), respectively. Following prior work, we use the InfoNCE loss [45] to align these feature vectors. The output of the multimodal encoder \(\mathbf{y}=f_{m}(\mathbf{z}_{v},\mathbf{z}_{t})\) is optimized with video-text matching (VTM) and masked language modeling (MLM) losses commonly found in the VLP literature [12, 18, 27, 30, 31]. **Downstream tasks.** Trained video-text models are evaluated on two multimodal tasks: _text-to-video retrieval_ and _video question answering_. Video retrieval is evaluated on MSR-VTT [58], DiDeMo [2], Charades [50] and Something-Something v2 [20, 27], by top-\(K\) recalls (\(K\in\{1,5,10\}\)) and their numeric average. Question answering is evaluated on MSRVTT-QA [56], ActivityNet-QA [10, 61] and AGQA [22], with top-1 accuracy of the answers. Training details are given in the Appendix. ### Ablation Studies We start by training a video-text transformer with a clip length of 4 frames and _dense_ attention, and measure its zero-shot performance on downstream tasks after _sparsifying_ its video encoder while keeping its weights unchanged. **Edge sparsification.** We first apply edge sparsification with different numbers of local blocks \(K_{l}\), random blocks \(K_{r}\), and block sizes \(G\). As shown in Fig. 3(a), under a limited budget of 6 local or random attention blocks per query token, using 1 local block and 5 random blocks provides the best trade-off. Increasing the local attention window \(K_{l}\) hurts long-term modeling capacity and degrades retrieval, while removing the single local block responsible for diagonal attention also impairs performance. This suggests that while query tokens should always attend to their respective blocks, there is no benefit in attending to neighboring blocks. We thus fix \(K_{l}=1\) for the rest of the experiments. We next vary the block size \(G\) of sparse attention. Fig. 
3(b) shows that larger sizes have stronger retrieval performance, as expected. However, they also make self-attention less sparse (more costly to compute). \(G=56\) provides a good balance between performance and complexity. Tab. 1 summarizes the test performance of two edge sparsity configs: \((K_{l},K_{r},G)=(1,3,56)\) and \((1,5,56)\). While both underperform the dense model, we will later demonstrate that the \begin{table} \begin{tabular}{c c c c c c c} **Attn. blocks** & **Keep rate** & **\# Edges** & & & **DiDeMo** & \\ \(K_{l},K_{r},G\) & \(q_{v},q_{m}\) & (M) & R1 & R5 & R10 & **Mean** \\ \hline — & — & 7.47 & 28.8 & 53.1 & 63.0 & **48.3** \\ \hline \multicolumn{8}{c}{_Edge sparsity_} \\ \hline (1, 3, 56) & — & 2.14 & 20.7 & 41.7 & 50.5 & **37.6** \\ (1, 5, 56) & — & 3.21 & 26.0 & 48.6 & 56.8 & **43.8** \\ \hline \multicolumn{8}{c}{_Node sparsity_} \\ \hline — & (0.7, 1) & 3.99 & 26.9 & 51.9 & 61.3 & **46.7** \\ — & (0.7, 0.1) & 3.97 & 27.6 & 53.1 & 62.9 & **47.9** \\ \hline \multicolumn{8}{c}{_Hybrid sparsity_} \\ \hline (1, 3, 56) & (0.7, 0.1) & 1.48 & 19.9 & 40.5 & 50.6 & **37.0** \\ (1, 5, 56) & (0.7, 0.1) & 2.22 & 24.5 & 47.6 & 58.6 & **43.6** \\ \hline \end{tabular} \end{table} Table 1: **Ablation on Edge and Node Sparsity. We evaluate the same _dense_ video-text model under different sparsification modes.** Figure 4: **Ablation on Edge Sparsity. We evaluate dense model using different local/random blocks \((K_{l},K_{r})\) and block size \(G\).** Figure 5: **Ablation on Node Sparsity. We evaluate the pre-trained dense model using different keep rates \(q\) at test time.** \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{\(T\)} & \multirow{2}{*}{**Spars.**} & \multicolumn{2}{c}{**FLOPs**} & \multicolumn{2}{c}{**Mem.**} & \multicolumn{2}{c}{**DiDeMo**} & \\ & & & (G) & (GB) & R1 & R5 & R10 & **Mean** \\ \hline \multirow{4}{*}{4} & Dense & 139.9 & 0.96 & 28.8 & 53.1 & 63.0 & 48.3 \\ & Edge & 135.9 & 0.80 & 28.3 & 51.9 & 61.4 & 47.2 \\ & Node & 95.8 & 0.61 & **29.9** & **54.8** & **63.9** & **49.6** \\ & Hybrid & **93.9** & **0.54** & 29.3 & 53.7 & 63.2 & 48.8 \\ \hline \multirow{4}{*}{8} & Dense & 291.3 & 3.14 & 29.6 & 54.1 & 64.1 & 49.3 \\ & Edge & 271.8 & 1.57 & 29.8 & 54.9 & 65.4 & 50.1 \\ & Node & 197.6 & 1.77 & 30.4 & 55.7 & 66.0 & 50.7 \\ & Hybrid & **166.4** & **0.92** & **31.0** & **57.2** & **66.3** & **51.5** \\ \hline \multirow{4}{*}{16} & Dense & 627.8 & 10.66 & \multicolumn{4}{c}{**Untrainable**} \\ \cline{2-6} & Edge & 543.6 & 3.02 & **31.6** & 55.1 & 64.6 & 50.5 \\ \cline{1-1} & Node & 370.9 & 4.39 & \multicolumn{4}{c}{**Untrainable**} \\ \cline{1-1} & Hybrid & **296.2** & **1.57** & 31.4 & **57.3** & **67.8** & **52.2** \\ \hline \hline \end{tabular} \end{table} Table 2: **Ablation on Training Sparse Models. We compare the zero-shot performance, inference GFLOPs, and training memory (per sample) of sparse models to the dense attention baseline.** gap can be closed and even reversed by training the sparse model with the proposed temporal expansion curriculum. Node sparsification.We next incorporate node sparsity into the video-text transformer. This includes _visual_ sparsification (keep rate \(q_{v}\)) using the self-attention of video encoder \(f_{v}\) to progressively prune the input tokens3, and _multimodal_ sparsification (keep rate \(q_{m}\)) using the text-to-video attention of cross-modal encoder \(f_{m}\) to further drop visual tokens unrelated to the text query. Fig. 
4(a) shows that the dense model is quite robust to token pruning, even without sparse training. Using visual keep rate \(q_{v}\geq 0.8\) has minimal impact on test results, and performance only starts to drop rapidly at \(q_{v}=0.5\), at which point only \(1/8\) of input tokens are retained after three rounds of sparsification. Footnote 3: We follow [33] to prune visual tokens at layer #4, #7, and #10. Even more surprisingly, the subsequent multimodal sparsification step, using a keep rate of \(q_{m}=0.1\), _improves_ zero-shot performance by \(1\%\). This shows that the \(90\%\) redundant visual tokens not only add unnecessary complexity to the model, but also introduce noise that harms retrieval performance. The fact that the optimal \(q_{m}\) is much lower than \(q_{v}\) also suggests that text modality provides crucial semantic guidance for identifying relevant visual patches, at a much higher accuracy than visual modeling alone. Hybrid sparsification.Combining the best sparse settings for edges and nodes, we obtain the hybrid sparsification strategy for **SViTT**. As shown in Tab. 1, compared to edge sparsity, the introduction of node sparsity (\(q_{v}=0.7,q_{m}=0.1\)) only impacts recall scores marginally, while saving computations on a large portion of visual tokens throughout the network. Training sparse transformers.We next perform _full pretraining_ with the sparse models and compare their performance and efficiency to the dense transformer baseline. Tab. 2 summarizes the results obtained with different input clip lengths and types of sparsity. At 4 frames, edge sparsity has small benefit due to the relatively short sequence length and node sparsity performs the best. However, at 8 frames, we start to see clear advantages to edge sparsity in memory complexity, and both edge and node sparsity outperform the dense transformer baseline. Combining both types of sparsity performs the best, while only requiring 60% the FLOPs and 30% the memory of the dense model. At 16 frames, the dense and node sparsity models are no longer trainable due to their quadratically increasing cost. The models with edge sparsity, however, are able to fit into GPU memory thanks to the linear complexity from sparse attention. A 16-frame **SViTT** with hybrid sparsity requires similar computation to an 8-frame dense model and only half of its training memory, while achieving 3% higher recall. Progressive training.We next study the impact of temporal sparse expansion on the training of **SViTT** models on longer clips. Tab. 3 compares the performance of models trained using temporal expansion (i.e. initialized from checkpoints pretrained on fewer frames and lower sparsity) to standard single-stage training. For single-stage training, performance does not improve substantially with clip length. This is in contrast to the proposed sparse expansion, where using \(8\) instead or \(4\) frames results in a gain of \(2.7\%\). This suggests that the models have learned to exploit the temporal relationships between video frames. For a given clip length, sparse expansion also substantially improves upon single stage performance, from \(2.8\) points for \(T=4\) to \(4.5\) points for \(T=16\). In fact, sparse expansion training with a shorter clip length (e.g. \(4\) frames) can outperform single stage training with a larger length (\(16\) frames). ### Main Results We compare **SViTT** to state-of-the-art models in text-to-video retrieval and video question answering. 
Text-to-video retrieval.Video-text retrieval is evaluated under zero-shot and fine-tuning settings. Tab. 4 shows the zero-shot results on MSR-VTT and DiDeMo. Compared to our reproduced models of Singularity on WebVid-2M, which aggregate frame-level features using a temporal transformer encoder, the spatiotemporal transformer \begin{table} \begin{tabular}{l l l l l l l} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**PT**} & \multirow{2}{*}{\(T\)} & \multicolumn{2}{c}{**MSR-VTT**} & \multicolumn{2}{c}{**DiDeMo**} \\ & & & R1 & Mean & R1 & Mean \\ \hline VideoCLIP [57] & 100M & — & 10.4 & 20.9 & 16.6 & — \\ Frozen [4] & 5M & 4 & 23.2 & 41.5 & 21.1 & 41.1 \\ ALPRO [29] & 5M & 8 & 24.1 & 41.4 & 23.8 & 43.0 \\ VIOLET [18] & 5M & 4 & 25.9 & 45.0 & 23.5 & 44.4 \\ Singularity [27] & 5M & 1 & _28.4_ & _46.0_ & _36.9_ & _55.8_ \\ & & 1 & 21.1 & 38.7 & 23.3 & 40.8 \\ Singularity* & 2M & 4 & 24.4 & 40.0 & 26.4 & 44.1 \\ & & 8 & 24.3 & 41.0 & 25.8 & 45.5 \\ \hline \hline \multirow{2}{*}{**SViTT**} & Dense & \multirow{2}{*}{2M} & \multirow{2}{*}{8} & **26.0** & 43.6 & 29.6 & 49.3 \\ & Hybrid & & & 25.4 & **43.8** & **31.0** & **51.5** \\ \hline \hline \end{tabular} \end{table} Table 4: **Zero-shot Text-to-video Retrieval. Results reported in prior works are marked in gray; * indicates our reproduced results. PT = # video-text pairs for pre-training, \(T\) = # frames per clip.** \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{**Frames**} & \multirow{2}{*}{**Sparsity**} & \multicolumn{4}{c}{**DiDeMo**} \\ & \(T\) & \(T_{S}\) & R1 & R5 & R10 & **Mean** \\ \hline \multirow{2}{*}{\(4\)} & \multirow{2}{*}{\(4.80\)} & \(28.0\) & 50.7 & 59.2 & 46.0 \\ & & \(1_{0}\to 4.80\) & **29.3** & **53.7** & **63.2** & **48.8** \\ \hline \multirow{2}{*}{\(8\)} & \multirow{2}{*}{\(8\)} & \(8.91\) & 27.3 & 51.4 & 63.8 & 47.5 \\ & & \(4.80\to 8.91\) & **31.0** & **57.2** & **66.3** & **51.5** \\ \hline \multirow{2}{*}{\(16\)} & \multirow{2}{*}{\(4.80\to 8.91\to 16.96\)} & \(16.96\) & 27.5 & 52.4 & 63.2 & 47.7 \\ & & \(4.80\to 8.91\to 16.96\) & **31.4** & **57.3** & **67.8** & **52.2** \\ \hline \hline \end{tabular} \end{table} Table 3: **Ablation on Temporal Sparse Expansion. Sparsity \(T_{S}\) indicates training on \(T\) frames while removing \(S\times 100\%\) attention edges of dense transformer.** of **SViTT** produces stronger results on both downstream datasets. This highlights the importance of temporal modeling in earlier layers of video-text transformers. **SViTT** with hybrid sparsity has similar performance to the dense model on MSR-VTT but significantly outperforms in on DiDeMo, which contains longer videos with localized activities. For Charades and SSv2, we evaluate text-to-video retrieval with fine-tuning, as shown in Tab. 5. Both datasets are action-centric, posing a greater challenge to the temporal reasoning of video-text models. **SViTT** with hybrid sparsification dominates the dense variant by \(\sim 3\%\) on both datasets, a more substantial gap than observed for MSR-VTT and DiDeMo. This confirms our hypothesis that exploiting node and edge sparsity reduces the dependency of models on contextual regions, forcing them to focus on the temporal dynamics of person and objects in the foreground. Video question answering.We next evaluate the cross-modal modeling of **SViTT** on MSRVTT-QA, ActivityNet-QA and AGQA. As shown in Tab. 
6, the hybrid sparsity version of **SViTT** outperforms the dense transformer baseline on all three datasets, thanks to its more efficient temporal modeling. On MSRVTT-QA, the accuracy gap is small between dense and sparse transformer (0.3%), and **SViTT** marginally underperforms baselines pretrained with fewer number of frames (MERLOT [63], VIOLET [18]). This is likely due to the nature of MSRVTT, which consists of short video clips and questions biased towards spatial cues, allowing temporal modeling little benefit over spatial transformers pretrained on massive image & video data. On ActivityNet-QA and AGQA, which both contain longer clips and temporally challenging questions, the sparse modeling of **SViTT** proves advantageous, with 0.9% and 2.3% boost in accuracy respectively, beating all baseline methods. Qualitative analysis.To show how **SViTT** efficiently identifies and concentrates its computation on informative spatiotemporal regions of the input clips, we visualize the outcome of node sparsification in Fig. 6. Using visual sparsification in video encoder \(f_{v}\), **SViTT** learns to isolate foreground entities from the majority of background patches, enabling the model to perform sparse video-text inference on longer temporal context. Cross-modal attention in multimodal encoder \(f_{m}\) provides an even stronger signal for isolating the regions of interest of each video clip, validating the importance of text semantics in visual sparsification. ## 6 Conclusion This work introduced **SViTT**, a sparse video-text transformer for efficient reasoning over a long temporal context. By interpreting visual transformers as graph networks, we proposed to optimize their _edge_ and _node_ sparsity, using a combination of sparse block attention, visual token pruning and text-guided token selection. We further introduced a temporal expansion strategy for training **SViTT**, which aims to gradually increase model sparsity with clip length. **SViTT** showed strong performance and efficiency compared to dense transformers, with a larger gap when frame number increases. On video retrieval and question answering benchmarks, **SViTT** achieved state-of-the-art results using only video data, without extra image-text pretraining. Acknowledgements.This work was funded in part by NSF award IIS-2041009. \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Method** & **PT** & \(T\) & **MSRVTT** & **ANet** & **AGQA** \\ \hline HME [17] & — & 20 & 33.0 & — & 47.7 \\ HCRN [26] & — & 128 & 35.5 & — & 47.4 \\ ClipBERT [28] & 0.2M & 16 & 37.4 & — & — \\ ALPRO [29] & 5M & 16 & 42.1 & — & — \\ Just Ask [59] & 69M & 640 & 41.5 & 38.9 & — \\ MERLOT [63] & 180M & 5 & 43.1 & 41.4 & — \\ VIOLET [18] & 185M & 4 & **43.9** & — & 49.2 \\ Singularity [27] & 5M & 1 & 42.7 & 41.8 & — \\ \hline \hline \end{tabular} \begin{tabular}{l c c c c} \hline \hline **SViTT** & Dense & **SViTT** & **SViTT** \\ \hline \hline \end{tabular} \end{table} Table 6: **Video Question Answering.** Figure 6: **Qualitative Results.** **SViTT** isolates informative regions from background patches to facilitate efficient temporal reasoning. 
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Method** & **PT** & \(T\) & \begin{tabular}{c} **Charades** \\ R1 \\ \end{tabular} & \begin{tabular}{c} **SSv2** \\ **Mean** \\ \end{tabular} & \begin{tabular}{c} **SSv2** \\ **Mean** \\ \end{tabular} \\ \hline Frozen [4] & 5M & 32 & 11.9 & 25.1 & — & — \\ CLIP4Clip [39] & 400M & 12 & 13.9 & 27.1 & 43.1 & 65.1 \\ ECLIPSE [34] & 400M & 32 & 15.7 & 30.3 & — & — \\ MKTVR\({}^{\dagger}\)[40] & 400M & 42 & 16.6 & 34.7 & — & — \\ Singularity [27] & 5M & 1 & — & — & 36.4 & 58.9 \\ & & 4 & — & — & 44.1 & 66.6 \\ \hline \hline \end{tabular} \begin{tabular}{l c c c c} \hline \hline **SViTT** & Dense & **SViTT** \\ \hline \hline \end{tabular} \begin{tabular}{l c c c} \hline \hline **Method** & **PT** & \(T\) & **MSRVTT** & **ANet** & **AGQA** \\ \hline HME [17] & — & 20 & 33.0 & — & 47.7 \\ HCRN [26] & — & 128 & 35.5 & — & 47.4 \\ ClipBERT [28] & 0.2M & 16 & 37.4 & — & — \\ ALPRO [29] & 5M & 16 & 42.1 & — & — \\ Just Ask [59] & 69M & 640 & 41.5 & 38.9 & — \\ MERLOT [63] & 180M & 5 & 43.1 & 41.4 & — \\ VIOLET [18] & 185M & 4 & **43.9** & — & 49.2 \\ Singularity [27] & 5M & 1 & 42.7 & 41.8 & — \\ \hline \hline \end{tabular} \begin{tabular}{l c c c} \hline \hline **SViTT** & Dense & **SViTT** \\ \hline \hline \end{tabular} \end{table} Table 5: **Text-to-video Retrieval with Fine-tuning.**
2307.11808
Automatic Data Augmentation Learning using Bilevel Optimization for Histopathological Images
Training a deep learning model to classify histopathological images is challenging, because of the color and shape variability of the cells and tissues, and the reduced amount of available data, which does not allow proper learning of those variations. Variations can come from the image acquisition process, for example, due to different cell staining protocols or tissue deformation. To tackle this challenge, Data Augmentation (DA) can be used during training to generate additional samples by applying transformations to existing ones, to help the model become invariant to those color and shape transformations. The problem with DA is that it is not only dataset-specific but it also requires domain knowledge, which is not always available. Without this knowledge, selecting the right transformations can only be done using heuristics or through a computationally demanding search. To address this, we propose an automatic DA learning method. In this method, the DA parameters, i.e. the transformation parameters needed to improve the model training, are considered learnable and are learned automatically using a bilevel optimization approach in a quick and efficient way using truncated backpropagation. We validated the method on six different datasets. Experimental results show that our model can learn color and affine transformations that are more helpful to train an image classifier than predefined DA transformations, which are also more expensive as they need to be selected before the training by grid search on a validation set. We also show that similarly to a model trained with RandAugment, our model has also only a few method-specific hyperparameters to tune but is performing better. This makes our model a good solution for learning the best DA parameters, especially in the context of histopathological images, where defining potentially useful transformation heuristically is not trivial.
Saypraseuth Mounsaveng, Issam Laradji, David Vázquez, Marco Perdersoli, Ismail Ben Ayed
2023-07-21T17:22:22Z
http://arxiv.org/abs/2307.11808v1
# Automatic Data Augmentation Learning using ###### Abstract Training a deep learning model to classify histopathological images is challenging, first, because of the color and shape variability of the cells and tissues, and second, because of the reduced amount of available data, which does not allow proper learning of those variations. Variations can come from the image acquisition process, for example, due to different cell staining protocols or tissue deformation. To tackle this challenge, Data Augmentation (DA) can be used during training to generate additional samples by applying transformations to existing ones. Those samples will help the model to become invariant to those color and shape transformations. The problem with DA is that it is not only dataset-specific but it also requires domain knowledge, which is not always available. Without this knowledge, selecting the right transformations can only be done using heuristics or through a computationally demanding search. To address this, we propose in this work an automatic DA learning method. In this method, the DA parameters, i.e. the transformation parameters needed to improve the model training, are considered learnable parameters and are learned automatically using a bilevel optimization approach in a quick and efficient way using truncated backpropagation. We validated the method on six different datasets of histopathological images. Experimental results show that our model can learn color and affine transformations that are more helpful to train an image classifier than predefined DA transformations. Predefined DA transformations are also more expensive as they need to be selected before the training by grid search on a validation set. We also show that similarly to a model trained with a RandAugment-based framework, our model has also only a few method-specific hyperparameters to tune but is performing better. This makes our model a good solution for learning the best data augmentation parameters, especially in the context of histopathological images, where defining potentially useful transformation heuristically is not trivial. Our code is available at [https://github.com/smounsav/bilevel_augment_histo](https://github.com/smounsav/bilevel_augment_histo). CNN image classification data augmentation bi-level optimization truncated backpropagation ## 1 Introduction Deep learning-based models have proved effective for the analysis of histopathological images (Bejnordi et al., 2016; Litjens et al., 2017; Shen et al., 2017; Ker et al., 2018). In the context of image classification, one hurdle to a good generalization is the color and shape variability of the cells and tissues in the images. Those variations can be inherent to the image acquisition process. More precisely, color variations can come from the cell staining, which is done to make the cells visible to the human eyes, whereas shape variations can come from tissue deformation. As those variations have a major impact on the performance of the models, addressing this issue has been an active area of research (Ciompi et al., 2017; Tellez et al., 2019; Ataky et al., 2020; Faryna et al., 2021; de Matos et al., 2021; Wagner et al., 2021; Garcea et al., 2022). To address color variations problems, two main directions can be followed: stain normalization and data augmentation. Stain normalization consists in altering the color space of the input images so that the difference between the color statistics of the train images and the color statistics of the test images is reduced. 
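To illustrate this first option concretely, the following is a minimal sketch of a simple statistics-matching normalization (a deliberately simplified stand-in: matching per-channel means and standard deviations is assumed here as a proxy for dedicated stain-normalization methods, and the function name and reference statistics are ours).

```python
import numpy as np


def match_color_statistics(image, target_mean, target_std, eps=1e-6):
    """Shift and scale each color channel of `image` so that its mean and
    standard deviation match reference statistics, e.g. computed on the
    training set.

    image:       float array of shape (H, W, 3), values in [0, 1]
    target_mean: array of shape (3,)
    target_std:  array of shape (3,)
    """
    mean = image.reshape(-1, 3).mean(axis=0)
    std = image.reshape(-1, 3).std(axis=0)
    normalized = (image - mean) / (std + eps) * target_std + target_mean
    return np.clip(normalized, 0.0, 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((256, 256, 3))
    out = match_color_statistics(img,
                                 target_mean=np.array([0.8, 0.6, 0.7]),
                                 target_std=np.array([0.15, 0.20, 0.15]))
    print(out.mean(axis=(0, 1)))  # close to the target means, up to clipping
```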
Data augmentation consists in creating new images from existing samples with different transformations so that the model can learn transformation invariances. In the context of histopathological image classification, data augmentation is particularly interesting as it can address the problem of both color and shape variations by teaching the classification model to be invariant to those two types of transformations. While data augmentation has been explored extensively for natural images as shown in Shao et al. (2022), most methods proposed rely on the selection of data augmentation parameters based on heuristics. Selecting meaningful transformations and the right amplitude requires prior or expert knowledge, which is not always available, especially in the medical images field (Tellez et al., 2019). Without the adequate knowledge, selecting the right transformations is not trivial, and selecting the wrong transformations can lead to a degradation of the model performance as shown in Chen et al. (2020). To tackle the problem of selecting the right transformations, automatic data augmentation methods based on bilevel optimization like Cubuk et al. (2019) have been proposed for natural images. Those methods can be computationally expensive as for each new set of parameters, the model in the inner loop needs to be fully trained till convergence. Those methods were improved using a gradient-based approach like in Hataya et al. (2019); Lin et al. (2019), or more recently Hataya et al. (2022). In our work, as we can see in Fig.1, we do not need to train the classifier in the inner loop until convergence for each different set of data augmentation parameters to test. The augmenter network generating the right data augmentation is trained at the same time as the classifier by alternating between the outer and the inner loop at each iteration. To address the problem of selecting the right data augmentation transformations, we propose in this work to extend the method in Mounsaveng et al. (2021) to histopathological images. The method consists in training an image classifier while learning the best data augmentation transformations in a bi-level optimization framework. The classifier is trained in the inner loop while the best data augmentation parameters are learned in the outer loop. To make the method computationally efficient, the gradient of the validation loss used to update the data augmentation parameters is estimated using truncated backpropagation and with only one iteration of the inner loop. We validated this method on six different histopathological images datasets. Experimental results show that our method yields a better final accuracy than predefined data augmentation found by grid search on a validation set. During training, it finds transformations in the color and affine transformation space that help the most the learning and is also less expensive to train than predefined data augmentation. As we can see in Fig.1, the best augmentation parameters are learned at the same time as the classifier is trained by alternating between the outer and inner loop at each iteration whereas when we do a grid search, the classifier needs to be trained till convergence in the inner loop for each set of different data augmentation parameters to test. What is also interesting to note is that the learned transformations do not hurt the model performance when useful transformations are more challenging to find. 
Moreover, similarly to a model trained with RandAugment, our model requires only a few model-specific hyperparameters to tune (in our case, the hyperparameters of the augmenter network) but shows a better final classification accuracy. Our intuition is that even if RandAugment-based methods have proven efficient, learning the best transformations along the training can yield a better classification performance, as the timing when transformations are presented to the model is important, as shown in Golatkar et al. (2019). Figure 1: **Model training.** In an epoch, the classifier parameters \(\omega\) are trained on the training set in the inner loop in the standard supervised way. Then, in the outer loop, the parameters of the augmentation network generating the data augmentation parameters are trained on the validation set using an online differentiable method. **Contributions.** We summarize our contributions as follows: * We successfully extend an automatic data augmentation learning method to learn useful color and affine augmentations in the context of histopathological images. As this method is differentiable, we can efficiently optimize a large transformation network that learns to perform data augmentation automatically. * We show that our proposed model learns different sets of transformations and achieves comparable or better results than hand-defined transformations or RandAugment-based methods on six different datasets. We also show that our model never learns transformations that hurt the model performance and thus avoids the problem of badly hand-chosen transformations.
Some solutions to reduce the computational cost were proposed in follow-up works. Fast AutoAugment (Lim et al., 2019) optimizes the search space by matching the density between the training set and the augmented data. Alternatively, Population Based Augmentation (PBA) (Ho et al., 2019) focuses on learning the optimal augmentation schedule rather than only the transformations. However, even if these approaches reduce the computational cost of AutoAugment, they do not leverage gradient information. Faster AutoAugment (Hataya et al., 2019) does this by combining AutoAugment with a GAN discriminator and considering transformations as differentiable functions. OHL-Auto-Aug (Lin et al., 2019) uses an online bilevel optimization approach and the REINFORCE algorithm on an ensemble of classifiers to estimate the gradient of the validation loss and learn an augmentation probability distribution. RandAugment (Cubuk et al., 2019) goes further by showing that the same performance level as AutoAugment can be obtained by randomly selecting transformations from a predefined pool and just tuning the number of transformations to use and a global (same for all transformations) magnitude factor. However, this approach also requires prior knowledge of useful transformations. In histopathological images, Faryna et al. (2021) use the RandAugment method with a set of transformations extended with some specific color transformations. This leads to an improved performance of the classifier. Our model is more efficient than search-based methods as the data augmentation parameters are updated at each training iteration using the gradient of the validation loss obtained in the inner loop. This gradient is estimated using truncated backpropagation, which removes the need to train the model until convergence for each set of evaluated transformations. ### Hyperparameter Learning Our work has some roots in the hyperparameter optimization field, as data augmentation parameters can be considered as hyperparameters to tune. Hyperparameter tuning is essential to obtain the optimal performances when training neural networks on a given dataset. Classic approaches assume that the learning model is a black-box and use methods like grid search, random search (Bergstra et al., 2011, 2013), Bayesian optimization (Snoek et al., 2012), or a tree-search approach (Hutter et al., 2011). These approaches are simple but expensive because they repeat the optimization from scratch for each sampled value of the hyperparameters and so are only applicable to low dimensional hyperparameter spaces. A different line of research is to leverage the gradient of these (continuous) hyperparameters (or hyper-gradients) to perform the hyper-optimization. The first work proposing this idea (Bengio, 2000) shows that the implicit function theorem can be used to this aim. This idea was developed more recently in Bertrand et al. (2020). Domke (2012) was the first work to propose a gradient-based method using the bilevel optimization approach proposed in Colson et al. (2007) to learn hyperparameters. Using a bilevel optimization approach to train a neural network is challenging, as usually there is no closed-form expression of the function learned in the inner loop (Section 3). To address this, Maclaurin et al. (2015) and later Franceschi et al. (2017) proposed methods to reverse the forward pass to compute the gradient of the validation loss. 
However, these methods are applicable only when the number of hyperparameters and the complexity of the models are limited due to the memory needed to save the intermediate steps. Another approach to address the computational hurdle in the inner loop is to calculate an approximation of the gradient like in Pedregosa (2016) Luketina et al. (2016) or MacKay et al. (2019). Our method differentiates from those by using truncated backpropagation to estimate the gradient of the validation loss. Finally, note that hyperparameter optimization presents some similarities to meta-learning as shown in Franceschi et al. (2018). For instance, in MAML (Finn et al., 2017), a shared model initialization is learned to minimize the validation loss and therefore improve the generalization capabilities of the model. More recently, Hataya et al. (2022) can be positioned at the intersection of AutoAugment and meta-learning based approaches. ## 3 Proposed Data Augmentation Method Consider a labeled set \(\mathcal{X}:=\{x_{i},y_{i}\}_{i=1}^{N}\), where \(x_{i}\) is an input image, \(y_{i}\) the associated class label, N the number of samples and \(\hat{\mathcal{X}}\) the set of transformed images. We formulate the problem of identifying effective data augmentation transformations as a bilevel optimization problem. In this setup, the augmenter \(\mathcal{A}_{\theta}:\mathcal{X}\rightarrow\hat{\mathcal{X}}\) is parametrized by \(\theta\) and is used to minimize the loss \(\mathcal{L}\) on the validation data \(\mathcal{X}_{val}\) in the outer loop. In the inner loop, the classifier parameters \(\omega\) are optimized on the training data \(\mathcal{X}_{tr}\) in the standard supervised way. This formulation can be written as: \[\theta^{*} =\operatorname*{arg\,min}_{\theta}\mathcal{L}(\mathcal{X}_{val}, \omega^{*}) \tag{1}\] \[s.t. \omega^{*} =\operatorname*{arg\,min}_{\omega}\mathcal{L}(\mathcal{A}_{\theta }(\mathcal{X}_{tr}),\omega). \tag{2}\] While optimizing a few hyperparameters on the validation data is feasible with black-box approaches such as grid and random search Bergstra and Bengio (2012) or Bayesian optimization Snoek et al. (2012), it is not efficient. With bilevel optimization, our aim is to efficiently learn an entire neural network \(\mathcal{A}_{\theta}\) (possibly with thousands of parameters \(\theta\)) which defines a distribution of transformations that should be applied on the training data to improve generalization. Gradient descent was shown to be an efficient method for optimizing parameters of large networks. In problems such as architecture search (Liu et al., 2019), the parameters can be directly optimized with gradient descent (or second order methods) against the training and Figure 2: **Computational graph of our model at iteration \(t=J\). \(K\) is the number of gradient unfolding steps, and J is the number of inner loop iterations after which \(\theta\) gets updated. The case where K=J=T (T being the iteration of the classifier convergence) is the complete bilevel optimization as in Eq.1 whereas K=J=1 corresponds to updating \(\theta\) at each mini-batch (\(K=1\)), using only one step of gradient unfolding (\(J=1\)).** validation data. However, this is not the case for data augmentation. The reason is that the transformation network \(\mathcal{A}_{\theta}\) is optimized to maximize the validation score, but applies transformations only on the training set. Therefore, first order methods would not work. 
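For contrast, treating Eqs. (1)-(2) as a black-box search makes the cost explicit: every candidate \(\theta\) pays for its own complete inner-loop training run. A minimal sketch with toy placeholder functions of our own (not the method proposed here) is:

```python
import itertools


def train_to_convergence(train_fn, theta):
    """Inner loop of Eq. (2): train the classifier from scratch on data
    augmented with parameters `theta` and return the trained weights."""
    return train_fn(theta)  # in practice: one full, expensive training run


def black_box_search(train_fn, val_loss_fn, candidate_thetas):
    """Outer loop of Eq. (1) solved by exhaustive search over candidates."""
    best_theta, best_loss = None, float("inf")
    for theta in candidate_thetas:
        weights = train_to_convergence(train_fn, theta)
        loss = val_loss_fn(weights)
        if loss < best_loss:
            best_theta, best_loss = theta, loss
    return best_theta


if __name__ == "__main__":
    # toy stand-ins: "training" just returns theta, and the validation loss
    # happens to prefer a hue jitter of 0.1 and a rotation range of 15 degrees
    grid = list(itertools.product([0.05, 0.1, 0.2], [5, 15, 30]))
    best = black_box_search(train_fn=lambda theta: theta,
                            val_loss_fn=lambda w: (w[0] - 0.1) ** 2 + (w[1] - 15) ** 2,
                            candidate_thetas=grid)
    print(best)  # (0.1, 15)
```

With even a handful of transformation types and magnitudes, the number of full training runs grows multiplicatively, which is precisely what the gradient-based approach developed below avoids.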
The aim of data augmentation is to introduce transformations during the training phase that can make the model invariant or partially invariant to any transformations that can occur at test time. If we optimize the transformation network directly on the validation data, the model will simply select trivial solutions such as the identity transformation. This approach has been used for object localization Jaderberg et al. (2015) and it did not improve the model generalization performance as much as data augmentation. To solve this issue, new methods relied on reinforcement learning instead of gradient descent to learn effective data augmentation (Cubuk et al., 2019; Lim et al., 2019; Ho et al., 2019). In this work, we show that in the case of a differentiable augmenter \(\mathcal{A}_{\theta}\), there is a simple, efficient way to find optimal data transformations based on gradient descent that generalize well on validation data. We formulate our problem as an approximation to bilevel optimization by using truncated backpropagation as it allows our method to: i) efficiently estimate a large number of parameters to generate the optimal data augmentation transformations by gradient descent; ii) obtain an online estimation of the optimal data augmentation during the different phases of the training, which can also be beneficial (Golatkar et al., 2019); iii) change the training data to adapt to different validation conditions as in supervised domain adaptation. Although approximate bilevel optimization has already been proposed for hyperparameter optimization (Shaban et al., 2019; Franceschi et al., 2018, 2017), in this paper we show that it can be used for training a large, complex model (the augmenter \(\mathcal{A}_{\theta}\) network) to learn an effective distribution of transformations. ### Approximate Online Bilevel Optimization As shown in Eq. 1 and 2, the problem of finding the optimal data augmentation transformations \(\mathcal{A}_{\theta}\) can be cast as a bilevel optimization problem. This problem can be solved by iteratively solving Eq. 2 to find the optimal network weight \(\omega^{*}\), given the parameters of the transformation \(\theta\) and then updating \(\theta\): \[\theta\leftarrow\theta-\eta_{\theta}\nabla_{\theta}\mathcal{L}(\mathcal{X}_{ val},\omega^{*}) \tag{3}\] where \(\eta_{\theta}\) is the learning rate used to train the augmenter network. However, as the augmentations are to be applied only on the training dataset and not on the validation set, calculating \(\frac{\partial\mathcal{L}(\mathcal{X}_{val},\omega^{*})}{\partial\theta}\) is not trivial. To enable this calculation, we use the fact that the weights \(\omega\) of the network are shared between training and validation data and use the chain rule to differentiate the validation loss \(\mathcal{L}(\mathcal{X}_{val},\omega^{*})\) with respect to the hyperparameters \(\theta\). In other words, instead of using a very slow black-box optimization for \(\theta\), we can exploit gradient information because the model parameters \(\omega^{*}\) are shared between the validation and the training loss. 
We define the gradient of the validation loss with respect to \(\theta\) as follows: \[\begin{split}\nabla_{\theta}\mathcal{L}(\mathcal{X}_{val}, \omega^{*})&=\frac{\partial\mathcal{L}(\mathcal{X}_{val},\omega^ {*})}{\partial\theta}\\ &=\frac{\partial\mathcal{L}(\mathcal{X}_{val},\omega^{*})}{ \partial\omega^{*}}\frac{\partial\omega^{*}}{\partial\theta}\end{split} \tag{4}\] By defining \(\mathcal{G}^{(t)}\) as the gradient of the training loss at iteration \(t\): \[\mathcal{G}^{(t)}=\nabla_{\omega}\mathcal{L}(\mathcal{A}_{\theta}(\mathcal{X}_{ tr}),\omega^{t}) \tag{5}\] we can write \(\frac{\partial\omega^{*}}{\partial\theta}\) in Eq. 4 as: \[\frac{\partial\omega^{*}}{\partial\theta}=\sum_{i=1}^{T-1}\frac{\partial \omega^{(T)}}{\partial\omega^{(i)}}\frac{\partial\omega^{(i)}}{\partial \mathcal{G}^{(i-1)}}\frac{\partial\mathcal{G}^{(i-1)}}{\partial\theta} \tag{6}\] where T is the iteration when the classifier converges. As \(\omega^{*}\) represents the model weights at training convergence, they depend on \(\theta\) for each iteration of gradient descent. Thus, to compute \(\frac{\partial\omega^{*}}{\partial\theta}\), one has to back-propagate throughout the entire \(T\) iterations of the training cycle. An example of this approach is in Maclaurin et al. (2015). This approach is feasible only for small problems due to the large requirements in terms of computation and memory. However, as optimizing \(\omega^{*}\) is an iterative process, instead of computing \(\frac{\partial\omega}{\partial\theta}\) only at the end of the training loop, we can estimate it at every iteration \(t\): \[\frac{\partial\omega^{*}}{\partial\theta}\approx\frac{\partial\omega^{(t)}}{ \partial\theta^{(t)}}=\sum_{i=1}^{t}\frac{\partial\omega^{(t)}}{\partial \omega^{(i)}}\frac{\partial\omega^{(i)}}{\partial\mathcal{G}^{(i-1)}}\frac{ \partial\mathcal{G}^{(i-1)}}{\partial\theta^{(i)}}, \tag{7}\] This procedure corresponds to dynamically changing \(\theta\) during the training iterations (thus it becomes \(\theta^{(t)}\)) to minimize the current validation loss based on the training history. Although this formulation is different from the original objective function, adapting the data augmentation transformations dynamically with the evolution of the training process can improve generalization performance (Golatkar et al., 2019). This relaxation is often used in constrained optimization for deep models, in which constraints are reformulated as penalties and their gradients are updated online, without waiting for convergence, to save computation (Pathak et al., 2015). However, in our case, we cannot write the bilevel optimization as a single unconstrained formulation in which the constraint in \(\omega^{*}\) is summed with a multiplicative factor that is maximized (i.e., Lagrange multipliers), because the upper level optimization should be performed only on \(\theta\), while the lower level optimization should be performed only on \(\omega\). Nonetheless, even with this relaxation, estimating \(\frac{\partial\omega^{*}}{\partial\theta}\) still remains a challenge as it does not scale well. Indeed, the computational cost of computing \(\frac{\partial\omega^{(t)}}{\partial\theta^{(t)}}\) grows with the number of iterations \(t\) as shown in Eq. 7. 
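A minimal PyTorch sketch of such an unrolled hyper-gradient, restricted to the most recent inner step so that the cost per update stays constant, is given below (this is the shortest instance of Eq. (7); the toy linear classifier and the additive "augmentation" offsets are our own illustration, not the actual model).

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# toy augmenter parameters (theta) and a linear classifier (omega) on 2-D inputs
theta = torch.zeros(2, requires_grad=True)      # additive "augmentation" offsets
omega = torch.randn(2, 3, requires_grad=True)   # classifier weights, 2 features -> 3 classes

x_tr, y_tr = torch.randn(128, 2), torch.randint(0, 3, (128,))
x_val, y_val = torch.randn(128, 2), torch.randint(0, 3, (128,))
eta_w, eta_theta = 0.1, 0.05

for step in range(200):
    # inner step (Eq. 2): one SGD update of omega on augmented training data,
    # kept inside the autograd graph (create_graph=True) so it depends on theta
    train_loss = F.cross_entropy((x_tr + theta) @ omega, y_tr)
    grad_w = torch.autograd.grad(train_loss, omega, create_graph=True)[0]
    omega_unrolled = omega - eta_w * grad_w

    # outer step (Eq. 1): validation loss evaluated with the updated weights,
    # then differentiated with respect to the augmentation parameters theta
    val_loss = F.cross_entropy(x_val @ omega_unrolled, y_val)
    grad_theta = torch.autograd.grad(val_loss, theta)[0]

    with torch.no_grad():                        # plain SGD on both parameter sets
        theta -= eta_theta * grad_theta
        omega.copy_(omega_unrolled.detach())

print(theta)  # augmentation offsets adapted to the validation loss
```

Keeping only the most recent inner step inside the autograd graph is exactly the truncation formalized next; in the actual model, \(\omega\) are the weights of the image classifier and \(\theta\) those of the augmenter network of Sec. 3.2.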
To make the gradient computation constant at each iteration we use truncated backpropagation similarly to what is commonly used in recurrent neural networks (Williams and Peng, 1990): \[\frac{\partial\omega^{(t)}}{\partial\theta}\approx\sum_{i=t-K}^{t}\frac{ \partial\omega^{(t)}}{\partial\omega^{(i)}}\frac{\partial\omega^{(i)}}{ \partial\mathcal{G}^{(i-1)}}\frac{\partial\mathcal{G}^{(i-1)}}{\partial \theta^{(i)}}, \tag{8}\] where \(K\) represents the number of gradient unfolding that we use. Fig. 2b. shows the computational graph used for this computation. Additionally, as Williams and Peng (1990), we consider a second parameter \(J\) which defines the number of inner loop training iterations after which \(\theta\) is updated, in other words how often the computation of the gradients of \(\theta\) is performed. The situation where \(K=J=T\) is the exact bilevel optimization as shown in Eq. 1 while \(K=J=1\) corresponds to updating \(\theta\) at each iteration, in our case mini-batch (\(K=1\)), using only one step of gradient unfolding (\(J=1\)). A theoretical analysis of the convergence of this approach is presented in Shaban et al. (2019). ### Augmenter Network In this work, we use an augmenter network that can learn two types of transformations: geometrical and color. We use the transformation model of spatial transformer networks (Jaderberg et al., 2015), but for data augmentation instead of data alignment. Thus, as illustrated in Fig. 1, the augmenter is composed of a module that generates a set of transformation parameters followed by a module that applies the generated transformations to the original image. Note that the learned transformations are not conditioned on the input image but defined only based on random noise. #### 3.2.1 Geometrical In our experiments, we consider scenarios where the augmenter network learns affine transformations. The choice of this kind of transformation is motivated by the fact that tissues are deformed during the image acquisition process. By considering affine transformations in our learned data augmentation, we aim to train the model to become invariant to those geometrical deformations. For affine transformations, the augmenter network receives as input a random noise vector and generates a 2x3 matrix of values representing a variation from the identity transformation. #### 3.2.2 Color We also consider scenarios where the augmenter learns color transformations. Color augmentations are important as they can help the trained model to become invariant to color perturbations appearing during the cell staining process. Color transformations considered are: contrast, brightness and in the HSV space hue and saturation. For color transformations, the augmenter receives as input a random noise vector and generates a single value representing a variation for each color transformation. In our implementation, we use the kornia library (Riba et al., 2019), which follows the specifications of Szeliski (2010). For contrast, the value learned is a non-negative factor applied to the actual color values. 1 represents the initial image whereas values tending to 0 mean a black-and-white image. If we consider the variables \(r\), \(g\), and \(b\) representing the values of the red, green, and blue colors of the images and \(cf\) the contrast factor learned by our network, the new RGB values are obtained using the update rule: \[(r,g,b)\gets clamp((r,g,b)\cdot cf,0,1) \tag{9}\] For brightness, the value learned represents a shift applied to the actual color values. 
0 represents the initial image. If we consider the variables \(r\), \(g\), and \(b\) representing the values of the red, green, and blue colors of the images and \(bs\) the brightness shift learned by our network, the new RGB values are obtained using the update rule: \[(r,g,b)\gets clamp((r,g,b)+bs,0,1) \tag{10}\] In the case of saturation, the value learned by the augmenter is a non-negative factor applied to the actual saturation value. A value of 1 represents the original image whereas 0 means a black-and-white image. If we consider the variables \(h\), \(s\), and \(v\) representing the values of the hue, saturation, and value of the images and \(sf\) the saturation factor learned by our network, the new HSV values are obtained using the update rule: \[(h,s,v)\gets clamp((h,s\cdot sf,v),0,1) \tag{11}\] Finally, the value learned by our augmenter for hue is a shift of the hue channel. 0 represents no shift to the hue channel and any other value negative or non-negative is added to the actual value. If we consider the variables \(h\), \(s\), and \(v\) representing the values of the hue, saturation, and value of the images and \(hs\) the hue shift learned by our network, the new HSV values are obtained using the update rule: \[(h,s,v)\leftarrow(mod(h+hs,2\pi),s,v) \tag{12}\] ## 4 Experimental setup ### Datasets and evaluation The datasets used in our experiments are: BACHaresta et al. (2019) is a dataset of 400 H&E (hematoxylin and eosin) stained breast cancer histology images of resolution 2048 x 1536 distributed in 4 balanced classes of 100 images. As there is no test set publicly available, we use in our experiments 40% of the dataset for training, 10% for validation and 50% for testing as in Rony et al. (2023). The values used for the predefined color transformations are brightness=0.5, contrast=0.5, saturation=0.5 and hue=0.05. GlasSirinukunwattana et al. (2016) is a dataset of 165 H&E stained colon cancer histology images of variable resolution (in our experiments, we use an image size of 430x430) distributed in 2 classes (benign and malignant). The dataset is divided in a train set of 85 images (37 benign and 48 malignant) and a test set of 80 images (37 benign and 43 malignant). In our experiments, we use 80% of the training set for training and 20% for validation. The values used for the predefined color transformations are brightness=0.25, contrast=0.25, saturation=0.25, and hue=0.4. HICL LarynxNinos et al. (2015) is a dataset of 450 H&E and P63 stained larynx cancer histology images with 2 magnifying factors (20x and 40x). It has 3 classes corresponding to cancer grades: Grade I, II and III. For the 20x magnification factor, the image resolution is 1728x1296 and the number of images per class is I:87, II:73 and III:64. For the 40x magnification factor, the image resolution is 1300x1030 and the number of images per class is I:88, II:74 and III:64. As there is no test set publicly available, we use in our experiments 70% of the dataset for training, 20% for validation set, and 10% for test. The values used for the predefined color transformations are brightness=0.25, contrast=0.25, saturation=0.25 and hue=0.4. HICL BrainGlotsos et al. (2008) is a dataset of 2548 H&E and P63 stained brain cancer histology images with 2 magnifying factors (20x and 40x). It has 7 classes corresponding to cancer grades: Grade I, I-II, II, II-III, III, III-IV and IV. 
For the 20x magnification factor, the image resolution is 1728x1296 and the number of images per class is I:123, I-II: 94, II:208, II-III:47, III:367, III-IV:45 and IV:373. For the 40x magnification factor, the image resolution is also 1728x1296 and the number of images per class is I:132, I-II: 73, II:210, II-III:53, III:434, III-IV:32 and IV:357. As there is no test set publicly available, we use in our experiments 70% of the dataset for training, 20% for validation, and 10% for testing. The values used for the predefined color transformations are brightness=0.25, contrast=0.25, saturation=0.25 and hue=0.4. **Evaluation.** To evaluate the performance of our models, we use the classification accuracy metric, defined as the number of samples correctly classified divided by the total number of samples. For each scenario, we do a 5-fold cross-validation, and the result reported is the average of the results obtained by the 5 folds. The hyperparameter search is done separately for each dataset. The hyperparameters selected are the ones yielding the best validation results averaged over the 5 folds. We also follow this protocol for the RandAugment hyperparameters. ### Implementation details As we can see in Fig. 1, our model is composed of a classifier and an augmenter network. As the classifier, we use a ResNet18 (He et al., 2015) network pretrained on ImageNet. ResNet18 is an 18-layer deep neural network with residual connections. To align with the image size used during pretraining, we use in our training phase patches of size 224x224 and evaluate the model on whole images during the testing phase. The augmenter learning the geometric and color transformations is an MLP network that receives a noise vector of dimension 100 as input and generates the transformation parameters. The augmenter network has 3 fully connected layers of size 100, 64, and 32 and an output layer of size \(n\), \(n\) being the number of hyperparameters to optimize according to the scenario considered (4 for color transformations, 6 for affine transformations, or 10 when both color and affine transformations are considered). A simplified sketch of such an augmenter is given below. To have differentiable affine and color transformations, we use the affine_grid and grid_sample functions of the PyTorch framework (Paszke et al., 2019). As in Mounsaveng et al. (2021), we update \(\theta\) after every inner iteration (\(J=1\)) and use a single step of truncated backpropagation (\(K=1\)) to update the parameters of our model. ## 5 Results Our proposed method aims at learning data augmentation automatically while training the image classifier. We validate our method on the 6 different datasets presented in Sec. 4.1: BACH, Glas, Medisp HICL Larynx with magnification factor 20x, Medisp HICL Larynx with magnification factor 40x, Medisp HICL Brain with magnification factor 20x, and Medisp HICL Brain with magnification factor 40x. In a first series of experiments, we compare for each dataset the best performance of an image classifier trained with 3 different kinds of transformations: first, we consider color transformations, then affine transformations, and finally a combination of both transformation types. For each type of transformation, we compare the classification performance of 3 models: first, a classifier trained without data augmentation (baseline), then a classifier trained with hyperparameters for data augmentation found by grid search on a validation set (predefined), and finally a classifier trained with the data augmentation learned by our method. 
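For reference, a minimal PyTorch sketch of the augmenter described in the implementation details above is given below (the layer sizes follow that description; the module name, the \(\pm 0.1\) tanh range on the predicted deviations, and the RGB-space approximation of the hue shift of Eq. (12) are our own simplifications rather than details of the released implementation).

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class Augmenter(nn.Module):
    """Noise -> MLP -> (2x3 affine matrix, contrast/brightness/saturation/hue),
    applied to an image batch with differentiable operations only."""

    def __init__(self, noise_dim=100):
        super().__init__()
        self.noise_dim = noise_dim
        self.mlp = nn.Sequential(
            nn.Linear(noise_dim, 100), nn.ReLU(),
            nn.Linear(100, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 10),                   # 6 affine + 4 color parameters
        )

    def forward(self, x):                        # x: (B, 3, H, W), values in [0, 1]
        b = x.size(0)
        noise = torch.randn(b, self.noise_dim, device=x.device)
        p = 0.1 * torch.tanh(self.mlp(noise))    # small deviations around identity

        # affine transformation: identity plus learned deviation (Sec. 3.2.1)
        identity = torch.tensor([[1., 0., 0.], [0., 1., 0.]], device=x.device)
        affine = identity + p[:, :6].view(b, 2, 3)
        grid = F.affine_grid(affine, list(x.shape), align_corners=False)
        x = F.grid_sample(x, grid, align_corners=False)

        # contrast (Eq. 9) and brightness (Eq. 10)
        cf = 1.0 + p[:, 6].view(b, 1, 1, 1)
        bs = p[:, 7].view(b, 1, 1, 1)
        x = torch.clamp(torch.clamp(x * cf, 0, 1) + bs, 0, 1)

        # saturation (Eq. 11): interpolate between the image and a gray version
        sf = 1.0 + p[:, 8].view(b, 1, 1, 1)
        gray = x.mean(dim=1, keepdim=True)
        x = torch.clamp(gray + sf * (x - gray), 0, 1)

        # hue (Eq. 12), approximated here by rotating RGB about the gray axis
        ang = math.pi * p[:, 9].view(b, 1, 1)
        k = torch.full((3,), 1.0 / math.sqrt(3.0), device=x.device)
        K = torch.tensor([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]],
                         device=x.device) / math.sqrt(3.0)
        R = (torch.cos(ang) * torch.eye(3, device=x.device)
             + torch.sin(ang) * K
             + (1 - torch.cos(ang)) * torch.outer(k, k))
        x = torch.clamp(torch.einsum("bij,bjhw->bihw", R, x), 0, 1)
        return x


if __name__ == "__main__":
    aug = Augmenter()
    print(aug(torch.rand(4, 3, 224, 224)).shape)  # torch.Size([4, 3, 224, 224])
```

Every operation above is differentiable, so the validation loss can be backpropagated through the classifier and the sampled transformations into the MLP weights, as required by the truncated hyper-gradient of Sec. 3.1.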
Note that in all our experiments, we divide our data into 3 sets: a training set to train our model, a validation set to select the hyperparameters of the model, and a test set to evaluate the model. Then, in a second series of experiments, we investigate our model more in-depth focusing on the BACH dataset. We chose this dataset as it is the most challenging of the six considered in this paper in terms of image size and difficulty to learn the classification task. More precisely, we investigate two aspects of our model: i) the impact of the amount of training data available on the quality of the learned augmentations ii) the impact of starting the training from a classifier pretrained on Imagenet. Finally, in a third series of experiments, we compare our approach to a model trained in a data augmentation framework similar to RandAugment (Cubuk et al., 2019). Instead of searching for the best sequence of transformations and the best magnitude for each transformation at the same time as in other models of the AutoAugment family, RandAugment relaxes the search problem to the tuning of 2 hyperparameters M and N, M being a global magnitude for all considered transformations and N the number of transformations selected in each sequence of transformations. In our experiments, we define M and N by doing a grid search with values between 1 and 5 for both M and N. This approach is simple yet very efficient and has proven to be state of the art in Faryna et al. (2021) on Camelyon 17 dataset. However, we argue that even if RandAugment is very simple and efficient to use, it still requires prior knowledge to define the initial pool of transformations. For our method, we also need to fine-tune a limited number of hyperparameters (the hyperparameters of the augmenter network), but we can also define a more generic set of differentiable transformations. Moreover, learning the optimal data augmentation at each epoch can be beneficial for the model, as the time when the transformations are presented to the model is important as reported in Golatkar et al. (2019). To be fair in the comparison of the results, we limited the pool of transformations used by RandAugment to the transformations learned by our proposed model. The transformations considered by our adapted RandAugment framework are listed in Tab. 1. \begin{table} \begin{tabular}{c|c} Transformation type & Magnitude Range \\ \hline identity & - \\ rotation & [-30.0, 30.0] \\ translation x & [-0.45, 0.45] \\ translation y & [-0.45, 0.45] \\ shear x & [-0.3, 0.3] \\ shear y & [-0.3, 0.3] \\ contrast & [0, 2] \\ brightness & [-1, 1] \\ hue & [-0.5, 0.5] \\ Saturation & [0, 1] \\ \end{tabular} \end{table} Table 1: **Transformations considered by our adapted RandAugment framework**. To be fair in the comparison with our proposed model, we limited the set of transformations to the differentiable transformations learned by our model. ### Color Transformations In this section, we investigate the impact of color transformations alone on the training of an image classifier. Results are presented in Tab.2. For BACH dataset, using predefined color augmentations to train the model yields an increased classification performance compared to the baseline, but the model has the best performance when trained with the augmentations learned by our augmenter (+2.3% accuracy VS baseline and +0.5% accuracy VS predefined color augmentations). 
For Glas dataset, the classifier performs better with learned transformations than with predefined ones (+7.75% over baseline and +1.25% over predefined augmentations). For Larynx 20x dataset, our model also performs better than predefined augmentations (+3.6% accuracy VS baseline and +0.87% accuracy VS predefined augmentations). For Larynx 40x dataset, our model performs similarly to predefined transformations. However, in both cases, we do not see any improvement over the baseline, which indicates that either color transformations might not be the best ones to use for this dataset or that the performance of the classifier is already too saturated to reveal the improvement brought by those transformations. For Brain 20x dataset, similarly to the Larynx 40x dataset, our model performs on-par with predefined transformations and brings no improvement over the baseline. Also in this case, it seems that the color augmentations used are not useful to train the classifier or that the performance is too saturated to see the improvement. For Brain 40x dataset, our model performs slightly better than predefined augmentations (+0.6% accuracy VS baseline and +0.13% accuracy over predefined data augmentations). ### Geometric Transformations In this section, we evaluate our model by investigating the impact of geometric transformations on the classification accuracy. Results are presented in Tab.2. For BACH dataset, our model performs better than predefined affine transformations (+2.1% accuracy over baseline and +1.7% over predefined transformations). However, it does not perform as well as when learning only color transformations, which indicates that this kind of transformation is less efficient for this dataset. For Glas dataset, our model also performs better than predefined affine transformations (+8.75% accuracy over baseline and +0.5% over predefined transformations). It is also interesting to note that for this dataset, affine transformations are helping more to train the model than using only color transformations. \begin{table} \begin{tabular}{l|c|c|c|c|c|c} Scenario / Dataset & BACH & Glas & Larynx & Larynx & Brain & Brain \\ & & & 20x & 40x & 20x & 40x \\ \hline Baseline & 83.30\({}_{\pm 1.18}\) & 89.50\({}_{\pm 1.22}\) & 87.51\({}_{\pm 1.27}\) & 86.67\({}_{\pm 1.16}\) & 99.53\({}_{\pm 0.43}\) & 96.97\({}_{\pm 1.13}\) \\ Baseline + color DA & 85.10\({}_{\pm 1.19}\) & 96.00\({}_{\pm 1.25}\) & 90.24\({}_{\pm 1.38}\) & 86.67\({}_{\pm 1.23}\) & 99.53\({}_{\pm 0.43}\) & 97.42\({}_{\pm 1.20}\) \\ Baseline + affine DA & 83.70\({}_{\pm 1.20}\) & 97.75\({}_{\pm 1.27}\) & 95.92\({}_{\pm 1.11}\) & 94.29\({}_{\pm 1.38}\) & 99.54\({}_{\pm 0.68}\) & 98.18\({}_{\pm 1.05}\) \\ Baseline + color\&affine DA & 84.60\({}_{\pm 1.23}\) & 98.25\({}_{\pm 1.12}\) & 95.24\({}_{\pm 1.25}\) & 95.24\({}_{\pm 1.37}\) & 99.53\({}_{\pm 0.43}\) & 98.94\({}_{\pm 1.26}\) \\ \hline Our model (color DA) & 85.60\({}_{\pm 1.18}\) & 97.25\({}_{\pm 1.23}\) & 91.11\({}_{\pm 1.28}\) & 86.67\({}_{\pm 1.22}\) & 99.53\({}_{\pm 0.43}\) & 97.57\({}_{\pm 1.24}\) \\ Our model (affine DA) & 85.40\({}_{\pm 1.25}\) & 98.25\({}_{\pm 1.20}\) & 96.19\({}_{\pm 1.22}\) & 95.24\({}_{\pm 1.26}\) & 99.69\({}_{\pm 0.43}\) & 98.63\({}_{\pm 1.36}\) \\ Our model (color\&affine DA) & **88.90\({}_{\pm 1.25}\)** & **99.25\({}_{\pm 0.56}\)** & **96.61\({}_{\pm 1.23}\)** & **97.14\({}_{\pm 1.26}\)** & **99.84\({}_{\pm 0.35}\)** & **99.24\({}_{\pm 0.54}\)** \\ \end{tabular} \end{table} Table 2: **Impact of color and affine transformations on classification accuracy (%).** Transformations in parentheses are learned, others are predefined.
Our model performs better than hand-defined transformations on the six different datasets. Best performances are obtained with a combination of learned color and affine transformations. For Larynx 20x dataset, we can see that our model performs slightly better than predefined transformations (+8.68% accuracy over baseline and +0.27% over predefined transformations). Also for this dataset, using geometric transformations seems to help train the model more than using color transformations only. For Larynx 40x dataset, similarly to Larynx 20x dataset, our model performs slightly better (+8.57% accuracy over baseline and +0.95% over predefined augmentations) and affine transformations are more helpful than only color transformations. For Brain 20x dataset, our model performs slightly better than predefined affine transformations (+0.16% accuracy over baseline and +0.15% over predefined affine transformations). As opposed to color transformations, affine transformations have a positive impact on the performance of the classification model. For Brain 40x dataset, our model also performs slightly better than predefined affine transformations (+1.66% accuracy over baseline and +0.45% over predefined augmentations). Also for this dataset, affine transformations have a bigger positive impact on the model accuracy than color transformations. ### Combination of color and affine transformations In this section, we evaluate our model by investigating the impact of combining color and affine transformations on the classification accuracy. Results are presented in Tab.2. For BACH dataset, the combination of both kinds of transformations significantly improves the classification score (+5.6% accuracy over baseline and +4.3% over predefined augmentations). For Glas dataset also, the combination of both color and geometric transformations improves the model performance (+9.75% accuracy over baseline and +1% over predefined augmentations). For Larynx 20x dataset, the combination of color and affine transformations learned by our model has a bigger positive impact on the model final accuracy (+9.1% over baseline and +1.37% over predefined transformations). For Larynx 40x dataset, learning both color and affine transformations also yields the model with the best classification accuracy (+10.47% over baseline and +1.9% over predefined transformations). For Brain 20x dataset, our model learning color and affine transformations performs slightly better than predefined transformations (+0.31% over baseline and over predefined transformations). For Brain 40x dataset, when learning color and affine transformations at the same time, our model performs better than the predefined transformations (+2.27% over baseline and +0.3% over predefined transformations). To summarize the results of this series of experiments, we can see that the combination of both color and affine transformations yields the best results, which shows that our model became more invariant to color and shape perturbations thanks to the data augmentation transformations learned along the training. ### Additional experiments on BACH dataset In this section, we run a series of experiments on BACH dataset to have a better understanding of our model. BACH was chosen as it is the most challenging dataset of the six considered in terms of image size and difficulty of the classification task as shown in Tab. 2.
In a first experiment, we investigate the evolution of the model accuracy with respect to the amount of training data. In Fig.3, we can see that our model performs better than using only predefined transformations when using the full training set. When we reduce the amount of data gradually, we can see that the amplitude of the improvement decreases for color only and geometric only transformations. Below a threshold of 50% of the training set, our model performs on par with predefined augmentations when learning color or geometric transformations only but yields an inferior performance when learning both types of transformations at the same time. In this case, our model does not have enough data to learn useful transformations. This shows that having a minimum amount of training data is a limitation and a prerequisite of our data-based learning method. In a second experiment, we investigate the impact of starting from a pretrained model when training a classifier with our proposed method. In Tab. 3, we can see that the best classification accuracy is obtained when starting from a model pretrained on ImageNet. \begin{table} \begin{tabular}{l|c|c} BACH & From Scratch & Pretrained model \\ \hline Baseline & 71.60\(\pm_{1.29}\) & 83.30\(\pm_{1.18}\) \\ Baseline + color & 75.20\(\pm_{1.04}\) & 85.10\(\pm_{1.19}\) \\ Baseline + affine & 74.60\(\pm_{1.26}\) & 83.70\(\pm_{1.20}\) \\ Baseline + color\&affine & 83.20\(\pm_{1.29}\) & 84.60\(\pm_{1.23}\) \\ \hline Our model (color DA) & 83.90\(\pm_{1.21}\) & 85.60\(\pm_{1.18}\) \\ Our model (affine DA) & 82.70\(\pm_{1.99}\) & 85.40\(\pm_{1.25}\) \\ Our model (color\&affine DA) & 85.60\(\pm_{1.29}\) & **88.90\(\pm_{1.25}\)** \\ \end{tabular} \end{table} Table 3: **Impact of the pretraining on the classification accuracy (%) on BACH dataset**. Transformations in parentheses are learned, others are predefined. The best classification accuracy is obtained when training a model pretrained on ImageNet. However, we can see that when training a model from scratch, the baseline accuracy is lower and using data augmentation has a bigger impact (+14% when training from scratch for our learned augmentations VS +6.6% when starting from a pretrained model). Figure 3: **Classification Accuracy (%) on BACH dataset as a function of the amount of training data**. Our model performs better than using only predefined transformations when using the full training set. When we reduce the amount of data gradually, we can see that the amplitude of the improvement decreases for color and geometric only transformations. Below a threshold of 50% of the training set, our model performs on par with predefined augmentations when learning color or geometric transformations but yields an inferior performance when learning both types of transformations at the same time. In this case, our model does not have enough data to learn useful transformations. However, we can see that when training a model from scratch, the baseline accuracy is lower and using data augmentation has a bigger impact (+14% when training from scratch for our learned augmentations VS +6.6% when starting from a pretrained model). This experiment shows that using a pretrained model to boost the performance, as usually done in the literature, is helping, but using an appropriate data augmentation on top during training can further increase the final model performance. ### Comparison with random sequences of data augmentation transformations In Tab.
4, we compare our model to a model trained with a RandAugment based framework on the same six datasets. To be fair in the comparison of the results, we limited the RandAugment set of available transformations to the ones that our model is learning. On 5 datasets, our model yields a better classification accuracy than the RandAugment based method. On Glas, both models yield similar results. Similarly to RandAugment, our model has only a few model-specific hyperparameters to tune (the augmenter network parameters). However, our model requires less prior knowledge as it does not require defining a precise list of possible transformations but works with a more generic set of differentiable transformations. Our intuition to explain the improved classification performance is that learning the optimal data augmentation for each epoch is beneficial for the model, as the time when the transformations are presented to the model is important, as reported in Golatkar et al. (2019). \begin{table} \begin{tabular}{l|c|c|c|c|c|c} & \multirow{2}{*}{BACH} & \multirow{2}{*}{Glas} & \multirow{2}{*}{Larynx} & \multirow{2}{*}{Larynx} & \multirow{2}{*}{Brain} & \multirow{2}{*}{Brain} \\ & & & 20x & 40x & 20x & 40x \\ \hline Baseline & 83.30\({}_{\pm 1.18}\) & 89.50\({}_{\pm 1.22}\) & 87.51\({}_{\pm 1.27}\) & 86.67\({}_{\pm 1.16}\) & 99.53\({}_{\pm 0.43}\) & 96.97\({}_{\pm 1.13}\) \\ Predefined color\&affine DA & 84.60\({}_{\pm 1.23}\) & 98.25\({}_{\pm 1.12}\) & 95.24\({}_{\pm 1.25}\) & 95.24\({}_{\pm 1.37}\) & 99.53\({}_{\pm 0.43}\) & 98.94\({}_{\pm 1.26}\) \\ \hline RandAugment & 87.25\({}_{\pm 1.48}\) & **99.25\({}_{\pm 0.68}\)** & 95.83\({}_{\pm 0.23}\) & 96.14\({}_{\pm 1.31}\) & 99.53\({}_{\pm 0.35}\) & 99.18\({}_{\pm 1.03}\) \\ (M,N) hyperparameters & (3,2) & (3,2) & (4,2) & (4,2) & (3,3) & (3,3) \\ \hline Our approach & **88.90\({}_{\pm 1.25}\)** & **99.25\({}_{\pm 0.56}\)** & **96.61\({}_{\pm 1.23}\)** & **97.14\({}_{\pm 1.26}\)** & **99.84\({}_{\pm 0.35}\)** & **99.24\({}_{\pm 0.54}\)** \\ \end{tabular} \end{table} Table 4: **Comparison to a RandAugment based model in terms of classification accuracy (%)**. Our model yields better results than a model trained in a RandAugment based framework on 5 datasets. On Glas it is performing on-par. Our model represents a good solution to learn the optimal data augmentation automatically for color and affine transformations. \begin{table} [The original table is a grid of example image patches, one row per dataset (BACH, Glas, HICL Larynx20x, HICL Larynx40x, HICL Brain20x, HICL Brain40x) and one column per scenario (Color, Affine, Color and Affine); the images are not reproduced in this text version.] \end{table} Table 5: **Qualitative results.** For each dataset and each scenario, we see the evolution of the learned transformations along the training.
Transformations at the beginning of the training are stronger and tend later towards finer transformations that are still useful to improve the classification accuracy of the trained model. In each row, the first image is the original patch and the last one is the same patch at the end of the training. The images in-between were extracted respectively at 25%, 50% and 75% of the total number of training epochs. ## 6 Conclusion We have presented a novel approach to automatically learn the transformations needed for effective data augmentation for histopathological images. The method is based on an online approximation of the bilevel optimization problem defined by alternating between optimizing the model parameters and the data augmentation hyperparameters. By doing so, we train an augmenter network to generate the right transformations at the same time as we train the classifier network. We evaluated the proposed approach on 6 different datasets with different color and affine transformations. The obtained results were comparable to or better than the results obtained with hand-defined transformations. It also yielded better results than a model trained with a RandAugment based framework. This shows that our method is very suitable in the context of histopathological images, where potentially useful transformations to train a classifier are not trivial to define by hand. It also eliminates the risk of selecting transformations that would degrade the model accuracy. ## Acknowledgments This research was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), via its Discovery Grant program and MITACS via its Acceleration program. ## Ethical Standards The work follows appropriate ethical standards in conducting research and writing the manuscript, following all applicable laws and regulations regarding treatment of animals or human subjects. ## Conflicts of Interest We declare that we have no conflicts of interest.
2308.07637
A review on coisotropic reduction in Symplectic, Cosymplectic, Contact and Co-contact Hamiltonian systems
In this paper we study the coisotropic reduction in different types of dynamics according to the geometry of the corresponding phase space. The relevance of the coisotropic reduction is motivated by the fact that these dynamics can always be interpreted as Lagrangian or Legendrian submanifolds. Furthermore, Lagrangian or Legendrian submanifolds can be reduced by a coisotropic one.
Manuel de León, Rubén Izquierdo-López
2023-08-15T08:33:53Z
http://arxiv.org/abs/2308.07637v3
###### Abstract In this paper we study the coisotropic reduction in different types of dynamics according to the geometry of the corresponding phase space. The relevance of the coisotropic reduction is motivated by the fact that these dynamics can always be interpreted as Lagrangian or Legendrian submanifolds. **MSC2020 classification:** 37J39, 37J55, 70H05, 70H33. **Key words:** Coisotropic reduction; symplectic manifolds; cosymplectic manifolds; contact manifolds; cocontact manifolds; Hamiltonian dynamics. A review on coisotropic reduction in Symplectic, Cosymplectic, Contact and Co-contact Hamiltonian systems Manuel de Leon1 Footnote 1: E-mail: [email protected] Instituto de Ciencias Matematicas, Campus Cantoblanco Consejo Superior de Investigaciones Cientificas C/ Nicolas Cabrera, 13-15, 28049, Madrid, Spain and Real Academia Espanola de las Ciencias. C/ Valverde, 22, 28004 Madrid, Spain. Ruben Izquierdo-Lopez2 Footnote 2: E-mail: [email protected] Instituto de Ciencias Matematicas, Campus Cantoblanco Consejo Superior de Investigaciones Cientificas C/ Nicolas Cabrera, 13-15, 28049, Madrid, Spain ###### Contents * 1 Introduction * 2 Symplectic vector spaces * 3 Coisotropic reduction in symplectic geometry * 3.1 Hamiltonian vector fields as Lagrangian submanifolds * 3.2 Coisotropic reduction * 3.3 Projection of Lagrangian submanifolds * 4 Poisson structures * 5 Coisotropic reduction in cosymplectic geometry * 5.1 Hamiltonian vector fields as Lagrangian submanifolds * 5.2 Coisotropic reduction * 5.3 Vertical coisotropic reduction * 5.3.1 Projection of Lagrangian submanifolds * 5.4 Horizontal coisotropic reduction * 5.4.1 Projection of Lagrangian submanifolds * 6 Jacobi structures * 7 Coisotropic reduction in contact geometry * 7.1 Hamiltonian vector fields as Lagrangian submanifolds * 7.2 Coisotropic reduction * 7.3 Vertical coisotropic reduction * 7.3.1 Projection of Legendrian submanifolds * 7.4 Horizontal coisotropic reduction * 7.4.1 Projection of Legendrian submanifolds * 8 Coisotropic reduction in cocontact geometry * 8.1 Hamiltonian vector fields as Lagrangian submanifolds * 8.2 Coisotropic reduction * 8.3 \(tz\)vertical reduction * 8.3.1 Projection of Legendrian submanifolds * 8.4 \(t\)vertical, \(z\)horizontal reduction * 8.4.1 Projection of Legendrian submanifolds * 8.5 \(z\)vertical, \(t\)horizontal reduction * 8.5.1 Projection of Legendrian submanifolds * 8.6 \(tz\)horizontal reduction * 8.6.1 Projection of Legendrian submanifolds * 9 Coisotropic reduction in stable Hamiltonian structures * 9.1 Hamiltonian vector fields as Lagrangian submanifolds * 9.2 Vertical coisotropic reduction * 9.2.1 Projection of Lagrangian submanifolds * 9.3 Horizontal coisotropic reduction * 9.3.1 Projection of Lagrangian submanifolds * 10 Conclusions ## 1 Introduction The introduction of symplectic geometry in the study of Hamiltonian systems was a tremendous breakthrough, both in quantitative and qualitative aspects. For example, in the presence of symmetries, the original Hamiltonian system can be reduced via the application of the momentum mapping, as well as through the so-called coisotropic reduction [1, 4, 9, 26]. Another relevant example, on the quantitative side, is the development of geometric integrators that respect the underlying geometric structures and prove to be more efficient than the traditional ones (see for instance [28, 31]).
Regarding the reduction in the presence of symmetries, the most relevant result is the so-called Marsden-Weinstein symplectic reduction theorem [27] (a preliminary version can be found in Meyer [30]), using the momentum mapping, a natural extension of the classical linear and angular momentum. The reduced manifold is obtained using a regular value of the momentum mapping and the corresponding isotropy group, and the dynamics is projected to this reduced manifold, gaining for integration a smaller number of degrees of freedom. This theorem has been extended to many other contexts: cosymplectic, contact, and more general settings (see [2, 3, 7, 10, 24, 29, 42] and the references therein). On the other hand, Lagrangian submanifolds play a crucial role, since it is easy to check that the image of a Hamiltonian vector field \(X_{H}\) in a symplectic manifold \((M,\omega)\) can be interpreted as a Lagrangian submanifold of the symplectic manifold \((TM,\omega^{c})\), where \(\omega^{c}\) is the complete or tangent lift of \(\omega\) to the tangent bundle \(TM\). This result has its equivalent in Lagrangian mechanics, and has led to the so-called Tulczyjew triples, which elegantly relate the different Lagrangian submanifolds that appear in Lagrangian and Hamiltonian descriptions of mechanics via the Legendre transformation [9, 37, 38]. These constructions have been extended to other scenarios, including the Tulczyjew triple [8, 14, 16, 17, 18, 21, 22, 42]. Lagrangian submanifolds are also relevant to develop the so-called Hamilton-Jacobi theory since they provide the geometric setting for solutions of the Hamilton-Jacobi problem (see [15] for a recent topical review on the subject). In this sense, we follow the Weinstein's creed: "Everything is a Lagrangian submanifold" [40]. As we said, the coisotropic reduction works when we give a coisotropic submanifold \(N\) of a symplectic manifold \((M,\omega)\) and we consider (if it is well defined) the quotient manifold \(N/(TN)^{\perp}\), where \((TN)^{\perp}\) is the symplectic complement of \(TN\). Being involutive, this distribution along \(N\) defines a foliation. The corresponding leaf space is again symplectic with a reduced symplectic form of the one given in \(M\). If in addition we have a Lagrangian submanifold \(L\) with clean intersection with \(N\), then \(L\cap N\) projects into a Lagrangian submanifold of the quotient (see [1, 40]). Coisotropic reduction can be combined with symplectic reduction to develop a reduction procedure for the Hamilton-Jacobi equation when we are in presence of symmetries (see [11]). Coisotropic reduction has been extended to the field of contact manifolds (with the interest of being in a dissipative context) [7, 36], but it has not been studied in sufficient detail in the case of cosymplectic manifolds nor in that of co-contact manifolds, the latter the natural settings to study time-dependent Hamiltonian contact systems [6, 12]. The objectives of this paper are twofold. On the one hand, to develop in detail the coisotropic reduction in the case of cosymplectic manifolds and those of co-contact, covering a gap in the literature. On the other hand, to present a survey that brings together in one place the different cases that appear in the study of Hamiltonian systems. The paper is structured as follows. Sections 2 and 3 are devoted to recall the main ingredients concerning symplectic Hamiltonian systems and the classical coisotropic reduction procedure. 
In order to go to the cosymplectic setting, we recall some general notions in Poisson structures (Section 4) and then we consider the case of coisotropic reduction in the cosymplectic setting in Section 5 (remember that this is the scenario to develop time dependent Hamiltonian systems). Contact manifolds require a more general notion that Poisson structures, indeed, they are examples of Jacobi structures, so that we give some fundamental notions in Section 6. The coisotropic reduction scheme developed in contact manifolds is the subject of Section 7, which is very different to the cosymplectic case since we are in presence of dissipative systems. To combine dissipative systems with Hamiltonians depending also on time, we consider cocontact manifolds in Section 8, and develop there the corresponding coisotropic reduction procedure. Finally, we discuss a recent generalization of contact and cosymplectic systems called stable Hamiltonian systems in Section 9. ## 2. Symplectic vector spaces We refer to [1, 9, 20, 26, 39] for the main definitions and results. **Definition 2.1** (Symplectic vector space).: _A **symplectic vector space** is a pair \((V,\omega)\) where \(V\) is a finite dimensional vector space and \(\omega\) is a non-degenerate \(2\)-form, where non-degenerary means that the map_ \[\flat_{\omega}:V\to V^{*};\ v\mapsto i_{v}\omega\] _is an isomorphism. \(\omega\) will be called a **symplectic form**._ For every non-degenerate \(2\)-form on \(V\) there exist a basis \((x_{i},y^{i})\) such that \(\omega=x^{i}\wedge y_{i}\), where \((x^{i},y_{i})\) is the dual basis. This implies that a symplectic vector space is necessarily of even dimension, say \(2n\). **Definition 2.2** (\(\omega\)-orthogonal).: _Let \(W\subseteq V\) be a subspace of \(V\). We define its \(\omega\)**-orthogonal** complement as_ \[W^{\perp_{\omega}}:=\{v\in V\ |\ \omega(v,w)=0,\ \forall w\in W\}.\] Note that \(W^{\perp_{\omega}}=\operatorname{Ker}(i^{*}\flat_{\omega})\) where \(i:W\hookrightarrow V\) is the natural inclusion. Using the non-degeneracy of \(\omega\), this implies that \(\dim W^{\perp_{\omega}}=\dim V-\dim W\), which will result useful along this paper. The antisymmetry of \(\omega\) gives rise to a wide variety of situations. In particular, we say that \(W\subseteq V\) is: * **Isotropic** if \(W\subseteq W^{\perp_{\omega}}\) (if \(W\) is isotropic, necessarily \(\dim W\leq n\)); * **Coisotropic** if \(W^{\perp_{\omega}}\subseteq W\) (if \(W\) is coisotropic, necessarily \(\dim W\geq n\)); * **Lagrangian** if \(W\) is isotropic and has an isotropic complement. (if \(W\) is Lagrangian, necessarily \(\dim W=n\)); * **Symplectic** if \(V=W\oplus W^{\perp\omega}\). A subspace \(W\) is Lagrangian if and only if \(W=W^{\perp\omega}\). This implies that Lagrangian subspaces are the isotropic subspaces of maximal dimension and the coisotropic subspaces of minimal dimension. It can be easily checked that the symplectic complement has the following properties: * \((W_{1}\cap W_{2})^{\perp\omega}=W_{1}^{\perp\omega}+W_{2}^{\perp\omega}\); * \((W_{1}+W_{2})^{\perp\omega}=W_{1}^{\perp\omega}\cap W_{2}^{\perp\omega}\); * \((W^{\perp\omega})^{\perp\omega}=W.\) ## 3 Coisotropic reduction in symplectic geometry **Definition 3.1** (Symplectic manifold).: _A **symplectic manifold** is pair \((M,\omega)\) where \(M\) is a manifold and \(\omega\) is a closed \(2\)-form such that \((T_{q}M,\omega_{q})\) is a symplectic vector space, for every \(q\in M\). 
As in the linear case, for the existence of such a form, \(M\) needs to have even dimension \(2n\)._ Every symplectic manifold is locally isomorphic to the standard one, that is, there exists a set of canonical coordinates around each point: **Theorem 3.1** (Darboux Theorem).: _Let \((M,\omega)\) be a symplectic manifold and \(q\in M\). There exists a coordinate system \((q^{i},p_{i})\) around \(q\) such that \(\omega=dq^{i}\wedge dp_{i}\). These coordinates are called Darboux coordinates._ This non-degenerate form induces a bundle isomorphism between the tangent and cotangent bundles of \(M\) point-wise, namely \[\flat_{\omega}:TM\to T^{*}M;\,\,\,v_{q}\mapsto\flat_{\omega}(v_{q})=i_{v_{q}}\omega.\] **Definition 3.2** (Hamiltonian vector field).: _Given \(H\in\mathcal{C}^{\infty}(M)\), we define the **Hamiltonian vector field** of \(H\) as_ \[X_{H}:=\sharp_{\omega}(\operatorname{d}H),\] _where \(\sharp_{\omega}=\flat_{\omega}^{-1}.\) We say that a vector field \(X\) is **Hamiltonian** if \(X=X_{H}\) for some function \(H\) and say that \(X\) is **locally Hamiltonian** if \(X=X_{H}\) for some local function defined in a neighborhood of every point of the manifold._ **Remark 3.1**.: Notice that a vector field is locally Hamiltonian if and only if \(\flat_{\omega}(X)\) is closed, and Hamiltonian if and only if \(\flat_{\omega}(X)\) is exact. Locally, Hamiltonian vector fields have the expression \[X_{H}=\frac{\partial H}{\partial p_{i}}\frac{\partial}{\partial q^{i}}-\frac{ \partial H}{\partial q^{i}}\frac{\partial}{\partial p_{i}}.\] The definitions of the different cases of subspaces given in the linear case can be extended point-wise to submanifolds \(N\hookrightarrow M\). Consequently, we say that \(N\hookrightarrow M\) is: * **Isotropic** if \(T_{q}N\subseteq T_{q}M\) is an isotropic subspace for every \(q\in N\); * **Coisotropic** if \(T_{q}N\subseteq T_{q}M\) is a coisotropic subspace for every \(q\in N\); * **Lagrangian** if \(N\) is isotropic and there is an isotropic subbundle (where we understand isotropic point-wise) \(E\subseteq TM|_{N}\) such that \(TM|_{N}=TN\oplus E\) (here \(\oplus\) denotes the Whitney sum). This is exactly the point-wise definition of a Lagrangian subspace, asking for the isotropic complement to vary smoothly; * **Symplectic** if \(T_{q}N\subseteq T_{q}M\) is a symplectic subspace for every \(q\in N\). These definitions extend naturally to distributions. Just like in the linear case, a submanifold \(N\hookrightarrow M\) is Lagrangian if and only if it is isotropic (or coisotropic) and has maximal (or minimal) dimension. This is a useful characterization that we will use several times in the rest of the paper. **Lemma 3.1**.: _Let \(i:L\to M\) be a submanifold of dimension \(n\). Then, \(L\) is a Lagrangian submanifold of \((M,\omega)\) if and only if \(i^{*}\omega=0\)._ Proof.: It is trivial, since Lagrangian submanifolds are the isotropic submanifolds of maximal dimension, say \(n\). ### Hamiltonian vector fields as Lagrangian submanifolds **Definition 3.3**.: _Let \((M,\omega)\) be a symplectic manifold. Define the **tangent symplectic structure** on \(TM\) as \(\omega_{0}=-d\theta_{0}\) where \(\theta_{0}=\flat_{\omega}^{*}\theta_{M}\), and \(\theta_{M}\) is the Liouville \(1\)-form on the cotangent bundle._ Recall that \(\theta_{M}\) is defined as follows: \[\theta_{M}(\alpha_{x})(X_{\alpha_{x}})=\alpha_{x}(T\pi_{M}(X_{\alpha_{x}}))\] where \(X_{\alpha_{x}}\in T_{\alpha_{x}}(T^{*}M)\), \(\alpha_{x}\in T^{*}_{x}M\), and \(\pi_{M}:T^{*}M\longrightarrow M\) is the canonical projection.
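For concreteness, we record the standard coordinate expressions involved in this construction (a routine computation, included only as an illustration): in bundle coordinates \((q^{i},p_{i})\) on \(T^{*}M\) one has \(\theta_{M}=p_{i}\operatorname{d}q^{i}\), while in Darboux coordinates on \(M\), for a vector field \(X=X^{i}\frac{\partial}{\partial q^{i}}+Y_{i}\frac{\partial}{\partial p_{i}}\),
\[\flat_{\omega}(X)=i_{X}\left(\operatorname{d}q^{i}\wedge\operatorname{d}p_{i}\right)=X^{i}\operatorname{d}p_{i}-Y_{i}\operatorname{d}q^{i},\]
so that, in particular, \(\flat_{\omega}(X_{H})=\operatorname{d}H\), in agreement with Definition 3.2.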
The Liouville \(1\)-form can be also defined as the unique \(1\)-form \(\lambda_{M}\) on \(T^{*}M\) such that, for every \(1\)-form \(\alpha:M\to T^{*}M\), \[\alpha^{*}\lambda_{M}=\alpha.\] In coordinates \((q^{i},p_{i},\dot{q}^{i},\dot{p}_{i})\) this form is \[\omega_{0}=-\operatorname{d}q^{i}\wedge\operatorname{d}\dot{p}_{i}- \operatorname{d}\dot{q}^{i}\wedge\operatorname{d}p_{i}.\] **Proposition 3.1**.: _Let \(X:M\to TM\) be a vector field. Then_ \[X^{*}\theta_{0}=\flat_{\omega}(X).\] Proof.: Its a straight-forward verification. Let \(v\in T_{q}M\), then we have: \[(X^{*}\theta_{0})(v) =\theta_{0}(T_{q}X\cdot v)=\theta_{M}(T_{\flat_{\omega}(X)}\flat_ {\omega}\cdot T_{q}X\cdot v)\] \[=((\flat_{\omega}(X))^{*}\theta_{M})(v)=(\flat_{\omega}(X))(v).\] **Proposition 3.2**.: _Let \(X:M\to TM\) be a vector field. Then \(X\) is locally Hamiltonian if and only if \(X(M)\) is a Lagrangian submanifold of \((TM,\omega_{0})\)._ Proof.: We only check that \(X(M)\) is isotropic using Lemma 3.1, since \(\dim X(M)=\dim M=\frac{1}{2}\dim TM.\) In fact: \[X^{*}\omega_{0}=-X^{*}(\operatorname{d}\theta_{0})=-\operatorname{d}(X^{*} \theta_{0})=-\operatorname{d}(\flat_{\omega}(X)),\] which gives the characterization. We can also check this last proposition easily in coordinates. Indeed, let \[X=X^{i}\frac{\partial}{\partial q^{i}}+Y_{i}\frac{\partial}{\partial p_{i}}.\] We have \[-X^{*}\omega_{0}=\frac{\partial Y_{i}}{\partial q^{j}}\operatorname{d}q^{i} \wedge\operatorname{d}q^{j}+\left(\frac{\partial Y_{i}}{\partial p_{j}}+\frac{ \partial X^{j}}{\partial q_{i}}\right)\operatorname{d}q^{i}\wedge\operatorname {d}p_{j}+\frac{\partial X^{i}}{\partial p_{j}}\operatorname{d}p_{j}\wedge \operatorname{d}p_{i},\] and thus, \(X\) defines a Lagrangian submanifold if and only if \[\frac{\partial Y_{i}}{\partial q^{j}}-\frac{\partial Y_{j}}{ \partial q^{i}} =0,\] \[\frac{\partial Y_{i}}{\partial p_{j}}+\frac{\partial X^{j}}{ \partial q_{i}} =0,\] \[\frac{\partial X^{i}}{\partial p_{j}}-\frac{\partial X^{j}}{ \partial p_{i}} =0.\] Taking \[(G^{1},\dots,G^{n},G^{n+1},\dots,G^{2n}):=(X^{i},-Y_{i})\] and \[(x^{1},\dots,x^{n},x^{n+1},\dots,x^{2n}):=(q^{i},p_{i}),\] these conditions become \[\frac{\partial G^{i}}{\partial x^{j}}=\frac{\partial G^{j}}{\partial x^{i}}.\] This implies \(G^{i}=\frac{\partial H}{\partial x^{i}}\), for some local function \(H\). It is clear that locally, we have \(X=X_{H}\). ### 3.2 Coisotropic reduction Now, given a coisotropic submanifold \(N\hookrightarrow M\), we define the distribution \((TN)^{\perp_{\omega}}\) on \(N\) as the subbundle of \(TM|_{N}\) consisting of all \(\omega\)-orthogonal spaces \((T_{q}N)^{\perp_{\omega}}\). Note that this distribution is regular and its dimension is \(\dim M-\dim N\). **Proposition 3.3**.: _Let \((M,\omega)\) be a symplectic manifold and \(N\hookrightarrow M\) be a coisotropic submanifold. The distribution \(q\mapsto(T_{q}N)^{\perp_{\omega}}\) is involutive._ Proof.: Let \(X,Y\) be vector fields along \(N\) with values in \(TN^{\perp_{\omega}}\) and \(Z\) be any other vector field tangent to \(N\). Since \(\omega\) is closed we have \[0= (\mathrm{d}\,\omega)(X,Y,Z)=X(\omega(Y,Z))-Y(\omega(X,Z))+Z\omega( X,Y)\] \[-\omega([X,Y],Z)+\omega([X,Z],Y)-\omega([Y,Z],X)=-\omega([X,Y],Z),\] since \(X,Y\) belong to the orthogonal complement of \(TN\). We conclude that \(\omega([X,Y],Z)=0\) for every field \(Z\) tangent to \(N\), that is, \([X,Y]\in(TN)^{\perp_{\omega}}\). 
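A basic example, recorded here only for illustration (it is standard and not specific to this paper): if \(c\) is a regular value of \(H\in\mathcal{C}^{\infty}(M)\), the hypersurface \(N=H^{-1}(c)\) is coisotropic. Indeed, \(T_{q}N=\operatorname{Ker}\operatorname{d}_{q}H\) and \(\omega(X_{H}(q),v)=\operatorname{d}_{q}H(v)=0\) for every \(v\in T_{q}N\), so that, counting dimensions,
\[(T_{q}N)^{\perp_{\omega}}=\langle X_{H}(q)\rangle\subseteq T_{q}N.\]
In this case the leaves of the corresponding foliation (introduced below) are simply the integral curves of \(X_{H}\) contained in \(N\).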
Since the distribution is involutive and regular, the Frobenius Theorem guarantees the existence of a maximal regular foliation \(\mathcal{F}\) of \(N\), that is, a decomposition of \(N\) in maximal submanifolds tangent to the distribution. In what follows, we suppose that \(N/\mathcal{F}\) (the space of all leaves) admits a manifold structure so that the projection \[\pi:N\to N/\mathcal{F}\] is a submersion. The main result is the Weinstein reduction theorem [41]: **Theorem 3.2** (Coisotropic reduction in the symplectic setting).: _Let \((M,\omega)\) be a symplectic manifold and \(N\hookrightarrow M\) be a coisotropic submanifold. If \(N/\mathcal{F}\) (the space of all leaves under the distribution \(q\mapsto(T_{q}N)^{\perp_{\omega}}\)) admits a manifold structure such that \(N\xrightarrow{\pi}N/\mathcal{F}\) is a submersion, there exists a unique \(2\)-form \(\omega_{N}\) in \(N/\mathcal{F}\) that defines a symplectic manifold structure such that, if \(N\xrightarrow{i}M\) is the natural inclusion, then \(i^{*}\omega=\pi^{*}\omega_{N}\). The following diagram summarizes the situation:_ Proof.: Uniqueness is guaranteed from the imposed relation since it forces us to define \[(\omega_{N})_{[q]}([u],[v]):=\omega(u,v),\] where \([u]:=T\pi(q)\cdot u\). We only need to check that this is a well-defined closed form and that it is non-degenerate. We begin by showing that our definition does not depend on the representative of the vector \([u]\). For this, it is sufficient to observe that \((\omega_{N})_{[q]}([u],[v])=0\) whenever \(u\) is a vector in the distribution. Furthermore, \[\mathcal{L}_{X}\omega=\operatorname{d}i_{X}\omega+i_{X}\operatorname{d}\omega=0\] for every vector field \(X\) in \(N\) with values in \((TN)^{\perp_{\omega}}\), and this implies the independence of the chosen point (since every two points in the same leaf of the foliation can be joined by a finite union of flows of such fields). It is clearly non-degenerate and it is closed, since \(\operatorname{d}\pi^{*}\omega_{N}=i^{*}\operatorname{d}\omega=0\) and \(\pi\) is a submersion. ### 3.3 Projection of Lagrangian submanifolds **Definition 3.4** (Clean intersection).: _We say that two submanifolds \(L,N\hookrightarrow M\) have **clean intersection** if \(L\cap N\hookrightarrow M\) is again a submanifold and \(T_{q}(L\cap N)=T_{q}L\cap T_{q}N\), for every \(q\in L\cap N\)._ **Proposition 3.4**.: _Let \(L\hookrightarrow M\) be a Lagrangian submanifold and \(N\hookrightarrow M\) a coisotropic submanifold. If they have clean intersection and \(L_{N}:=\pi(L\cap N)\) is a submanifold of \(N/\mathcal{F}\), \(L_{N}\) is Lagrangian._ Proof.: It is sufficient to see that it is isotropic and that it has maximal dimension in \(N/\mathcal{F}\). It is isotropic since \([u]\in T_{q}(L_{N})\) implies \(\omega_{N}([u],[v])=\omega(u,v)=0\), for every \([v]\in T_{q}(L_{N})\). Now, since \(\operatorname{Ker}d_{q}\pi=(T_{q}N)^{\perp_{\omega}}\), the kernel-range formula yields \[\dim L_{N}=\dim(L\cap N)-\dim(T_{q}L\cap(T_{q}N)^{\perp_{\omega}}). \tag{1}\] Furthermore, \[\dim(L\cap N)+\dim(T_{q}L+(T_{q}N)^{\perp_{\omega}})=\dim M, \tag{2}\] because \(L\) is Lagrangian and \(N\) coisotropic. Substituting (2) in (1) we obtain \[\dim L_{N} =\dim M-\dim(T_{q}L+(T_{q}N)^{\perp_{\omega}})-\dim(T_{q}L\cap(T_ {q}N)^{\perp_{\omega}})\] \[=\dim M-\dim L-\dim(T_{q}N)^{\perp_{\omega}}=\dim M-\dim L-(\dim M -\dim N)\] \[=\dim N-\dim L=\dim N-\frac{1}{2}\dim M,\] which is exactly \(\frac{1}{2}\dim N/\mathcal{F}\), as a direct calculation shows.
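By way of illustration of Theorem 3.2 and Proposition 3.4, consider the following elementary (and standard) example: take \(M=\mathbb{R}^{4}\) with \(\omega=\operatorname{d}q^{1}\wedge\operatorname{d}p_{1}+\operatorname{d}q^{2}\wedge\operatorname{d}p_{2}\) and the coisotropic hypersurface \(N=\{p_{2}=0\}\). Here
\[(T_{q}N)^{\perp_{\omega}}=\left\langle\frac{\partial}{\partial q^{2}}\right\rangle,\]
the leaves of \(\mathcal{F}\) are the lines in the \(q^{2}\) direction, and \(N/\mathcal{F}\cong\mathbb{R}^{2}\) with coordinates \((q^{1},p_{1})\) and reduced form \(\omega_{N}=\operatorname{d}q^{1}\wedge\operatorname{d}p_{1}\). The Lagrangian plane \(L=\{p_{1}=p_{2}=0\}\) is contained in \(N\) (so the intersection is clean) and projects onto the Lagrangian submanifold \(L_{N}=\{p_{1}=0\}\) of \(N/\mathcal{F}\), as predicted by Proposition 3.4.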
## 4 Poisson structures A symplectic structure \((M,\omega)\) induces a Lie algebra structure in the ring of funtions \(\mathcal{C}^{\infty}(M)\). **Definition 4.1** (Poisson bracket).: _Let \((M,\omega)\) be a symplectic manifold and \(f,g\in\mathcal{C}^{\infty}(M).\) We define the Poisson bracket of \(f,g\) as the function_ \[\{f,g\}:=\omega(X_{f},X_{g}).\] It is easily checked that in Darboux coordinates the Poisson bracket is \[\{f,g\}=\frac{\partial f}{\partial q^{i}}\frac{\partial g}{\partial p_{i}}- \frac{\partial f}{\partial p_{i}}\frac{\partial g}{\partial q^{i}}.\] **Proposition 4.1**.: _The Poisson bracket satisfies the following properties:_ * _It is bilinear with respect to_ \(\mathbb{R}\)_;_ * \(\{f,g\cdot h\}=g\cdot\{f,h\}+\{f,g\}\cdot h\) _(the Leibniz rule);_ * \(\{f,\{g,h\}\}+\{h,\{f,g\}\}+\{g,\{h,f\}\}=0\) _(the Jacobi identity)._ Taking into consideration the previous definition, we can generalize the notion of symplectic manifolds as follows: **Definition 4.2** (Poisson manifold).: _A Poisson manifold is a pair \((P,\{\cdot,\cdot\})\) where \(P\) is a manifold and \(\{\cdot,\cdot\}\) is an antisymmetric bracket in the ring of functions \(\mathcal{C}^{\infty}(P)\) satisfying the Leibniz rule and the Jacobi identity._ **Definition 4.3** (Hamiltonian vector field, Characteristic distribution).: _Given \(H\in\mathcal{C}^{\infty}(M),\) the Leibniz rule implies that \(\{H,\cdot\}\) defines a derivation on \(M\) an thus, is associated to a unique vector field \(X_{H}\), which will be called the **Hamiltonian vector field** of \(H\). The collection of all Hamiltonian vector fields generates the **characteristic distribution**, namely_ \[\mathcal{S}_{q}:=\langle v=X_{H}(q)\text{ for all }H\in\mathcal{C}^{\infty}(M)\rangle.\] The definition of Hamiltonian vector field implies that \(\{f,g\}\) only depends on the values of \(df,dg\) and thus we can define a bivector field \[\Lambda(\alpha,\beta):=\{f,g\}\] where \(df=\alpha,dg=\beta.\) We have \(\{f,g\}=\Lambda(df,dg).\)\(\Lambda\) also satisfies the partial differential equation \([\Lambda,\Lambda]=0,\) where \([\cdot,\cdot]\) is the Schouten-Nijenhuis bracket [39]. This last property is actually equivalent to the Jacobi identity, that is, given a bivector field \(\Lambda\), \(\{f,g\}:=\Lambda(df,dg)\) defines a Poisson structure if and only if \([\Lambda,\Lambda]=0.\) **Definition 4.4**.: _Let \((P,\{\cdot,\cdot\})\) be a Poisson manifold, we define_ \[\sharp_{\Lambda}:T^{*}P\to TP;\ \alpha_{q}\mapsto i_{\alpha_{q}}\Lambda.\] _Notice that \(\operatorname{Im}\sharp_{\Lambda}=\mathcal{S}\), the characteristic distribution._ In the case of symplectic manifolds \(\sharp_{\omega}=\sharp_{\Lambda}\), and the characteristic distribution is the whole tangent bundle; however, in the general setting \(\sharp_{\Lambda}\) need not be a bundle isomorphism. Actually, if \(\sharp_{\Lambda}\) is a bundle isomorphism, it arises form a symplectic structure defined as \(\omega(v,w):=\Lambda(\sharp_{\Lambda}^{-1}(v),\sharp_{\Lambda}^{-1}(w))\)[9]. This characteristic distribution is involutive [26] and each leaf of the foliation, \(S\), admits a symplectic structure defining for \(f,g\in\mathcal{C}^{\infty}(S)\) and \(q\in S\), \[\{f,g\}(q):=\{\widetilde{f},\widetilde{g}\}(q)\] for arbitrary extensions \(\widetilde{f},\widetilde{g}\in\mathcal{C}^{\infty}(P)\) of \(f,g\) respectively. 
It can be easily checked that this definition does not depend on the chosen functions and that it defines a non-degenerate Poisson structure and thus, \(S\) is a symplectic manifold [39]. **Remark 4.1**.: The characteristic distribution of a Poisson manifold is an example of generalized distributions, studied by [34, 35], that extended the Frobenius theorem for this kind of involutive distributions. **Definition 4.5** (\(\Lambda\)-orthogonal).: _Let \(\Delta_{q}\subseteq T_{q}P\) be a subspace on a Poisson manifold \((P,\{\cdot,\cdot\})\). We define the \(\Lambda\)**-orthogonal complement**\(\Delta_{q}^{\perp_{\Lambda}}=\sharp_{\Lambda}(\Delta_{q}^{0})\) where \(\Delta_{q}^{0}\) is the annihilator of \(\Delta_{q}\), that is, \(\Delta_{q}^{0}:=\{\alpha\in T_{q}^{*}P\ |\ \alpha=0\text{ in }\Delta_{q}\}\)._ Just as in the symplectic scenario, we say that a subspace \(\Delta_{q}\subseteq T_{q}P\) is * **Isotropic** if \(\Delta_{q}\subseteq\Delta_{q}^{\perp_{\Lambda}}\) for every \(q\in P\); * **Coisotropic** if \(\Delta_{q}^{\perp_{\Lambda}}\subseteq\Delta_{q}\) for every \(q\in P\); * **Lagrangian** if \(\Delta_{q}=\Delta_{q}^{\perp_{\Lambda}}\cap\mathcal{S}_{q}\) for every \(q\in P\). Notice that this is equivalent to \(\Delta_{q}\cap\mathcal{S}_{q}\) being Lagrangian in each symplectic vector space \(\mathcal{S}_{q}\). The \(\Lambda\)-orthogonal complement satisfies the following properties: * \((W_{1}\cap W_{2})^{\perp_{\Lambda}}=W_{1}^{\perp_{\Lambda}}+W_{2}^{\perp_{ \Lambda}}\); * \((W_{1}+W_{2})^{\perp_{\Lambda}}\subseteq W_{1}^{\perp_{\Lambda}}\cap W_{2}^{ \perp_{\Lambda}}\). **Remark 4.2**.: For symplectic manifolds, the above definitions coincide with the ones previously given. ## 5 Coisotropic reduction in cosymplectic geometry Cosymplectic structures are just relevant because they are the natural arena to develop time-dependent Lagrangian and Hamiltonian mechanics [9]. **Definition 5.1** (Cosymplectic manifold).: _A **cosymplectic manifold** is a triple \((M,\Omega,\theta)\) where \(M\) is a \((2n+1)\)-manifold, \(\theta\) is a closed \(1\)-form and \(\Omega\) is a closed \(2\)-form such that \(\theta\wedge\Omega^{n}\neq 0\)._ Similar to the symplectic setting, there exists canonical coordinates, which will be called Darboux coordinates \((q^{i},p_{i},t)\) such that \(\Omega=\operatorname{d}q^{i}\wedge\operatorname{d}p_{i}\) and \(\theta=\operatorname{d}t\). The existence of such coordinate charts is proven in [20]. There are two natural distributions defined on \(M\): * The **horizontal distribution**\(\mathcal{H}:=\operatorname{Ker}\theta\); * The **vertical distribution**\(\mathcal{V}:=\operatorname{Ker}\Omega\). These distributions induce the following types of tangent vectors in each tangent space. A vector \(v\in T_{q}M\) will be called: * **Horizontal** if \(v\in\mathcal{H}_{q}\); * **Vertical** if \(v\in\mathcal{V}_{q}\). In Darboux coordinates, these distributions are locally generated as follows: \[\mathcal{H}=\langle\frac{\partial}{\partial q^{i}},\frac{\partial}{\partial p _{i}}\rangle;\ \mathcal{V}=\langle\frac{\partial}{\partial t}\rangle.\] Just as before, we can define a bundle isomorphism between the tangent and cotangent bundles: \[\flat_{\theta,\Omega}:TM\to T^{*}M;\ \ v_{q}\mapsto\flat_{\theta,\Omega}(v_{q}) =i_{v_{q}}\Omega+\theta(v_{q})\cdot\theta.\] The vector field defined as \(\mathcal{R}:=\sharp_{\theta,\Omega}(\theta)\) is called the **Reeb vector field**. 
The Reeb vector field is locally given by \[\mathcal{R}=\frac{\partial}{\partial t}.\] Let \(H\) be a differentiable function on \(M\). We define the following vector fields: * The **gradient vector field**\(\operatorname{grad}H:=\sharp_{\theta,\Omega}(dH)\); * The **Hamiltonian vector field**\(X_{H}:=\operatorname{grad}H-\mathcal{R}(H)R\); * The **evolution vector field**\(\mathcal{E}_{H}:=X_{H}+\mathcal{R}\). These vector fields have the local expressions: \[\operatorname{grad}H =\frac{\partial H}{\partial p_{i}}\frac{\partial}{\partial q^{i}}- \frac{\partial H}{\partial q^{i}}\frac{\partial}{\partial p_{i}}+\frac{ \partial H}{\partial t}\frac{\partial}{\partial t},\] \[X_{H} =\frac{\partial H}{\partial p_{i}}\frac{\partial}{\partial q^{i}}- \frac{\partial H}{\partial q^{i}}\frac{\partial}{\partial p_{i}},\] \[\mathcal{E}_{H} =\frac{\partial H}{\partial p_{i}}\frac{\partial}{\partial q^{i}}- \frac{\partial H}{\partial q^{i}}\frac{\partial}{\partial p_{i}}+\frac{ \partial}{\partial t}.\] Notice that the horizontal distribution \(\mathcal{H}\) is the distribution generated by all Hamiltonian vector fields. Just as in the symplectic case, we can define a Poisson bracket: **Definition 5.2** (Poisson bracket).: _Let \(\{\cdot,\cdot\}\) be the bracket in the ring \(\mathcal{C}^{\infty}(M)\) given by_ \[\{f,g\}:=\Omega(X_{f},X_{g}).\] We can easily check that this is indeed a Poisson structure observing that in coordinates is given by \[\{f,g\}=\frac{\partial f}{\partial q^{i}}\frac{\partial g}{\partial p_{i}}- \frac{\partial f}{\partial p_{i}}\frac{\partial g}{\partial q^{i}}.\] And, thus, the coordinate expression of the Poisson tensor is \[\Lambda=\frac{\partial}{\partial q^{i}}\wedge\frac{\partial}{\partial p_{i}}.\] This induces all the definitions from Poisson manifolds given in Section 4. In particular, given \(\Delta_{q}\subseteq T_{q}M\), we have \[\Delta_{q}^{\perp_{\Lambda}}=\sharp_{\Lambda}(\Delta_{q}^{0}).\] Note that \(\operatorname{Ker}\sharp_{\Lambda}=\langle\theta\rangle\) and that \(\operatorname{Im}\sharp_{\Lambda}=\mathcal{H}\), that is, \(\mathcal{H}\) is the characteristic distribution of the Poisson structure induced by \((\theta,\Omega)\). This implies the following result: **Proposition 5.1**.: \(i:L\to M\) _is a Lagrangian submanifold if and only if_ \[T_{q}L^{\perp_{\Lambda}}=T_{q}L\cap\mathcal{H}_{q}\] _for every \(q\in L\)._ Proof.: It follows from the definition of Lagrangian submanifold (Section 4) and the fact that \(\mathcal{H}\) is the characteristic distribution on \(M\). It is also easy to see that \[\Lambda(\alpha,\beta)=\Omega(\sharp_{\theta,\Omega}(\alpha),\sharp_{\theta, \Omega}(\beta))\] observing that we have \(\Omega(X_{f},X_{g})=\Omega(\operatorname{grad}f,\operatorname{grad}g)\). ### Gradient, Hamiltonian and evolution vector fields as Lagrangian submanifolds **Definition 5.3**.: _Given a cosymplectic manifold \((M,\Omega,\theta)\), we define the symplectic structure on \(TM\) as \(\Omega_{0}:=-\operatorname{d}\lambda_{0}\), where \(\lambda_{0}=\flat_{\theta,\Omega}^{*}\lambda_{M}\), \(\lambda_{M}\) being the Liouville \(1\)-form in the cotangent bundle \(T^{*}M\)._ There is another expression of \(\Omega_{0}\), namely \[\Omega_{0}=-\Omega^{c}-\theta^{c}\wedge\theta^{v}\] as one can verify [5]. Here, \(\alpha^{v},\alpha^{c}\) denote the complete and vertical lifts of a form \(\alpha\) on \(M\) to its tangent bundle \(TM\)[9]. 
This implies that in the induced coordinates in \(TM\), \((q^{i},p_{i},t,\dot{q}^{i},\dot{p}_{i},\dot{t})\), \[\Omega_{0}=-dq^{i}\wedge d\dot{p}_{i}-d\dot{q}^{i}\wedge dp_{i}-d\dot{t}\wedge dt.\] **Proposition 5.2**.: _Let \((M,\Omega,\theta)\) be a cosymplectic manifold and \(X:M\to TM\) a vector field. Then \(X(M)\) is a Lagrangian submanifold of \((TM,\Omega_{0})\) if and only if \(X\) is locally a gradient vector field._ Proof.: It is easily checked that \(X^{*}\lambda_{0}=\flat_{\theta,\Omega}(X)\) (just like in Proposition 3.1) and then, \(X(M)\) is Lagrangian if and only if \[0=X^{*}\Omega_{0}=-X^{*}\operatorname{d}\lambda_{0}=-\operatorname{d}\flat_{ \theta,\Omega}(X),\] that is, \(X\) is locally a gradient vector field. We can also check this in coordinates. Indeed, let \[X=X^{i}\frac{\partial}{\partial q^{i}}+Y_{i}\frac{\partial}{\partial p_{i}}+Z \frac{\partial}{\partial t}\] be a vector field on \(M\). \(X:M\hookrightarrow TM\) defines a Lagrangian submanifold if and only if \(X^{*}\Omega_{0}=0.\) An easy calculation gives \[-X^{*}\Omega_{0}= \left(\frac{\partial X^{i}}{\partial q^{j}}+\frac{\partial Y_{j} }{\partial p_{i}}\right)\operatorname{d}q^{j}\wedge\operatorname{d}p_{i}+ \left(\frac{\partial X^{i}}{\partial t}-\frac{\partial Z}{\partial p_{i}} \right)\operatorname{d}t\wedge\operatorname{d}p_{i}+\left(\frac{\partial Y_{i }}{\partial t}+\frac{\partial Z}{\partial q^{i}}\right)\operatorname{d}q^{i} \wedge\operatorname{d}t\] \[\frac{\partial X^{i}}{\partial p_{j}}\operatorname{d}p_{j}\wedge \operatorname{d}p_{i}+\frac{\partial Y_{i}}{\partial q^{j}}\operatorname{d}q^ {j}\wedge\operatorname{d}q^{i}.\] Therefore, \(X\) defines a Lagrangian submanifold of \((TM,\Omega_{0})\) if and only if \[\frac{\partial X^{i}}{\partial q^{j}}+\frac{\partial Y_{j}}{ \partial p_{i}} =0, \tag{3}\] \[\frac{\partial X^{i}}{\partial t}-\frac{\partial Z}{\partial p_{i}} =0,\] (4) \[\frac{\partial Y_{i}}{\partial t}+\frac{\partial Z}{\partial q^{i}} =0,\] (5) \[\frac{\partial X^{i}}{\partial p_{j}}-\frac{\partial X^{j}}{ \partial p_{i}} =0,\] (6) \[\frac{\partial Y_{i}}{\partial q^{j}}-\frac{\partial Y_{j}}{ \partial q^{i}} =0. \tag{7}\] The equations above can be summarized taking \[(G^{1},\ldots,G^{n},G^{n+1},\ldots,G^{2n},G^{2n+1}):=(X^{i},-Y_{i},Z),\] \[(x^{1},\ldots,x^{n},x^{n+1},\ldots,x^{2n},x^{2n+1}):=(q^{i},p_{i},t),\] since they translate to \[\frac{\partial G^{i}}{\partial x^{j}}=\frac{\partial G^{j}}{\partial x_{i}}.\] We conclude that \(G^{i}=\frac{\partial H}{\partial x^{i}},\) for some local function \(H\), that is, locally, \(X=\operatorname{grad}H\). In general, the Hamiltonian and evolution vector field do not define a Lagrangian submanifold in \((TM,\Omega_{0})\). However, modifying the form we can achieve this. First, let us study the Hamiltonian vector field \(X_{H}\). 
We have \[X_{H}^{*}\Omega_{0}=(\operatorname{grad}H-\mathcal{R}(H)\mathcal{R})^{*}\Omega _{0}=-\operatorname{d}(\mathcal{R}(H)\theta)=-\operatorname{d}(\mathcal{R}( H))\wedge\theta.\] The form defined as \[\Omega_{H}:=\Omega_{0}+(\operatorname{d}\mathcal{R}(H)\wedge\theta)^{v}\] is a symplectic form and has the local expression \[\Omega_{H}=-\operatorname{d}q\wedge\operatorname{d}\dot{p}_{i}-\operatorname {d}\dot{q}^{i}\wedge\operatorname{d}p_{i}-\operatorname{d}\dot{t}\wedge \operatorname{d}t+\operatorname{d}\left(\frac{\partial H}{\partial t}\right) \wedge\operatorname{d}t.\] Also, \[X_{H}^{*}\Omega_{H}=-\operatorname{d}\mathcal{R}(H)\wedge\theta+\operatorname {d}(\mathcal{R}(H))\wedge\theta=0.\] We have proved that \(X_{H}\) defines a Lagrangian submanifold of \((TM,\Omega_{H}).\) Furthermore, since \[\mathcal{R}^{*}\Omega_{0}=0,\] it follows that the evolution vector field \(\mathcal{E}_{H}\) also defines a Lagrangian submanifold of \((TM,\Omega_{H}).\) This also gives a way of interpreting both vector fields as Lagrangian submanifolds of a the cosymplectic submanifold \((TM\times\mathbb{R},\Omega_{H},\operatorname{d}s)\), taking the coordinate in \(\mathbb{R}\) to be constant. ### 5.2 Coisotropic reduction We can interpret the orthogonal complement defined by the Poisson structure using the cosymplectic structure. We note that \(\Omega|_{\mathcal{H}}\) defined as \(\Omega\) restricted to the distribution \(\mathcal{H}\) induces a symplectic vector space in each \(\mathcal{H}_{q}\) and thus we have a symplectic vector bunde \(\mathcal{H}\to M\). If \(\Delta_{q}\subseteq\mathcal{H}_{q}\), we have the \(\Omega|_{\mathcal{H}}\)-orthogonal complement \[(\Delta_{q})^{\perp_{\Omega|_{\mathcal{H}}}}=\{v\in\mathcal{H}\ |\ \Omega(v,w)=0,\ \forall w\in\Delta_{q}\}.\] **Proposition 5.3**.: _Let \(\Delta_{q}\subseteq T_{q}M\). Then \(\Delta_{q}^{\perp_{\Lambda}}=(\Delta_{q}\cap\mathcal{H})^{\perp_{\Omega|_{ \mathcal{H}}}}\)._ Proof.: Let \(v\in\Delta_{q}^{\perp_{\Lambda}}\), that is, \(v=\sharp_{\Lambda}(\alpha)\) with \(\alpha\in\Delta_{q}^{0}\). This implies that \(v\) is horizontal. We only need to check that \(\Omega(v,w)=0\) for every \(w\in\Delta_{q}\cap\mathcal{H}_{q}\). Indeed, since \(\theta(w)=0\), \[\Omega(\sharp_{\Lambda}\alpha,w) =-\Omega(w,\sharp_{\Lambda}\alpha)-\theta(w)\theta(\sharp_{ \Lambda}\alpha)=-(\flat_{\theta,\Omega}w)(\sharp_{\Lambda}\alpha)=-\Lambda( \alpha,\flat_{\theta,\Omega}w)\] \[=-\Omega(\sharp_{\theta,\Omega}\alpha,w)=-\Omega(\sharp_{\theta, \Omega}\alpha,w)-\theta(\sharp_{\theta,\Omega}\alpha)\theta(w)=-\alpha(w)=0.\] Now we compare dimensions. We distinguish two cases, if \(\theta\in\Delta_{q}^{0}\), we have \[\dim\Delta_{q}^{\perp_{\Lambda}}=\dim\Delta_{q}^{0}-1=2n-\dim\Delta_{q}\] which is exactly \(\dim(\Delta_{q}\cap\mathcal{H}_{q})^{\perp_{\Omega|_{\mathcal{H}}}}\), for \(\Delta_{q}\subseteq\mathcal{H}_{q}\) and \((\mathcal{H}_{q},\Omega|_{\mathcal{H}})\) is symplectic. Now, if \(\theta\not\in\Delta_{q}^{0}\), then \[\dim\Delta_{q}^{\perp_{\Lambda}}=2n+1-\dim\Delta_{q}\] and, since \(\Delta_{q}\not\subseteq\mathcal{H}_{q}\), we have \(\dim(\Delta_{q}\cap\mathcal{H}_{q})=\dim\Delta_{q}-1\) which implies that \(\dim(\Delta_{q}\cap\mathcal{H}_{q})^{\perp_{\Omega|_{\mathcal{H}}}}=2n+1- \dim\Delta\). This last proposition clarifies the situation. The \(\Lambda\)-orthogonal of a subspace \(\Delta\) is just the symplectic orthogonal of the intersection with the symplectic leaf. 
This means that coisotropic reduction in cosymplectic geometry will be performed in each leaf of the characteristic distribution \(\mathcal{H}\). Also, because the \(\Lambda\)-orthogonal complement is just the symplectic complement of the intersection with \(\mathcal{H}\), we have the following properties: * \((\Delta_{1}\cap\Delta_{2})^{\perp_{\Lambda}}=\Delta_{1}^{\perp_{\Lambda}}+ \Delta_{2}^{\perp_{\Lambda}}\). * \((\Delta_{1}+\Delta_{2})^{\perp_{\Lambda}}=\Delta_{1}^{\perp_{\Lambda}}\cap \Delta_{2}^{\perp_{\Lambda}}\). * \((\Delta^{\perp_{\Lambda}})^{\perp_{\Lambda}}=\Delta\cap\mathcal{H}\). It will also be important to distinguish submanifolds \(N\hookrightarrow M\) acording to the position relative to the distributions \(\mathcal{H},\mathcal{V}\). **Definition 5.4** (Horizontal, non-horizontal and vertical submanifolds).: _Let \(i:N\hookrightarrow M\) be a submanifold. \(N\) will be called a:_ * _Horizontal submanifold_ _if_ \(T_{q}N\subseteq\mathcal{H}_{q}\) _for every_ \(q\in N\)_;_ * _Non-horizontal submanifold_ _if_ \(T_{q}N\not\subseteq\mathcal{H}_{q}\) _for every_ \(q\in N\) * _Vertical submanifold if the Reeb vector field is tangent to_ \(N\)_, that is,_ \(\mathcal{R}(q)\in T_{q}N\) _for every_ \(q\in N\)_._ **Remark 5.1**.: Note that if \(N\hookrightarrow M\) is a vertical submanifold, then \(N\) is non-horizontal. Lagrangian submanifolds are characterized as follows: **Lemma 5.1**.: _Let \(L\hookrightarrow M\) be a Lagrangian submanifold and \(q\in L\). Then_ * _If_ \(T_{q}L\subseteq\mathcal{H}_{q}\)_,_ \(\dim T_{q}L^{\perp_{\Lambda}}=\dim M-\dim L-1\)_._ * _If_ \(T_{q}L\not\subseteq\mathcal{H}_{q}\)_,_ \(\dim T_{q}L^{\perp_{\Lambda}}=\dim M-\dim L\)_._ _and, in either case,_ \(\dim T_{q}L^{\perp_{\Lambda}}=n\)_, where_ \(\dim M=2n+1\)_._ Proof.: * Since \(\theta\in T_{q}L^{0}\) we have \[\dim T_{q}L^{\perp_{\Lambda}}=\dim\sharp_{\Lambda}(T_{q}L^{0})=\dim M-\dim L -\dim(\operatorname{Ker}\sharp_{\Lambda}\cap T_{q}L)=\dim M-\dim L-1.\] * It follows from the previous calculation using that \(\theta\not\in T_{q}L^{0}\) because \[\dim(\operatorname{Ker}\sharp_{\Lambda}\cap T_{q}L)=0.\] The proof of the equality \(\dim T_{q}L^{\perp_{\Lambda}}=n\) is straightforward using that \(T_{q}L\cap\mathcal{H}_{q}\) is a Lagrangian subspace of \((\mathcal{H}_{q},\Omega|_{\mathcal{H}})\). Lemma 5.1 guarantees that either \(\dim L=n\), in which case \(L\) is horizontal, or \(\dim L=n+1\), in which case \(L\) is non-horizontal. We have the following useful characterization of Larangian submanifolds: **Lemma 5.2**.: _Let \(L\hookrightarrow M\) be a submanifold. We have_ * _If_ \(\dim L=n\)_, then_ \(L\) _is Lagrangian if and only if_ \(i^{*}\theta=0,i^{*}\Omega=0\)_._ * _If_ \(L\) _is non-horizontal and_ \(\dim L=n+1\)_, then_ \(L\) _is Lagrangian if and only if_ \(i^{*}\Omega=0\)_._ Proof.: Both assertions are proved by a comparison of dimensions. **Proposition 5.4**.: _Let \(i:N\hookrightarrow M\) be a coisotropic submanifold. Then the distribution \((TN)^{\perp_{\Lambda}}\) is involutive._ Proof.: We start proving that \(\mathcal{H}\) is an involutive distribution. Let \(X,Y\) be vector fields tangent to \(\mathcal{H}\). Since \(\theta\) is closed we have \[0=(\operatorname{d}\theta)(X,Y)=X(\theta(Y))-Y(\theta(X))-\theta([X,Y])=- \theta([X,Y]),\] that is, \([X,Y]\) is tangent to \(\mathcal{H}\). Denote \(\Omega_{0}:=i^{*}\Omega\). Let \(X,Y\) be vector fields in \(N\) tangent to \((TN)^{\perp_{\Lambda}}\). 
Using Proposition 5.3, \([X,Y]\in(TN)^{\perp_{\Lambda}}\) if and only if \([X,Y]\in(TN\cap\mathcal{H})^{\perp_{\Omega_{[\mathcal{H}]}}}\). In order to see this, we take an arbitrary vector field \(Z\) on \(N\) tangent to \(\mathcal{H}\) and check that \(\Omega_{0}([X,Y],Z)=0\). Because \(\Omega\) is closed, we have \[0= i^{*}(\operatorname{d}\Omega)=(\operatorname{d}\Omega_{0})(X,Y, Z)=X(\Omega_{0}(Y,Z))-Y(\Omega_{0}(X,Z))+Z(\Omega_{0}(X,Y))\] \[-\Omega_{0}([X,Y],Z)+\Omega_{0}([X,Z],Y)-\Omega_{0}([Y,Z],X)=- \Omega_{0}([X,Y],Z)\] where we have used that \(X,Y,Z,[Y,Z],[X,Z]\) are horizontal (since \(\mathcal{H}\) is involutive) and that \(X,Y\in(TN\cap\mathcal{H})^{\perp_{\Omega_{[\mathcal{H}]}}}\). ### Vertical coisotropic reduction We shall now study coisotropic reduction of a **vertical submanifold**\(N\hookrightarrow M\). Let \(q\in N\). We have \(\dim(T_{q}N)^{0}=\dim M-\dim N\). Since \(N\) is vertical, \(\theta\not\in(T_{q}N)^{0}\) and we have \[\dim(T_{q}N)^{\perp_{\Lambda}}=\dim\sharp_{\Lambda}(T_{q}N)^{0}=\dim M-\dim N -\dim(\operatorname{Ker}\sharp_{\Lambda}\cap(T_{q}N)^{0})=\dim M-\dim N.\] In particular, \((TN)^{\perp_{\Lambda}}\) is a regular distribution. **Theorem 5.1** (Vertical coisotropic reduction in the cosymplectic setting).: _Let \((M,\Omega,\theta)\) be a cosymplectic manifold and \(i:N\hookrightarrow M\) be an coisotropic vertical submanifold. Denote by \(\mathcal{F}\) the maximal foliation of the involutive regular distribution \((TN)^{\perp_{\Lambda}}\). If the space of all leaves \(N/\mathcal{F}\) admits a manifold structure such that the projection \(\pi:N\to N/\mathcal{F}\) is a submersion, then there exist unique \(\theta_{N}\), \(\omega_{N}\) such that_ \[i^{*}\omega =\pi^{*}\omega_{N},\] \[i^{*}\theta =\pi^{*}\theta_{N},\] _and they define a cosymplectic structure on \(N/\mathcal{F}\). The following diagram summarizes the situation:_ Proof.: Uniqueness is clear from the imposed relation. Denote \(\Omega_{0}:=i^{*}\Omega\), \(\theta_{0}:=i^{*}\theta\). We only need to verify that the following forms are closed, well defined and define a cosymplectic structure: \[\Omega_{N}([u],[v]) :=\Omega_{0}(u,v),\] \[\theta_{N}([u]) :=\theta_{0}(u),\] where \([u]:=T\pi(q)\cdot u\in T_{[q]}N/\mathcal{F}\). If they were well defined, it is clear that they are smooth and closed since \(\pi^{*}d\theta_{N}=d\theta_{0}=0\), \(\pi^{*}\Omega_{N}=d\Omega_{0}=0\) and \(\pi\) is a submersion. Let us first check that these definitions do not depend on the chosen representatives of the vectors. It suffices to observe that for vectors in the distribution, say \(v\in(T_{q}N)^{\perp_{\Lambda}}\), we have \(i_{v}\Omega_{0}=0\) and \(i_{v}\theta_{0}=0\). This easily follows from Proposition 5.3 using that the horizontal proyection of every vector \(u\in T_{q}N\) is tangent to \(N\) (here we use the condition \(\mathcal{R}(q)\in T_{q}N\)). To see the independece of the point in the leave chosen, it is enough to observe that \[\mathcal{L}_{X}\Omega_{0}=0;\ \ \mathcal{L}_{X}\theta_{0}=0\] for every vector field on \(N\) tangent to the distribution \((T_{q}N)^{\perp_{\Lambda}}\) (since every two points in the same leave of the foliation can be joined by a finite union of flows of such fields). Indeed, we have \[\mathcal{L}_{X}\Omega_{0} =i_{X}\,\mathrm{d}\,\Omega_{0}+\mathrm{d}\,i_{X}\Omega_{0}=0,\] \[\mathcal{L}_{X}\theta_{0} =i_{X}\,\mathrm{d}\,\theta_{0}+\mathrm{d}\,i_{X}\theta_{0}=0.\] Now we check that they define a cosymplectic structure. 
Assuming \(k=\dim N\) and \(2n+1=\dim M\), from the remark above we have \[\dim N/\mathcal{F}=\dim N-\dim(TN)^{\perp_{\Lambda}}=2k-2n-1=2(k-n-1)+1\] and hence, \((N/\mathcal{F},\Omega_{N},\theta_{N})\) is a cosymplectic manifold if and only if \[\theta_{N}\wedge\Omega_{N}^{k-n-1}\neq 0,\] which is equivalent to \(\theta_{0}\wedge\Omega_{0}^{k-n-1}\neq 0\), because \(\pi\) is a submersion. For every point \(q\in N\), \(TqN\) can be decomposed in \[TqN=TqN_{\mathcal{H}}\oplus\mathcal{V}\] where \(T_{q}N_{\mathcal{H}}=(T_{q}N)\cap\mathcal{H}_{q}\). It is easy to see that \((T_{q}N)_{\mathcal{H}}\) is a coisotropic subspace of \((\mathcal{H}_{q},\Omega|_{\mathcal{H}})\). This implies (using symplectic reduction) that there are \(\dim(T_{q}N)_{\mathcal{H}}-\dim(T_{q}N)^{\perp_{\bar{\Omega}}}_{\mathcal{H}}= k-1-(2n+1-k)=2k-2n-2\) horizontal vectors, say \(u_{1},\ldots,u_{2k-2n-2}\) such that \(\Omega_{0}^{k-n-1}(u_{1},\ldots,u_{2k-2n-2})\neq 0\). Taking the last vector to be \(\mathcal{R}(q)\), it is clear that \[(\theta_{0}\wedge\Omega_{0}^{k-n-1})(\mathcal{R}(q),u_{1},\ldots,u_{2k-2n-2}) \neq 0.\] #### 5.3.1 Projection of Lagrangian submanifolds Now we will proof that Lagrangian submanifolds \(L\hookrightarrow M\) project to Lagrangian submanifolds in \(N/\mathcal{F}\). **Proposition 5.5** (Projection of horizontal Lagrangian submanifolds is Lagrangian).: _In the hypothesis of Theorem 5.1, let \(L\hookrightarrow M\) be an horizontal Lagrangian submanifold such that \(L\) and \(N\) have clean intersection. If \(L_{N}:=\pi(L\cap N)\) is a submanifold of \(N/\mathcal{F}\), then \(L_{N}\) is Lagrangian._ Proof.: Let \(\mathcal{H}_{N}\) be the horizontal distribution in \(N/\mathcal{F}\). It is clear that \(L_{N}\) is horizontal, because \(T_{[q]}L_{N}=T\pi(q)(T_{q}L\cap T_{q}N)\). Using Proposition 5.3 we have \[T_{[q]}L_{N}^{\perp_{\Lambda_{N}}}=(T_{[q]}L_{N}\cap\mathcal{H}_{N})^{\perp_{ \Omega_{N}|\mathcal{H}_{N}}}=T_{[q]}L_{N}^{\perp_{\Omega_{N}|\mathcal{H}_{N}}}.\] We will check that \[T_{[q]}L_{N}\subseteq T_{[q]}L_{N}^{\perp_{\Lambda_{N}}}\] and prove that \(\dim L_{N}=\dim N-n-1\), which with Lemma 5.2 together with the calculation of the dimension of \(N/\mathcal{F}\) done in Theorem 5.1, yields the result. Let \([v],[w]\in T_{[q]}L_{N}\). Then \[\Omega_{N}([v],[w])=\Omega(v,w)=0,\] since \(L\) is Lagrangian. Because \([w]\) is arbitrary, this last calculation implies that \([v]\in T_{[q]}L_{N}^{\perp_{\Omega_{N}|\mathcal{H}_{N}}}\subseteq T_{[q]}L_{N }^{\perp_{\Lambda_{N}}}\). Now, \[\dim L_{N}=\dim(L\cap N)-\dim(T_{q}L\cap(T_{q}N)^{\perp_{\Lambda}}). \tag{8}\] Furthermore, since \(L\) is Lagrangian and horizontal, \((T_{q}L\cap(T_{q}N)^{\perp_{\Lambda}})^{\perp_{\Lambda}}=T_{q}L\cap\mathcal{H }_{q}+(T_{q}N^{\perp_{\Lambda}})^{\perp_{\Lambda}}=T_{q}L+(T_{q}N^{\perp_{ \Lambda}})^{\perp_{\Lambda}}\) and thus (using that \(T_{q}L\cap(T_{q}N^{\perp_{\Lambda}})^{\perp_{\Lambda}}\) is necessarily horizontal), \[\dim(T_{q}L\cap(T_{q}N)^{\perp_{\Lambda}})=\dim M-\dim(T_{q}L+(T_{q}N^{\perp_{ \Lambda}})^{\perp_{\Lambda}})-1.\] Since \(\dim(T_{q}L\cap\mathcal{H}_{q}+(T_{q}N^{\perp_{\Lambda}})^{\perp_{\Lambda}})= \dim(T_{q}L\cap\mathcal{H}_{q}+T_{q}N)-1\) (which comes from the fact that \(\dim(T_{q}N^{\perp_{\Lambda}})^{\perp_{\Lambda}}=\dim T_{q}N-1\)) we have \[\dim(T_{q}L\cap\mathcal{H}_{q}+T_{q}N^{\perp_{\Lambda}})=\dim M-\dim(T_{q}L \cap T_{q}N^{\perp_{\Lambda}}). 
\tag{9}\] Substituting (4) in (3) and using \(\dim L=n\), we conclude \[\dim L_{N} =\dim(L\cap N)-(\dim M-\dim(T_{q}L+T_{q}N))\] \[=\dim(L\cap N)-\dim M+\dim L+\dim N-\dim(L\cap N)\] \[=-2n-1+n+\dim N=\dim N-n-1.\] **Proposition 5.6** (Projection of non-horizontal Lagrangian submanifold is Lagrangian).: _Under the hypothesis of Theorem 5.1, let \(L\hookrightarrow M\) be a non-horizontal Lagragian submanifold. If \(L\) and \(N\) have clean intersection and \(L_{N}:=\pi(L\cap N)\hookrightarrow N/\mathcal{F}\) is a submanifold, then \(L_{N}\) is Lagrangian._ Proof.: The proof follows the same lines as that of Proposition 5.5. That \(L_{N}\) is isotropic follows easily from Proposition 5.3. However, in order to calculate \(\dim L_{N}\), we need to distinguish whether \(L\cap N\) is horizontal or not. * If \(L\cap N\) is horizontal, we need to check that \(\dim L_{N}=\dim N-n-1\), since \(L_{N}\) is horizontal. Because \((T_{q}L\cap T_{q}N^{\perp_{\Lambda}})^{\perp_{\Lambda}}=T_{q}L\cap\mathcal{H}_{q }+(T_{q}N^{\perp_{\Lambda}})^{\perp_{\Lambda}}\), we have \[\dim(T_{q}L\cap\mathcal{H}_{q}+(T_{q}N^{\perp_{\Lambda}})^{\perp_{\Lambda}})= \dim M-\dim(T_{q}L\cap T_{q}N^{\perp_{\Lambda}})-1.\] It is easy to check that \(\dim(T_{q}L\cap\mathcal{H}_{q}+(T_{q}N^{\perp_{\Lambda}})^{\perp_{\Lambda}})= \dim(T_{q}L\cap\mathcal{H}_{q}+T_{q}N)-1\) and thus, \[\dim(T_{q}L\cap T_{q}N^{\perp_{\Lambda}})=\dim M-\dim(T_{q}L\cap\mathcal{H}_{q }+T_{q}N).\] We conclude that \[\dim L_{N} =\dim(L\cap N)-\dim(T_{q}L\cap T_{q}N^{\perp_{\Lambda}})\] \[=\dim(L\cap N)-(\dim M-\dim(T_{q}L\cap\mathcal{H}_{q}+T_{q}N))\] \[=\dim(L\cap N)-(\dim M-\dim(T_{q}L\cap\mathcal{H}_{q})-\dim N+ \dim(T_{q}L\cap\mathcal{H}_{q}\cap T_{q}N))\] \[=\dim(L\cap N)-\dim M+\dim L-1+\dim N-\dim(T_{q}L\cap T_{q}N)\] \[=\dim N-\dim M+\dim L-1=\dim N-2n-1+n+1-1=\dim N-n-1,\] where we have used that \(T_{q}L\cap\mathcal{H}_{q}\cap T_{q}N=T_{q}L\cap T_{q}N\), since \(N\cap L\) is horizontal, and \(\dim T_{q}L\cap\mathcal{H}_{q}=\dim L-1\), because \(T_{q}L\not\subseteq\mathcal{H}_{q}\). * If \(L\cap N\) is not horizontal, we need to check that \(\dim L_{N}=\dim N-n\). This follows from the same calculation done in i), using that \[\dim(T_{q}L\cap\mathcal{H}_{q}\cap T_{q}N)=\dim(T_{q}L\cap T_{q}N)-1.\] ### Horizontal coisotropic reduction We will restrict the study to **horizontal** coisotropic submanifolds \(N\hookrightarrow M\), that is, manifolds satisfying \(T_{q}N\subseteq\mathcal{H}_{q}\) for every \(q\in N\). Note that in this case the distribution \((TN)^{\perp_{\Lambda}}\) is also regular, since \[\dim(T_{q}N)^{\perp_{\Lambda}}=\dim M-\dim N-\dim(\operatorname{Ker}\sharp_{ \Lambda}\cap(T_{q}N)^{0})=\dim M-\dim N-1.\] **Theorem 5.2** (Horizontal coisotropic reduction in the cosymplectic setting).: _Let \((M,\Omega,\theta)\) be a cosymplectic manifold and \(i:N\hookrightarrow M\) be an horizontal coisotropic submanifold. Denote by \(\mathcal{F}\) the space of leaves determined by the regular and involutive distribution \((TN)^{\perp_{\Lambda}}.\) If \(N/\mathcal{F}\) admits a manifold structure such that \(\pi:N\to N/\mathcal{F}\) is a submersion, then there exists a unique \(2\)- form \(\Omega_{N}\) in \(N/\mathcal{F}\) such that_ \[\pi^{*}\Omega_{N}=i^{*}\Omega\] _and \((N/\mathcal{F},\Omega_{N})\) is a symplectic manifold._ Proof.: Since \(N\) is horizontal and the horizontal distribution is integrable, \(N\) will be contained in an unique symplectic leaf and thus, we are performing symplectic reduction. 
The proof is just repeating what has been done in Theorem 3.2. We can generalize this process to arbitrary submanifolds. Let \(N\hookrightarrow M\) be a coisotropic submanifold. Since in general we cannot guarantee the well-definedness of the 2-form in the quotient, we will reduce the intersection of \(N\) with each one of the symplectic leaves. It is clear that \(TN\cap\mathcal{H}\) is an involutive distribution, since \(TN\) and \(\mathcal{H}\) are. If this distribution was regular, for every \(q\in N\) there would exist an unique maximal leaf of the distribution, say \(S_{q}\). Notice that \(S_{q}\hookrightarrow M\) is an horizontal submanifold. We can perform coisotropic reduction in each of this submanifolds. #### 5.4.1 Projection of Lagrangian submanifolds **Proposition 5.7**.: _Let \(L\hookrightarrow M\) be a Lagrangian submanifold. If \(L\) and \(N\) have clean intersection and \(L_{N}:=\pi(L\cap N)\) is a submanifold, then \(L_{N}\) is Lagrangian._ Proof.: It follows from Proposition 3.4. ## 6 Jacobi structures Contact and cocontact manifolds are not Poisson manifolds. However, there is still a Lie bracket defined in the algebra of functions, as we will see. This bracket induces what is called a Jacobi manifold. In this section we define and study such structures (see [23, 26] for more details). **Definition 6.1** (Jacobi Manifold).: _A **Jacobi structure** on a manifold \(M\) is a Lie bracket defined in the algebra of functions \((\mathcal{C}^{\infty}(M),\{\cdot,\cdot\})\) such that it satisfies the weak Leibniz rule, that is,_ \[\operatorname{supp}\{f,g\}\subseteq\operatorname{supp}f\cap\operatorname{ supp}g.\] Every Jacobi bracket can be uniquely expressed as \[\{f,g\}=\Lambda(\operatorname{d}f,\operatorname{d}g)+fE(g)-gE(f),\] where \(\Lambda\) is a bivector field (called the **Jacobi tensor**) and \(E\) is a vector field. \(\Lambda\) and \(E\) satisfy the equalities \[[E,\Lambda]=0,\ [\Lambda,\Lambda]=2E\wedge\Lambda;\] where \([\cdot,\cdot]\) is the Schouten-Nijenhuis bracket. Conversely, given a bivector field \(\Lambda\) and a vector field \(E\), \[\{f,g\}:=\Lambda(\operatorname{d}f,\operatorname{d}g)+fE(g)-gE(f)\] defines a Jacobi bracket if and only if both equalities above hold. **Remark 6.1**.: It is clear that Poisson manifolds are Jacobi manifolds, taking \(E=0\). The Jacobi tensor allows us to define the morphism \[\sharp_{\Lambda}:T^{*}M\to TM;\ \alpha\mapsto i_{\alpha}\Lambda.\] Define the \(\Lambda\)-orthogonal of distributions \(\Delta\) as \[\Delta^{\perp_{\Lambda}}:=\sharp_{\Lambda}(\Delta^{0}).\] We can define the Hamiltonian vector field defined by a function \(H\) as \[X_{H}=\sharp_{\Lambda}(\operatorname{d}H)+HE\] Just like in the Poisson case, we say that a distribution \(\Delta\) is: * **Isotropic** if \(\Delta\subseteq\Delta^{\perp_{\Lambda}}\); * **Coisotropic** if \(\Delta^{\perp_{\Lambda}}\subseteq\Delta\); * **Legendrian** if \(\Delta^{\perp_{\Lambda}}=\Delta\). These definitions extend naturally to submanifolds. **Remark 6.2**.: As in the case of Poisson manifolds, a Jacobi structure on a manifold \(M\) defines a characteristic distribution \(\mathcal{S}\) as follows: \(\mathcal{S}_{x}\) is the vector subspace of \(T_{x}M\) generated by the values of all Hamiltonian vector fields at \(x\) and the vector field \(E\) evaluated at \(x\). 
This is again an involutive distribution in the sense of Stefan and Sussmann, and the leaves of the corresponding foliation are contact manifolds if the leaf has odd dimension, and locally conformal symplectic manifolds, if the leaf has even dimension [26]. ## 7 Coisotropic reduction in contact geometry Contact manifolds are the natural settings for Hamiltonian systems with dissipation, instead of symplectic Hamiltonian systems where the antisymmetry of the symplectic form provides conservative properties. In the Lagrangian picture, contact Lagrangian systems correspond to the so-called Lagrangians depending on the action, and instead of Hamilton principle, one has to use the so-called Herglotz principle to obtain the dynamics [13]. **Definition 7.1** (Contact manifold).: _A contact manifold is a couple \((M,\eta)\) where \(M\) is a \((2n+1)\)-dimensional manifold, \(\eta\) is a \(1\)-form and \(\eta\wedge(\operatorname{d}\eta)^{n}\neq 0\)._ In this case we also have Darboux coordinates \((q^{i},p_{i},z)\) in \(M\)[20] such that \[\eta=\operatorname{d}z-p_{i}\operatorname{d}q_{i};\] We have also have a bundle isomorphism defined as in the cosymplectic case \[\flat_{\eta}:TM\to T^{*}M;\ v_{q}\mapsto i_{v_{q}}\operatorname{d}\eta+\eta(v _{q})\cdot\eta,\] its inverse \(\sharp_{\eta}=\flat_{\eta}^{-1}\), and a couple of natural distributions: * The **horizontal** distribution \(\mathcal{H}:=\operatorname{Ker}\eta\); * The **vertical** distribution \(\mathcal{V}:=\operatorname{Ker}\mathrm{d}\,\eta\). We can find different types of tangent vectors at a point \(q\in M\). Indeed, a tangent vector \(v\in T_{q}M\) will be called * **Horizontal** if \(v\in\mathcal{H}_{q}\); * **Vertical** if \(v\in\mathcal{V}_{q}\). This time, however, we cannot define a canonical Poisson structure since the bivector field \[\Lambda(\alpha,\beta):=-\operatorname{d}\eta(\sharp_{\eta}(\alpha),\sharp_{ \eta}(\beta))\] is not a Poisson tensor. In fact, \[[\Lambda,\Lambda]=-2\mathcal{R}\wedge\Lambda;\ [E,\Lambda]=0\] where \(\mathcal{R}\) is the **Reeb vector field** defined as \(\mathcal{R}:=\sharp_{\eta}(\eta)\) (locally \(\mathcal{R}=\frac{\partial}{\partial z}\)). This is easily seen performing a direct calculation in Darboux coordinates using the local expresion \[\Lambda=\frac{\partial}{\partial p_{i}}\wedge\frac{\partial}{\partial q^{i}}+ p_{i}\frac{\partial}{\partial p_{i}}\wedge\frac{\partial}{\partial z}.\] This defines a Jacobi structure in \(M\) taking \(\Lambda\) as above and \(E=-\mathcal{R}\) (see Section 6). The Jacobi bracket is locally expressed as \[\{f,g\}=\frac{\partial f}{\partial p_{i}}\frac{\partial g}{\partial q^{i}}- \frac{\partial f}{\partial q^{i}}\frac{\partial g}{\partial p_{i}}+p_{i}\left( \frac{\partial f}{\partial p_{i}}\frac{\partial g}{\partial z}-\frac{\partial g }{\partial p_{i}}\frac{\partial f}{\partial z}\right)+g\frac{\partial f}{ \partial z}-f\frac{\partial g}{\partial z}.\] The morphism induced by the Jacobi tensor \(\Lambda\) satisfies \[\operatorname{Ker}\sharp_{\Lambda}=\langle\eta\rangle,\ \ \operatorname{Im} \sharp_{\Lambda}=\mathcal{H}.\] ### 7.1 Hamiltonian and evolution vector fields as Lagrangian and Legendrian submanifolds **Definition 7.2** (Hamiltonian vector field).: _Let \(H\in\mathcal{C}^{\infty}(M)\). 
Define the **Hamiltonian vector field** of \(H\) as_
\[X_{H}:=\sharp_{\Lambda}(\operatorname{d}H)-H\mathcal{R}=\sharp_{\eta}(\operatorname{d}H)-(\mathcal{R}(H)+H)\mathcal{R}.\]
Locally, it has the expression
\[X_{H}=\frac{\partial H}{\partial p_{i}}\frac{\partial}{\partial q^{i}}-\left(\frac{\partial H}{\partial q^{i}}+p_{i}\frac{\partial H}{\partial z}\right)\frac{\partial}{\partial p_{i}}+\left(p_{i}\frac{\partial H}{\partial p_{i}}-H\right)\frac{\partial}{\partial z}.\]
We can define a symplectic structure in \(TM\) taking \(\Omega_{0}:=\flat_{\eta}^{*}\Omega_{M}\), where \(\Omega_{M}\) is the canonical symplectic structure in \(T^{*}M\). In local coordinates \((q^{i},p_{i},z,\dot{q}^{i},\dot{p}_{i},\dot{z})\), it has the expression
\[\Omega_{0}= \,p_{i}p_{j}\,\mathrm{d}\,q^{i}\wedge\mathrm{d}\,\dot{q}^{j}+\left((1+\delta_{i}^{j})p_{i}\dot{q}^{j}-\delta_{i}^{j}\dot{z}\right)\mathrm{d}\,q^{i}\wedge\mathrm{d}\,p_{j}-\mathrm{d}\,q^{i}\wedge\mathrm{d}\,\dot{p}_{i}-p_{i}\,\mathrm{d}\,q^{i}\wedge\mathrm{d}\,\dot{z}\]
\[+\mathrm{d}\,p_{i}\wedge\mathrm{d}\,\dot{q}^{i}+\mathrm{d}\,z\wedge\mathrm{d}\,\dot{z}-p_{i}\,\mathrm{d}\,z\wedge\mathrm{d}\,\dot{q}^{i}-\dot{q}^{i}\,\mathrm{d}\,z\wedge\mathrm{d}\,p_{i}\]
\[= \,\mathrm{d}\,q^{i}\wedge\left(p_{i}p_{j}\,\mathrm{d}\,\dot{q}^{j}+(1+\delta_{i}^{j})p_{i}\dot{q}^{j}\,\mathrm{d}\,p_{j}-\dot{z}\,\mathrm{d}\,p_{i}-\mathrm{d}\,\dot{p}_{i}-p_{i}\,\mathrm{d}\,\dot{z}\right)\]
\[+\mathrm{d}\,p_{i}\wedge\mathrm{d}\,\dot{q}^{i}+\mathrm{d}\,z\wedge\left(\mathrm{d}\,\dot{z}-p_{i}\,\mathrm{d}\,\dot{q}^{i}-\dot{q}^{i}\,\mathrm{d}\,p_{i}\right)\]

**Definition 7.3** (Gradient vector field).: _Given a Hamiltonian \(H\) on \(M\), define the **gradient vector field** of \(H\) as_
\[\operatorname{grad}H:=\sharp_{\eta}(\mathrm{d}\,H).\]
Locally, it is given by
\[\operatorname{grad}H=\frac{\partial H}{\partial p_{i}}\frac{\partial}{\partial q^{i}}-\left(\frac{\partial H}{\partial q^{i}}+p_{i}\frac{\partial H}{\partial z}\right)\frac{\partial}{\partial p_{i}}+\left(p_{i}\frac{\partial H}{\partial p_{i}}+\frac{\partial H}{\partial z}\right)\frac{\partial}{\partial z}.\]
We have the following relation between both vector fields
\[X_{H}=\operatorname{grad}H-(\mathcal{R}(H)+H)\mathcal{R}.\]
Just like in the previous sections, a vector field \(X:M\to TM\) is locally a gradient vector field if and only if it defines a Lagrangian submanifold in \((TM,\Omega_{0})\). The proof is straightforward, checking that
\[X^{*}\Omega_{0}=-\operatorname{d}X^{\flat_{\eta}}.\]
We can also interpret the Hamiltonian vector field \(X_{H}\) as a Lagrangian submanifold of \(TM\), but we need to slightly modify the symplectic form. It is easy to verify that
\[X_{H}^{*}\Omega_{0}=\mathrm{d}(\mathcal{R}(H)\eta+H\eta),\]
therefore, taking
\[\Omega_{H}:=\Omega_{0}-\mathrm{d}(\mathcal{R}(H)\eta+H\eta)^{v},\]
we have
\[X_{H}^{*}\Omega_{H}=0.\]
It is clear that \(\Omega_{H}\) is a symplectic form. We have proved:

**Proposition 7.1**.: _The Hamiltonian vector field \(X_{H}:M\to TM\) defines a Lagrangian submanifold of the symplectic manifold \((TM,\Omega_{H})\)._

Now we study evolution vector fields, an important class of vector fields in the application of contact geometry to thermodynamics. For more details, see [32, 33]. 
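Before doing so, let us illustrate the dynamics generated by \(X_{H}\) with a standard example (this illustration is ours, with an assumed Hamiltonian of mechanical type, and is not taken from the references above). On \(M=\mathbb{R}^{3}\) with coordinates \((q,p,z)\) and \(\eta=\operatorname{d}z-p\operatorname{d}q\), take \(H=\tfrac{1}{2}p^{2}+V(q)+\gamma z\) with \(\gamma>0\) constant. The local expression above gives
\[X_{H}=p\,\frac{\partial}{\partial q}-\big(V'(q)+\gamma p\big)\frac{\partial}{\partial p}+\Big(\tfrac{1}{2}p^{2}-V(q)-\gamma z\Big)\frac{\partial}{\partial z},\]
so the integral curves satisfy \(\dot{q}=p\), \(\dot{p}=-V'(q)-\gamma p\), i.e. the mechanical system with linear damping, while \(z\) plays the role of the action.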
**Definition 7.4**.: _Given a Hamiltonian \(H\), we define the **evolution vector field** as_
\[\mathcal{E}_{H}:=X_{H}+H\mathcal{R}.\]
Locally, the evolution vector field is written
\[\mathcal{E}_{H}=\frac{\partial H}{\partial p_{i}}\frac{\partial}{\partial q^{i}}-\left(\frac{\partial H}{\partial q^{i}}+p_{i}\frac{\partial H}{\partial z}\right)\frac{\partial}{\partial p_{i}}+p_{i}\frac{\partial H}{\partial p_{i}}\frac{\partial}{\partial z}.\]
Let us see how we can modify the symplectic form \(\Omega_{0}\) in such a way that \(\mathcal{E}_{H}\) defines a Lagrangian submanifold. We have
\[\mathcal{E}_{H}^{*}\Omega_{0}=X_{H}^{*}\Omega_{0}+(H\mathcal{R})^{*}\Omega_{0}=\operatorname{d}(\mathcal{R}(H)\eta+H\eta)-\operatorname{d}(H\eta)=\operatorname{d}(\mathcal{R}(H)\eta),\]
and thus, \(\mathcal{E}_{H}\) defines a Lagrangian submanifold of \((TM,\widetilde{\Omega}_{H})\), where
\[\widetilde{\Omega}_{H}=\Omega_{0}-\operatorname{d}(\mathcal{R}(H)\eta).\]
We can also interpret Hamiltonian and evolution vector fields as Legendrian submanifolds of a certain contact structure defined on \(TM\times\mathbb{R}\).

**Definition 7.5**.: _Let \((M,\eta)\) be a contact manifold. Define the contact form on \(TM\times\mathbb{R}\) as_
\[\hat{\eta}:=\eta^{c}+t\eta^{v},\]
_where \(\eta^{c}\), \(\eta^{v}\) are the complete and vertical lifts [9]. It is easily checked that \(\hat{\eta}\) defines a contact structure [7]._

In local coordinates it has the expression:
\[\hat{\eta}=\operatorname{d}\dot{z}-\dot{p}_{i}\operatorname{d}q^{i}-p_{i}\operatorname{d}\dot{q}^{i}+t\operatorname{d}z-tp_{i}\operatorname{d}q^{i}.\]
We have the following [7]:

**Proposition 7.2**.: _Let \(X_{H}:M\to TM\) be a Hamiltonian vector field. Then, the submanifold defined by the immersion_
\[i:M\hookrightarrow TM\times\mathbb{R};\ \ p\mapsto(X_{H}(p),\mathcal{R}(H)(p))\]
_is a Legendrian submanifold of \((TM\times\mathbb{R},\hat{\eta})\)._

Proof.: Using the properties of complete and vertical lifts we have
\[(X_{H}\times\mathcal{R}(H))^{*}\hat{\eta}=\mathcal{L}_{X_{H}}\eta+\mathcal{R}(H)\eta.\]
Using Lemma 7.2 it will be sufficient to see that \(\mathcal{L}_{X_{H}}\eta=-\mathcal{R}(H)\eta.\) This is a straightforward verification using
\[\mathcal{L}_{X_{H}}\eta=\operatorname{d}i_{X_{H}}\eta+i_{X_{H}}\operatorname{d}\eta=-\operatorname{d}H+\operatorname{d}H-\mathcal{R}(H)\eta.\]

### 7.2 Coisotropic reduction

Coisotropic reduction in contact manifolds has been developed in [7] (see also [25, 36]). The following definition will be useful. Given a subspace \(\Delta_{q}\subseteq T_{q}M\), we define the \(d\eta\)**-orthogonal complement** as
\[\Delta_{q}^{\perp_{d\eta}}:=\{v\in T_{q}M\,|\,\operatorname{d}\eta(v,w)=0\,\,\forall w\in\Delta_{q}\}.\]

**Proposition 7.3**.: _Let \(\Delta_{q}\subseteq T_{q}M\) be a subspace. Then_
\[\Delta_{q}^{\perp_{d\eta}}\cap\mathcal{H}_{q}\subseteq\Delta_{q}^{\perp_{\Lambda}}.\]
_Furthermore, if \(\mathcal{R}(q)\in\Delta_{q}\) or \(\Delta_{q}\subseteq\mathcal{H}_{q}\), the equality holds._

Proof.: Let \(v\in\Delta_{q}^{\perp_{d\eta}}\cap\mathcal{H}_{q}\) and take \(\alpha:=i_{v}d\eta\). It is clear that \(\alpha\in\Delta_{q}^{0}\). We will prove that \(\sharp_{\Lambda}(-\alpha)=v\). 
Indeed, for \(\beta\in T_{q}^{*}M\), since \(b_{\eta}(v)=\alpha\) (as a direct calculation shows), we have \[\langle\beta,\sharp_{\Lambda}(\alpha)\rangle =\Lambda(\alpha,\beta)=\Omega(\sharp_{\eta}(\alpha),\sharp_{\eta} (\beta))=\Omega(v,\sharp_{\eta}(\beta))\] \[=-\Omega(\sharp_{\eta}(\beta),v)-\eta(\sharp_{\Lambda}(\beta)) \eta(v)=-\langle\flat_{\eta}(\sharp_{\eta}(\beta)),v\rangle=-\langle\beta,v\rangle,\] that is, \(v=\sharp_{\Lambda}(-\alpha)\). Now, if \(\mathcal{R}(q)\in\Delta_{q}\), we compare dimensions. Since \(\operatorname{Ker}\sharp_{\Lambda}=\langle\eta\rangle\) and \(\eta\not\in\Delta_{q}^{0}\), we have \[\dim\Delta_{q}^{\perp_{\Lambda}}=\dim\Delta_{q}^{0}=\dim M-\dim\Delta_{q}.\] Furthermore, \(\Delta^{\perp_{d\eta}}\cap\mathcal{H}_{q}\) has the same dimension, since \[\Delta_{q}^{\perp_{d\eta}}\cap\mathcal{H}_{q} =(\Delta_{q}\cap\mathcal{H}_{q}\oplus\mathcal{V}_{q})^{\perp_{d \eta}}\cap\mathcal{H}_{q}=(\Delta_{q}\cap\mathcal{H}_{q})^{\perp_{d\eta}}\cap \mathcal{V}_{q}^{\perp_{d\eta}}\cap\mathcal{H}_{q}\] \[=(\Delta_{q}\cap\mathcal{H}_{q})^{\perp_{d\eta}}\cap\mathcal{H}_ {q}.\] This latter is just the symplectic complement in \((\mathcal{H}_{q},d\eta|_{\mathcal{H}})\) and hence, \[\dim(\Delta_{q}^{\perp_{d\eta}}\cap\mathcal{H}_{q})=\dim\mathcal{H}_{q}-\dim( \Delta_{q}\cap\mathcal{H}_{q})=\dim M-1-(\dim\Delta_{q}-1).\] Now, if \(\Delta_{q}\subseteq\mathcal{H}_{q}\), \(\eta\in\Delta_{q}^{0}\) and, thus, \[\dim\Delta_{q}^{\perp_{d\eta}}=\dim M-\dim\Delta_{q}-1.\] Since \(\Delta_{q}^{\perp_{d\eta}}\cap\mathcal{H}_{q}\) is just the symplectic complement of \(\Delta_{q}\) we have \[\dim(\Delta_{q}^{\perp_{d\eta}}\cap\mathcal{H}_{q})=\dim\mathcal{H}_{q}-\dim \Delta_{q}=\dim M-1-\dim\Delta_{q}.\] This proposition allows us to characterize Legendrian submanifolds: **Lemma 7.1**.: _If \(L\hookrightarrow M\) is a Legendrian submanifold, then \(L\) is horizontal and \(\dim L=n\) (where \(\dim M=2n+1\)). Furthermore, if \(L\) is horizontal and isotropic (or coisotropic) with \(\dim L=n\), \(L\) is Legendrian._ Proof.: Since \(\sharp_{\Lambda}\) takes values in \(\mathcal{H}\), it is clear that every Legendrian submanifold is horizontal. Since \(L\) is horizontal, \[\dim T_{q}L^{\perp_{\Lambda}}=\dim M-\dim L-1.\] From the previous equation and using that \(T_{q}L^{\perp_{\Lambda}}=T_{q}L\), we deduce that \(\dim L=\dim M-\dim L-1\). This implies \(\dim L=n\). The last property is easily seen via a direct comparison of dimensions. We will also need characterization of isotropic submanifolds in contact geometry: **Lemma 7.2**.: _A submanifold \(N\hookrightarrow M\) is isotropic if and only if \(i^{*}\eta=0\)._ Proof.: Necessity is clear, since \(\sharp_{\Lambda}\) takes values in \(\mathcal{H}\). Now suppose that \(N\) is horizontal. We have \(i^{*}\eta=0\) and thus, \(i^{*}\operatorname{d}\eta=0\). This implies that \(T_{q}N\subseteq T_{q}N^{\perp_{dq}}\cap\mathcal{H}_{q}\subseteq T_{q}N^{\perp_ {\Lambda}}\), using Proposition 7.3. **Proposition 7.4**.: _Let \(i:N\hookrightarrow M\) be a coisotropic submanifold such that \(\mathcal{R}(q)\in T_{q}N\) for every \(q\in N\) or \(T_{q}N\subseteq\mathcal{H}_{q}\) for every \(q\in N\). Define \(\eta_{0}:=i^{*}\eta\). Then_ \[T_{q}N^{\perp_{\Lambda}}=\operatorname{Ker}\operatorname{d}\eta_{0}\cap \operatorname{Ker}\eta_{0}.\] Proof.: Let \(q\in N\). 
Proposition 7.3 implies that
\[T_{q}N^{\perp_{\Lambda}}=T_{q}N^{\perp_{d\eta}}\cap\mathcal{H}_{q}.\]
But, since \(T_{q}N\) is coisotropic, this is just \(\operatorname{Ker}\operatorname{d}\eta_{0}\cap\operatorname{Ker}\eta_{0}\). 

We then have the following result:

**Proposition 7.5**.: _Let \(i:N\hookrightarrow M\) be a coisotropic submanifold such that \(\mathcal{R}(q)\in T_{q}N\) for every \(q\in N\) or \(T_{q}N\subseteq\mathcal{H}_{q}\) for every \(q\in N\). Then, the distribution \(TN^{\perp_{\Lambda}}\) defined by \(q\mapsto T_{q}N^{\perp_{\Lambda}}\) is involutive._

Proof.: Denote \(\eta_{0}:=i^{*}\eta\) and let \(X,Y\) be vector fields along \(N\) taking values in \(TN^{\perp_{\Lambda}}\). Proposition 7.4 implies that
\[i_{X}\operatorname{d}\eta_{0}=i_{Y}\operatorname{d}\eta_{0}=0;\ \ i_{X}\eta_{0}=i_{Y}\eta_{0}=0.\]
It suffices to check that
\[i_{[X,Y]}\operatorname{d}\eta_{0}=0;\ \ i_{[X,Y]}\eta_{0}=0.\]
Indeed, taking \(Z\) an arbitrary vector field in \(N\), we have
\[0 =\operatorname{d}^{2}\eta_{0}(X,Y,Z)=X(\operatorname{d}\eta_{0}(Y,Z))-Y(\operatorname{d}\eta_{0}(X,Z))+Z(\operatorname{d}\eta_{0}(X,Y))\]
\[-\operatorname{d}\eta_{0}([X,Y],Z)+\operatorname{d}\eta_{0}([X,Z],Y)-\operatorname{d}\eta_{0}([Y,Z],X)=-\operatorname{d}\eta_{0}([X,Y],Z),\]
where we have used that \(X,Y\in\operatorname{Ker}\operatorname{d}\eta_{0}.\) In a similar way we obtain
\[0=\operatorname{d}\eta_{0}(X,Y)=X(\eta_{0}(Y))-Y(\eta_{0}(X))-\eta_{0}([X,Y])=-\eta_{0}([X,Y]),\]
that is, \([X,Y]\in\operatorname{Ker}\operatorname{d}\eta_{0}\cap\operatorname{Ker}\eta_{0}=TN^{\perp_{\Lambda}}.\)

### 7.3 Vertical coisotropic reduction

We will restrict the study to vertical submanifolds, that is, submanifolds satisfying \(\mathcal{R}(q)\in T_{q}N\) for every \(q\in N\). Notice that if \(N\) is a coisotropic vertical submanifold, the distribution \((TN)^{\perp_{\Lambda}}\) is regular with dimension
\[\dim(TN)^{\perp_{\Lambda}}=\dim M-\dim N.\]

**Theorem 7.1** (Vertical coisotropic reduction in the contact setting).: _Let \((M,\eta)\) be a contact manifold and \(i:N\hookrightarrow M\) be a coisotropic submanifold such that \(\mathcal{R}(q)\in T_{q}N\) for every \(q\in N\). Denote by \(\mathcal{F}\) the maximal foliation of the involutive distribution \((TN)^{\perp_{\Lambda}}\). If the space of all leaves \(N/\mathcal{F}\) admits a manifold structure such that the projection \(\pi:N\to N/\mathcal{F}\) is a submersion, then there exists a unique \(1\)-form \(\eta_{N}\) in \(N/\mathcal{F}\) such that \(\pi^{*}\eta_{N}=i^{*}\eta\) and \((N/\mathcal{F},\eta_{N})\) is a contact manifold._

Proof.: Denote \(\eta_{0}:=i^{*}\eta\). Uniqueness is clear from the imposed relation since it forces us to define
\[\eta_{N}([u]):=\eta_{0}(u).\]
It only remains to check well-definedness and that it defines a contact manifold. That this definition does not depend on the chosen representative vector is clear since a vector tangent to the distribution is necessarily in the kernel of \(\eta_{0}\). Furthermore, if \(X\) is a vector field tangent to the distribution \(TN^{\perp_{\Lambda}}\), Proposition 7.4 implies
\[\mathcal{L}_{X}\eta_{0}=\operatorname{d}i_{X}\eta_{0}+i_{X}\operatorname{d}\eta_{0}=0,\]
since \(X\in\operatorname{Ker}\operatorname{d}\eta_{0}\cap\operatorname{Ker}\eta_{0}\). To check that it is a contact manifold, we calculate the dimension of \(N/\mathcal{F}\). We have that \(\dim(T_{q}N)^{\perp_{\Lambda}}=\dim M-\dim N\). 
We conclude, taking \(k:=\dim N\), that
\[\dim N/\mathcal{F}=2\dim N-\dim M=2(k-n-1)+1\]
and therefore, \((N/\mathcal{F},\eta_{N})\) is a contact manifold if and only if
\[\eta_{N}\wedge(\operatorname{d}\eta_{N})^{k-n-1}\neq 0.\]
Since \(\pi\) is a submersion, this is equivalent to \(\eta_{0}\wedge(\operatorname{d}\eta_{0})^{k-n-1}\neq 0\). This is straightforward using Proposition 7.4. 

#### 7.3.1 Projection of Legendrian submanifolds

Now we check that the image of a Legendrian submanifold \(L\hookrightarrow M\) under the projection \(\pi:N\to N/\mathcal{F}\) is again a Legendrian submanifold.

**Proposition 7.6**.: _Let \(L\hookrightarrow M\) be a Legendrian submanifold such that \(L\) and \(N\) have clean intersection. If \(L_{N}:=\pi(L\cap N)\) is a submanifold of \(N/\mathcal{F}\), then \(L_{N}\) is Legendrian._

Proof.: It suffices to check that \(L_{N}\) is horizontal, isotropic and \(\dim L_{N}=\dim N-n-1\), using Lemmas 7.1 and 7.2. Since \(L\) is horizontal, \(L_{N}\) is horizontal and thus, \(L_{N}\) is isotropic. For the comparison of dimensions, we have
\[\dim T_{[q]}L_{N}=\dim(T_{q}L\cap T_{q}N)-\dim(T_{q}L\cap T_{q}N^{\perp_{\Lambda}}). \tag{10}\]
Now, since \((T_{q}L\cap T_{q}N^{\perp_{\Lambda}})^{\perp_{\Lambda}}=T_{q}L+(T_{q}N^{\perp_{\Lambda}})^{\perp_{\Lambda}}\) and \(T_{q}L\cap T_{q}N^{\perp_{\Lambda}}\) is horizontal, we have
\[\dim(T_{q}L+(T_{q}N^{\perp_{\Lambda}})^{\perp_{\Lambda}})=\dim M-\dim(T_{q}L\cap T_{q}N^{\perp_{\Lambda}})-1. \tag{11}\]
Using \(\dim(T_{q}L+(T_{q}N^{\perp_{\Lambda}})^{\perp_{\Lambda}})=\dim(T_{q}L+T_{q}N)-1\) and substituting (11) in (10), we obtain
\[\dim L_{N} =\dim(L\cap N)-(\dim M-\dim(T_{q}L+T_{q}N))\]
\[=\dim(L\cap N)-\dim M+\dim N+\dim L-\dim(L\cap N)\]
\[=\dim N-2n-1+n=\dim N-n-1.\]

### 7.4 Horizontal coisotropic reduction

We will restrict the study to **horizontal** coisotropic submanifolds \(N\hookrightarrow M\), that is, manifolds satisfying \(T_{q}N\subseteq\mathcal{H}_{q}\) for every \(q\in N\).

**Remark 7.1**.: Notice that in this case reduction is trivial, since the only coisotropic horizontal submanifolds of a contact manifold are those that are Legendrian. This would imply
\[\dim N/\mathcal{F}=0,\]
making the resulting manifold trivial.

Given an arbitrary coisotropic submanifold \(N\hookrightarrow M\), we cannot guarantee the well-definedness of the induced forms in the quotient \(N/\mathcal{F}\) (actually, in the contact setting, we cannot even guarantee the integrability of \(TN^{\perp_{\Lambda}}\)), so this time (in contrast with horizontal reduction in cosymplectic geometry) we cannot obtain a foliation of \(N\) into symplectic leaves, since \(TN\cap\mathcal{H}|_{N}\) is not integrable in the general setting.

#### 7.4.1 Projection of Legendrian submanifolds

The triviality of this case makes the projection of Legendrian submanifolds trivial.

## 8 Coisotropic reduction in cocontact geometry

Cocontact manifolds have been introduced in [6] to provide a setting for dissipative systems which also depend on time. In geometric terms, we are combining cosymplectic and contact structures. 
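As a guiding example (a standard one, added here for illustration and not taken from [6]), one may keep in mind \(M=T^{*}\mathbb{R}^{n}\times\mathbb{R}\times\mathbb{R}\) with coordinates \((q^{i},p_{i},z,t)\) and forms
\[\theta=\operatorname{d}t,\qquad\eta=\operatorname{d}z-p_{i}\operatorname{d}q^{i};\]
then \(\theta\wedge\eta\wedge(\operatorname{d}\eta)^{n}\neq 0\), and this is exactly the local model described by the Darboux coordinates appearing below.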
**Definition 8.1** (Cocontact manifold).: _A cocontact manifold is a triple \((M,\theta,\eta)\), where \(M\) is a \((2n+2)\)-dimensional manifold, \(\theta\) is a closed \(1\)-form, \(\eta\) is a \(1\)-form and \(\theta\wedge\eta\wedge(\mathrm{d}\,\eta)^{n}\) is a volume form._

The bundle isomorphism in this case is defined as
\[\flat_{\theta,\eta}:TM\to T^{*}M;\ v\mapsto\theta(v)\theta+i_{v}\,\mathrm{d}\,\eta+\eta(v)\eta,\]
and its inverse is denoted by \(\sharp_{\theta,\eta}=\flat_{\theta,\eta}^{-1}\). In cocontact geometry there also exists a set of canonical coordinates \((q^{i},p_{i},z,t)\), which will be called Darboux coordinates, such that
\[\eta=\mathrm{d}\,z-p_{i}\,\mathrm{d}\,q^{i};\ \theta=\mathrm{d}\,t.\]
We can also define the Reeb vector fields as
\[\mathcal{R}_{z}:=\sharp_{\theta,\eta}(\eta);\ \mathcal{R}_{t}:=\sharp_{\theta,\eta}(\theta),\]
which can be expressed locally as
\[\mathcal{R}_{z}=\frac{\partial}{\partial z};\ \mathcal{R}_{t}=\frac{\partial}{\partial t}.\]
We also have vertical and horizontal distributions:

* The \(z\)**-horizontal** distribution, \(\mathcal{H}_{z}:=\mathrm{Ker}\,\eta\);
* The \(t\)**-horizontal** distribution, \(\mathcal{H}_{t}:=\mathrm{Ker}\,\theta\);
* The \(tz\)**-horizontal** distribution \(\mathcal{H}_{tz}:=\mathcal{H}_{t}\cap\mathcal{H}_{z}\);
* The \(t\)**-vertical** distribution, \(\mathcal{V}_{t}:=\langle\mathcal{R}_{t}\rangle\);
* The \(z\)**-vertical** distribution, \(\mathcal{V}_{z}:=\langle\mathcal{R}_{z}\rangle\).

### 8.1 Hamiltonian vector fields as Lagrangian and Legendrian submanifolds

Just like in previous sections, define the **gradient vector field** of a Hamiltonian \(H\in\mathcal{C}^{\infty}(M)\) as
\[\operatorname{grad}H=\sharp_{\theta,\eta}(\operatorname{d}H).\]
Locally, the gradient vector field is expressed:
\[\operatorname{grad}H=\frac{\partial H}{\partial p_{i}}\frac{\partial}{\partial q^{i}}-\left(\frac{\partial H}{\partial q^{i}}+p_{i}\frac{\partial H}{\partial z}\right)\frac{\partial}{\partial p_{i}}+\left(p_{i}\frac{\partial H}{\partial p_{i}}+\frac{\partial H}{\partial z}\right)\frac{\partial}{\partial z}+\frac{\partial H}{\partial t}\frac{\partial}{\partial t}.\]
We can define a symplectic structure in \(TM\) taking
\[\Omega_{0}:=\flat_{\theta,\eta}^{*}\Omega_{M},\]
where \(\Omega_{M}\) is the canonical symplectic form on the cotangent bundle. In the induced coordinates \((q^{i},p_{i},z,t,\dot{q}^{i},\dot{p}_{i},\dot{z},\dot{t})\), \(\Omega_{0}\) takes the form
\[\Omega_{0}= \operatorname{d}q^{i}\wedge\left(p_{i}p_{j}\operatorname{d}\dot{q}^{j}+(1+\delta_{i}^{j})p_{i}\dot{q}^{j}\operatorname{d}p_{j}-\dot{z}\operatorname{d}p_{i}-\operatorname{d}\dot{p}_{i}-p_{i}\operatorname{d}\dot{z}\right)\]
\[+\operatorname{d}p_{i}\wedge\operatorname{d}\dot{q}^{i}+\operatorname{d}z\wedge\left(\operatorname{d}\dot{z}-p_{i}\operatorname{d}\dot{q}^{i}-\dot{q}^{i}\operatorname{d}p_{i}\right)+\operatorname{d}t\wedge\operatorname{d}\dot{t}.\]
It is easy to verify that a vector field \(X:M\to TM\) is locally a gradient vector field if and only if it defines a Lagrangian submanifold in \((TM,\Omega_{0})\). 
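Indeed, arguing as in the contact and cosymplectic cases (we only sketch the verification, which is not spelled out in the text), for any vector field \(X:M\to TM\) one has the identity
\[X^{*}\Omega_{0}=-\operatorname{d}\big(\flat_{\theta,\eta}(X)\big),\]
so \(X(M)\), which has half the dimension of \(TM\), is Lagrangian precisely when the \(1\)-form \(\flat_{\theta,\eta}(X)\) is closed, that is, when locally \(\flat_{\theta,\eta}(X)=\operatorname{d}H\) for some function \(H\), i.e. \(X=\operatorname{grad}H\) locally.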
**Definition 8.2** (Hamiltonian vector field).: _Given a Hamiltonian \(H\) on \(M\), define its **Hamiltonian vector field** as_ \[X_{H}:=\sharp_{\theta,\eta}(\operatorname{d}H)-(\mathcal{R}_{z}(H)+H)\mathcal{ R}_{z}+(1-\mathcal{R}_{t}(H))\mathcal{R}_{t}.\] The Hamiltonian vector field has the local expression \[X_{H}=\frac{\partial H}{\partial p_{i}}\frac{\partial}{\partial q^{i}}-\left( \frac{\partial H}{\partial q^{i}}+p_{i}\frac{\partial H}{\partial z}\right) \frac{\partial}{\partial p_{i}}+\left(p_{i}\frac{\partial H}{\partial p_{i}}- H\right)\frac{\partial}{\partial z}+\frac{\partial}{\partial t}.\] In general, \(X_{H}\) does not define a Lagrangian submanifold of \((TM,\Omega_{0})\); but, just like in the cosymplectic and contact scenario, we can achieve this by modifying the symplectic form. Indeed, since \[X_{H}^{*}\Omega_{0}=-\operatorname{d}X_{H}^{\flat_{\theta,\eta}}=\operatorname {d}\left((\mathcal{R}_{z}(H)+H)\eta\right)-\operatorname{d}\left((1-\mathcal{ R}_{t}(H))\theta\right),\] defining \[\Omega_{H}:=\Omega_{0}-\operatorname{d}((\mathcal{R}_{z}(H)+H)\eta)+ \operatorname{d}((1-\mathcal{R}_{t}(H))\theta),\] we have that \(X_{H}\) defines a Lagrangian submanifold of \((TM,\Omega_{H})\). Now we interpret the Hamiltonian vector field \(X_{H}\) as a Legendrian submanifold of \(TM\times\mathbb{R}\times\mathbb{R}\) with the cocontact structure given by the forms \[\widetilde{\eta}:=\eta^{c}+s\eta^{v}+\theta^{c}+e\theta^{v};\ \ \widetilde{\theta}= \theta^{c},\] where \((s,e)\) are the parameters in \(\mathbb{R}\times\mathbb{R}\). In local coordinates \((q^{i},p_{i},z,t,\dot{q}^{i},\dot{p}_{i},\dot{z},\dot{t},s,e)\), these forms have the expression: \[\widetilde{\eta} =\operatorname{d}\dot{z}-\dot{p}_{i}\operatorname{d}q^{i}-p_{i} \operatorname{d}\dot{q}^{i}+s\operatorname{d}z-sp_{i}\operatorname{d}q^{i}+ \operatorname{d}\dot{t}+e\operatorname{d}t,\] \[\widetilde{\theta} =\operatorname{d}\dot{t}.\] It is easy to see that these forms define a cocontact structure. Now, given a vector field \(X:M\to TM\) and two functions \(f,g\) on \(M\), define \[X\times f\times g:M\to TM\times\mathbb{R}\times\mathbb{R};\ \ p\mapsto(X(p),f(p),g(p)).\] Applying the properties of complete and vertical lifts, namely \(X^{*}\alpha^{c}=\mathcal{L}_{X}(\alpha)\), we have \[(X\times f\times g)^{*}\widetilde{\eta}=\mathcal{L}_{X}\eta+f\eta+\mathcal{L}_ {X}\theta+g\theta\] and \[(X\times f\times g)^{*}\widetilde{\theta}=\operatorname{d}\theta(X).\] **Proposition 8.1**.: _Let \(H\) be a Hamiltonian on \(M\). Then \(X_{H}\times\mathcal{R}_{z}(H)\times 0\) defines a Legendrian submanifold of \((TM\times\mathbb{R}\times\mathbb{R},\widetilde{\theta},\widetilde{\eta})\)._ Proof.: Using the observation above and Lemma 8.1, it is sufficient to observe that \[\mathcal{L}_{X_{H}}\eta=-\mathcal{R}_{z}(H)\eta,\ \mathcal{L}_{X_{H}}\theta=0.\] ### 8.2 Coisotropic reduction A cocontact manifold is also a Jacobi manifold defining \[\Lambda(\alpha,\beta):=-\operatorname{d}\eta(\sharp(\alpha),\sharp(\beta)),\ E=-\mathcal{R}_{z},\] and thus, we have the \(\Lambda\) -orthogonal and the corresponding definitions of isotropic, coisotropic or Legendrian submanifolds and distributions. Notice that \(\mathcal{H}_{t}\) is an integrable distribution and that each leave of its foliation inherits a contact structure. Indeed, \(\mathcal{H}_{t}\) is the characteristic distribution defined by the Jacobi structure, \(\mathcal{S}\). Now we give a symplectic interpretation of the \(\Lambda\)-orthogonal. 
Notice that the restriction of \(\mathrm{d}\,\eta\) to \(\mathcal{H}_{tz}\) defines a symplectic structure on the distribution. Denote by \(\perp_{d\eta|}\) its symplectic orthogonal. The \(\Lambda\)-orthogonal is just the symplectic orthogonal of the intersection with \(\mathcal{H}_{tz}\).

**Proposition 8.2**.: _Given a distribution \(\Delta\) on a cocontact manifold \((M,\eta,\theta)\),_
\[\Delta^{\perp_{\Lambda}}=(\Delta\cap\mathcal{H}_{tz})^{\perp_{\mathrm{d}\,\eta|}}.\]

Proof.: We check one inclusion and compare dimensions. Let \(\alpha\in\Delta_{q}^{0}\) and \(u\in\Delta_{q}\cap(\mathcal{H}_{tz})_{q}.\) We will see that \(\mathrm{d}\,\eta_{q}(u,\sharp_{\Lambda}(\alpha))=0.\) Indeed,
\[\mathrm{d}\,\eta_{q}(u,\sharp_{\Lambda}(\alpha)) =\mathrm{d}\,\eta_{q}(u,\sharp_{\Lambda}(\alpha))+\theta_{q}(u)\theta_{q}(\sharp_{\Lambda}(\alpha))+\eta_{q}(u)\eta_{q}(\sharp_{\Lambda}(\alpha))\]
\[=\langle\flat(u),\sharp_{\Lambda}(\alpha)\rangle=\Lambda_{q}(\alpha,\flat(u))=-\,\mathrm{d}\,\eta_{q}(\sharp(\alpha),\sharp(\flat(u)))\]
\[=-\,\mathrm{d}\,\eta_{q}(\sharp(\alpha),u)=-\,\mathrm{d}\,\eta_{q}(\sharp(\alpha),u)-\theta_{q}(\sharp(\alpha))\theta_{q}(u)-\eta_{q}(\sharp(\alpha))\eta_{q}(u)\]
\[=-\langle\flat(\sharp(\alpha)),u\rangle=-\alpha(u)=0,\]
that is, \(\Delta^{\perp_{\Lambda}}\subseteq(\Delta\cap\mathcal{H}_{tz})^{\perp_{\mathrm{d}\,\eta|}}.\) Now we compare both dimensions. Let \(k:=\dim\Delta\), \(r_{q}:=\dim(\Delta_{q}^{0}\cap\langle\theta_{q},\eta_{q}\rangle).\) Since
\[\Delta_{q}^{0}\cap\langle\theta_{q},\eta_{q}\rangle=(\Delta_{q}+(\mathcal{H}_{tz})_{q})^{0},\]
we have
\[\begin{split} r_{q}&=\dim(\Delta_{q}^{0}\cap\langle\theta_{q},\eta_{q}\rangle)=\dim(\Delta_{q}+(\mathcal{H}_{tz})_{q})^{0}=2n+2-\dim(\Delta_{q}+(\mathcal{H}_{tz})_{q})\\ &=2n+2-(\dim\Delta_{q}+\dim(\mathcal{H}_{tz})_{q}-\dim(\Delta_{q}\cap(\mathcal{H}_{tz})_{q}))=2n+2-k-2n+\dim(\Delta_{q}\cap(\mathcal{H}_{tz})_{q})\\ &=2+\dim(\Delta_{q}\cap(\mathcal{H}_{tz})_{q})-k,\end{split}\]
which implies that
\[\dim(\Delta_{q}\cap(\mathcal{H}_{tz})_{q})=k+r_{q}-2.\]
It only remains to observe that
\[\dim\Delta^{\perp_{\Lambda}}=2n+2-k-r_{q}=\dim\Delta^{0}-\dim(\Delta^{0}\cap\operatorname{Ker}\sharp_{\Lambda})\]
and that
\[\dim(\Delta\cap(\mathcal{H}_{tz})_{q})^{\perp_{\mathrm{d}\,\eta|}}=2n-\dim(\Delta\cap(\mathcal{H}_{tz})_{q})=2n+2-k-r_{q}.\]

Now we can give a characterization of Legendrian submanifolds:

**Lemma 8.1**.: \(i:L\to M\) _is a Legendrian submanifold (\((TL)^{\perp_{\Lambda}}=TL\)) if and only if \(\dim L=n\) and_
\[i^{*}\theta=0,\quad i^{*}\eta=0.\]

Proof.: Necessity is clear, since \(L\) is necessarily horizontal. Sufficiency follows from \(i^{*}\operatorname{d}\eta=0\) which, together with \(\dim L=n\), implies that \(T_{q}L\) is a Lagrangian subspace of \(((\mathcal{H}_{tz})_{q},\operatorname{d}\eta|)\), for every \(q\in L\). Using Proposition 8.2, we have the equivalence. 
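As a simple illustration of Lemma 8.1 (an example added here, not contained in the original argument), fix \(t_{0}\in\mathbb{R}\) and a function \(S=S(q^{1},\ldots,q^{n})\), and consider, in Darboux coordinates,
\[L=\left\{\Big(q^{i},\frac{\partial S}{\partial q^{i}}(q),S(q),t_{0}\Big)\ :\ q\in\mathbb{R}^{n}\right\}.\]
Then \(\dim L=n\), \(i^{*}\theta=\operatorname{d}t_{0}=0\) and \(i^{*}\eta=\operatorname{d}S-\frac{\partial S}{\partial q^{i}}\operatorname{d}q^{i}=0\), so \(L\) is Legendrian.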
This allows us to express the \(\Lambda\)-orthogonal complement of a coisotropic distribution in a more convenient way:

**Corollary 8.1**.: _Let \(\Delta\) be a coisotropic distribution on \(M\) (\(\Delta^{\perp_{\Lambda}}\subseteq\Delta\)), then_
\[\Delta^{\perp_{\Lambda}}=\operatorname{Ker}\operatorname{d}\eta_{0}|_{\Delta\cap\mathcal{H}_{tz}}\cap\operatorname{Ker}\eta_{0}\cap\operatorname{Ker}\theta_{0},\]
_where \(\eta_{0}\) and \(\theta_{0}\) are the restrictions of \(\eta\) and \(\theta\) to \(\Delta\), respectively._

**Corollary 8.2**.: _Let \(i:N\hookrightarrow M\) be a coisotropic submanifold of \((M,\theta,\eta).\) Then the distribution \(TN^{\perp_{\Lambda}}\) is involutive._

Proof.: Denote \(\eta_{0}:=i^{*}\eta\), \(\theta_{0}:=i^{*}\theta\) and let \(X,Y\) be vector fields on \(N\) tangent to the distribution \(TN^{\perp_{\Lambda}}\) and \(Z\) be an arbitrary \(tz\)-horizontal vector field on \(N\). Using Corollary 8.1, we only need to check that \([X,Y]\in\operatorname{Ker}\operatorname{d}\eta_{0}|_{TN\cap\mathcal{H}_{tz}}\cap\operatorname{Ker}\eta_{0}\cap\operatorname{Ker}\theta_{0}.\) Indeed, expanding the expressions \(0=\operatorname{d}^{2}\eta_{0}(X,Y,Z)\), \(0=\operatorname{d}\eta_{0}(X,Y)\), \(0=\operatorname{d}\theta_{0}(X,Y)\), we obtain
\[0 =-\operatorname{d}\eta_{0}([X,Y],Z),\]
\[0 =-\eta_{0}([X,Y]),\]
\[0 =-\theta_{0}([X,Y]).\]

Now, given a coisotropic submanifold \(N\hookrightarrow M\), since \(TN^{\perp_{\Lambda}}\) is involutive, it provides a maximal foliation, \(\mathcal{F}\). We assume that \(N/\mathcal{F}\) inherits a manifold structure such that the canonical projection \(\pi:N\to N/\mathcal{F}\) is a submersion. Just like in the previous cases, for the well-definedness and non-degeneracy of the forms in the quotient, we need to restrict the coisotropic submanifolds we are studying. Consequently, we will say that a submanifold \(N\hookrightarrow M\) is:

* \(t\)**-vertical** (resp. \(z\)**-vertical**) if \(\mathcal{V}_{t}\subseteq TN\) (resp. \(\mathcal{V}_{z}\subseteq TN\)).
* \(tz\)**-vertical**, if it is both \(t\)-vertical and \(z\)-vertical.
* \(t\)**-horizontal** (resp. \(z\)**-horizontal**) if \(TN\subseteq\mathcal{H}_{t}\) (resp. \(TN\subseteq\mathcal{H}_{z}\)).
* \(tz\)**-horizontal** if it is both \(t\)-horizontal and \(z\)-horizontal, that is, if \(TN\subseteq\mathcal{H}_{tz}\).

### 8.3 \(tz\)-vertical reduction

Let \(i:N\hookrightarrow M\) be a \(tz\)-vertical coisotropic submanifold. It is easy to check that under these conditions
\[TN^{\perp_{\Lambda}}=\operatorname{Ker}\operatorname{d}\eta_{0}\cap\operatorname{Ker}\eta_{0}\cap\operatorname{Ker}\theta_{0},\]
and that \((TN)^{\perp_{\Lambda}}\) is a regular distribution of dimension
\[\dim(TN)^{\perp_{\Lambda}}=\dim M-\dim N.\]

**Theorem 8.1** (\(tz\)-vertical coisotropic reduction).: _Let \(i:N\hookrightarrow M\) be a \(tz\)-vertical coisotropic submanifold of a cocontact manifold \((M,\theta,\eta).\) Denote by \(\mathcal{F}\) the maximal foliation induced by the integrable distribution \(TN^{\perp_{\Lambda}}\) on \(N\). If \(N/\mathcal{F}\) admits a manifold structure such that the canonical projection \(\pi:N\to N/\mathcal{F}\) defines a submersion, then there exist unique forms \(\theta_{N},\eta_{N}\) on \(N/\mathcal{F}\) such that \((N/\mathcal{F},\theta_{N},\eta_{N})\) defines a cocontact structure and_
\[i^{*}\theta=\pi^{*}\theta_{N},\ i^{*}\eta=\pi^{*}\eta_{N}.\]
Before proving the theorem, let us calculate the dimension of the quotient. Let \(k+2:=\dim N\). 
We have
\[\dim TN^{\perp_{\Lambda}}=2n+2-(k+2)=2n-k\]
and, therefore,
\[\dim N/\mathcal{F}=2(k-n)+2.\]

Proof.: Uniqueness is clear, since \(\pi\) is a submersion. We only need to check the well-definedness taking
\[\theta_{N}([u]):=\theta_{0}(u),\ \eta_{N}([u]):=\eta_{0}(u).\]
Independence of the vector is clear, using Proposition 8.2. For independence on the point, let \(X\) be a vector field on \(N\) tangent to the distribution. It is easy to check that
\[\mathcal{L}_{X}\eta_{0}=0,\quad\mathcal{L}_{X}\theta_{0}=0;\]
and thus, well-definedness follows. For non-degeneracy, it is enough to prove that
\[\theta_{0}\wedge\eta_{0}\wedge(\operatorname{d}\eta_{0})^{k-n}\neq 0.\]
This follows easily from \(TN^{\perp_{\Lambda}}=\operatorname{Ker}\operatorname{d}\eta_{0}\cap\operatorname{Ker}\eta_{0}\cap\operatorname{Ker}\theta_{0}.\)

#### 8.3.1 Projection of Legendrian submanifolds

**Proposition 8.3**.: _Let \(L\hookrightarrow M\) be a Legendrian submanifold and \(i:N\hookrightarrow M\) be a \(tz\)-vertical coisotropic submanifold. If \(L\) and \(N\) have clean intersection and \(L_{N}=\pi(L\cap N)\) is a submanifold in \(N/\mathcal{F}\), \(L_{N}\) is Legendrian._

Proof.: Using Lemma 8.1, \(L_{N}\) is clearly isotropic. Now we need to check that \(\dim L_{N}=k-n\), given that \(\dim N/\mathcal{F}=2(k-n)+2.\) We have
\[\dim\pi(L\cap N)=\dim(L\cap N)-\dim(TL\cap(TN)^{\perp_{\Lambda}}). \tag{12}\]
Furthermore, since \((TL\cap(TN)^{\perp_{\Lambda}})^{\perp_{\Lambda}}=TL+(TN^{\perp_{\Lambda}})^{\perp_{\Lambda}}\) and \(TL\cap(TN)^{\perp_{\Lambda}}\) is \(tz\)-horizontal,
\[\dim(TL+(TN^{\perp_{\Lambda}})^{\perp_{\Lambda}})=2n-\dim(TL\cap TN^{\perp_{\Lambda}}). \tag{13}\]
Now, using the Grassmann formula:
\[\dim(TL+(TN^{\perp_{\Lambda}})^{\perp_{\Lambda}}) =\dim L+\dim(TN^{\perp_{\Lambda}})^{\perp_{\Lambda}}-\dim(TL\cap(TN^{\perp_{\Lambda}})^{\perp_{\Lambda}}) \tag{14}\]
\[=\dim L+(\dim N-2)-\dim(L\cap N) \tag{15}\]
\[=n+k-\dim(L\cap N). \tag{16}\]
From (13) and (16) we obtain
\[\dim(TL\cap TN^{\perp_{\Lambda}})=n-k+\dim(L\cap N). \tag{17}\]
Substituting in (12) yields \(\dim\pi(L\cap N)=k-n.\)

### 8.4 \(t\)-vertical, \(z\)-horizontal reduction

Suppose \(i:N\hookrightarrow M\) is a \(t\)-vertical and \(z\)-horizontal coisotropic submanifold. This time we have
\[(TN)^{\perp_{\Lambda}}=\operatorname{Ker}\theta_{0},\]
since \(\eta_{0}=0\) implies \(\operatorname{d}\eta_{0}=0\). We conclude that \((TN)^{\perp_{\Lambda}}=TN\cap\mathcal{H}_{tz}\), which implies that
\[\dim N/\mathcal{F}=1.\]
This means that reduction is trivial, leaving a trivial cosymplectic manifold of dimension 1:

**Theorem 8.2**.: _Let \(i:N\hookrightarrow M\) be a \(t\)-vertical, \(z\)-horizontal coisotropic submanifold of a cocontact manifold \((M,\theta,\eta)\). Denote by \(\mathcal{F}\) the maximal foliation defined by the distribution \((TN)^{\perp_{\Lambda}}\). If \(N/\mathcal{F}\) has a manifold structure such that the canonical projection \(\pi:N\to N/\mathcal{F}\) defines a submersion, then \(N/\mathcal{F}\) is one-dimensional and there exists a unique volume form \(\theta_{N}\) on \(N/\mathcal{F}\) such that_
\[i^{*}\theta=\pi^{*}\theta_{N}.\]

#### 8.4.1 Projection of Legendrian submanifolds

Given the triviality of reduction in the \(t\)-vertical and \(z\)-horizontal case, projection of Legendrian submanifolds in \(M\) will always result in \(0\)-dimensional submanifolds of \(N/\mathcal{F}\).

### 8.5 \(z\)-vertical, \(t\)-horizontal reduction

Let \(i:N\hookrightarrow M\) be a \(z\)-vertical and \(t\)-horizontal coisotropic submanifold. 
It is easy to check that this time we have the equality
\[TN^{\perp_{\Lambda}}=\operatorname{Ker}\operatorname{d}\eta_{0}\cap\operatorname{Ker}\eta_{0}.\]
Since \(\mathcal{H}_{t}\) is integrable, coisotropic reduction of \(N\) is actually happening in one of the leaves of the foliation, which inherits a contact structure from the cocontact structure. We conclude, from Theorem 7.1:

**Theorem 8.3**.: _Let \(i:N\hookrightarrow M\) be a \(z\)-vertical and \(t\)-horizontal coisotropic submanifold of a cocontact manifold \((M,\theta,\eta)\). Denote by \(\mathcal{F}\) the maximal foliation on \(N\) defined by the distribution \(TN^{\perp_{\Lambda}}\). If \(N/\mathcal{F}\) has a manifold structure such that the canonical projection \(\pi:N\to N/\mathcal{F}\) defines a submersion, then there exists a unique form \(\eta_{N}\) such that \((N/\mathcal{F},\eta_{N})\) is a contact manifold and_
\[i^{*}\eta=\pi^{*}\eta_{N}.\]

#### 8.5.1 Projection of Legendrian submanifolds

**Proposition 8.4**.: _Let \(L\hookrightarrow M\) be a Legendrian submanifold. If \(L\) and \(N\) have clean intersection and \(L_{N}=\pi(L\cap N)\) is a submanifold in \(N/\mathcal{F}\), \(L_{N}\) is Legendrian in \((N/\mathcal{F},\eta_{N})\)._

Proof.: It is clearly horizontal and, therefore, using Lemma 7.2, it is isotropic. Now, supposing \(k=\dim N\), we only need to check that
\[\dim L_{N}=k-n,\]
since \(\dim N/\mathcal{F}=2(k-n)+1.\) This is straightforward, following the same steps given in Proposition 8.3.

### 8.6 \(tz\)-horizontal reduction

Let \(i:N\hookrightarrow M\) be a \(tz\)-horizontal coisotropic submanifold. Since \(\eta_{0}=0\), \(\operatorname{d}\eta_{0}=0\) and \(\theta_{0}=0\), we have
\[(TN)^{\perp_{\Lambda}}=TN,\]
which implies that
\[\dim N/\mathcal{F}=0,\]
leaving a trivial symplectic manifold, having as many points as path components of \(N\). This means that if \(N/\mathcal{F}\) admits a manifold structure, it will be a symplectic manifold.

#### 8.6.1 Projection of Legendrian submanifolds

Again, the projection of Legendrian submanifolds is trivial.

## 9 Coisotropic reduction in stable Hamiltonian structures

There were several attempts to combine cosymplectic and contact structures. The first one is due to Albert [3], using a combination of a \(1\)-form and a \(2\)-form; however, that setting is not useful for us because of the lack of integrability. The second attempt in this direction is due to Acakpo [2], which is studied in this section.

**Definition 9.1** (Stable Hamiltonian structure).: _A **stable Hamiltonian structure** (SHS) is a triple \((M,\omega,\lambda)\) where \(M\) is a \((2n+1)\)-dimensional manifold, \(\omega\) is a closed \(2\)-form and \(\lambda\) is a \(1\)-form such that_
\[\lambda\wedge\omega^{n}\neq 0,\ \operatorname{Ker}\omega\subseteq\operatorname{Ker}\operatorname{d}\lambda.\]

There exists, just like in the previous cases, a natural isomorphism
\[\flat_{\lambda,\omega}:TM\to T^{*}M;\ \ v_{q}\mapsto i_{v_{q}}\omega+\lambda(v_{q})\cdot\lambda,\]
and its inverse \(\sharp_{\lambda,\omega}:=\flat_{\lambda,\omega}^{-1}.\) Let us perform some calculations in coordinates. Since \(\omega\) is closed and of constant rank \(2n\), around any point there exist coordinates \((q^{i},p_{i},z)\) such that
\[\omega=\operatorname{d}q^{i}\wedge\operatorname{d}p_{i}\]
(see [20]). 
In this coordinate chart \(\lambda\) will have an expression of the form \[\lambda=a_{i}\operatorname{d}q^{i}+b^{i}\operatorname{d}p_{i}+c\operatorname {d}z.\] Since \[0\neq\lambda\wedge\omega^{n}=c\operatorname{d}z\wedge\omega^{n},\] we conclude that \(c\neq 0\). Let \(\varphi_{t}(q^{i},p_{i},z)\) be the (local) flow of the vector field \(\frac{1}{c}\frac{\partial}{\partial z}\). Fix some value \(z_{0}\), and define the map \[\psi(q^{i},p_{i},t):=(q^{i},p_{i},z(\varphi_{t}(q^{i},p_{i},z_{0}))).\] It is clear that this defines a local diffeomorphism. Take the new set of coordinates to be \[(q^{i},p_{i},t):=\psi^{-1}\circ(q^{i},p_{i},z).\] We have \[\operatorname{d}z=\frac{\partial z}{\partial q^{i}}\operatorname{d}q^{i}+ \frac{\partial z}{\partial p_{i}}\operatorname{d}p_{i}+\frac{\partial z}{ \partial t}\operatorname{d}t=\frac{\partial z}{\partial q^{i}}\operatorname{ d}q^{i}+\frac{\partial z}{\partial p_{i}}\operatorname{d}p_{i}+\frac{1}{c} \operatorname{d}t.\] Therefore, in the new coordinate chart, \[\lambda=\left(a_{i}+\frac{\partial z}{\partial q^{i}}\right)\operatorname{d} q^{i}+\left(b_{i}+\frac{\partial z}{\partial p_{i}}\right)\operatorname{d}p_{i}+ \operatorname{d}t.\] We conclude: **Proposition 9.1**.: _Around every point of \(M\) there exists a coordinate chart \((q^{i},p_{i},z)\) such that_ \[\omega=\mathrm{d}\,q^{i}\wedge\mathrm{d}\,p_{i},\lambda=a_{i}\,\mathrm{d}\,q^{i }+b^{i}\,\mathrm{d}\,p_{i}+\mathrm{d}\,z.\] _We call these coordinates Darboux coordinates._ In Darboux coordinates, the condition \(\mathrm{Ker}\,\mathrm{d}\,\omega\subseteq\mathrm{d}\,\lambda\) translates to \[\frac{\partial a_{i}}{\partial z}=\frac{\partial b^{i}}{\partial z}=0.\] Also, the musical isomorphisms take the expression: \[\flat_{\lambda,\omega}\left(\frac{\partial}{\partial q^{i}}\right) =\mathrm{d}\,p_{i}+a_{i}a_{j}\,\mathrm{d}\,q^{j}+a_{i}b^{j}\, \mathrm{d}\,p_{j}+a_{i}\,\mathrm{d}\,z,\] \[\flat_{\lambda,\omega}\left(\frac{\partial}{\partial p_{i}}\right) =-\,\mathrm{d}\,q^{i}+b^{i}a_{j}\,\mathrm{d}\,q^{j}+b^{i}b^{j}\, \mathrm{d}\,p_{j}+b^{i}\,\mathrm{d}\,z,\] \[\flat_{\lambda,\omega}\left(\frac{\partial}{\partial z}\right) =a_{i}\,\mathrm{d}\,q^{i}+b^{i}\,\mathrm{d}\,p_{i}+\mathrm{d}\,z,\] \[\sharp_{\lambda,\omega}\left(\mathrm{d}\,q^{i}\right) =-\frac{\partial}{\partial p_{i}}+b^{i}\frac{\partial}{\partial z},\] \[\sharp_{\lambda,\omega}\left(\mathrm{d}\,p_{i}\right) =\frac{\partial}{\partial q^{i}}-a_{i}\frac{\partial}{\partial z},\] \[\sharp_{\lambda,\omega}\left(\mathrm{d}\,z\right) =\frac{\partial}{\partial z}+a_{i}\frac{\partial}{\partial p_{i}}- b^{i}\frac{\partial}{\partial q^{i}}.\] Imitating the definitions in the contact case, we can define a bivector field on \(M\) as \[\Lambda_{q}(\alpha_{q},\beta_{q}):=\omega_{q}(\sharp_{\lambda,\omega}(\alpha_{q }),\sharp_{\lambda,\omega}(\beta_{q})),\] and the morphism \[\sharp_{\Lambda}:T^{*}M\to TM;\ \ \alpha_{q}\mapsto i_{\alpha_{q}}\Lambda\] with the induced \(\Lambda\)-orthogonal complement for distributions \[\Delta_{q}^{\perp_{\Lambda}}:=\sharp_{\Lambda}(\Delta_{q}^{0}).\] In coordinates \((q^{i},p_{i},z)\) the bivector field \(\Lambda\) takes the local form: \[\Lambda=\frac{\partial}{\partial q^{i}}\wedge\frac{\partial}{\partial p_{i}}+ \left(a_{i}\frac{\partial}{\partial p_{i}}-b^{i}\frac{\partial}{\partial q^{i }}\right)\wedge\frac{\partial}{\partial z}.\] We also have the distributions * \(\mathcal{H}_{q}:=\mathrm{Ker}\,\lambda_{q}\), * \(\mathcal{V}_{q}:=\operatorname{Ker}\omega_{q},\) and the Reeb vector field 
\(\mathcal{R}_{q}:=\sharp_{\lambda,\omega}(\lambda_{q}).\) Locally, \[\mathcal{H} =\langle\frac{\partial}{\partial q^{i}}-a_{i}\frac{\partial}{ \partial z},\frac{\partial}{\partial p_{i}}-b^{i}\frac{\partial}{\partial z}\rangle, \tag{18}\] \[\mathcal{V} =\langle\frac{\partial}{\partial z}\rangle,\] (19) \[\mathcal{R} =\frac{\partial}{\partial z}. \tag{20}\] A natural question to ask is wether the bivector field \(\Lambda\) arises from a Jacobi bracket. We have \[[\Lambda,\Lambda]=2\frac{\partial}{\partial p_{i}}\wedge\left(\frac{\partial a _{j}}{\partial q^{i}}\frac{\partial}{\partial p_{j}}-\frac{\partial b^{j}}{ \partial q^{i}}\frac{\partial}{\partial q^{j}}\right)\wedge\frac{\partial}{ \partial z}-2\frac{\partial}{\partial q^{i}}\wedge\left(\frac{\partial a_{j}} {\partial p_{i}}\frac{\partial}{\partial p_{j}}-\frac{\partial b^{j}}{ \partial p_{i}}\frac{\partial}{\partial q^{j}}\right)\wedge\frac{\partial}{ \partial z}.\] Taking an arbitrary vector field \[E=X^{i}\frac{\partial}{\partial q^{i}}+Y_{i}\frac{\partial}{\partial p_{i}}+Z \frac{\partial}{\partial z},\] \((\Lambda,E)\) defines a Jacobi structure if and only if \[[\Lambda,\Lambda]=2E\wedge\Lambda,\ \ [E,\Lambda]=0.\] It is easily checked that the first equality holds when \[\frac{\partial a_{j}}{\partial q^{i}}-\frac{\partial a_{i}}{ \partial q^{j}}=\frac{\partial b^{j}}{\partial p_{i}}-\frac{\partial b^{i}}{ \partial p_{j}}=0, \tag{21}\] \[\frac{\partial b^{i}}{\partial q^{j}}-\frac{\partial a_{j}}{ \partial p_{i}}=0,\ i\neq j,\] (22) \[\frac{\partial b^{i}}{\partial q_{i}}-\frac{\partial a_{i}}{ \partial p_{i}}=f,\] (23) \[X^{i}=Y_{i}=0,\] (24) \[Z=f, \tag{25}\] for certain local unique funtion \(f\). It is easy to check that this relations translate intrinsically to \[\operatorname{d}\lambda=f\omega,\ E=f\mathcal{R}.\] Now, let us compute \([E,\Lambda]\) for \(E=f\frac{\partial}{\partial z}\). \[[E,\Lambda]=\left(\frac{\partial f}{\partial q^{i}}-a_{i}\frac{\partial f}{ \partial z}\right)\frac{\partial}{\partial p_{i}}\wedge\frac{\partial}{ \partial z}+\left(-\frac{\partial f}{\partial p_{i}}+b^{i}\frac{\partial f}{ \partial z}\right)\frac{\partial}{\partial q^{i}}\wedge\frac{\partial}{ \partial z}.\] Therefore, \([E,\Lambda]=0\) if and only if \[\frac{\partial f}{\partial q^{i}}-a_{i}\frac{\partial f}{\partial z}=-\frac{ \partial f}{\partial p_{i}}+b^{i}\frac{\partial f}{\partial z}=0. \tag{26}\] This is easily seen to be equivalent to \[\sharp_{\lambda,\omega}(\mathrm{d}\,f)\in\mathcal{V}.\] We have concluded the following: **Proposition 9.2**.: _The bivector field \(\Lambda\) arises from a Jacobi structure if and only if there exists some \(f\in\mathcal{C}^{\infty}(M)\) such that_ \[\mathrm{d}\,\lambda=f\omega,\ \ \sharp_{\lambda,\omega}(\mathrm{d}\,f)\in \mathcal{V}.\] _And, in that case, the Jacobi structure is defined by the pair \((\Lambda,f\mathcal{R})\)._ **Remark 9.1**.: Notice that we recover the cosymplectic scenario when \(f=0\) and the contact scenario when \(f=1\) (because the definition of \(\Lambda\) in contact geometry is the opposite of the definition we gave in SHS). Let us return to the study of coisotropic reduction. It is easy to see that \(\omega\) induces a symplectic form in \(\mathcal{H}\), \(\omega|_{\mathcal{H}}\). 
This induces the symplectic orthogonal for \(\Delta_{q}\subset\mathcal{H}_{q}\):
\[\Delta_{q}^{\perp_{\omega|_{\mathcal{H}}}}:=\{v\in\mathcal{H}_{q}\,|\,\omega(v,w)=0\,\forall w\in\Delta_{q}\}.\]
A distribution \(\Delta\) in \(M\) will be called:

* **Isotropic** if \(\Delta\subseteq\Delta^{\perp_{\Lambda}}\);
* **Coisotropic** if \(\Delta^{\perp_{\Lambda}}\subseteq\Delta\);
* **Lagrangian** if \(\Delta^{\perp_{\Lambda}}=\Delta\cap\mathcal{H}\).

We have the following equality:

**Proposition 9.3**.: _Let \(\Delta\) be a distribution on \(M\). Then_
\[\Delta^{\perp_{\Lambda}}=(\Delta\cap\mathcal{H})^{\perp_{\omega|_{\mathcal{H}}}}.\]

Proof.: The proof follows the same lines as that of Proposition 5.3.

Just like in previous sections, we say that a Lagrangian submanifold \(L\hookrightarrow M\) is **horizontal** if \(T_{q}L\subseteq\mathcal{H}_{q}\,\forall q\in L\) and say that it is **non-horizontal** if \(T_{q}L\not\subseteq\mathcal{H}_{q}\,\forall q\in L.\) We have the following characterization:

**Lemma 9.1**.: _Let \(L\hookrightarrow M\) be an isotropic (or coisotropic) submanifold. We have_

* _If_ \(L\) _is horizontal and_ \(\dim L=n\)_, then_ \(L\) _is Lagrangian._
* _If_ \(L\) _is non-horizontal and_ \(\dim L=n+1\)_, then_ \(L\) _is Lagrangian._

Proof.: The proof goes as Lemma 5.2, since we only need to check the condition at each tangent space.

Now, given a coisotropic submanifold \(N\hookrightarrow M\) (that is, \((T_{q}N)^{\perp_{\Lambda}}\subseteq T_{q}N\)), the distribution \((TN)^{\perp_{\Lambda}}\) is not necessarily integrable, and we shall assume its integrability in what follows.

### 9.1 Gradient and Hamiltonian vector fields as Lagrangian submanifolds

We can define a symplectic structure on \(TM\) taking
\[\Omega_{0}:=\flat_{\lambda,\omega}^{*}\Omega_{M},\]
where \(\Omega_{M}\) is the canonical symplectic form on \(T^{*}M\). In Darboux coordinates it has the expression:
\[\Omega_{0}= \,\mathrm{d}\,q^{j}\wedge\mathrm{d}\left(\dot{q}^{i}a_{i}a_{j}+\dot{p}_{i}b^{i}a_{j}+\dot{z}a_{j}-\dot{p}_{j}\right)+\]
\[\,\mathrm{d}\,p_{j}\wedge\mathrm{d}\left(\dot{q}^{i}a_{i}b^{j}+\dot{p}_{i}b^{i}b^{j}+\dot{z}b^{j}+\dot{q}^{j}\right)+\]
\[\,\mathrm{d}\,z\wedge\mathrm{d}\left(\dot{q}^{i}a_{i}+\dot{p}_{i}b^{i}+\dot{z}\right).\]

**Definition 9.2** (Gradient vector field).: _Given a Hamiltonian \(H\in\mathcal{C}^{\infty}(M),\) define the **gradient vector field** of \(H\) as_
\[\operatorname{grad}H:=\sharp_{\lambda,\omega}(\mathrm{d}\,H).\]
In Darboux coordinates, the gradient vector field is written
\[\operatorname{grad}H=\left(\frac{\partial H}{\partial p_{i}}-b^{i}\frac{\partial H}{\partial z}\right)\frac{\partial}{\partial q^{i}}+\left(-\frac{\partial H}{\partial q^{i}}+a_{i}\frac{\partial H}{\partial z}\right)\frac{\partial}{\partial p_{i}}+\left(b^{i}\frac{\partial H}{\partial q^{i}}-a_{i}\frac{\partial H}{\partial p_{i}}+\frac{\partial H}{\partial z}\right)\frac{\partial}{\partial z}.\]
It is easily checked that \(X:M\to TM\) is locally a gradient vector field if and only if \(X(M)\) is a Lagrangian submanifold of \((TM,\Omega_{0})\). 
Indeed, we have the equality \[X^{*}\Omega_{0}=-\,\mathrm{d}\,\flat_{\lambda,\omega}(X).\] When \(\Lambda\) comes from a Jacobi bracket on \(M\), that is, when \[\mathrm{d}\,\lambda=f\omega,\,\,\,[f\mathcal{R},\Lambda]=0,\] for some function \(f\) on \(M\), we have the Hamiltonian vector field of the Jacobi structure \((\Lambda,f\mathcal{R})\): \[X_{H}=\sharp_{\Lambda}(\mathrm{d}\,H)+fH\mathcal{R}=-\operatorname{grad}H+(\mathcal{R}(H)+fH)\mathcal{R}.\] In Darboux coordinates it has the expression: \[X_{H}=\left(-\frac{\partial H}{\partial p_{i}}+b^{i}\frac{\partial H}{\partial z}\right)\frac{\partial}{\partial q^{i}}+\left(\frac{\partial H}{\partial q^{i}}-a_{i}\frac{\partial H}{\partial z}\right)\frac{\partial}{\partial p_{i}}+\left(a_{i}\frac{\partial H}{\partial p_{i}}-b^{i}\frac{\partial H}{\partial q^{i}}+fH\right)\frac{\partial}{\partial z}.\] Let us interpret the Hamiltonian vector field as a Lagrangian submanifold of \(TM\), with a suitable symplectic form. First, observe that \[X_{H}^{*}\Omega_{0}=-\operatorname{d}(\flat_{\lambda,\omega}(\operatorname{d}H))=-\operatorname{d}\left(\mathcal{R}(H)\lambda+fH\lambda\right).\] Therefore, defining the symplectic form \[\Omega_{H}:=\Omega_{0}+\operatorname{d}(\mathcal{R}(H)\lambda+fH\lambda)^{v},\] we have that \(X_{H}\) defines a Lagrangian submanifold of \((TM,\Omega_{H})\). ### Vertical coisotropic reduction **Theorem 9.1** (Vertical coisotropic reduction in stable Hamiltonian structures).: _Let \(i:N\hookrightarrow M\) be a vertical coisotropic submanifold such that \((TN)^{\perp_{\Lambda}}\) defines an integrable distribution. Let \(\mathcal{F}\) be the set of leaves and suppose that \(N/\mathcal{F}\) admits a manifold structure such that the canonical projection \(\pi:N\to N/\mathcal{F}\) defines a submersion. If \(i^{*}\operatorname{d}\lambda=0\) in \(TN\cap\mathcal{H}\), then \(N/\mathcal{F}\) admits a unique stable Hamiltonian system structure \((\omega_{N},\lambda_{N})\) such that \(\pi^{*}\omega_{N}=i^{*}\omega\) and \(\pi^{*}\lambda_{N}=i^{*}\lambda\). The following diagram summarizes the situation:_ Proof.: The proof goes as Theorem 5.1. Asking \(i^{*}\operatorname{d}\lambda=0\) is necessary to guarantee the well-definedness of \(\lambda_{N}\) in the quotient, using \[\mathcal{L}_{X}\lambda_{0}=i_{X}\operatorname{d}\lambda_{0}+\operatorname{d}i_{X}\lambda_{0}=0,\] where \(\lambda_{0}=i^{*}\lambda\). It only remains to check that \[\operatorname{Ker}\omega_{N}\subseteq\operatorname{Ker}\operatorname{d}\lambda_{N}.\] Indeed, since \(\operatorname{Ker}\omega_{N}=\langle\mathcal{R}_{N}\rangle\) and \(\mathcal{R}_{N}=\pi_{*}\mathcal{R}\), it follows from \[\pi^{*}(i_{\mathcal{R}_{N}}\operatorname{d}\lambda_{N})=i_{\mathcal{R}}\operatorname{d}\lambda=0.\] #### 9.2.1 Projection of Lagrangian submanifolds We have the result: **Proposition 9.4** (Projection of Lagrangian submanifolds).: _Let \(i:L\hookrightarrow M\) be a Lagrangian submanifold. If \(L\) and \(N\) have clean intersection and \(\pi(L\cap N)\) is a submanifold in \(N/\mathcal{F}\), then it is Lagrangian._ Proof.: The proof goes as Proposition 5.5 and Proposition 5.6, since the proof reduces to the study of each tangent space. ### Horizontal coisotropic reduction **Theorem 9.2** (Horizontal coisotropic reduction in stable Hamiltonian structures).: _Let \(i:N\to M\) be a coisotropic horizontal submanifold such that \((TN)^{\perp_{\Lambda}}\) defines an integrable distribution. 
Let \(\mathcal{F}\) be the set of leaves of the foliation and suppose that \(N/\mathcal{F}\) admits a manifold structure such that the canonical projection \(\pi:N\to N/\mathcal{F}\) defines a submersion. Then \(N/\mathcal{F}\) admits a unique symplectic structure \(\omega_{N}\) such that \(\pi^{*}\omega_{N}=i^{*}\omega\). The following diagram summarizes the situation:_ Proof.: The proof goes as Theorem 3.2. #### 9.3.1 Projection of Lagrangian submanifolds **Proposition 9.5** (Projection of Lagrangian submanifolds).: _Let \(L\hookrightarrow M\) be a Lagrangian submanifold. If \(L\) and \(N\) have clean intersection and \(\pi(L\cap N)\) is a submanifold in \(N/\mathcal{F}\), then it is Lagrangian._ Proof.: The proof goes as Proposition 3.4, since we only need to check it in every tangent space. ## 10 Conclusions The interpretations of the different types of vector fields in the different types of geometry as Lagrangian or Legendrian submanifolds can be summarized in Table 1. Also, the results on coisotropic reduction can be summarized in Table 2. ## Acknowledgements We acknowledge the financial support of Grant PID2019-106715GBC21, the Severo Ochoa Programme for Centres of Excellence in R&D (CEX2019-000904-S), and the JAE Intro Programme 2022 (Becas de Introduccion a la Investigacion para estudiantes universitarios).
2307.08381
2P-BFT-Log: 2-Phase Single-Author Append-Only Log for Adversarial Environments
Replicated append-only logs sequentially order messages from the same author such that their ordering can be eventually recovered even with out-of-order and unreliable dissemination of individual messages. They are widely used for implementing replicated services in both clouds and peer-to-peer environments because they provide simple and efficient incremental reconciliation. However, existing designs of replicated append-only logs assume replicas faithfully maintain the sequential properties of logs and do not provide eventual consistency when malicious participants fork their logs by disseminating different messages to different replicas for the same index, which may result in partitioning of replicas according to which branch was first replicated. In this paper, we present 2P-BFT-Log, a two-phase replicated append-only log that provides eventual consistency in the presence of forks from malicious participants such that all correct replicas will eventually agree either on the most recent message of a valid log (first phase) or on the earliest point at which a fork occurred as well as on an irrefutable proof that it happened (second phase). We provide definitions, algorithms, and proofs of the key properties of the design, and explain one way to implement the design onto Git, an eventually consistent replicated database originally designed for distributed version control. Our design enables correct replicas to faithfully implement the happens-before relationship first introduced by Lamport that underpins most existing distributed algorithms, with eventual detection of forks from malicious participants to exclude the latter from further progress. This opens the door to adaptations of existing distributed algorithms to a cheaper detect and repair paradigm, rather than the more common and expensive systematic prevention of incorrect behaviour.
Erick Lavoie
2023-07-17T10:39:57Z
http://arxiv.org/abs/2307.08381v2
# 2P-BFT-Log: 2-Phase Single-Author Append-Only Log for Adversarial Environments ###### Abstract Replicated append-only logs sequentially order messages from the same author such that their ordering can be eventually recovered even with out-of-order and unreliable dissemination of individual messages. They are widely used for implementing replicated services in both clouds and peer-to-peer environments because they provide simple and efficient incremental reconciliation. However, existing designs of replicated append-only logs assume replicas faithfully maintain the sequential properties of logs and do not provide eventual consistency when malicious participants fork their logs by disseminating different messages to different replicas for the same index, which may result in partitioning of replicas according to which branch was first replicated. In this paper, we present _2P-BFT-Log_, a two-phase replicated append-only log that provides eventual consistency in the presence of forks from malicious participants such that all correct replicas will eventually agree either on the most recent message of a valid log (first phase) or on the earliest point at which a fork occurred as well as on an irrefutable proof that it happened (second phase). We provide definitions, algorithms, and proofs of the key properties of the design, and explain one way to implement the design onto Git, an eventually consistent replicated database originally designed for distributed version control. Our design enables correct replicas to faithfully implement the happens-before relationship first introduced by Lamport that underpins most existing distributed algorithms, with eventual detection of forks from malicious participants to exclude the latter from further progress. This opens the door to adaptations of existing distributed algorithms to a cheaper _detect_ and _repair_ paradigm, rather than the more common and expensive systematic _prevention_ of incorrect behaviour. ## 1 Introduction An append-only log, as used in eventually consistent replicated databases in clouds [24, 7] or peer-to-peer systems [10, 1], is a totally ordered replicated list of messages such that each index has at most one unique message associated to it and indexes are associated with new messages from lowest to greatest. In trusted environments, such as clouds, messages are associated to globally assigned process identifiers, while in peer-to-peer environments, messages are usually associated to authors as represented by the public key of a public-private key pair. In this paper we focus on append-only logs using messages associated to authors. In the simplest implementation of an append-only log, each new message from the same author is stored at the end of an ever growing list. When two replicas reconcile the state of the log associated to a given author, they compare the index of the latest messages they have. When one replica's index is larger, the replica with the most recent messages sends the missing messages to the other [31, 10]. To prevent tampering and ensure authenticity, some peer-to-peer systems [10, 1] use cryptographic hashes to link a log's new message to its immediate predecessor. However, this strategy does not prevent a malicious author from creating two _concurrent_ messages for the same index [16, 8, 26] thereby _forking_ their log and turning it into a tree. Cryptographic hashes are therefore not sufficient to sequentially order the messages of a log. 
Also, append-only logs, as currently implemented in some peer-to-peer systems [10, 16, 8, 26], cannot tolerate malicious participants [11, 9] because once a replica has replicated one of the concurrent branches of a forked log, the replica will consider any other branch updates as invalid. This results in a partitioning of correct replicas, preventing convergence. In this paper, we present the design of a two-phase Byzantine Fault-Tolerant append-only log, _2P-BFT-Log_, which provides eventual consistency in the presence of malicious authors that may intentionally fork their log. The key idea and novel contribution in our design is to introduce a second _shrinking_ phase after a fork has been discovered, such that all correct replicas agree on the earliest fork that has been observed so far. An irrefutable proof, in the form of at least two signed messages from the malicious author that have the same predecessor message, is replicated between correct replicas to establish the earliest fork point. In the context of decentralized accounting [16], for example, our design provides two key properties for both correct and malicious authors: for correct authors, the total ordering of messages guarantees valid updates always maintain non-negative balances for accounts, while for malicious authors, it provides eventual detection of potential double-spending because double-spending requires concurrent messages. Moreover, the switch of forked logs to an explicit and separate shrinking phase eventually prevents malicious authors from being able to extend their log with correct replicas: as soon a correct replica finds a fork proof, malicious authors have a limited window of opportunity to propagate new forks to an ever shrinking set of correct replicas that have not yet received the fork proof. After that opportunity is over, the log of a malicious author is _dead_ as it cannot be extended anymore. In the rest of this paper, we first introduce relevant background (Section 2), then the design of _2P-BFT-Log_ (Section 3), an implementation based on Git (Section 4), proofs for convergence and key properties (Section 5), a review of related work (Section 6), and we finally conclude with a brief recap and directions for future work (Section 7). ## 2 Background In this section, we present relevant background, including some of its limitations, that motivates our design. ### Byzantine Processes A Byzantine process [15] is a process that may deviate arbitrarily from the behaviour otherwise expected of correct processes, i.e. it does not execute the algorithms faithfully. For example, it may omit to send some messages, send invalid messages, try impersonating other processes, broadcast inconsistent messages to different processes, etc. In general, it is not possible to detect Byzantine processes until they misbehave, and not all misbehaviour can be reliably detected. However, using cryptographic signatures it is possible to remove the ability of Byzantine processes to impersonate or tamper with messages they relay from other correct processes [15], and detect inconsistent messages broadcast to different correct processes by comparing the messages received by correct processes. ### Happens-Before Relationship The _happens-before_ relationship [14] is a causal ordering of events happening on concurrent processes. We say that an event \(e\) happens before \(e^{\prime}\), written \(e\rightsquigarrow e^{\prime}\), if there is a sequence of causes and effects that connects \(e\) to \(e^{\prime}\). 
If no sequence of causes and effects connects \(e\) and \(e^{\prime}\), i.e. \(e\not\rightsquigarrow e^{\prime}\) and \(e^{\prime}\not\rightsquigarrow e\), then we say that \(e\) and \(e^{\prime}\) are concurrent, written \(e\parallel e^{\prime}\). More formally, we assume that all processes locally execute sequentially so that there exists a total order \(<_{e}\) on all events happening within that process. Then, for two events \(e\) and \(e^{\prime}\) happening on processes \(p\) and \(p^{\prime}\), \(e\rightsquigarrow e^{\prime}\) if one of the following conditions is true, otherwise \(e\not\rightsquigarrow e^{\prime}\): * **local total order**: \(p=p^{\prime}\) and \(e<_{e}e^{\prime}\), i.e. \(e\) happened before \(e^{\prime}\) within the same process; * **interprocess communication**: \(e\) happened before communication between \(p\) and \(p^{\prime}\), for example message \(m\) was sent from \(p\) to \(p^{\prime}\), and \(e^{\prime}\) happened after communication between \(p\) and \(p^{\prime}\), for example message \(m\) was received on \(p^{\prime}\)1; Footnote 1: \(p\) may send a message to itself, therefore \(p\) may be the same as \(p^{\prime}\). * **transitivity**: there exists \(e^{\prime\prime}\) such that \(e\rightsquigarrow e^{\prime\prime}\) and \(e^{\prime\prime}\rightsquigarrow e^{\prime}\). Note that the assumption of a local total order is easy to satisfy on correct processes but impossible to enforce on Byzantine processes. Also, the communication between processes could actually be a full reconciliation protocol (ex: [13]) rather than a single message sent. ### Hash Graphs A hash graph [9, 11] is a widely used implementation technique to encode the causal relationships of messages using cryptographic hashes. It is used, for example, to order Git commits [4], authenticate file updates [19], and design Byzantine fault-tolerant algorithms [21, 3, 13, 11]. It is standard to assume that the cryptographic hashes of messages are unique,2 i.e. there do not exist messages \(M,M^{\prime}\) such that \(M\neq M^{\prime}\) and \(H(M)=H(M^{\prime})\). By including within \(M^{\prime}\) the cryptographic hashes of all messages \(M\) on which \(M^{\prime}\) depends, we can then encode the fact that \(M\) happened before and may have caused \(M^{\prime}\). Footnote 2: More precisely, there is a negligible probability for such a collision. Assume that any message \(M\) stores the set of messages it depends on in \(M_{\textit{deps}}\). We say that given two valid messages \(M\) and \(M^{\prime}\), \(M\) is a _predecessor_ of \(M^{\prime}\) if and only if one of the following conditions is true: * **direct dependency**: \(H(M)\in M^{\prime}_{\textit{deps}}\); * **transitivity**: there exists \(M^{\prime\prime}\) such that \(M\) is a predecessor of \(M^{\prime\prime}\) and \(M^{\prime\prime}\) is a predecessor of \(M^{\prime}\). Similarly, a successor of \(M\) is a message \(M^{\prime}\) such that \(M\) is a predecessor of \(M^{\prime}\). By construction, a hash graph is a directed acyclic graph: a successor \(M^{\prime}\) of \(M\) cannot be a dependency of \(M\), because \(H(M^{\prime})\) would then have to be known before \(M\) is created, while creating \(M^{\prime}\) (and the successors between \(M\) and \(M^{\prime}\)) requires \(H(M)\) to be computed first, which in turn requires knowing \(H(M^{\prime})\), introducing a circular dependency.3 It is customary to assign an explicit author to messages of hash graphs. 
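For illustration only, the following is a minimal Python sketch of this idea, assuming messages are dictionaries, using SHA-256 as the hash function \(H\), and with helper names that are purely illustrative:

```python
import hashlib
import json

def message_hash(msg: dict) -> str:
    # Content hash over a canonical JSON encoding; embedding this hash in later
    # messages makes the causal link tamper-evident.
    return hashlib.sha256(json.dumps(msg, sort_keys=True).encode()).hexdigest()

def make_message(author: str, payload: str, deps: list[str]) -> dict:
    # A hash-graph message records the hashes of the messages it depends on.
    return {"author": author, "payload": payload, "deps": sorted(deps)}

# m1 is a predecessor of m2 because H(m1) is stored in m2's dependencies.
m1 = make_message("alice", "first", deps=[])
m2 = make_message("alice", "second", deps=[message_hash(m1)])
```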
In general, the authors of hash graph messages are independent of the underlying processes that generate messages: a process may sequentially generate messages from a single author that appear concurrent in the hash graph, i.e. two messages with the same predecessor in their dependencies. Therefore, although the cryptographic hashes encode causal relationships and authors may be seen as a reification of process identifiers of the happens-before relationship, the analogy is incomplete because a hash graph does not enforce a total ordering of messages from the same author. ### Conflict-Free Replicated Data Types Conflict-Free Replicated Data Types (CRDTs) [28] are mutable replicated objects designed with constraints on concurrent modifications such that all replicas4 are always guaranteed to converge to the same state _eventually_. More precisely, they provide _strong eventual consistency_, defined as: Footnote 4: When discussing CRDTs, we use _replica_ to designate the process that executes the algorithm, to emphasize the idea that the purpose of the process is to replicate the object. * **eventual update**: If an update is applied by a correct replica, then all correct replicas will eventually apply that update; * **convergence**: Any two correct replicas that have applied the same set of updates are in the same state (even if updates were applied in different orders). State-based CRDTs immediately update their local state and later send their new state to other replicas. The eventual update property is obtained by assuming correct replicas are transitively connected and new states are eventually received and merged by neighbours. Convergence is obtained by organizing all possible states of objects into a partially ordered set, defining a deterministic merge function for any two possible states that computes the least upper bound within that partial order, and constraining update operations such that they only produce a new state that is equal or larger within the partial order. Therefore, the convergence proofs for state-based CRDTs are relatively easy and do not require reasoning about possible orderings of updates. Operation-based CRDTs define conflict resolution rules for all possible concurrent updates but require reliable delivery of all updates to all replicas. The convergence proofs for operation-based CRDTs are somewhat more involved and require reasoning about causal ordering of updates. In the rest of this paper, we only concern ourselves with state-based CRDTs. ### Byzantine State-Based CRDTs Most of the literature on CRDTs [12] assumes that processes are non-Byzantine and many CRDT designs break down in the presence of Byzantine replicas [11]. State-based CRDTs however do not require adjustments [9]. Beside producing invalid states that can be easily rejected by correct replicas, Byzantine processes are left with two possibilities: * _omission_ of sending some state updates to certain replicas; * _equivocation_, i.e. sending different state updates to different replicas. The second case is actually a variation of the first one, in that two different state updates are sent to different strict subsets of correct replicas. As long as at least one correct replica receives each state update, the _eventual update_ property (Section 2.4) guarantees all correct replicas will apply it and the _convergence_ property guarantees that all correct replicas converge to the same state. 
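As a minimal illustration of this state-based pattern (a hypothetical grow-only set with a validity check, not the log construction developed in Section 3), merging is the least upper bound, i.e. set union, restricted to valid elements:

```python
def is_valid(element) -> bool:
    # Placeholder validity predicate; a real system would check signatures, schema, etc.
    return isinstance(element, str) and len(element) > 0

class GSet:
    """Grow-only set: states are ordered by inclusion and merge is their union."""

    def __init__(self) -> None:
        self.items: frozenset = frozenset()

    def add(self, element) -> None:
        if is_valid(element):  # reject invalid updates locally
            self.items |= {element}

    def merge(self, other: "GSet") -> None:
        # Only valid remote elements are merged, so a Byzantine replica can at worst
        # omit or split updates; it cannot prevent correct replicas from converging.
        self.items |= frozenset(x for x in other.items if is_valid(x))
```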
The only issue remaining is to define what valid states are and to merge only those, which is not significantly more involved than designing CRDTs in a non-Byzantine context, making their description and proofs quite accessible. ### Self-Certification Self-certification [22] is a property of the combination of a reference and the referred data such that they can be compared to ensure authenticity. We use two variations in this paper: * a message \(m\) retrieved using its hash \(H(m)\) is self-certifying because the retriever can compare the hash of the data obtained to ensure it is equal to the hash they requested; * the messages of a cryptographically signed append-only log retrieved using the public key of the author of messages can be checked against the public key. When used with public-private keys, self-certification does not, however, by itself guarantee that a public key actually belongs to the author it claims to represent. It does, however, remove the ability of Byzantine authors to impersonate another author, without the need for a third party to manage keys. We now combine these ideas into a new append-only log design. ## 3 2P-BFT-Logs Design A _2P-BFT-Log_ solves the problem of totally ordering messages from a given author, similar to an append-only log, while providing eventual consistency and fault detection in the presence of Byzantine authors that create concurrent messages. Our key and novel contribution is to correctly handle the possibility of _multiple forks_ on the same log by implementing a distinct _shrinking phase_. We present our design progressively in three steps: first we discuss the underlying grow-only set of immutable messages (Section 3.2), then a single mutable log made of messages (Section 3.3), and finally sets of logs (Section 3.4). ### System Model We assume there is an arbitrarily large set of replicas, among which there is a subset of at least two that are correct, and the rest is Byzantine, i.e. can deviate arbitrarily from our algorithms. Every correct replica is connected to at least one other correct replica such that, through that replica, it receives updates from all other correct replicas, i.e. correct replicas form a connected component. We further assume that every state update of a correct replica is eventually received by all other replicas: this is a weaker assumption than reliable broadcast because the update may be received indirectly through another correct replica and might have been merged with another state update before being received. This last assumption can be captured theoretically by assuming every replica sends their latest state infinitely often to their immediate neighbour in the connection graph so that at least one transmission eventually succeeds. In addition, we have a set of authors that use the replicas to propagate their messages. Authors may be correct, in which case they only create valid messages that follow a sequential ordering according to the hash graph we will explain shortly, or they may be Byzantine, in which case they may create both invalid messages as well as valid messages that do not follow a sequential ordering, i.e. multiple concurrent messages with the same predecessor in the hash graph. Correct replicas will always reject invalid messages but Byzantine replicas may do anything. We assume both Byzantine authors and replicas do not have access to the private keys of correct authors, therefore they cannot forge messages in their name. 
We make no assumption on whether Byzantine authors or replicas share private keys. ### Message Graph The message graph is the underlying backbone of logs. It explicitly encodes the causal dependencies between messages, similar to the commit graph in Git [4]. We do not discuss how messages are replicated: there already exists reconciliation protocols that would be sufficient for the task (ex: [13]), our only requirement is that all dependencies of a message should also be replicated to enable validation. Our message graph is similar to a hash graph that includes cryptographic signatures from authors (Section 2.3). The signatures prevent Byzantine authors from impersonating other authors or tampering with messages during replication. In contrast to some hash graph designs [13, 11, 9], but similar to entangled timelines [21], we restrict valid messages to at most one message dependency from the same author: this constrains the message graph of any given author to a sequence, if correct, or a tree, if Byzantine. Messages follow the schema of Table 1. In contrast to the usual definition for append-only logs [10], we also encode dependencies to messages from other authors so that the full causal history of a message can easily be recovered. To ease the presentation of later relations and algorithms we use a separate field \(M_{\mathit{prev}}\) for the dependency from the same author, while all other dependencies are stored in \(M_{\mathit{deps}}\). A message identifier for message \(M\), abbreviated \(\mathtt{MsgId}(M)\), is computed by hashing the concatenation of all message fields, making the identifer and message pair self-certifying (Section 2.6): \[\mathtt{MsgId}(M)\stackrel{{\mathrm{def}}}{{=}}\mathtt{hash}(M_ {author}\oplus M_{\mathit{prev}}\oplus M_{idx}\oplus M_{\mathit{payload}} \oplus M_{\mathit{deps}}\oplus M_{\mathit{signature}})\] #### 3.2.1 Validity Properties We validate messages as follows. Either there is no previous message from the same author, or there is at most one and it is valid: **M1**: _(single previous dependency)_: either \(\begin{cases}M_{\mathit{prev}}=\bot;\\ M_{\mathit{prev}}=\mathtt{MsgId}(M^{\prime}):M^{\prime}\text{ exists and is valid.}\end{cases}\) Previous messages are from the same author: \begin{table} \begin{tabular}{l l l} **Field** & **Type** & **Purpose** \\ \hline \(M_{author}\) & String (Public Key) & Author of the message. \\ \(M_{\mathit{prev}}\) & \(\mathtt{MsgId}\) or \(\bot\) & Previous message from the same author, \(\bot\) (null) if none. \\ \(M_{\mathit{deps}}\) & Set of \(\mathtt{MsgIds}\) & State of logs from other authors on which it depends. \\ \(M_{\mathit{payload}}\) & String & Content of message, which typically represents an update. \\ \(M_{\mathit{signature}}\) & String & Signature of the concatenation of the previous fields using _author_’s private key. \\ \end{tabular} \end{table} Table 1: Message schema. 
**M2**: _(single author)_**: if** \(M_{prev}=\texttt{MsgId}(M^{\prime})\) **then** \(M^{\prime}_{author}=M_{author}\)**** **All dependencies are from different authors and valid:** **M3**: _(valid external dependencies)_**: for all** \(\textit{msgId}\in M_{\textit{deps}}\)**, there exists a valid** \(M^{\prime}\) **such that** \(\texttt{MsgId}(M^{\prime})=\textit{msgId}\) **and** \(M^{\prime}_{author}\neq M_{author}\)**.** **There is at most one external dependency per author:** **M4**: _(single author dependencies)_**: for any possible** _author_ **not equal to** \(M_{author}\)**, there exists at most one** \(\textit{msgId}\in M_{\textit{deps}}\)**, such that** \(\texttt{MsgId}(M^{\prime})=\textit{msgId}\) **and** \(M^{\prime}_{author}=\textit{author}\)**.** **The signature is valid:** **M5**: _(self-certifying)_**:** \(M_{\textit{signature}}\) **is consistent with** \(M_{author}\)**.** **There are no dependency cycle (the definition for successor is given later in Section** 3.2.3**):** **M6**: _(acyclic)_**:** \(M_{\textit{prev}}\) **does not reference a successor of** \(M\)**, i.e.** \((M_{\textit{prev}}\neq M^{\prime}:M\stackrel{{\textit{log}}}{{ \rightsquigarrow}}M^{\prime})\)**.** **A standard cryptographic assumption is to assume the hash function used for** \(\texttt{MsgId}(M)\) **does not have collisions, which prevents cycles. In practice, even if that possibility is non-null it suffices to only validate a message if all its predecessors and dependencies have been previously successfully validated. This way even if a successful hash cycle had been found by a Byzantine author, no correct replica will accept it.** #### 3.2.2Relations and Queries on Messages **We now present useful relationships and queries on messages in a message graph. These help make the presentation of algorithms precise and concise, and they also provide formal semantics for** \(\text{Git}+\text{Bash}\) **commands used in our implementation (Section** 4.8**). All the following relations and queries are only defined for valid messages, i.e. messages that meet the properties of Section** 3.2.1**.** **The** _happens-before relationship_ **follows easily from its usual definition (Section** 2.2**):** \[M\rightsquigarrow M^{\prime}\stackrel{{\text{def}}}{{=}}\begin{cases}M=M^{ \prime}_{prev}=\bot\\ \texttt{MsgId}(M)=M^{\prime}_{prev}\\ \texttt{MsgId}(M)\in M^{\prime}_{\textit{deps}}\\ \exists M^{\prime\prime}:M\rightsquigarrow M^{\prime\prime}\rightsquigarrow M ^{\prime}\end{cases} \tag{1}\] **From the happens-before relationship, we define a partial order on messages (**\(\leq_{\mathcal{M}}\)**) in which** \(M\) **is smaller or equal to** \(M^{\prime}\) **if it is either equal or happens before** \(M^{\prime}\)**:** \[M\leq_{\mathcal{M}}M^{\prime}\stackrel{{\text{def}}}{{=}}M=M^{ \prime}\lor M\rightsquigarrow M^{\prime} \tag{2}\] **It is easy to show that** \(\leq_{\mathcal{M}}\) **is** _reflexive_**,** _transitive_**, and** _antisymmetric_ **and indeed forms a partial order, so we omit the proof for brevity. 
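For illustration, here is a small Python sketch of the happens-before relation and the derived order, together with the concurrency and causal-history queries defined next, assuming an in-memory store mapping message identifiers to messages with prev and deps fields (all names are illustrative):

```python
from typing import Dict, Set

# Illustrative in-memory store: MsgId -> {"prev": MsgId or None, "deps": set of MsgIds, ...}
Store = Dict[str, dict]

def happens_before(store: Store, a: str, b: str) -> bool:
    # Eq. (1): a ~~> b when a is reachable from b through prev/deps links.
    frontier, seen = [b], set()
    while frontier:
        cur = store.get(frontier.pop())
        if cur is None:
            continue
        parents = set(cur["deps"]) | ({cur["prev"]} if cur["prev"] else set())
        if a in parents:
            return True
        frontier.extend(p for p in parents if p not in seen)
        seen |= parents
    return False

def leq(store: Store, a: str, b: str) -> bool:
    # Eq. (2): the induced partial order on messages.
    return a == b or happens_before(store, a, b)

def concurrent(store: Store, a: str, b: str) -> bool:
    # Eq. (3): neither message is smaller than or equal to the other.
    return not leq(store, a, b) and not leq(store, b, a)

def causal_history(store: Store, m: str) -> Set[str]:
    # Eq. (4): every message smaller than or equal to m.
    return {x for x in store if leq(store, x, m)}
```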
A partial ordering is useful to define CRDTs, as we do later. If neither \(M\) nor \(M^{\prime}\) is smaller or equal to the other, then they are concurrent (\(\|_{\mathcal{M}}\)): \[M\;\|_{\mathcal{M}}\;M^{\prime}\stackrel{{\mathrm{def}}}{{=}}M\not\leq_{\mathcal{M}}M^{\prime}\wedge M^{\prime}\;\not\leq_{\mathcal{M}}M \tag{3}\] We can finally define the set of all messages that are smaller or equal to \(M\), also known as the _causal history_ [27]: \[\mathcal{H}(M)\stackrel{{\mathrm{def}}}{{=}}\{M^{\prime}:M^{\prime}\leq_{\mathcal{M}}M\} \tag{4}\] #### 3.2.3 Author-Specific Relations and Queries Given the _single previous dependency_ and _single author_ constraints on messages, it is useful to define additional relationships and queries specific to the author of messages, and therefore intended for this author's log. Again, the following relations and queries are only defined for valid messages, i.e. messages that meet the properties of Section 3.2.1. The happens-before relationship is similar but ignores other dependencies: \[M\stackrel{{\mathit{log}}}{{\rightsquigarrow}}M^{\prime}\stackrel{{\mathrm{def}}}{{=}}\begin{cases}M=M^{\prime}_{prev}=\bot\\ \mathtt{MsgId}(M)=M^{\prime}_{prev}\\ \exists M^{\prime\prime}:M\stackrel{{\mathit{log}}}{{\rightsquigarrow}}M^{\prime\prime}\stackrel{{\mathit{log}}}{{\rightsquigarrow}}M^{\prime}\end{cases} \tag{5}\] The partial order and causal history for messages from the same log are analogous to those for the message graph: \[M\leq_{\mathit{log}}M^{\prime}\stackrel{{\mathrm{def}}}{{=}}M=M^{\prime}\lor M\stackrel{{\mathit{log}}}{{\rightsquigarrow}}M^{\prime} \tag{6}\] \[\mathcal{H}_{\mathit{log}}(M)\stackrel{{\mathrm{def}}}{{=}}\{M^{\prime}:M^{\prime}\leq_{\mathit{log}}M\} \tag{7}\] In addition, we define a range of messages as the messages of a log that happen between two messages (including the last): \[\left]M,M^{\prime}\right]_{\mathit{log}}\stackrel{{\mathrm{def}}}{{=}}\mathcal{H}_{\mathit{log}}(M^{\prime})\backslash\mathcal{H}_{\mathit{log}}(M) \tag{8}\] This operation is similar to the commit range operation of Git (Sec. 4.8). Since Byzantine authors may fork their logs to turn them into trees, it is useful to compute the longest common prefix of both branches, i.e. the greatest lower bound of two messages: \[\mathtt{LogPrefix}(M,M^{\prime})\stackrel{{\mathrm{def}}}{{=}}M^{\prime\prime}:M^{\prime\prime}\leq_{\mathit{log}}M\wedge M^{\prime\prime}\leq_{\mathit{log}}M^{\prime}\wedge\nexists M^{\prime\prime\prime}:M^{\prime\prime}\stackrel{{\mathit{log}}}{{\rightsquigarrow}}M^{\prime\prime\prime}\wedge M^{\prime\prime\prime}\leq_{\mathit{log}}M\wedge M^{\prime\prime\prime}\leq_{\mathit{log}}M^{\prime} \tag{9}\] We can then use the first message of each branch of the fork as proof that a fork happened: \[\mathtt{ForkProof}(M,M^{\prime})\stackrel{{\mathrm{def}}}{{=}}\{M^{\prime\prime}\in\;\left]P,M\right]\;\cup\;\left]P,M^{\prime}\right]\;:P=\mathtt{LogPrefix}(M,M^{\prime})\wedge P=M^{\prime\prime}_{\mathit{prev}}\} \tag{10}\] ### Log In this section, we define a two-phase append-only log as a state-based CRDT: in the first _growing_ phase, a log replica is extended by appending a new message after the last one; in the second _shrinking_ phase, a log replica's last message is the greatest lower bound of all known forks made of valid messages. 
In addition, the log replica maintains a proof of the fork, in the form of a set of at least two messages that have the log's last message as predecessor. Both phases are eventually consistent: the first phase implements the same behaviour as regular append-only logs while the second provides the longest valid prefix log among all forks known by correct replicas. The state of a log \(L\) is defined as follows: it is associated to a given author \(L_{author}\); it has a reference \(L_{last}\) to the last message from \(L_{author}\) that forms a strict sequence; and it has a set \(L_{forks}\) of messages, if any, that prove that there was a fork after \(L_{last}\). \(L\) is in the growing phase if there are no known forks, i.e. \(L_{forks}=\emptyset\), otherwise it is in the shrinking phase. Once a fork as been found, \(L\) stays in the shrinking phase forever. We first present the properties of valid logs for both phases to constrain the behaviour of Byzantine authors (Section 3.3.1), then discuss how a log is initialized and appended to (Section 3.3.2), then how our logs form valid state-based CRDT, with partial ordering of states (Section 3.3.3) and merging of different states (Section 3.3.4). #### 3.3.1 Validity Properties The growing and shrinking phases have different validity properties. In the growing (correct) phase, in addition to the absence of fork proofs, **CL1**: _(no forks)_: \(L_{forks}=\emptyset\). a log's last message must be valid and consistent with the log's author: **CL2**: _(valid last message)_: if \(L_{last}=M\neq\bot\) then \(M\) is valid. **CL3**: _(consistent previous author)_: if \(L_{last}\neq\bot\) then \(L_{author}=(L_{last})_{author}\). This protects the log from invalid message updates and makes it self-certifying (Section 2.6). Once a log has forked and is now in the shrinking phase, property _CL1_ is replaced by _FL1_, while _CL2_ and _CL3_ still apply but have been relabeled here for clarity: **FL1**: _(non-empty forks)_: \(L_{forks}\neq\emptyset\). **FL2**: _(valid last message)_: _idem CL2_ **FL3**: _(consistent previous author)_: _idem CL3_ In addition, similar properties now have to apply to forks: they must be valid messages, forks must have been created by \(L_{author}\), there must exist at least two messages with the same predecessor, and this predecessor must be the last message of the log: **FL4**: _(valid forks)_: for all \(M\in L_{forks}\), \(M\) is valid. **FL5**: _(consistent fork author)_: for all \(M\in L_{forks}\), \(L_{author}=M_{author}\) and if \(L_{last}\neq\bot\) then \(L_{author}=(L_{last})_{author}\). **FL6**: _(valid proof)_: if \(M\in L_{forks}\) then \(\exists M^{\prime}\in L_{forks}\) such that \(M\neq M^{\prime}\) and \(M_{prev}=M^{\prime}_{prev}\). **FL7**: _(consistent proof)_: for all \(M\in L_{forks}\), \(L_{last}=M_{prev}\). All other validity properties actually come from the underlying message graph (Section 3.2). Together, the validity properties of both phases constrain a Byzantine log author to only two options: either produce a correct sequential log, or produce forks of valid messages. Any other states of logs will be ignored by correct replicas. #### 3.3.2 Initialization and Append The initialization and appending operation for a log are listed in Alg. 1. By convention, we use \(L\) to denote the current state of a log, and \(L^{\prime}\) to denote the next state of log, if any. This makes the exposition and later proofs easier to read, and follows the conventions of functional programming. 
An implementation could update the state in-place for performance. Also by convention, for all state-modifying operations, the current state of the log is the first argument to a function. A log is initialized from an _author_ and starts in the growing phase with no last message, which makes it trivially valid. A message \(M\) is appended to a log \(L\) only if both are valid and have consistent authors, otherwise the state of \(L\) does not change. In the growing phase, i.e. \(L_{forks}=\emptyset\), when appending a new message \(M\) to \(L\), there are three possible cases. In the first case, when \(M\) is a successor of \(L_{last}\), the log is updated by setting \(L^{\prime}_{last}=M\). In the second case, when \(M\) is smaller or equal to \(L_{last}\), i.e. it is the same or a predecessor, the message is ignored and the log is not updated. In the third case, when \(M\) is concurrent with \(L_{last}\), then we found a new fork and the log switches to the shrinking phase. In the shrinking phase, i.e. \(L_{forks}\neq\emptyset\), when appending a new message \(M\) to \(L\), there are two possible cases. In the first case, \(M\) does not provide a new fork proof, i.e. \(M\) might be a predecessor of \(L_{last}\) or it might belong to a branch after \(L_{last}\). In that case, the state of the log is not changed. Otherwise, \(M\) is on a new fork that started earlier than \(L_{last}\). In this case, the longest common prefix (greatest lower bound) of both \(L_{last}\) and \(M\) is computed, which is assigned to \(L^{\prime}_{last}\), and the proof of the fork is stored in \(L^{\prime}_{forks}\). #### 3.3.3 Ordering We now define a partial order over all possible log states, listed in Alg. 2, as the first step in establishing logs as state-based CRDTs (Section 2.4). As for all other operations, the less-or-equal relationship between two log states, \(L\leq_{\text{L}}L^{\prime}\), is only defined for valid logs and the two states must have consistent authors. There are four cases to consider: 1. Both \(L\) and \(L^{\prime}\) are in the growing phase. In this case, \(L\) is smaller or equal to \(L^{\prime}\) if and only if \(L_{last}\) is equal to or a predecessor of \(L^{\prime}_{last}\), i.e. \(L_{last}\leq_{log}L^{\prime}_{last}\) (Eq. 6). 2. \(L\) is in the growing phase and \(L^{\prime}\) is in the shrinking phase. In this case, \(L\) is smaller than \(L^{\prime}\) regardless of their last messages. 3. \(L\) is in the shrinking phase and \(L^{\prime}\) is in the growing phase. This is the opposite of the second case, therefore \(L\) is not smaller or equal to \(L^{\prime}\). 4. Both \(L\) and \(L^{\prime}\) are in the shrinking phase. This is the opposite of the first case, therefore \(L\) is smaller or equal if \(L^{\prime}_{last}\) is smaller or equal to \(L_{last}\). A structured proof that \(\leq_{\text{L}}\) is a partial order is given in Section 5.2.2. As previously defined, \(\leq_{\text{L}}\) is not a total order on all possible log states for two reasons. First, it is only defined on log states from the same author, and log states from different authors are incomparable. Second, when the last messages of both states are on different branches, i.e. \(L_{last}\parallel L^{\prime}_{last}\), neither of the states is smaller than the other. There is an upper bound on all possible states that corresponds to a fork on the first message of the log, i.e. \(L_{last}=\bot\) and for all \(M\in L_{forks}\), \(M_{prev}=\bot\). 
However, there is an infinite ``` 1:functionInitializeL(\(author\)) 2:\(L_{author}\leftarrow\)author 3:\(L_{last}\leftarrow\bot\) 4:\(L_{forks}\leftarrow\emptyset\) 5: 6:functionAppend(\(L,M\)) 7:Preconditions:\(L\) is valid, \(M\) is valid, and \(M_{author}=L_{author}\). 8: 9:if\(L_{forks}=\emptyset\)then 10:if\(L_{last}\stackrel{{ log}}{{\leadsto}}M\)then\(\triangleright\) Normal case: \(M\) is newer. 11:\(L^{\prime}\leftarrow\)initialize(\(L_{author}\)) 12:\(L^{\prime}_{last}\gets M\) 13:return\(L^{\prime}\) 14:elseif\(M\leq_{log}L_{last}\)then\(\triangleright\)\(M\) is the same or a predecessor. 15:return\(L\) 16:elseif\(L_{forks}\neq\emptyset\wedge M\not\parallel_{log}L_{last}\)then\(\triangleright\)\(M\) does not provide a new fork proof. 17:return\(L\) 18: 19:\(L^{\prime}\leftarrow\)initialize(\(L_{author}\))\(\triangleright\) Forked cases: \(L_{forks}\neq\emptyset\) or \(L_{last}\)\(\parallel_{log}\)\(M\) 20:\(L^{\prime}_{last}\leftarrow\)LogPrefix(\(L_{last},M\)) 21:\(L^{\prime}_{forks}\leftarrow\)ForkProof(\(L_{last},M\)) 22:return\(L^{\prime}\) ``` **Algorithm 1** Log: State and Operations ``` 1:function\(\leq_{\text{L}}(L,\,L^{\prime})\) 2: Preconditions:\(L\) and \(L^{\prime}\) are valid, and \(L_{author}=L^{\prime}_{author}\). 3: 4:if\(L_{forks}=\emptyset\wedge L^{\prime}_{forks}=\emptyset\)then 5:return\(L_{last}\leq_{log}L^{\prime}_{last}\)\(\triangleright\) See Alg. 1 6:elseif\(L_{forks}=\emptyset\wedge L^{\prime}_{forks}\neq\emptyset\)then 7:return true\(\triangleright\) A forked log state is always larger than a non-forked state. 8:elseif\(L_{forks}\neq\emptyset\wedge L^{\prime}_{forks}=\emptyset\)then 9:return false\(\triangleright\)\(L^{\prime}\) is still in growing phase but not \(L\), therefore cannot be greater 10:else 11:return\(L^{\prime}_{last}\leq_{log}L_{last}\)\(\triangleright\) Is \(L^{\prime}\) the same length as, or shrunk more than \(L\)? ``` **Algorithm 2** Log: Ordering number of intermediate states between the initial state and the upper bound that corresponds to an infinite number of correct logs with arbitrarily large number of messages, prior to shrinking. #### 3.3.4 Merging The next step in our design is a function \(\sqcup_{\mathds{L}}\), listed in Alg 3, to merge any two states of logs \(L,L^{\prime}\) to obtain a new state \(L^{\prime\prime}\) that includes updates from both. This may be used, for example, to reconcile the state of a log within a replica with that of another replica. Similar to other operations, it is only defined for valid log states with consistent authors. ``` 1:function\(\sqcup_{\mathds{L}}(L,\,L^{\prime})\) 2:Preconditions: \(L_{author}=L^{\prime}_{author}\), \(L\) and \(L^{\prime}\) are valid. 3: 4:if\(L_{last}\not\Downarrow_{log}\ L^{\prime}_{last}\)then 5:if\(L_{forks}=\emptyset\wedge L^{\prime}_{forks}=\emptyset\)then\(\triangleright\) No known forks 6:if\(L_{last}\leq_{log}L^{\prime}_{last}\)then 7:return\(L^{\prime}\) 8:else 9:return\(L\) 10:elseif\(L_{forks}\neq\emptyset\wedge L^{\prime}_{forks}=\emptyset\)then 11:return\(L\) 12:elseif\(L_{forks}=\emptyset\wedge L^{\prime}_{forks}\neq\emptyset\)then 13:return\(L^{\prime}\) 14:\(\triangleright\) Either \(L_{last}\parallel_{log}L^{\prime}_{last}\) or both are forked. 
15:\(L^{\prime\prime}\leftarrow\texttt{initialize}_{\mathds{L}}(L_{author})\) 16:\(L^{\prime\prime}_{last}\leftarrow\texttt{LogPrefix}(L_{last},L^{\prime}_{ last})\) 17:\(L^{\prime\prime}_{forks}\leftarrow\{M\in(\texttt{ForkProof}(L_{last},L^{\prime}_{last})\cup L_{forks}\cup L^{\prime}_{forks}):M_{prev}=L^{\prime \prime}_{last}\}\) 18:return\(L^{\prime\prime}\) ``` **Algorithm 3** Log: Merging There are multiple cases to consider depending on whether \(L\) and \(L^{\prime}\) are growing or shrinking, and whether \(L_{last}\) and \(L^{\prime}_{last}\) are on different branches of forks, i.e. \(L_{last}\parallel_{log}L^{\prime}_{last}\). If both \(L\) and \(L^{\prime}\) are already forked or \(L_{last}\parallel_{log}L^{\prime}_{last}\), then the fork resolution applies, similar to the case of a fork between \(L_{last}\) and a message \(M\) in Append (Section 3.3.2): \(L^{\prime\prime}_{last}\) is the longest prefix (greatest lower bound) of both \(L_{last}\) and \(L^{\prime}_{last}\), and \(L^{\prime\prime}_{forks}\) contains either a new fork proof or a superset for fork proofs that apply. Otherwise, \(L^{\prime\prime}\) is equal to the greatest state of \(L\) or \(L^{\prime}\). There are three possible cases (the fourth case was already covered by the previous paragraph): 1. Both \(L\) and \(L^{\prime}\) are still growing. Since \(L_{last}\not\Downarrow_{log}L^{\prime}_{last}\), then either \(L\) is greater or equal to \(L^{\prime}\), in which case \(L^{\prime\prime}=L\), or \(L^{\prime}\) is greater than \(L\) in which case \(L^{\prime\prime}=L^{\prime}\). 2. \(L\) is shrinking but \(L^{\prime}\) is still growing. Then \(L^{\prime\prime}=L\). 3. \(L\) is growing but \(L^{\prime}\) is shrinking. Then \(L^{\prime\prime}=L^{\prime}\). To complete this second step, we need to show that \(L^{\prime\prime}=L\sqcup_{\mathds{L}}L^{\prime}\) computes the least upper bound of \(L\) and \(L^{\prime}\) according to our partial order (\(\leq_{\mathds{L}}\), Section 3.3.3). Informally, this is the case because either \(L^{\prime\prime}\) is equal to the greatest of \(L\) or \(L^{\prime}\) which is trivially a least upper bound. For all other cases that result in \(L^{\prime\prime}\) being in a shrinking phase, \(L^{\prime\prime}_{last}\) will be the longest prefix of both \(L_{last}\) and \(L^{\prime}_{last}\) (by definition of LogPrefix in Eq. 9), itself a greatest lower bound, that result in \(L^{\prime\prime}\) being the least upper bound. A more detailed proof is given in Section 5.2.2. #### 3.3.5 Monotonicity The last step in establishing our log as a state-based CRDT is to show that all operations are _monotonic_, i.e. they result in a state that is equal or larger than the input states. This is trivially the case for the merge operation, by definition of a least upper bound and otherwise there is only one other state-changing operation, which is Append. This is easy to verify, since either the input state is returned, which is equal, or a new state \(L^{\prime}\) that is larger is returned, either because \(L^{\prime}\) is still growing and \(L_{\mathit{last}}\) is larger, or \(L^{\prime}\) is shrinking and \(L^{\prime}_{\mathit{last}}\) is smaller. A more detailed proof is given in Section 5.2.2. Given this completed last step, our log design satisfies all requirements of a state-based CRDT. And because state-based CRDTs are tolerant to Byzantine processes (Section 2.5), our log design is a Byzantine fault-tolerant state-based CRDT. 
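For concreteness, the following is a compressed Python sketch of Algorithms 1 to 3, specialized to a single author's prev-chain (external dependencies, signatures, and validity checks are elided); the class and helper names are illustrative, not a reference implementation:

```python
from dataclasses import dataclass
from typing import FrozenSet, Optional

@dataclass(frozen=True)
class Msg:
    """Single-author message reduced to its prev-chain."""
    payload: str
    prev: Optional["Msg"] = None

def chain(m: Optional[Msg]):
    while m is not None:
        yield m
        m = m.prev

def log_leq(a: Optional[Msg], b: Optional[Msg]) -> bool:
    # a <=_log b (Eq. 6); None plays the role of the empty log.
    return a is None or any(a is x for x in chain(b))

def log_concurrent(a: Optional[Msg], b: Optional[Msg]) -> bool:
    return not log_leq(a, b) and not log_leq(b, a)

def log_prefix(a: Optional[Msg], b: Optional[Msg]) -> Optional[Msg]:
    # Greatest lower bound of two branches (LogPrefix, Eq. 9).
    return next((x for x in chain(a) if log_leq(x, b)), None)

def fork_proof(a: Optional[Msg], b: Optional[Msg]) -> FrozenSet:
    # First message of each branch after the common prefix (ForkProof, Eq. 10).
    p = log_prefix(a, b)
    return frozenset(x for x in (*chain(a), *chain(b)) if x.prev is p)

@dataclass(frozen=True)
class Log:
    author: str
    last: Optional[Msg] = None
    forks: FrozenSet = frozenset()

def append(L: Log, M: Msg) -> Log:
    """Algorithm 1 (sketch): grow on a successor, ignore stale messages, shrink on a fork."""
    if not L.forks:
        if log_leq(L.last, M) and M is not L.last:       # normal case: M is newer
            return Log(L.author, last=M)
        if log_leq(M, L.last):                           # same message or a predecessor
            return L
    elif not log_concurrent(M, L.last):                  # no new fork proof
        return L
    return Log(L.author, last=log_prefix(L.last, M),     # fork found (possibly an earlier one)
               forks=fork_proof(L.last, M))

def log_leq_state(L: Log, R: Log) -> bool:
    """Algorithm 2 (sketch): growing states grow forward, shrinking states shrink backward."""
    if not L.forks and not R.forks:
        return log_leq(L.last, R.last)
    if not L.forks:
        return True
    if not R.forks:
        return False
    return log_leq(R.last, L.last)

def merge(L: Log, R: Log) -> Log:
    """Algorithm 3 (sketch): keep the larger state, or shrink to the common prefix on a fork."""
    if not log_concurrent(L.last, R.last):
        if not L.forks and not R.forks:
            return R if log_leq(L.last, R.last) else L
        if L.forks and not R.forks:
            return L
        if not L.forks and R.forks:
            return R
    last = log_prefix(L.last, R.last)
    candidates = fork_proof(L.last, R.last) | L.forks | R.forks
    return Log(L.author, last=last,
               forks=frozenset(m for m in candidates if m.prev is last))
```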
### Frontier: Set of Logs We now show how to track the latest state of a set of authors with a _frontier_ state-based CRDT. A frontier is essentially a grow-only set of logs, with the constraint that at most one log state, the largest received so far, is maintained per author. It behaves similarly as a grow-only dictionary of CRDTs (such as [17, 16]), but we prefer the set formulation to avoid the redundant use of _authors_ as keys, that are already part of the log state. #### 3.4.1 Validity Properties Frontiers are simple and straight-forward sets, so the only two validity requirements are that logs stored are valid and there is at most one log per author: **F1**: _(valid logs)_**: for all \(L\in F\), \(L\) is valid.** F2**: _(at most one log per author)_**: for all \(L\in F\), \(\nexists L^{\prime}\in F:L^{\prime}\neq L\wedge L_{\mathit{author}}=L^{\prime}_{ \mathit{author}}\).** #### 3.4.2 State and Operations The operations on a frontier are listed in Alg. 4. By convention, for all state modifying operations, the current state \(F\) is the first argument of the function and a different value is returned. As for logs, an implementation may choose to mutate a frontier in place for better performance. A frontier is initialized as an empty set. The full message history of a frontier is queried with \(\texttt{Messages}(F)\), which consists of the union, for all \(L\) in \(F\), of the causal history of \(L_{\mathit{last}}\) and all messages in \(L_{\mathit{forks}}\). The state of a log \(L\) within a frontier \(F\) is updated with \(\texttt{Update}(F,L)\): if \(L_{\mathit{author}}\) is already present in some log \(L^{\prime}\) in \(F\), the new state is the merged state of \(L\) and \(L^{\prime}\), i.e. \(L\sqcup_{\mathbbm{L}}L^{\prime}\), otherwise \(L\) is simply added to \(F\). Similar to grow-only dictionaries of CRDTs [17, 16]), \(F\) is smaller or equal to \(F^{\prime}\), i.e. \(F\leq_{\mathrm{F}}F^{\prime}\), if and only if \(F^{\prime}\) has a superset of the authors of \(F\) and the state of each log associated to each author in \(F\) is smaller or equal to the log associated to the same author in \(F^{\prime}\). Finally, two frontiers are merged by merging the state of the corresponding logs for authors that are present in both, and adding the logs for authors that are present in only one of \(F\) or \(F^{\prime}\). The proof that a frontier is a state-based CRDT is given in Section 5.2.3 and the proof that all frontier operations maintain the validity properties is given in Section 5.3.2. They are straightforward and require no additional exposition so we simply refer the reader to them. ``` 1:functionInitializeF 2:return\(\emptyset\)\(\triangleright\) Set of local logs 3: 4:functionMessages(\(F\))\(\triangleright\) Set of messages reachable from \(F\) 5:Preconditions: \(F\) is valid. 6: 7:return\(\bigcup\limits_{L\in F}(\mathcal{H}(L_{\mathit{last}})\cup\bigcup\limits_{M\in L _{\mathit{forks}}}\mathcal{H}(M))\) 8: 9:functionUpdate(\(F,L\)) 10:Preconditions: \(F\) and \(L\) are valid. 11: 12:if\(\exists L^{\prime}\in F:L^{\prime}_{\mathit{author}}=L_{\mathit{author}}\)then 13:return\(F\backslash\{L^{\prime}\}\cup\{L\sqcup_{\mathrm{L}}L^{\prime}\}\) 14:else 15:return\(F\cup\{L\}\) 16: 17:function\(\leq_{\mathrm{F}}(F,F^{\prime})\) 18:Preconditions: \(F\) and \(L\) are valid. 
19: 20:\(A\leftarrow\{L_{\mathit{author}}:L\in F\}\) 21:\(A^{\prime}\leftarrow\{L_{\mathit{author}}:L\in F^{\prime}\}\) 22:\(\mathcal{L}\leftarrow\{(L,L^{\prime}):L\in F\wedge L^{\prime}\in F^{\prime}\wedge L_{\mathit{author}}=L^{\prime}_{\mathit{author}}\wedge L_{\mathit{author}}\in A\}\) 23:return\(A\subseteq A^{\prime}\land\bigwedge\limits_{(L,L^{\prime})\in\mathcal{L}}L\leq_{\mathrm{L}}L^{\prime}\) 24: 25:function\(\sqcup_{\mathrm{F}}(F,F^{\prime})\) 26:Preconditions: \(F\) and \(F^{\prime}\) are valid. 27: 28:\(\mathcal{L}\leftarrow\{L:L\in F\land\nexists L^{\prime}\in F^{\prime}:L_{\mathit{author}}=L^{\prime}_{\mathit{author}}\}\) 29:\(\mathcal{L}^{\prime}\leftarrow\{L^{\prime}:L^{\prime}\in F^{\prime}\land\nexists L\in F:L_{\mathit{author}}=L^{\prime}_{\mathit{author}}\}\) 30:\(\mathcal{L}^{\prime\prime}\leftarrow\{L\sqcup_{\mathrm{L}}L^{\prime}:L\in F\wedge L^{\prime}\in F^{\prime}\wedge L_{\mathit{author}}=L^{\prime}_{\mathit{author}}\}\) 31:return\(\mathcal{L}\cup\mathcal{L}^{\prime}\cup\mathcal{L}^{\prime\prime}\) ``` **Algorithm 4** Frontier: State and Operations #### 3.4.3 Impact of Forked Logs A forked log loses liveness with every log replica that has a proof it has forked: messages appended to any of the branches are ignored by correct replicas. There is however a window of opportunity between the detection of a fork by the first replica and the propagation of the fork proof to all replicas. During that time window, a malicious replica may still extend the branches that are replicated by unaware replicas. Nonetheless, even if those branches are extended and replicated, they will only influence the state of correct (non-forked) replicas if an explicit dependency is recorded in correct logs towards malicious branches. Once the fork proofs have been fully replicated, the only other option a malicious participant has is to reveal or introduce new forks earlier in their log. But again, these additional forks can only influence the state of correct logs if dependencies are recorded. We therefore see that although not negligible, the impact an adversary can have by forking their log is limited both in time and in the explicit dependencies that correct logs record. Also, because the dependencies of messages on correct (growing) logs may include branches of forked logs, the branches after the last message of forked logs still need to be replicated to properly construct the causal history of messages from correct logs, as performed with \(\mathtt{Messages}(F)\). This is a necessary property, in an accounting application [16] for example, to compute how many tokens were double-spent in forked branches and be able to properly repair the damage. To ensure a malicious author cannot continue to corrupt the state of correct logs after a fork is discovered, a correct author can simply stop listing as dependencies the messages from authors that have forked their log. This way, even if a malicious author were to intentionally produce earlier forks, only the proofs will be replicated, none of the other newer messages is depended upon, and they can therefore be safely ignored by correct replicas. ## 4 Implementation over Git In this section, we explain at a high level how to implement our design over Git [30]. This is straight-forward since Git provides a data model with commit histories and low-level commands necessary to easily implement our algorithms. 
The full prototype is still a work in progress, and this paper will be revised once it is complete, but we expect it may take at most a few hundred lines of Bash code and should therefore be easily portable to other languages that have bindings to libgit2 [2]. ### Messages as Git Commits We first map our messages (Table 1) to Git commits. Git commits have the following attributes [4]: * _tree-hash_: reference to an immutable tree that represents the state of the filesystem at the time of the commit; * _parents_: reference to other commits on which this commit depends; * _author_: _name_ and _email_ address of the person who created the content of the commit (in the case of the Linux kernel, someone who submitted a patch, potentially by email to one of the maintainers); * _committer_: _name_ and _email_ address of the person who created the commit and added it to the repository; * _message_: description of the commit content. The mapping is straight-forward and shown in Table 2: the only significant difference being that \(M_{prev}\) and \(M_{deps}\) are combined together within the _parents_ field. Because commits are signed, it is not possible for an adversary to forge alternative commits for authors for which they do not possess the private key. \begin{table} \begin{tabular}{l l} **Git Commit Field** & **Purpose** \\ \hline _author_ & \(M_{author}\) as _author.name_, the _author.email_ is not used; \\ _committer_ & Not used; \\ _parents_ & First parent is always \(M_{prev}\) while the next are \(M_{deps}\); \\ _message_ & Contains both the \(M_{payload}\) and \(M_{signature}\); \\ _tree-hash_ & Not used, left open for applications. \\ \end{tabular} \end{table} Table 2: Encoding of messages as Git commits. ### Log's Last Message as a Self-Certifying Branch Reference Git references are essentially pointers to commits [5]. In the case of branches, those references are mutable so that the same name will point to different commits over time. Because a log has a constant _author_, we use the author's public key in a branch name, <author>/last, to reference \(L_{last}\), the latest _sequential_ (non-forked) message from that author (Section 3.3). An additional benefit of using the author's public key as a branch name is that the branch is now self-certifying (Section 2.6): when another replica provides the latest state of a log under a branch name, the receiver can check that this branch name is consistent with the author of the commits pointed at, which themselves are signed by the author. An adversarial replica can only provide existing alternative commits from the same correct author, as alternative referents to the branch name, because they do not have access to the correct author's private key to sign messages. They can therefore hamper liveness, by not supplying the latest commits and therefore delaying their propagation, but they cannot wrongly attribute commits not signed with the _author_'s private key as an alternative log history. ### Fork Proofs as Fork Commit and a Self-Certifying Branch Similar to the encoding of the last message as a self-certifying branch, we encode \(L_{forks}\), i.e. the set of fork proofs, in a branch. To represent the set we use a _fork commit_ (Table 3) whose parents point to the first commit of each branch of the fork, themselves having the commit referenced by <author>/last as the same parent. This fork commit therefore forms a "diamond" with \(L_{last}\). 
We then make the fork proof self-certifying by storing it under the branch <author>/forks/<last-id> in which <last-id> is the commit id referenced by <author>/last. This way we can quickly test whether a log is in a forked state by dereferencing <author>/last to obtain a <last-id> and then checking whether the branch <author>/forks/<last-id> exists and points to a valid fork commit. A fork commit is valid if it has at least two parents that are valid commits and both of them have <last-id> as parent. A fork commit does not need to be signed since only the existence of the two parent commits from the author of the fork matters. Moreover, in practice it may not be necessary to keep all fork proofs: any two messages that form a valid proof suffice, so a correct replica may simply keep the first valid fork commit it either created or replicated. This avoids mutating branch references to a different commit, which needs a forced update when using the default replication behaviour of Git.

\begin{table} \begin{tabular}{l l} **Git Commit Field** & **Purpose** \\ \hline _author_ & \(M_{author}\) as _author.name_, the _author.email_ is not used. \\ _committer_ & Not used. \\ _parents_ & Each parent references the first commit of a branch that itself has <author>/last as parent. \\ _message_ & "Fork proof" message type (signature not necessary). \\ _tree-hash_ & Not used, left open for applications. \\ \end{tabular} \end{table} Table 3: Encoding of fork proofs as Git commits.

### Handling Correctly Signed but Invalid Messages

The implementation has to deal with an issue that does not arise in our previous presentation because we assume the algorithms are only defined for valid messages, logs, and frontiers: an author can correctly sign a message that is invalid. This results in neither a correct, growing log nor a forked log. For simplicity, we ignore invalid messages, keeping <author>/last at the last valid message, and allow the author to sign a new valid message that would replace the invalid one. However, the production of an invalid message that is correctly signed is still a proof of misbehaviour from the author, so we store this proof under <author>/invalid. A correct replica keeps at most one such invalid message per author to bound the amount of storage Byzantine authors may require from correct replicas. This invalid proof is replicated between correct replicas, which may choose to stop replicating authors of invalid messages.

### Tracking Remote Replicas with Remote Branches

Similar to existing Git conventions, we use the remotes/<origin> prefix to track remote branches and efficiently compute missing commits between two frontiers (which is equivalent to computing Messages(\(F\))\(\backslash\)Messages(\(F^{\prime}\)) using Alg. 4). For hosted replicas accessed over secure URLs, either using the Git protocol or https, <origin> is the local label for that url, as typically added with git remote add <origin> <url>. A peer-to-peer alternative is also possible: <origin> may also be the public key of the replica, which may or may not be the same as one of the logs, and authenticated with a protocol such as Secret-Handshake [29]. While the latter is not commonly used among developers, Git has a well-defined extension mechanism to define such protocols [6], so this extension should be relatively straight-forward to implement.
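To illustrate the previous paragraph, a hedged sketch of how missing commits could be computed from a remote-tracking branch is shown below (in Python; the exact ref names locals/<author>/last and remotes/<origin>/<author>/last follow the naming conventions of this section but are our own composition, not the prototype's code):

```python
import subprocess

def missing_commits(author: str, origin: str, repo: str = ".") -> list[str]:
    """Commits of `author`'s log that `origin` knows about but that are not
    yet part of our local frontier, i.e. the difference between the causal
    histories (Messages) reachable from the two tips."""
    # Refresh the remote-tracking branches first.
    subprocess.run(["git", "-C", repo, "fetch", origin], check=True)
    remote_tip = f"remotes/{origin}/{author}/last"
    local_tip = f"locals/{author}/last"
    # `git rev-list A --not B` lists commits reachable from A but not from B.
    out = subprocess.run(
        ["git", "-C", repo, "rev-list", remote_tip, "--not", local_tip],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.split()
```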
### Validating Messages and Forks Incrementally

Ideally, the replication protocols would validate the commits as they are received and would discard those that do not have the expected properties (Section 3.2). In that case, updating the remotes/<origin> branches would be sufficient. This however requires a custom replication protocol. For the current implementation, instead, we separate the validation from the replication process to ease implementation and reuse the existing Git replication protocols. The commits under remotes/<origin> are therefore not trusted until validated and a separate prefix, valids/<origin>, is used to track the commits that are valid. Only valid logs are used for ordering and merging operations (Alg. 2 and 3). While requiring less additional custom machinery, this approach has the drawback that invalid commits are stored before being validated, so this opens an attack vector in which Byzantine replicas may produce an arbitrarily large number of invalid commits during replication. A correct replica would have to limit the size of updates it accepts to limit the damage a Byzantine replica may do. We leave a full hardening of the protocol against resource exhaustion for future work.

While not necessary to ensure convergence, as well as safety and liveness properties, our implementation enforces an additional constraint on messages by ensuring that every dependency from the same author is equal or larger for later messages:

**M7**: _(monotonic dependencies)_: if \(M\stackrel{{log}}{{\leadsto}}M^{\prime}\) then for all \((\textit{msgId},\textit{msgId}^{\prime})\in M_{\textit{deps}}\times M^{\prime}_{\textit{deps}}:\textit{msgId}=\texttt{MsgId}(D)\wedge\textit{msgId}^{\prime}=\texttt{MsgId}(D^{\prime})\wedge D_{\textit{author}}=D^{\prime}_{\textit{author}}\Rightarrow D\leq_{\mathcal{M}}D^{\prime}\).

This is the natural structure of applications built using append-only logs and those that follow _causal history_ (Section 3.2).

### Frontier as a Set of Branches

A Frontier (Alg. 4) is therefore simply a set of branches representing logs and the commits they point to. We store the logs of a replica under locals/<author>/*, in contrast to the remotes/<origin>/<author>/* and valids/<origin>/<author>/* prefixes. The distinction between locals/<author>/* and valids/<origin>/* is necessary because two different origins may replicate logs that are valid and non-forked individually but actually represent different branches, so the log under locals/<author> will be forked (shrinking) instead. A replica therefore maintains one frontier of not yet validated logs and one frontier of validated logs for each remote origin, as well as a single local frontier combined from the validated logs of all origins.

### Relationships and Queries

In Git, the most commonly used elementary query, git log, is actually the causal history of a commit (which is equivalent to \(\mathcal{H}(M)\) from Section 3.2.2). We therefore define the other relationships and queries in terms of this. Git also provides a convenient git merge-base command that covers most of the rest of our needs. All key relationships and queries we use in the previous algorithms, as well as their Git equivalent, are summarized in Table 4. We believe implementing our algorithms with the previous branch conventions and equivalent git commands should pose no special conceptual difficulties, so this completes our presentation of the design.
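As a concrete illustration of these relationships, the sketch below (in Python, wrapping the exact commands of Table 4; the function names are ours and commit ids are assumed to be full hashes) shows how the two elementary queries can be evaluated programmatically:

```python
import subprocess

def precedes_or_equal(c_id: str, c_id_2: str, repo: str = ".") -> bool:
    """M <=_M M': mirrors `git merge-base --is-ancestor <c-id> <c-id-2>`."""
    proc = subprocess.run(
        ["git", "-C", repo, "merge-base", "--is-ancestor", c_id, c_id_2],
        capture_output=True,
    )
    # The command exits with 0 when c_id is an ancestor of (or equal to) c_id_2.
    return proc.returncode == 0

def log_precedes_or_equal(c_id: str, c_id_2: str, author: str, repo: str = ".") -> bool:
    """M <=_log M': mirrors `git log <c-id-2> --author=<author> --format=%H | grep -q <c-id>`."""
    history = subprocess.run(
        ["git", "-C", repo, "log", c_id_2, f"--author={author}", "--format=%H"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    return c_id in history
```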
Implementers should be able to focus on other important software engineering requirements (performance, maintainability, documentation, portability, etc.) which we glossed over. We will revise this document if actual practice teaches us we overlooked some important aspects.

\begin{table} \begin{tabular}{l l} **Relationships** & **Git+Bash/Unix Equivalent** \\ \hline \hline \(M\leq_{\mathcal{M}}M^{\prime}\) & git merge-base --is-ancestor \textless{}c-id> \textless{}c-id-2> \\ \hline \multirow{2}{*}{\(M\rightsquigarrow M^{\prime}\)} & git merge-base --is-ancestor \textless{}c-id> \textless{}c-id-2> \&\& \\ & [ "\textless{}c-id>" != "\textless{}c-id-2>" ] \\ \hline \multirow{2}{*}{\(M\leq_{log}M^{\prime}\)} & git log \textless{}c-id-2> --author=\textless{}author> --format=\%H | \\ & grep -q \textless{}c-id> \\ \hline \multirow{2}{*}{\(M\stackrel{{log}}{{\rightsquigarrow}}M^{\prime}\)} & git log \textless{}c-id-2>^1 --author=\textless{}author> --format=\%H | \\ & grep -q \textless{}c-id> \\ \hline \end{tabular} \end{table} Table 4: Git relationships and queries. Assume <c-id> = MsgId(\(M\)), <c-id-2> = MsgId(\(M^{\prime}\)), and \(M_{author}=M^{\prime}_{author}=\) <author>. Bash commands representing binary relations return 0 if true, \(>0\) otherwise. See documentation for various formats with which to return the commit sets. Tested with git version 2.24.3, commands and syntax might change for other git versions.

## 5 Proofs

In this section, we provide detailed proofs for convergence, safety, and liveness of our design. They are written with significant detail because this approach helped us find previously ignored corner cases, therefore serving a similar but more systematic purpose as unit testing.

### Definitions

We first make the semantics of algorithms more precise with the following definitions:

1. \(\mathds{A}\) is the set of all possible authors.
2. \(\mathds{M}\) is the set of all possible valid messages.
3. \(\mathds{L}\) is the set of all possible valid log states.
4. \(\mathds{L}(\textit{author})\), with \(\textit{author}\in\mathds{A}\), is the set of all possible valid log states from _author_, i.e. \(\mathds{L}(\textit{author})\subseteq\mathds{L}\) such that \(L,L^{\prime}\in\mathds{L}(\textit{author})\Rightarrow L_{\textit{author}}=L^{\prime}_{\textit{author}}=\textit{author}\).
5. \(\mathds{F}\) is the set of all possible valid frontiers.

### Convergence

To establish the convergence of both logs and frontiers, we need to establish that their states form a _monotonic semi-lattice_ [28], which involves three propositions: First, that all possible states can be partially ordered by \(\leq\). This is a requisite for the next two properties. Second, that merging two states computes their _Least Upper Bound_ (LUB) in that partial order. This ensures that the merge is _commutative_, _associative_, and _idempotent_, providing _safety_, _i.e._, that replicas will agree on the final state regardless of ordering, delays, or duplication of merge operations. Third, that all operations modify the state \(S\) of a replica such that the new state \(S^{\prime}\) is either equal or larger than the previous state \(S\) in the partial order (_monotonicity_). This ensures all state changes will be eventually reflected in the new state of all replicas, either because the same update(s) will have concurrently been applied or because the new state will be the result of a merge.
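These three propositions can be illustrated on a toy data type that is not part of our design: a grow-only set replica is partially ordered by set inclusion, its merge is set union (the least upper bound under inclusion), and its only update, adding an element, is monotonic. A minimal sketch follows.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class GSet:
    """Toy grow-only set, used only to illustrate the three propositions."""
    items: frozenset = field(default_factory=frozenset)

    def leq(self, other: "GSet") -> bool:
        # Proposition 1: states are partially ordered (here, by inclusion).
        return self.items <= other.items

    def merge(self, other: "GSet") -> "GSet":
        # Proposition 2: merge computes the least upper bound (here, union),
        # which makes it commutative, associative, and idempotent.
        return GSet(self.items | other.items)

    def add(self, x) -> "GSet":
        # Proposition 3: every update produces a state that is equal to or
        # larger than the previous one in the partial order.
        return GSet(self.items | frozenset([x]))
```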
Assuming an underlying communication medium that ensures new states to be eventually delivered to other replicas, the three propositions combined ensure both _liveness_ and _safety_: all state changes are going to be replicated on all replicas _and_ all replicas will agree on the final state automatically, i.e. _strong eventual consistency_[28]. #### 5.2.1 Messages Prove: **Ordering**: \(\leq_{\textit{log}}\) (Eq. 6) partially orders M. 1. **Reflexivity**: Assume: \(M\in\mathds{M}\) Prove: \(M\leq_{\textit{log}}M\) is always true. By the definition of \(\leq_{\textit{log}}\) which includes \(M=M^{\prime}\). 1. **Transitivity**: Assume: \(1.\ M,M^{\prime},M^{\prime\prime}\in\mathds{M}\) 2. \(M\leq_{\textit{log}}M^{\prime}\) 3. \(M^{\prime}\leq_{\textit{log}}M^{\prime\prime}\) Prove: \(M\leq_{\textit{log}}M^{\prime\prime}\) Case: \(M=M^{\prime}=M^{\prime\prime}\) \(M=M^{\prime\prime}\), which makes the first condition on \(M\leq_{\textit{log}}M^{\prime\prime}\) true. Case: \(M=M^{\prime}\wedge M^{\prime}\leq_{\textit{log}}M^{\prime\prime}\) By substituting \(M^{\prime}\) with \(M\), therefore \(M\rightsquigarrow M^{\prime\prime}\) which makes the second condition on \(M\leq_{log}M^{\prime\prime}\) true. Case:\(M\stackrel{{ log}}{{\rightsquigarrow}}M^{\prime}\wedge M^{\prime}=M^{\prime\prime}\) By substituting \(M^{\prime}\) with \(M^{\prime\prime}\), therefore \(M\rightsquigarrow M^{\prime\prime}\) which makes the second condition on \(M\leq_{log}M^{\prime\prime}\) true. Case:\(M\stackrel{{ log}}{{\rightsquigarrow}}M^{\prime}\wedge M^{\prime} \stackrel{{ log}}{{\rightsquigarrow}}M^{\prime\prime}\) This is the third (transitive) condition on \(M\stackrel{{ log}}{{\rightsquigarrow}}M^{\prime\prime}\), therefore \(M\rightsquigarrow M^{\prime\prime}\) which makes the second condition on \(M\leq_{log}M^{\prime\prime}\) true. \(\langle 2\rangle\)1. Q.E.D. True for all possible cases, therefore always true. \(\langle 1\rangle\)3. **Antisymmetry**: \begin{tabular}{l} Assume: \\ 1. \(M,M^{\prime}\in\mathds{M}\) \\ 2. \(M\leq_{log}M^{\prime}\) \\ 3. \(M^{\prime}\leq_{log}M\) \\ Prove: \(M=M^{\prime}\) \\ \end{tabular} There are two alternative conditions for \(\leq_{log}\) to be true: either both are equal or one \(\stackrel{{ log}}{{\rightsquigarrow}}\) the other. If they are not equal, then there would be a cycle between \(M\) and \(M^{\prime}\) which would violate the acyclic validity condition that would contradict the fact that \(M,M^{\prime}\in\mathds{M}\) implies they are valid. Therefore the only possibility left is that they are equal. \(\langle 1\rangle\)4. Q.E.D. The three properties are the definition of a partial order. #### 5.2.2 Log Prove: For any given _author_, the Log design listed in Alg. 1, 2, and 3 is a state-based (convergent) CRDT. Define:\(1\). \(L,L^{\prime}\in\mathds{L}(\textit{author})\) \(2\). \(M\in\mathds{M}\) \(\langle 1\rangle\)1. **Ordering**: \(\leq_{\mathds{L}}\) (Alg. 2) partially orders \(\mathds{L}(\textit{author})\). Proof sketch: By cases on the different phases of the logs. \(\langle 2\rangle\)1. Case:\(L_{forks}=L^{\prime}_{forks}=\emptyset\) \(L\) and \(L^{\prime}\) are only distinguished by their last messages, respectively \(L_{last}\) and \(L^{\prime}_{last}\), that are partially ordered by \(\leq_{log}\) (Section 5.2.1). \(\langle 2\rangle\)2. Case:\(L_{forks}=\emptyset\) and \(L^{\prime}_{forks}\neq\emptyset\) \(L^{\prime}\) is a (valid) forked log, which corresponds to the second possible phase of a log, while \(L\) is still in the first phase. 
\(L^{\prime}\) is always greater than \(L\), which forms a total order between any possible states of \(L\) and \(L^{\prime}\). \(\langle 2\rangle\)3. Case: \(L_{forks}\neq\emptyset\) and \(L^{\prime}_{forks}=\emptyset\) Opposite of the previous case, which also forms a total order between any possible states of \(L\) and \(L^{\prime}\). \(\langle 2\rangle\)4. Case: \(L_{forks}\neq\emptyset\) and \(L^{\prime}_{forks}\neq\emptyset\) Similar to the first case with reversed \(L_{last}\) and \(L^{\prime}_{last}\), i.e. a forked log is considered larger or equal if there exists a proof that it forked at the same or an earlier point on the same message sequence. \(\langle 2\rangle\)5. Q.E.D. In all cases, all possible valid states of \(L\) and \(L^{\prime}\) are partially or totally ordered, therefore \(\leq_{\mathds{L}}\) defines a partial order over \(\mathds{L}\). \(\langle 1\rangle\)2. **Least-Upper Bound**: Prove: \(L^{\prime\prime}=L\sqcup_{\mathds{L}}L^{\prime}\) is the LUB of \(L\) and \(L^{\prime}\) in \(\mathds{L}(\mathit{author})\) partially ordered by \(\leq_{\mathds{L}}\). \(\langle 2\rangle\)1. Case: \(L_{\mathit{forks}}=L^{\prime}_{\mathit{forks}}=\emptyset\) \(\langle 3\rangle\)1. Case: \(L_{\mathit{last}}\leq_{\mathit{log}}L^{\prime}_{\mathit{last}}\lor L^{\prime}_{\mathit{last}}\leq_{\mathit{log}}L_{\mathit{last}}\) (equivalent to \(L_{\mathit{last}}\not\parallel_{\mathit{log}}L^{\prime}_{\mathit{last}}\)) If \(L_{\mathit{last}}\leq_{\mathit{log}}L^{\prime}_{\mathit{last}}\), then \(L^{\prime\prime}=L^{\prime}\), so \(L^{\prime\prime}\geq_{\mathds{L}}L\) and \(L^{\prime\prime}\geq_{\mathds{L}}L^{\prime}\). Any upper bound of \(L\) and \(L^{\prime}\) must be greater than or equal to \(L^{\prime}=L^{\prime\prime}\), so an upper bound that is not greater than \(L^{\prime\prime}\) would also be equal to \(L^{\prime\prime}\). Therefore \(L^{\prime\prime}\) is a least upper bound. \(\langle 3\rangle\)2. Case: \(L^{\prime}_{last}\leq_{log}L_{last}\) Similar to the previous case with \(L\) and \(L^{\prime}\) exchanged. \(\langle 3\rangle\)3. Case: \(L_{last}\,\|_{log}\,L^{\prime}_{last}\) 1. \(L^{\prime\prime}_{last}=\texttt{LogPrefix}(L_{last},L^{\prime}_{last})\Rightarrow L^{\prime\prime}_{last}\stackrel{{log}}{{\rightsquigarrow}}L_{last}\wedge L^{\prime\prime}_{last}\stackrel{{log}}{{\rightsquigarrow}}L^{\prime}_{last}\) 2.
\(L^{\prime\prime}_{forks}\neq\emptyset\) 3. \(L^{\prime\prime}\) is valid (Section 5.3.1) Therefore \(L^{\prime\prime}>_{\mathds{L}}L\) and \(L^{\prime\prime}>_{\mathds{L}}L^{\prime}\) (greater than both). \(L^{\prime\prime}_{last}\) is the greatest lower bound of \(L_{last}\) and \(L^{\prime}_{last}\) because it is the result of \(\texttt{LogPrefix}\). Therefore \(L^{\prime\prime}\) is a least upper bound. \(\langle 2\rangle\)5. Q.E.D. All cases are covered. \(\langle 1\rangle\)3. **Monotonicity**: All operations that may generate a new state, when applied on log state \(L\) and any possible arguments, result in a new log state either equal or larger than \(L\) in \(\mathds{L}(\textit{author})\) partially ordered by \(\leq_{\mathds{L}}\). \(\langle 2\rangle\)1. Case: \(L^{\prime}=\texttt{Append}(L,M)\geq_{\mathds{L}}L\) \(\langle 3\rangle\)1. Case: \(L_{forks}=\emptyset\) \(\langle 4\rangle\)1. Case: \(L_{last}\stackrel{{log}}{{\rightsquigarrow}}M\) 1. \(L^{\prime}_{author}=L_{author}\) 2. \(L^{\prime}_{forks}=\emptyset\) 3. \(L^{\prime}_{last}=M\geq_{log}L_{last}\) Therefore, \(L^{\prime}\geq_{\mathds{L}}L\). \(\langle 4\rangle\)2. Case: \(L_{last}\geq_{log}M\) \(L^{\prime}=L\geq_{\mathds{L}}L\) \(\langle 4\rangle\)3. Case: \(L_{last}\,\|_{log}\,M\) Fall through to the fork handling (Alg. 1, l. 19). 1. \(L^{\prime}_{author}=L_{author}\) 2. \(L^{\prime}_{forks}\neq\emptyset\) 3. \(L^{\prime}_{last}=\texttt{LogPrefix}(L_{last},M)\stackrel{{log}}{{\rightsquigarrow}}L_{last}\Rightarrow L^{\prime}_{last}\leq_{log}L_{last}\) Therefore, \(L^{\prime}\geq_{\mathds{L}}L\). \(\langle 3\rangle\)2. Case: \(L_{forks}\neq\emptyset\) \(\langle 4\rangle\)1. Case: \(L_{last}\not\parallel_{log}M\) \(M\) does not provide a fork earlier on the log. We keep the previous state, therefore \(L^{\prime}=L\geq_{\mathds{L}}L\). \(\langle 4\rangle\)2. Case: \(L_{last}\,\|_{log}\,M\) Fall through to the fork handling (Alg. 1, l. 19). 1. \(L^{\prime}_{author}=L_{author}\) 2. \(L^{\prime}_{forks}=L_{forks}\neq\emptyset\) 3. \(L^{\prime}_{last}=\texttt{LogPrefix}(L_{last},M)\stackrel{{log}}{{\rightsquigarrow}}L_{last}\Rightarrow L^{\prime}_{last}\leq_{log}L_{last}\) Therefore, \(L^{\prime}\geq_{\mathds{L}}L\). \(\langle 3\rangle\)3. Q.E.D. Since all cases lead to \(L^{\prime}=\texttt{Append}(L,M)\geq_{\mathds{L}}L\), Append is monotonic. \(\langle 2\rangle\)2. Case: \(L^{\prime\prime}=L\sqcup_{\mathds{L}}L^{\prime}\Rightarrow L^{\prime\prime}\geq_{\mathds{L}}L\wedge L^{\prime\prime}\geq_{\mathds{L}}L^{\prime}\) Since \(L^{\prime\prime}=L\sqcup_{\mathds{L}}L^{\prime}\) computes the least upper bound of \(L\) and \(L^{\prime}\) in \(\mathds{L}(\textit{author})\) partially ordered by \(\leq_{\mathds{L}}\), it is also monotonic. \(\langle 1\rangle\)4. Q.E.D. Because the Log satisfies the three propositions, it is a state-based CRDT. #### 5.2.3 Frontier Prove: The Frontier design listed in Alg. 4 is a state-based (convergent) CRDT. Define: 1. \(F,F^{\prime}\in\mathds{F}\) 2. \(L\in\mathds{L}\) \(\langle 1\rangle\)1. **Ordering**: \(\leq_{\mathds{F}}\) (Alg. 4) partially orders \(\mathds{F}\).
The definition of \(\leq_{\mathds{F}}\) is analogous to that for grow-only dictionaries of counters [17] and ledgers as grow-only dictionary of accounts [16]: a frontier \(F^{\prime}\) is equal or larger than a frontier \(F\) if and only if it has logs for a superset of authors, which is a partial order on the sets of authors, and each log for authors present in both \(F\) and \(F^{\prime}\) is larger or equal in \(F^{\prime}\) compared to \(F\), which is also a partial order, as proven in Section 5.2.2. Because the conjunction of partial orders is also a partial order [17], \(\leq_{\mathds{F}}\) is a partial order over \(\mathds{F}\). \(\langle 1\rangle\)2. **Least-Upper Bound**: Prove:\(F^{\prime\prime}=F\sqcup_{\mathds{F}}F^{\prime}\) is the LUB of \(F\) and \(F^{\prime}\) in \(\mathds{F}\) partially ordered by \(\leq_{\mathds{F}}\). The definition of \(\sqcup_{\mathds{F}}\) is the composition of \(\cup\) of grow-only sets of logs, which computes the least-upper bound on authors present in both \(F\) and \(F^{\prime}\), and \(\sqcup_{\mathds{L}}\), which computes the least-upper bound of two log states for the same author when present in both \(F\) and \(F^{\prime}\). Since the composition of least upper bounds is also a least upper bound [16], \(\sqcup_{\mathds{F}}\) computes the least upper bound of \(F\) and \(F^{\prime}\). \(\langle 1\rangle\)3. **Monotonicity**: All operations that may generate a new state, when applied on frontier state \(F\) and any possible arguments, result in a new frontier state either equal or larger than \(F\) in \(\mathds{F}\) partially ordered by \(\leq_{\mathds{F}}\). \(\langle 2\rangle\)1. Case: \(F^{\prime}=\texttt{Update}(F,L)\geq_{\mathds{F}}F\) \(\langle 3\rangle\)1. Case: \(\exists L^{\prime}\in F:L^{\prime}_{author}=L_{author}\) \(\exists L^{\prime\prime}\in F:L^{\prime\prime}=L\sqcup_{\mathds{L}}L^{\prime}\) and therefore \(L^{\prime\prime}\geq_{\mathds{L}}L^{\prime}\). Every other logs in \(F\) and \(F^{\prime}\) are equal, therefore \(F^{\prime}\) has a superset of the authors of \(F\) and every log is equal or greater. Therefore \(F^{\prime}\geq_{\mathds{F}}F\). \(\langle 3\rangle\)2. Case: otherwise \(F^{\prime}\) has one more log than \(F\): it has a strict superset of authors, and every log in \(F\) is equal to the log with the same author in \(F^{\prime}\) therefore \(F^{\prime}>_{\mathds{F}}F\). \(\langle 2\rangle\)2. Case: \(F^{\prime\prime}=F\sqcup_{\mathds{F}}F^{\prime}\Rightarrow F^{\prime\prime} \geq_{\mathds{F}}\wedge F^{\prime\prime}\geq_{\mathds{F}}F^{\prime}\) This is a subset of the properties of an operation that computes a least upper bound, which was proven in step \(\langle 1\rangle\)2. \(\langle 2\rangle\)3. Q.E.D. All operations that modify the state of \(F\) are monotonic and other operations on \(F\) do not modify the state. \(\langle 1\rangle\)4. Q.E.D. The _partial ordering_, the _least upper bound_ and the _monotonicity_ properties are all satisfied, therefore a frontier is a state-based CRDT. ### Safety #### 5.3.1 Every log operation on a valid log results in a valid log Assume: 1. \(L,L^{\prime}\in\mathds{L}\) (therefore \(L\) and \(L^{\prime}\) are valid) 2. \(M\in\mathds{M}\) (therefore \(M\) is valid) 3. \(author\in\mathds{A}\) Prove:All log state-changing operations involving possibly \(L,L^{\prime}\) and \(M\) as arguments result in a new log state \(L^{\prime\prime}\) that is valid. \(\langle 1\rangle\)1. 
Case: \(L^{\prime\prime}=\texttt{Initialize}_{\texttt{L}}(\textit{author})\) Log is in growing phase, and meets the _no forks_, _consistent author_, and _valid messages_ properties. \(\langle 2\rangle\)1. _no forks_: \(L_{\textit{forks}}=\emptyset\) By definition. \(\langle 2\rangle\)2. _consistent author_: \(L_{\textit{last}}=\bot\) \(\langle 2\rangle\)3. _valid messages_: \(L_{\textit{last}}=\bot\) First message not yet set, consistent and valid according to definition. \(\langle 2\rangle\)4. Q.E.D. All three properties of growing logs are met, \(L^{\prime\prime}\) is therefore valid. \(\langle 1\rangle\)2. Case: \(L^{\prime\prime}=\texttt{Append}(L,M)\) \(\langle 2\rangle\)1. Case: \(L_{\textit{forks}}=\emptyset\) and \(M\not\parallel_{\textit{log}}L_{\textit{last}}\) \(L\) is in growing phase and stays in growing phase, \(L^{\prime\prime}\) must meet the _no forks_, _consistent author_, and _valid messages_ properties. \(\langle 3\rangle\)1. _consistent author_ \(\langle 4\rangle\)1. Case: \(L_{\textit{last}}\stackrel{{log}}{{\rightsquigarrow}}M\Rightarrow L^{\prime\prime}_{\textit{last}}=M\) By definition of \(\stackrel{{log}}{{\rightsquigarrow}}\) and the _single writer_ property since \(M\) is valid. \(\langle 4\rangle\)2. Case: \(M\leq_{\textit{log}}L_{\textit{last}}\Rightarrow L^{\prime\prime}=L\) Since \(L\) is returned without being modified and is valid by assumption. \(\langle 4\rangle\)3. Q.E.D. This covers all possibilities such that \(M\not\parallel_{\textit{log}}L_{\textit{last}}\). \(\langle 3\rangle\)2. _valid messages_ Since \(M\not\parallel_{\textit{log}}L_{\textit{last}}\), either \(L^{\prime\prime}_{\textit{last}}=M\) or \(L^{\prime\prime}_{\textit{last}}=L_{\textit{last}}\). Since \(L\) and \(M\) are required to be valid in the pre-conditions of Append then \(L^{\prime\prime}_{\textit{last}}\) is necessarily valid. \(\langle 3\rangle\)3. Q.E.D. _no forks_ is met by assumption and all cases meet the _consistent author_ and _valid messages_ properties. \(\langle 2\rangle\)2. Case: \(L_{\textit{forks}}=\emptyset\) and \(M\parallel_{\textit{log}}L_{\textit{last}}\) \(L\) is in growing phase but we potentially have a new proof of fork. If \(M_{\textit{author}}\neq L_{\textit{author}}\), then \(M\) is not a proof of fork and \(L^{\prime\prime}=L\), which is still in the growing phase and valid so \(L^{\prime\prime}\) is also valid. Otherwise, \(L^{\prime\prime}\) is in shrinking phase and is the result of the fork handling (Alg. 1, l. 19): it must meet the FL1-FL7 properties: \(\langle 3\rangle\)1. FL1: _non-empty forks_ \(L^{\prime\prime}_{\textit{forks}}\) is not empty because \(M\parallel_{\textit{log}}L_{\textit{last}}\) implies that there exist two \(M^{\prime},M^{\prime\prime}\) such that \(M^{\prime}\leq_{\textit{log}}M\) and \(M^{\prime\prime}\leq_{\textit{log}}L_{\textit{last}}\) and \(M^{\prime}_{\textit{prev}}=M^{\prime\prime}_{\textit{prev}}\) \(\langle 3\rangle\)2. FL2: _valid last message_ \(L^{\prime\prime}_{\textit{last}}=\texttt{LogPrefix}(L_{\textit{last}},M)\) is a prefix and any message in that prefix is valid by definition because \(L\) and \(M\) are valid, which implies their predecessors are valid as well. \(\langle 3\rangle\)3. FL3: _consistent previous author_ \(L^{\prime\prime}_{\textit{last}}=\texttt{LogPrefix}(L_{\textit{last}},M)\) is a prefix and any message in that prefix has a consistent author by definition because \(L\) and \(M\) are valid, which implies their predecessors have consistent authors. \(\langle 3\rangle\)4.
FL4: _valid forks_ ForkProof only returns messages that are either \(L_{\textit{last}}\), \(M\), or predecessors of both. Since \(L\) and \(M\) are valid as required in the pre-condition of Append, \(L_{\textit{last}}\), \(M\) and any of their predecessors are valid. Therefore, messages in \(L^{\prime\prime}_{\textit{forks}}=\texttt{ForkProof}(L_{\textit{last}},M)\) are necessarily valid. \(\langle 3\rangle\)5. FL5: _consistent author_ \(M\) is a proof of fork, then \(L^{\prime\prime}_{\textit{last}}=\texttt{LogPrefix}(L_{\textit{last}},M)\) and \(L^{\prime\prime}_{\textit{forks}}=\texttt{ForkProof}(L_{\textit{last}},M)\) and messages in both must all have consistent authors. \(\mathtt{LogPrefix}(L_{\mathit{last}},M)\) either returns \(\bot\) in which case the author does not matter, or \(M^{\prime}\) which is either equal to \(M\) or a valid predecessor (\(M^{\prime}\stackrel{{\mathit{log}}}{{\leadsto}}M\)) because \(M\) is valid and has consistent authors, therefore \((L^{\prime\prime}_{\mathit{last}})_{\mathit{author}}=L_{\mathit{author}}\). \(L^{\prime\prime}_{\mathit{forks}}=\mathtt{ForkProof}(L_{\mathit{last}},M)\) has consistent author because all fork proofs are selected only among predecessors of \(L_{\mathit{last}}\) and \(M\), which have consistent author because \(L_{\mathit{last}}\) and \(M\) are valid. \(\langle 3\rangle\)6. FL6: _valid proof_ Because \(M\parallel_{\mathit{log}}L_{\mathit{last}}\), then either \(M_{\mathit{prev}}=(L_{\mathit{last}})_{\mathit{prev}}\) or there exist some predecessor(s) of both messages for which it is true. Therefore \(\mathtt{ForkProof}\) will return at least two different messages with the same predecessor. \(\langle 3\rangle\)7. FL7: _consistent proof_ By definitions of \(\mathtt{ForkProof}(M,M^{\prime})\). \(\langle 2\rangle\)3. Case: \(L_{\mathit{forks}}\neq\emptyset\)\(L\) is in shrinking phase, any later updates resulting in \(L^{\prime\prime}\) must stay there and meet the _valid forks_, _consistent author_, _valid proof_, and _consistent proof_ properties to stay valid. \(\langle 3\rangle\)1. Case: \(M\not\parallel_{\mathit{log}}L_{\mathit{last}}\) Regardless of whether \(M\geq_{\mathit{log}}L_{\mathit{last}}\) or \(M\stackrel{{\mathit{log}}}{{\leadsto}}L_{\mathit{last}}\), \(M\) does not provide a new fork proof, therefore \(L^{\prime\prime}=L\) and is valid because \(L\) is valid. \(\langle 3\rangle\)2. Case: \(M\parallel_{\mathit{log}}L_{\mathit{last}}\Rightarrow\mathtt{LogPrefix}(M,L_{ \mathit{last}})\stackrel{{\mathit{log}}}{{\leadsto}}L_{\mathit{ last}}\) Because \(M\) and \(L_{\mathit{last}}\) are concurrent, valid, and have consistent authors, there must exist a smaller log prefix than \(L_{\mathit{last}}\) with a fork proof. The proofs of validity properties are the same as for \(\langle 2\rangle\)2. \(\langle 1\rangle\)3. Case: \(L^{\prime\prime}=L\sqcup_{\mathbb{L}}L^{\prime}\)\(\langle 2\rangle\)1. Case: \(L^{\prime}_{\mathit{last}}\not\parallel_{\mathit{log}}L_{\mathit{last}}\)\(\langle 3\rangle\)1. Case: \(L_{\mathit{forks}}=\emptyset\wedge L^{\prime}_{\mathit{forks}}=\emptyset\)\(\langle 4\rangle\)1. Case: \(L_{\mathit{last}}\leq_{\mathit{log}}L^{\prime}_{\mathit{last}}\)\(L^{\prime\prime}=L^{\prime}\) and \(L^{\prime}\) is valid, therefore \(L^{\prime\prime}\) is also valid. \(\langle 4\rangle\)2. Case: \(L^{\prime}_{\mathit{last}}\stackrel{{\mathit{log}}}{{\leadsto}}L_{ \mathit{last}}\)\(L^{\prime\prime}=L\) and \(L\) is valid, therefore \(L^{\prime\prime}\) is also valid. \(\langle 3\rangle\)2. 
Case: \(L_{\mathit{forks}}\neq\emptyset\wedge L^{\prime}_{\mathit{forks}}=\emptyset\)\(L^{\prime\prime}=L\) and \(L\) is valid, therefore \(L^{\prime\prime}\) is also valid. \(\langle 3\rangle\)3. Case: \(L_{\mathit{forks}}=\emptyset\wedge L^{\prime}_{\mathit{forks}}\neq\emptyset\)\(L^{\prime\prime}=L^{\prime}\) and \(L^{\prime}\) is valid, therefore \(L^{\prime\prime}\) is also valid. \(\langle 3\rangle\)4. Case: \(L_{\mathit{forks}}\neq\emptyset\wedge L^{\prime}_{\mathit{forks}}\neq\emptyset\) Either \(L_{\mathit{last}}\leq_{\mathit{log}}L^{\prime}_{\mathit{last}}\) or \(L^{\prime}_{\mathit{last}}\stackrel{{\mathit{log}}}{{\leadsto}}L_{ \mathit{last}}\), since \(L_{\mathit{last}}\) and \(L^{\prime}_{\mathit{last}}\) are not concurrent (because \(\langle 2\rangle\)1). Then \(L^{\prime\prime}_{\mathit{last}}\) will be the smaller of the two and \(L^{\prime\prime}_{\mathit{forks}}\) includes the corresponding messages from either \(L_{\mathit{forks}}\) and/or \(L^{\prime}_{\mathit{forks}}\). There are cases in which \(L^{\prime\prime}_{\mathit{forks}}\) may end up with more messages than \(L_{\mathit{forks}}\) or \(L^{\prime}_{\mathit{forks}}\) but this does not influence the validity. \(\langle 2\rangle\)2. Case: \(L^{\prime}_{\mathit{last}}\parallel_{\mathit{log}}L_{\mathit{last}}\) This implies that: 1. \(L^{\prime\prime}_{\mathit{last}}=\mathtt{LogPrefix}(L_{\mathit{last}},L^{ \prime}_{\mathit{last}})\wedge L^{\prime\prime}_{\mathit{last}}\stackrel{{ \mathit{log}}}{{\leadsto}}L_{\mathit{last}}\wedge L^{\prime\prime}_{ \mathit{last}}\stackrel{{\mathit{log}}}{{\leadsto}}L^{\prime}_{ \mathit{last}}\) 2. \(L^{\prime\prime}_{\mathit{forks}}=\mathtt{ForkProof}(L_{\mathit{last}},L^{ \prime}_{\mathit{last}})\) 3. FL1: _non-empty forks_ \(L^{\prime\prime}_{\mathit{forks}}\) is not empty because \(L_{\mathit{last}}\parallel_{\mathit{log}}L^{\prime}_{\mathit{last}}\) implies that there exists two \(M,M^{\prime}\) such that \(M\leq_{\mathit{log}}L_{\mathit{last}}\) and \(M^{\prime}\leq_{\mathit{log}}L^{\prime}_{\mathit{last}}\) and \(M_{\mathit{prev}}=M^{\prime}_{\mathit{prev}}\). \(\langle 3\rangle\)2. FL2: _valid last message_ \(L^{\prime\prime}_{last}=\)LogPrefix\((L_{last},L^{\prime}_{last})\) is a prefix and any message in that prefix is valid by definition because \(L\) and \(L^{\prime}\) are valid, which implies their predecessors are valid as well. \(\langle 3\rangle\)3. FL3: _consistent previous author_ \(L^{\prime\prime}_{last}=\)LogPrefix\((L_{last},L^{\prime}_{last})\) is a prefix and any message in that prefix is valid by definition because \(L\) and \(L^{\prime}\) are valid, which implies their predecessors have consistent authors as well. \(\langle 3\rangle\)4. FL4: _valid forks_ ForkProof only returns messages that are either \(L_{last}\), \(L^{\prime}_{last}\), or predecessors of both. Since \(L_{last}\) and \(L^{\prime}_{last}\) are valid as required in the pre-conditions, any of their predecessors are also valid. Therefore, messages in \(L^{\prime\prime}_{forks}=\)ForkProof\((L_{last},L^{\prime}_{last})\) are necessarily valid. \(\langle 3\rangle\)5. FL5: _consistent fork author_ Since \(L\) and \(L^{\prime}\) are valid, and \(L_{author}=L^{\prime}_{author}\), therefore \(L_{last}\) and \(L^{\prime}_{last}\), and their predecessors have consistent authors. Since messages in \(L^{\prime\prime}_{forks}\) are selected from \(L_{last}\) and \(L^{\prime}_{last}\) or their predecessors, they will therefore also have a consistent author. \(\langle 3\rangle\)6. 
FL6: _valid proof_ Since \(L_{last}\) and \(L^{\prime}_{last}\) are concurrent and valid, there must exist two different messages, one on each branch, with a shared predecessor and if so, this will be returned by ForkProof. Other existing proofs in \(L_{forks}\) and \(L^{\prime}_{forks}\), if any, will be ignored because they are on more recent messages. \(\langle 3\rangle\)7. FL7: _consistent proof_ By definitions of ForkProof\((M,M^{\prime})\). #### 5.3.2 Every frontier operation on a valid frontier results in a valid frontier \(\langle 1\rangle\)1. Case:InitializeF Trivially true, because it returns an empty set with no log in it. \(\langle 1\rangle\)2. Case:\(F^{\prime\prime}=\)Update\((F,L)\) \(\langle 2\rangle\)1. F1: _valid logs_ \(\langle 3\rangle\)1. Case:\(\nexists L^{\prime}\in F:L^{\prime}_{author}=L_{author}\) Because \(F\) is valid, it only has valid logs and \(L\) is valid (pre-conditions on Update). \(F^{\prime\prime}=F\cup\{L\}\) and therefore only contains valid logs. \(\langle 3\rangle\)2. Case:\(\exists L^{\prime}\in F:L^{\prime}_{author}=L_{author}\) Because \(F\) is valid, it only has valid logs, \(L\) is valid (pre-conditions on Update), and \(F\) contains only a single log \(L^{\prime}\) such that \(L^{\prime}_{author}=L_{author}\). \(F^{\prime\prime}=F\backslash\{L^{\prime}\}\cup\{L\}\): removing \(L^{\prime}\) from \(F\) does not affect validity, and adding a valid \(L\) to \(F\) results in \(F^{\prime\prime}\) having only valid logs. \(\langle 2\rangle\)2. F2: _one author per frontier_ When adding \(L\) to \(F\), if there is already an \(L^{\prime}\) such that \(L^{\prime}_{author}=L_{author}\), \(L^{\prime}\) is first removed then \(L\) is added, leaving only a single log with author \(L_{author}\). If there are no other \(L^{\prime}\) such that \(L^{\prime}_{author}=L_{author}\), then adding \(L\) results in having only one log with \(L_{author}\). Finally there can't be more than one log with the same author within \(F\) because \(F\) is valid. \(\langle 1\rangle\)3. Case:\(F^{\prime\prime}=\sqcup_{\mathrm{F}}(F,F^{\prime})\) \(\langle 2\rangle\)1. F1: _valid logs_ Because \(F\) and \(F^{\prime}\) are valid they contain only valid logs. \(F^{\prime\prime}\) is the union of all logs with authors tha are either only in \(F\) or \(F^{\prime}\), and the merging of logs (\(\sqcup_{\mathrm{L}}\)) with author that are in both \(F\) and \(F^{\prime}\). Since the merge results in valid logs (Section 5.3.1) and all other logs are valid and unchanged, then \(F^{\prime\prime}\) only contains valid logs. \(\langle 2\rangle\)2. F2: _one author per frontier_ Since \(F\) and \(F^{\prime}\) are valid they have at most one log per author in each. When merging, any two logs respectively in \(F\) and \(F^{\prime}\) with the same author will result in a single log in \(F^{\prime\prime}\). Therefore, \(F^{\prime\prime}\) has at most one log per author. 1. Q.E.D. Since all properties of valid frontiers are maintained for all cases of all frontier operations, then every frontier operation on a valid frontier results in a valid frontier. ### Liveness 4.1 All correct replicas will eventually have a shrinking log replica for every log that presented different branches of a fork to correct replicas. 1. Every correct frontier replica is transitively connected to every other correct replica. 2. 
Every correct frontier replica updates its own state by merging with the latest state of any other correct frontier replica, infinitely often but with potentially arbitrary long waiting periods between merges. 2. \(n\) is the number of correct replicas. 2. \(\mathcal{R}=\{F_{i}\in\mathds{F}:i\in[1,n]\}\) is the set of the (valid) frontier states of correct replicas. 3. At some point in time, there exists \(\mathds{Y}:\emptyset\subset\mathds{Y}\subseteq\mathds{A}\), for which some logs depend on forked messages from authors in \(\mathds{Y}\), _i.e._, \(\exists M,M^{\prime}\in\mathds{M}\) such that: 1. _(valid messages)_: \(M\) and \(M^{\prime}\) are valid; 2. _(valid fork)_: \(M\neq M^{\prime}\wedge M_{author}=M^{\prime}_{author}\wedge M_{prev}=M^{\prime} _{prev}\); 3. _(diverging logs)_: \(\exists L\in F_{i}\in\mathcal{R},L^{\prime}\in F_{j}\in\mathcal{R}:M\leq_{log} L_{last}\wedge M^{\prime}\leq_{log}L^{\prime}_{last}\). 1. Eventually, for all \(F_{i}\in\mathcal{R}\) and for each _author_\(\in\mathds{Y}\), \(\exists\) valid \(L\in F_{i}:L_{author}=author\wedge L_{forks}\neq\emptyset\). The fork represented by \(M\) and \(M^{\prime}\) is initially not replicated on any of the correct replica: some replica replicates one branch and some other another branch. Because of the assumptions all replicas will eventually replicate both branches because every frontier replica will eventually update their log replica for the same author with both branches. And because of eventual convergence, once no earlier fork is made and replicated with correct replicas, all correct replicas will agree on the state of fork logs. ## 6 Related Work In this section, we contrast our work to others and provide additional comments that were not covered in the Background (Section 2). ### Byzantine Fault-Tolerance The Byzantine Generals Problem [15] was introduced by _Lamport et al._ as an analogy to illustrate the problem of designing algorithms in which correct processes may agree on a value even in the presence of faulty processes that may provide inconsistent messages. The Byzantine qualifier was later adopted to describe algorithms that can tolerate arbitrary behaviour from faulty processes. As later explained by Cachin (Chapter 3, [20]), the reliable dissemination of one value, solved by the core algorithms used to solve the Byzantine Generals (formalized in [25]), is _Byzantine broadcast_ and solutions using signed messages enable two correct (transitively connected) processes to reach agreement in the presence of an arbitrarily large number of Byzantine processes, because they can disseminate messages that cannot be tampered. Kleppmann and Howard [13] provide a stronger causal reliable broadcast in the same tradition. _2P-BFT-Log_ totally orders all sender messages from correct processes, but since Byzantine processes may violate this ordering with forks, an application that uses our logs should be able to recover from forks. More generally, the Byzantine Generals Problem has emphasized the importance of _a priori safe_ decisions, in that once an attack is engaged it is not possible to undo the operation and a bad decision may lead to catastrophic outcomes to loyal generals. The generals are therefore allowed to exchange messages until they reach agreement _prior_ to attacking. 
Many problems don't require such a strong level of safety and instead inconsistencies may be repaired once malicious participants have successfully introduced them: _e.g._, the invalidation of overspent tokens [16] for correct participants could be compensated by a collective insurance scheme and the malicious participant blocked. For these problems, unrepudiable _detection_ of incorrect behaviour followed by _reparation_ towards honest participants is sufficient to tolerate incorrect behaviour and can possibly be cheaper than attempting complete prevention on all operations.

### Fork Consistency

Out of over 50 different consistency models surveyed from distributed systems and storage systems research [32], the closest work to ours is the _fork consistency_ model [23]. In this model, an adversarial server implementing a file system may present inconsistent operations to different clients but once it does, clients are partitioned into groups and members of different groups may never see the other groups' subsequent operations. However, this model and a later refinement [18] do not implement eventual consistency between replicas because they do not specify what clients should do once forks have been observed. In contrast, our _2P-BFT-Log_ design provides _strong eventual consistency_ [28] because all replicas will eventually agree on the greatest lower bound between all forks as the latest non-forked message of every log. Depot [19] describes at a high level a client-server Cloud Storage system that uses append-only logs with fork detection and recovery in a model they call Fork-Join-Causal-Consistency. After a fork, correct replicas will still accept updates from forked logs as long as the fork has been vouched by a correct replica. In contrast, _2P-BFT-Log_ is a replicated data structure that can be used in both peer-to-peer and client-server environments and we precisely described all core algorithms. Moreover, in _2P-BFT-Log_ forks are handled in an explicit second _shrinking_ phase that bounds the number of possible new forks to at most the remaining number of entries in the log.

### Timeline Entanglement and Causal Histories

Our append-only logs augmented with dependencies are similar to secure timelines [21] but provide a complete specification of the data structure behaviour after a fork is discovered, with the novel contribution of an eventually-consistent shrinking phase. Our append-only logs can also be seen as a reification of _causal histories_ [27] for correct authors.

## 7 Conclusion

We have presented _2P-BFT-Log_, a two-phase Byzantine Fault-Tolerant single-author append-only log design as a state-based Conflict Free Replicated Datatype. The key idea and novel contribution of our design is to add a _shrinking_ phase after the usual growing phase of an append-only log, that is triggered by the discovery of a fork, and provides eventual consistency between all correct replicas on the greatest lower bound to all forks known by correct replicas. Our design enables establishing a total order between the messages of correct authors while providing eventual detection of concurrent messages for malicious authors. This makes it possible, in the context of an accounting application for example, to ensure correct authors maintain non-negative balances while double-spending by malicious authors, which can only be done with concurrent messages, is eventually detected.
Moreover, our design provides eventual consistency on the set of messages that affect correct authors' logs, because these are explicitly listed as dependencies. Our design also limits the window of opportunity after an initial attack is carried and guarantees that forked logs become dead, i.e. they cannot be extended with any more messages through correct replicas, once the fork proofs have been replicated by all correct replicas. However, our design does not solve the issue of how correct authors may repair the damage done by malicious authors with concurrent messages. This will be the focus of future work, including but not limited to, how correct authors may recover from double-spent tokens. We believe many existing distributed algorithms that were originally designed to prevent bad outcomes, might be adapted instead to recover from malicious behaviour, which might well be significantly cheaper. ## 8 Acknowledgements We thank Prof. Christian F. Tschudin for fostering a research environment allowing detours and playfulness in the process, as well as providing financial support for this work and feedback on earlier versions of this work. We would also like to thank the Secure-Scuttlebutt community for its enthusiasm in general, being a great springboard for discussions that help identify important and practically relevant underlying technical problems, and its relevant and timely discussions on our papers. In particular, we would like to thank Aljoscha Meyer for feedback on a previous version of this paper. We also thank the Swiss tax payers for contributing their hard-earned funds to make a Swiss academia possible in general, and this paper in particular. We hope our contribution to knowledge will provide general value in different forms many times larger than what it cost you. We would also finally like to thank you, the reader, to have made it to the end of this paper. If the ideas in this paper have been of any use, we would like to hear from you. Academia is mostly geared to track citations by other papers and the prestige of conference and journals in which papers were published, so it can be hard to assess impact beyond these two metrics. If you take a few minutes to send us an email, we will have a better idea of how useful these ideas have been outside of academia.
2310.15079
Affective and Dynamic Beam Search for Story Generation
Storytelling's captivating potential makes it a fascinating research area, with implications for entertainment, education, therapy, and cognitive studies. In this paper, we propose Affective Story Generator (AffGen) for generating interesting narratives. AffGen introduces "intriguing twists" in narratives by employing two novel techniques-Dynamic Beam Sizing and Affective Reranking. Dynamic Beam Sizing encourages less predictable, more captivating word choices using a contextual multi-arm bandit model. Affective Reranking prioritizes sentence candidates based on affect intensity. Our empirical evaluations, both automatic and human, demonstrate AffGen's superior performance over existing baselines in generating affectively charged and interesting narratives. Our ablation study and analysis provide insights into the strengths and weaknesses of AffGen.
Tenghao Huang, Ehsan Qasemi, Bangzheng Li, He Wang, Faeze Brahman, Muhao Chen, Snigdha Chaturvedi
2023-10-23T16:37:14Z
http://arxiv.org/abs/2310.15079v1
# Affective and Dynamic Beam Search for Story Generation

###### Abstract

Storytelling's captivating potential makes it a fascinating research area, with implications for entertainment, education, therapy, and cognitive studies. In this paper, we propose **Aff**ective Story **G**enerator (AffGen) for generating interesting narratives. AffGen introduces 'intriguing twists' in narratives by employing two novel techniques: Dynamic Beam Sizing and Affective Reranking. Dynamic Beam Sizing encourages less predictable, more captivating word choices using a contextual multi-arm bandit model. Affective Reranking prioritizes sentence candidates based on affect intensity. Our empirical evaluations, both automatic and human, demonstrate AffGen's superior performance over existing baselines in generating affectively charged and interesting narratives. Our ablation study and analysis provide insights into the strengths and weaknesses of AffGen.

## 1 Introduction

Stories have been a central part of human cultures for millennia, shaping societies, identities, and beliefs Kasunic and Kaufman (2018). However, the question of why some stories captivate us while others leave us indifferent remains intriguing. While humans can skillfully craft interesting narratives, even the most recent AI models cannot compose stories that can engage the reader for long enough. In this work, we address the task of automatically generating interesting stories. Automatically generating interesting stories could potentially help cognitive studies by revealing patterns that make stories interesting. From an application perspective, the capability to generate interesting stories could revolutionize fields like entertainment Akoury et al. (2020); Thue et al. (2007), education Zhao et al. (2022), and even therapy Gabriel and Young (2011). While large language models (LLMs), such as GPT Radford et al. (2019), have been de facto winners in generating coherent text, their prowess in creating narratives that captivate human interest leaves much to be desired. LLMs' coherence is mainly rooted in their training objective that incentivizes text likelihood which is not necessarily correlated with human quality judgements Holtzman et al. (2019); Zhang et al. (2021) or writing style Gehrmann et al. (2019). The concept of "interesting stories" is also highly subjective and context-dependent Roemmele (2021). Previous research in the field increases "interest" in the story by structural planning to control specific aspects of the story, e.g. modeling the emotional flow of the protagonist Luo et al. (2019); Brahman and Chaturvedi (2020) or incorporating flashbacks Han et al. (2022). However, such methods ignore that text complexity and quality also raise a story's interestingness Schraw et al. (2001). Bradley and Lang (1999) advocate for decorating the plot with affective terms to increase the suspense and intensity of the story that results in control of the audience's emotions Delatorre et al. (2016).

Figure 1: Two example stories. Story 1 is an interesting story with an intriguing twist (highlighted in orange color) that was produced by AffGen using dynamic beam sizing. Story 2 is a relatively less interesting story with a straightforward and predictable plot.

With this motivation, we propose **Aff**ective Story **G**enerator (AffGen)1 that controls text coherence and leverages words' affective dimensions to promote text interestingness. Our method is based on two key ideas.
First, in beam-search-based decoding of language models, occasionally exploring larger beams can help in generating slightly lower probability but potentially more interesting words. Second, switching between large and small beams can help in maintaining the balance between coherence and interestingness. We use these ideas to generate stories with an **intriguing twist**. Figure 1 shows an example of an interesting story, Story 1, with an intriguing twist (highlighted in orange color) that was produced by dynamically using different beam sizes. It also shows an uninteresting story, Story 2, that used a comparable language model but with a constant beam size. To generate an interesting story, AffGen first identifies where to generate the intriguing twist that would push the story to be more interesting. Then it generates the intriguing twist using two novel techniques, i.e. _Dynamic Beam Sizing_ and _Affective Reranking_. In dynamic beam sizing, AffGen uses a contextual bandit model Thompson (1933) to dynamically explore different beam sizes thus encouraging the model to select words that are less predictable and more intriguing without compromising coherence. In affective reranking, AffGen reranks possible candidates for the sentence to be generated according to their arousal and valence scores Mohammad (2018), thereby modulating the emotional dynamics of the story. Our automatic and human evaluations show that stories generated by AffGen are more engaging than the baselines without sacrificing coherence. Our ablation studies and analysis provide deeper insights into the functioning of AffGen Our contributions are: * We propose the task of generating interesting stories. * We propose AffGen, a language model that uses a novel contextual bandit-based decoding algorithm and explores dynamic beam sizes and affective reranking. * We conduct automatic and human evaluations to empirically demonstrate that AffGen can produce interesting and coherent narratives. * We conduct ablation studies and analysis to further understand the working of AffGen. ## 2 Related Works We discuss two lines of related work that are closely relevant to this study. **Story Generation.** Early research on story generation explored symbolic planning methods Perez and Sharples (2001); Porteous and Cavazza (2009); Riedl and Young (2010) that used predefined rules and structures to generate stories. Later efforts used neural methods Jain et al. (2017); Peng et al. (2018); Fan et al. (2018); Puduppully et al. (2019); Zhai et al. (2019); Yao et al. (2019); Wang et al. (2021); Peng et al. (2022). However, generating interesting stories has remained a challenge due to the subjective nature of "interestingness" Roemmele (2021). Some previous work has attempted to generate interesting stories by controlling specific aspects of the generated content, such as modeling emotions Luo et al. (2019); Brahman and Chaturvedi (2020), flashbacks Han et al. (2022), personas Zhang et al. (2022), topics Lin and Riedl (2021), and social relationships Vijjini et al. (2022). Alhussain and Azmi (2021) pointed out factors that could lead to interesting narratives, such as suspense Tan and Fasting (1996), discourse Genette (1980), and characters Liu et al. (2020). This work differs from these approaches in the sense that it focuses on generating interesting content by choosing more affective, and not necessarily high-likelihood, words. 
**Sampling strategies for decoding.** One of the commonly used strategies in neural text (and story) generation is Nucleus Sampling Holtzman et al. (2019). This method involves selecting a subset of the vocabulary, called the nucleus, from which the next word is sampled. Another strategy is Top-\(k\) Sampling Fan et al. (2018), which only considers the \(k\) most probable words for the next word. Meister et al. (2023) proposed an information-theoretic strategy, _Locally Typical_ Sampling, with the aim of making the model's output more human-like. Our approach differs from these existing strategies in two key perspectives. First, while previous works primarily aim to encourage generation fluency and diversity we focus on including more affective terms during decoding. Second, we use re-scoring, which involves adjusting the probabilities of the words based on additional criteria, rather than solely relying on the logits distribution generated by the model. This allows us to further enhance the diversity and affective quality of the generated text. ## 3 Problem statement Given a sentence, \(\mathbf{s_{1}}\), as a prompt that represents the first sentence of a story, our goal is to generate an interesting story represented as a sequence of generated sentences \(\mathbf{s_{2}},\mathbf{s_{3}},\ldots,\mathbf{s_{N}}\). Each sentence is a sequence of tokens. In this paper, one of these generated sentences serves as the intriguing twist in the narrative. ## 4 Controlled Affective Story Generator This section presents the Controlled Affective Story Generator (AffGen), a narrative generation model designed to produce interesting stories. AffGen operates in two key stages. First, it identifies the position of the sentence that should contain the intriguing twist, \(p_{IT}\) (SS4.1). Then, it generates the story in the left-to-right manner using a language model. For generating sentences that do not contain the intriguing twist, it uses a standard decoding algorithm since the focus is on maintaining narrative coherence (SS4.2). For generating the sentence that contains the intriguing twist, it uses our proposed decoding algorithm based on Dynamic Beam Sizing and Affective Reranking since the focus is on balancing emotional arousal, interestingness, and coherence (SS4.3). ### Position of the intriguing twist Narratives are highly structured texts. Freytag's pyramid (Freytag, 1908), a widely recognized model of narrative structure, delineates the story into five key components: exposition, rising action, climax, falling action, and resolution. Given the prompt sentence, \(\mathbf{s_{1}}\), our objective is to determine the most suitable location for the climax or the intriguing twist, \(n_{IT}\in\{2,3,\ldots N\}\). There has been some work on identifying the climax or turning point in a given story (Ouyang and McKeown, 2015; Wang et al., 2022; Vijayaraghavan and Roy, 2023). We employ a data-driven approach inspired by the work of Wilmot and Keller (2020). Their methodology operates on the premise that if the embedding of two sentences is sufficiently distant, the latter sentence can be deemed unexpected or interesting with respect to the former sentence. They use this idea to identify the sentence that presents the turning point or intriguing twist in a narrative. Our data-driven approach utilizes the Writing-Prompts dataset (Fan et al., 2018), a collection of human-written stories. 
We use this dataset to form a distribution, \(D(n)\), which corresponds to the probability of observing the intriguing twist at the \(n^{th}\) sentence. During inference, AffGen samples a relative position \(n_{IT}\) from \(D(n)\) to pinpoint the location of the sentence that would be the intriguing twist in the story that will be generated \[n_{IT}\sim D(n).\] Next, we discuss how AffGen generates the various sentences of the story. ### Base Storyteller For generating sentences that do not contain an intriguing twist (\(\mathbf{s_{i}}\)'s \(\forall i\notin\{1,n_{IT}\}\)), the focus is on maintaining narrative coherence. We use a GPT-based language model (Radford et al., 2019; Brown et al., 2020) which has shown promising performance on story generation (Brahman and Chaturvedi, 2020; Clark and Smith, 2021). We fine-tune the language model on a dataset of stories (SS5.1) by minimizing the negative conditional log-likelihood: \[NLL=-\log\prod_{i=1}^{n}p(w_{i}|w_{1},...,w_{i-1}). \tag{1}\] where \(w_{i}\)'s represents the tokens of the story. We use beam search for inference in this model. ### Generating Intriguing Twist To generate the sentence that contains the intriguing twist in the narrative, \(\mathbf{s}_{IT}\), we use the fine-tuned language model from SS4.2 but with a novel beam search-based decoding. Our decoding method uses Dynamic Beam Sizing and Affective Reranking to produce interesting text. **Dynamic Beam Sizing.** The motivation behind our beam search-based decoding algorithm is that while a small beam size helps in producing coherent text, by expanding the beam size of the PLM, we can explore slightly lower probability but potentially more intriguing words. However, maintaining a large beam size throughout is also not desirable because not all words in a sentence need to be interesting. A large beam throughout can also slow down the inference process and require more resources. So during inference, the model needs to dynamically switch between large and small beam sizes to balance the tradeoff between the coherence and interestingness of the generated text. To address this, we introduce Dynamic Beam Sizing, where depending on the context, the model decides the beam size before generating a token. For practical purposes, we assume that the beam size can take one of \(k\) values \(\{b^{1},b^{2}...b^{k}\}\), and the model has to choose one. We cast the problem of choosing a beam size as a contextual \(k\)-arm bandit problem (Langford and Zhang, 2007), where the _arms_ of the bandit are the various beam sizes. The bandit's choice of beam size at time step or _trial_, \(t\), depends on the the _context_ of the bandit. The _context_ considers the tokens generated so far for the intriguing twist sentence, \(\mathbf{s}_{IT}\). We use \(\mathbf{s_{IT,t-1}}\) to refer to the sequence of tokens in this partial sentence and represent the _context_ using following features: 1. Arousal score: The arousal score of the sentence generated so far, \(\mathbf{s_{IT,t-1}}\). The arousal score of a partial sentence, viewed as a sequence of tokens, \(\mathbf{s}\), of length \(n\) is: \[A(\mathbf{s})=\sum_{i=1}^{n}a(w_{i}) \tag{2}\] where \(a(w_{i})\) is the arousal score of the \(i^{th}\) token obtained from the NRC Word-Emotion Association lexicon (Mohammad, 2018). Since longer sentences can accumulate higher arousal scores, we divide the arousal score by a length normalizing factor (Wu et al., 2016). 
The length normalizing factor for a sentence of length \(n\) is: \[lp(n)=\frac{(5+n)^{\lambda}}{(5+1)^{\lambda}} \tag{3}\] where \(\lambda\) is the normalization coefficient. 2. Event trigger likelihood: Sims et al. (2019) points out that in narratives there are certain words in a sentence that trigger interesting literary events. E.g. In the sentence "... Stephen leaned his arms on...". The word "leaned" is an event trigger. Identifying such event triggers can help in locating the interesting part of a sentence, which in turn will help in deciding whether to choose a larger beam. With this motivation, we train a RoBERTa (Liu et al., 2019) based predictor that given a partial sentence predicts whether the next token would be the trigger for an interesting literary event. We provide the partial sentence generated so far, \(\mathbf{s_{IT,t-1}}\), as the input to this predictor and use the likelihood assigned by it (for the next token to be an event trigger) as a feature. 3. Sequence length: Length of the partial sentence generated so far, \(\mathbf{s_{IT,t-1}}\). Knowing where the model is, in terms of position, can help it decide whether to generate an interesting token next. 4. Perplexity: The model's perplexity on the partial sentence generated so far, \(\mathbf{s_{IT,t-1}}\). This helps in maintaining coherence. For choosing an _arm_\(b\in\{b^{1},b^{2}...b^{k}\}\), the bandit also receives a _payoff_. The _payoff_ accounts for all the candidate sequences in the beam \(\{\mathbf{c}^{1},\mathbf{c}^{2},...,\mathbf{c}^{b}\}\). Each \(\mathbf{c}^{i}\) is basically a concatenation of the partial sentence generated so far, \(\mathbf{s}_{IT,t-1}\), and the \(i^{th}\) token in the beam. The _payoff_ rewards beams that contain candidate sequences with high arousal scores (to promote interestingness) and low perplexity (to promote coherence). It also penalizes large beam sizes to encourage using fewer compute resources. Mathematically, the payoff value \(R(b_{t},t)\), for choosing a beam size, \(b_{t}\), at time step \(t\), is defined as: \[R(b_{t},t)=\max_{i\in[1,b_{t}]}(\text{A}(\mathbf{c}^{i})-\alpha\cdot\text{ppl} (\mathbf{c}^{i})-\beta\cdot|b_{t}|), \tag{4}\] where \(\alpha,\beta\) are coefficients for each component, \(\text{A}(\mathbf{c})\) and \(\text{ppl}(\mathbf{c})\) represent the arousal score (as defined in Eqn. 2) and the perplexity of the candidate sequence \(\mathbf{c}\) respectively, and \(|b_{t}|\) represents the size of beam \(b_{t}\). Given the set of \(k\) choices for beam sizes \(\{b^{1},b^{2},\ldots b^{k}\}\), the optimal beam size \(b_{t}^{*}\) at timestep \(t\) is given by \[b_{t}^{*}=\underset{i\in[1,k]}{\text{argmax}}\ R(b_{t}^{i},t) \tag{5}\] Correspondingly, the optimal payoff at time step, \(t\) is \(R(b_{t}^{*},t)\). Using the LinUCB (Upper Confidence Bound) algorithm (Li et al., 2010), we optimize the bandit model by minimizing regret \(L\) defined as: \[L=\mathbb{E}[\Sigma_{t=1}^{T}R(b_{t}^{*},t)]-\mathbb{E}[\Sigma_{t=1}^{T}R(b_{ t},t)] \tag{6}\] where \(T\) is the total number of time steps or the total number of tokens in \(\mathbf{s}_{IT}\). **Affective Reranking.** While Dynamic Beam Sizing introduces more arousing content, it does not consider the variation of emotions associated with the content. Chung et al. (2022) highlighted that variation of emotional arc (Reagan et al., 2016) can make a story more engaging. We, therefore, introduce Affective Reranking. 
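Before turning to reranking, the quantities in Eqns. 2-5 can be made concrete with the short sketch below, which computes the length-normalized arousal score, the payoff of a beam, and the beam-size choice. Here `nrc_arousal` (a token-to-arousal dictionary built from the NRC lexicon) and `lm_perplexity` are assumed stand-ins for components described above, and the coefficient values are those reported later in §5.1.

```python
LAMBDA, ALPHA, BETA = 1.5, 0.00015, 0.0003        # hyperparameters reported in Section 5.1

def length_penalty(n, lam=LAMBDA):                 # Eqn. 3
    return (5 + n) ** lam / (5 + 1) ** lam

def arousal(tokens):                               # Eqn. 2, length-normalized
    raw = sum(nrc_arousal.get(w, 0.0) for w in tokens)   # nrc_arousal: assumed NRC lexicon lookup
    return raw / length_penalty(len(tokens))

def payoff(candidates, beam_size):                 # Eqn. 4: best candidate sequence in the beam
    return max(arousal(c) - ALPHA * lm_perplexity(c) - BETA * beam_size
               for c in candidates)

def choose_beam(candidates_per_size):              # Eqn. 5: argmax over the available beam sizes
    # candidates_per_size: {beam_size: list of candidate token sequences}
    return max(candidates_per_size,
               key=lambda b: payoff(candidates_per_size[b], b))
```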
Let \(\{\mathbf{s_{iT}}^{1},\mathbf{s_{iT}}^{2}\dots\mathbf{s_{iT}}^{b}\}\) be the candidate sentences that are generated as potential intriguing twists in the beam. The best candidate should have a high arousal score and should also have high affective contrast. We quantify affective contrast as the difference in the valence scores of the candidate sentence and the story generated so far. Valence score of a sequence of tokens, \(v\), is the length-normalized cumulative valence score of its individual tokens. We use the NRC-VAD lexicon Mohammad (2018) to obtain valence scores of tokens. We select the best candidate for the intriguing twist sentence \(\mathbf{s}^{*}\) such that: \[\mathbf{s}^{*}=\underset{i\in[1,b]}{\text{argmax}}\ A(\mathbf{s_{TF}}^{i})+|v( \mathbf{s_{TF}}^{i})-v(\mathbf{s_{1:TF-1}})| \tag{7}\] ## 5 Empirical Evaluation In this section, we describe our experiments. ### Experimental Setup **Dataset.** For our experiments, we use the ROCStories dataset Mostafazadeh et al. (2016), a large collection of 100k five-sentence 'commonsense' stories about everyday events. We held out 1k stories each for validation and testing and use the first sentence of every story as the prompt. We chose this dataset because it allows us to assess the performance of our model's ability to learn from a collection of everyday life stories and improvise them to be interesting. The short nature of these stories also makes the manual assessment of narrative quality feasible during human evaluation which otherwise would have been difficult. This focus on short stories, however, does not limit the potential application of our model to longer narratives. Our base storyteller is trained on the ROCStories dataset. The contextual bandit model is trained in an unsupervised manner, relying on the internal regret function. **Implementation Details.** All hyperparameters were set based on the performance on the validation set. We used \(\alpha\) = 0.00015, \(\beta\) = 0.0003 in Eqn. 4 and \(\lambda=1.5\) in Eqn. 3. We trained the bandit model on single A5000 for 10 epochs and it chose between three beam sizes of \(10\), \(30\) and \(60\). **Baselines.** Our primary baseline is GPT2 fine-tuned on the RocStories dataset since it is widely recognized for its story generation capabilities Brahman and Chaturvedi (2020). We use GPT3 as a baseline to compare with a large language model. For GPT3, we use the following prompt 2 (after experimentation): "Contine writing an interesting story using the following context, <context>. The total length of the story should be five sentences. The total words limit is 60 words." Footnote 2: Please refer to Table 7 for more prompt details. ### Automatic Evaluation Table 1 presents a comparison between AffGen and baseline methods. We use two versions of our model, AffGen-2 and AffGen-3. They use fine-tuned GPT-2 and GPT-3 as the base storytellers (SS4.2). We observe that both versions of AffGen have higher perplexity (PPL) scores than the baselines. This, however, is expected and does not imply low coherence because AffGen encourages using low-likelihood words during the decoding process to generate interesting content. For a better evaluation of coherence, we consider the UNION (UNI) Guan and Huang (2020) and RUBER (RUB) scores Tao et al. (2018). UNION is a reference-free score specially designed for evaluating open-ended story generation models. RUBER is a hybrid of referenced and unreferenced metric used for evaluating dialog systems. 
We only use its unreferenced part to evaluate the quality of a piece of text (story) generated in response to a query (the story prompt). A higher value for these scores is better. We observe that for these scores versions of AffGen either perform better than or comparable to the baselines. This indicates that AffGen is capable of generating coherent narratives. For evaluating how interesting the stories are, we measure their per-token Arousal score (Aro) (Eqn. 2) which quantifies their affect level. A higher value is better for this score. We observe that \begin{table} \begin{tabular}{l c c c c} \hline \hline Model & PPL \(\downarrow\) & Uni \(\uparrow\) & RUB \(\uparrow\) & Aro \(\uparrow\) \\ \hline GPT2 & 26.77 & 0.021 & 0.1546 & 0.45 \\ AffGen-2 & 40.27 & 0.019 & **0.1556** & 0.51 \\ GPT3 & **18.90\({}^{*}\)** & 0.028 & 0.1541 & 0.46 \\ AffGen-3 & 25.66 & **0.029** & 0.1547 & **0.53\({}^{*}\)** \\ \hline \hline \end{tabular} \end{table} Table 1: Automatic evaluation of AffGen using Perplexity (PPL), UNION score (Uni) Guan and Huang (2020), (RUB) score Tao et al. (2018), and Arousal score (Aro). \(\uparrow\) and \(\downarrow\) indicate if higher or lower scores are desirable. Bold fonts indicate best scores and * indicates statistical significance (\(p<0.01\)). The results indicate that both versions of AffGen can generate interesting stories without compromising coherence. both versions of AffGen outperform the baselines with AffGen-3 achieving the highest score. This indicates that AffGen generates more interesting stories. ### Human Evaluation In order to assess the performance of AffGen, a comprehensive human evaluation was conducted on the Amazon Mechanical Turk (AMT) platform. A total of 100 instances were randomly selected from our test set. We feed their initial sentences as prompts for generating stories using AffGen-3 and GPT-3, our stronger baseline. To eliminate any potential bias, the presentation order of the two stories was randomized. The Turkers then selected the better of the stories according to 6 criteria: coherence, emotional engagement, empathy, interestingness, and overall preference. These criteria were chosen based on prior research conducted by Chhun et al. (2022). The Turkers could also select an _"equally good"_ option. The Turkers were explicitly instructed to solely consider the given criterion when evaluating, except when expressing an overall preference. In the appendix, Figure 4 showcases a screenshot of our AMT setup. We specifically utilized Master annotators predominantly from English-speaking countries (US, UK, Canada, and Australia). We evaluated 200 stories in total, and each pair was assessed by three different annotators. We discuss the results shown in Table 2 below. All differences in this table are statistically significant (p<\(0.1\) for coherence and p<\(0.05\) for others) and the inter-annotator agreement is \(0.58\) (moderate agreement). **Coherence** evaluated the logical flow and connection between the different elements of the story. For this criterion, judges found stories generated by AffGen-3 to be more coherent than those generated by GPT-3 in 50.5% of instances, while AffGen-3's stories were considered less coherent in 40.7% of cases. The remaining 8.8% resulted in a tie. This indicates that AffGen does not compromise on coherence while generating stories. 
**Emotional Engagement** evaluated how effectively a story conveys a range and intensity of emotions that capture and hold the reader's attention and create a sense of emotional depth and complexity. For this criterion, judges found stories generated by AffGen-3 to be more emotionally engaging than GPT-3 in \(53.0\%\) and less emotionally engaging in \(40.3\%\) of the cases. This demonstrates AffGen's stronger ability to evoke emotions in readers. **Empathy** evaluated whether the story arouses the readers' empathy for the characters. The conflicts and challenges described in stories can create situations that make the readers project their own emotions and thoughts onto the characters, keeping them invested and engaged. For this criterion, AffGen-3 outperformed GPT-3 by a large gap of \(13.6\%\) (\(53.8\%\) wins and \(40.2\%\) losses). This demonstrates that AffGen can generate emotionally resonant content. **Interestingness** evaluated the story's ability to be compelling and engaging. For this criterion too, AffGen-3 outperformed GPT-3 by a large gap of \(13.8\%\) (\(54.9\%\) wins and \(41.1\%\) losses). This demonstrates AffGen's superiority in keeping the reader's interest while generating stories. **Overall Preference** Finally, we observed that overall, the judges preferred AffGen over the baseline in \(52.7\%\) of the cases (as compared to preferring the baseline over AffGen in \(39.6\%\) of the cases). To conclude, the human evaluation results provide strong evidence of the superiority of AffGen in various critical aspects of open-ended story generation, underscoring its ability to generate interesting and engaging stories while maintaining coherence. \begin{table} \begin{tabular}{l c c c} \hline \hline **Evaluation Criteria** & **Win** & **Lose** & **Tie** \\ \hline Coherence & **50.5*** & 40.7 & 8.8 \\ Emotional Engagement & **53.0*** & 40.3 & 6.7 \\ Empathy & **53.8*** & 40.2 & 6.0 \\ Interestingness & **54.9*** & 41.1 & 4.0 \\ \hline Overall Preference & **52.7*** & 39.6 & 7.7 \\ \hline \hline \end{tabular} \end{table} Table 2: Human evaluation of AffGen vs GPT-3. AffGen generates better stories across all measures. * indicates statistical significance (p<0.1 for coherence and p<0.05 for others). \begin{table} \begin{tabular}{l c c c} \hline \hline **Evaluation Criteria** & **Win** & **Lose** & **Tie** \\ \hline Coherence & 14.5 & **38.8*** & 46.7 \\ Emotional Engagement & **55.5*** & 24.8 & 19.7 \\ Empathy & **40.8*** & 28.6 & 30.6 \\ Interestingness & **45.3*** & 26.3 & 28.4 \\ \hline Overall Preference & **45.3** & 35.7 & 19.0 \\ \hline \hline \end{tabular} \end{table} Table 3: Human evaluation of AffGen vs ChatGPT. AffGen generates less coherent but more interesting and empathetic stories. * indicates statistical significance (p<0.05). ### Comparison with ChatGPT In this section we compare AffGen with a large language model, ChatGPT 3. We used a human evaluation setup similar to that described in §5.3. These annotations were performed by expert annotators who were students of literary theory. For generating stories with ChatGPT, we experimented with different prompts, and the final prompt is shown in Table 7. Table 3 shows the results. Annotators expectedly found ChatGPT's stories to be more coherent. Our initial analysis also revealed ChatGPT's text to have a more sophisticated structure. However, annotators found AffGen's stories to be significantly more empathy-evoking and interesting. Because of this, the annotators preferred AffGen over ChatGPT in the overall preference.
Footnote 3: OpenAI. (2023). ChatGPT (May 24th version) [Large language model]. [https://chat.openai.com](https://chat.openai.com) ### Ablation Study We now describe our ablation study, in which we investigate the importance of exploring different beam sizes and of affective reranking. In the experiments reported so far, we made AffGen explore three different beam sizes during decoding. In this study, we design ablated versions of AffGen that use only one of the three beam sizes. We call them AffGen\({}_{10}\), AffGen\({}_{30}\), and AffGen\({}_{60}\), where the subscript indicates the beam size being used. The first three rows of Table 4 report the performance of these versions relative to AffGen. All models use fine-tuned GPT-2 as the base storyteller. For all scores, a negative score indicates that the ablated version did not perform as well as AffGen (and vice versa). We can see that for most of the ablated versions, the UNION and RUBER scores are negative. This means that the stories generated by the ablated versions are less coherent than those of the full model. In terms of Arousal scores, AffGen\({}_{10}\) produces less arousing stories than AffGen, but AffGen\({}_{30}\) and AffGen\({}_{60}\) produce more arousing stories than AffGen. This aligns with our initial intuition that a larger beam size helps the model generate more interesting content. However, because of their large but static beam sizes, the stories generated by these two versions were less coherent than those generated by AffGen. Next, we also consider another version of AffGen but without Affective Reranking. The relative performance of this model is shown in the last row of Table 4. We can see that the performance of this version is quite close to the baseline. Also, while its coherence is comparable to AffGen, its arousal score is notably worse, indicating the importance of this component in generating interesting content. Overall, we can draw two conclusions from this ablation study. First, exploring large beam sizes and affective reranking can help in generating more interesting content. Second, it is important to dynamically switch between larger and smaller beams to balance interestingness and coherence. ### Expansion to Longer Narratives Our experiments have used ROCStories, which are short in nature. This focus on short stories does not limit the potential application of our model to longer narratives. Jolles (2017) points out that stories could be condensed into "simple forms". Story composition could be viewed as a process of expanding these simple forms into presentable longer narratives. Table 10 presents expanded AffGen-generated stories and compares them with vanilla ChatGPT-generated stories. With the help of the five-sentence interesting plots produced by AffGen, ChatGPT expands them into better stories compared to vanilla ChatGPT-generated stories. ### Dynamic Beam Sizing We now investigate how the beam size changes as AffGen generates an interesting sentence. Figure 2 shows the average beam size used at different positions of a typical sentence. We observe that AffGen uses larger beam sizes for the first few tokens. Our manual analysis revealed that, in general, the interesting words indeed appear earlier in a sentence. Since the model is capable of transitioning between beam sizes, we plot a heat map of the transitions, shown in Figure 3.
Each cell shows the probability of transitioning from a beam size on the Y axis to a beam size on the X axis. Darker colors indicate higher probabilities. We observe that in general, while AffGen has a tendency to stick to a chosen beam size (\(\sim 70\%\)), it does transition to different beam sizes about \(30\%\) of the time, indicating the importance of switching between beam sizes. \begin{table} \begin{tabular}{l c c c} \hline & UNION & RUBER & Arousal \\ \hline AffGen\({}_{10}\) & -0.007 & -0.012 & -0.018 \\ \hline AffGen\({}_{30}\) & -0.002 & -0.007 & 0.024 \\ \hline AffGen\({}_{60}\) & -0.005 & -0.006 & 0.047 \\ \hline AffGen\({}-AR\) & 0.002 & -0.001 & -0.070 \\ \hline \end{tabular} \end{table} Table 4: Performance of ablated versions of AffGen with static beam sizes relative to AffGen. Subscripts indicate the beam sizes. A negative score indicates that the ablated version did not perform as well as AffGen. These results indicate that it is important to explore large beam sizes in a dynamic manner to generate interesting and coherent stories. ### Qualitative Analysis During the human evaluation (§5.3), when asking for preferences between the stories, we also asked the judges to provide explanations for their choices. We then analyzed these explanations to further examine the stories generated by AffGen. Table 5 shows an example of stories generated by GPT-3 and AffGen for the same input prompt, along with the human-provided explanation. While both stories have a happy ending, AffGen's story introduces a plot complication, where the protagonist Grayson's initial attempt to bake a cake fails. He resolves the situation through determined efforts, creating a narrative of perseverance. Compared to the baseline, the plot in AffGen's story becomes more complicated and has more ups and downs, which enhances the emotional engagement and interest of the reader. The AMT judges noted that AffGen's story was more emotionally expressive. Table 8 in the Appendix provides more comparative examples of stories generated by the baseline and AffGen and the corresponding explanations. Analyzing the explanations for story pairs, we found that the judges preferred AffGen's stories because they presented a shift in mood, enhancing the affect they had on the reader. AffGen's stories also presented unexpected twists, which provide relief from the story's prevalent theme and increase its interest. In contrast, the baseline stories were banal and conflict-less. Sometimes AffGen's stories introduced a melancholic theme, but the judges still found them pleasant. This aligns with the narratological theory presented by Massumi (2002), who argues that there is a gap between the content and the effect on the receiver's end. As a result, audiences often rate "sad" scenes in a film as the "most pleasant". Overall, AffGen was found to be better at generating more emotionally captivating and interesting stories, leading to a better storytelling experience. ### Error Analysis Using the judges' explanations provided during the human evaluation, we also conduct an error analysis to identify issues encountered during story generation by AffGen. Table 9 in the Appendix shows some examples of story pairs in which the judges did not prefer AffGen's stories over GPT-3's stories, along with their explanations. We observe that while AffGen introduces an intriguing twist in the story, it sometimes suffers from typical language modeling challenges such as repetitive phrases and ideas (Story 2) and incoherence.
Often the incoherence is caused by a lack of commonsense knowledge like sunglasses cannot change eye colors (Story 1), and if a toy breaks, it cannot function (Story 3). This aligns with the proposition made by Alhussain and Azmi (2021) that coherence (and also causality) are fundamental in storytelling. Without them, the story may disintegrate into inconsistent fragments. ## 6 Conclusion This paper addresses the task of generating interesting stories. For this, we present, AffGen, a language model that uses a novel contextual bandit-based decoding mechanism. This new decoding mechanism enables the AffGen to dynamically explore different beam sizes and rerank based on affective quality of the text. Our experiments indicate that AffGen can generate interesting but Figure 3: Transitional probability between beam sizes. Figure 2: Average beam size used to generate at different positions of a sentence. coherent narratives. Our ablation studies underscore the importance of dynamic beam sizing and affective reranking, and our qualitative and error analysis point to the strengths and weaknesses of our model. We hope that this ability to compose interesting narratives can open new dimensions of computational creativity, driving the generation of unique and captivating content at an unprecedented scale and speed. ## Acknowledgement We appreciate the reviewers for their insightful comments and suggestions. Tenghao Huang and Muhao Chen were supported by the NSF Grant IIS 2105329, an Amazon Research Award and a Keston Exploratory Research Award. Ehsan Qasemi was supported by the DARPA MCS program under Contract No.N660011924033 with the United States Office Of Naval Research. Computing of this work was partly supported by a subaward of NSF Cloudbank 1925001 through UCSD. ## Limitations Our study has the following limitations. We assume a single sentence containing an intriguing twist can enhance a story's interestingness. This paper focused on how to generate that intriguing twist. However, a story can potentially benefit from multiple interesting sentences and future works can investigate into how frequently and where to generate interesting content. We adopted a simple data-driven approach for deciding where to put the sentence that contains the intriguing twist. It samples from a distribution learned from a collection of stories. Future work could work on more sophisticated methods that consider the preceding narrative context for deciding when to describe an interesting twist so that it integrates better with the story being generated. For practical purposes, the bandit model discretized beam sizes. However, beam size is a continuous variable, and discretizing it can restrict the model from exploring all possible values. Our experiments used GPT-2 and GPT-3 as the base storyteller for generating the stories. However, we see AffGen as a framework that could incorporate other language models and future work can investigate this aspect. Our experiments explored short and fictional narratives. Future work could investigate advanced planning and strategies for composing longer stories or non-fictional content. Our dataset and experiments use only one language - English. We did not investigate the model's capabilities to generate stories in other languages. ## Ethical considerations Our experiments use a publicly available dataset. Previous work (Huang et al., 2021) has shown that it contains gender-related biases and storytelling models that use this dataset can replicate and amplify these biases. 
Our model also encourages low-perplexity text, which could unintentionally encourage biased, violent, or sexually explicit content. Since we have not employed any bias or toxicity removal methods, applications of our work should control for inappropriate content.
2302.02714
Differentiable Programming of Chemical Reaction Networks
We present a differentiable formulation of abstract chemical reaction networks (CRNs) that can be trained to solve a variety of computational tasks. Chemical reaction networks are one of the most fundamental computational substrates used by nature. We study well-mixed single-chamber systems, as well as systems with multiple chambers separated by membranes, under mass-action kinetics. We demonstrate that differentiable optimisation, combined with proper regularisation, can discover non-trivial sparse reaction networks that can implement various sorts of oscillators and other chemical computing devices.
Alexander Mordvintsev, Ettore Randazzo, Eyvind Niklasson
2023-02-06T11:41:14Z
http://arxiv.org/abs/2302.02714v1
# Differentiable Programming of Chemical Reaction Networks ###### Abstract We present a differentiable formulation of abstract chemical reaction networks (CRNs) that can be trained to solve a variety of computational tasks. Chemical reaction networks are one of the most fundamental computational substrates used by nature. We study well-mixed single-chamber systems, as well as systems with multiple chambers separated by membranes, under mass-action kinetics. We demonstrate that differentiable optimisation, combined with proper regularisation, can discover non-trivial sparse reaction networks that can implement various sorts of oscillators and other chemical computing devices. ## 1 Introduction Computation and information processing, implemented using different physical substrates and at different scale, are ubiquitous in nature and in technology. The most effective computational devices rely on fine and persistent physical structures. Examples are natural neural networks and human-made electronic, mechanical, or hydraulic computers. There is another very important class of computational networks, which are responsible for decision making at scales from individual cells to societies, that are much less demanding of the precise spatial structure and connectivity among the processing elements. In these networks, information is represented using populations of different types of interacting agents, such as molecules, cells, [Turing, 1952] or even animals [Lotka, 1926], and the structure of the computational process is encoded in the interaction-reaction rules between these agents. Chemical Reaction Networks (CRNs) are a notable example of computational systems of this type, capable of making complex decisions and adapting even under the assumption that individual computing elements undergo completely chaotic Brownian motion. In this work we aim to use differentiable optimization to automatically design task-specific networks of this type. Assuming mass-action kinetics and that individual chambers are well-mixed, Van Kampen [1992] naturally defines a differential equation for modelling the dynamics of a given CRN. The modelled variables are concentrations of participating chemical components, and their rates of change are defined by the structure of the reaction network and current concentrations of the reactants, catalysts, and inhibitors. The computational power of such networks has been proven to be Turing Complete [Soloveichik et al., 2008] and is thus sufficient for representing arbitrary computation, such as, for instance, computations representable by a Boolean formula. While the possibility of computation with CRNs has long been enticing, implementations have generally involved significant complexity (Shin, 2012). In recent years, DNA strand displacement has been demonstrated as a viable implementation mechanism for CRNs (Soloveichik et al., 2010). ### Computing with Chemical Reaction Networks In this work we focus on systems that have transition rules of the following form: \(A+C\xrightarrow{k}B+C\), meaning that substance \(A\) gets transformed into \(B\) after interaction with \(C\), which acts as a catalyst. The rate at which the chemical reaction occurs can be expressed as \(kAC\), where \(A\) and \(C\) are the current concentrations of the reactant and the catalyst and \(k\) is a reaction coefficient. We are going to use a more compact notation for this type of reaction: \(A\xrightarrow{k,C}B\). 
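For intuition, a minimal forward-Euler simulation of this single catalysed reaction under mass-action kinetics might look as follows; this is only an illustrative sketch (the method described in Section 2 instead integrates the full network with a differentiable ODE solver), and the function and parameter names are ours.

```python
import numpy as np

def simulate_catalysed_reaction(A0=1.0, B0=0.0, C0=0.5, k=1.0, dt=0.01, steps=1000):
    """Simulate A --(k, C)--> B: dA/dt = -k*A*C, dB/dt = +k*A*C, dC/dt = 0."""
    A, B, C = A0, B0, C0
    trajectory = []
    for _ in range(steps):
        rate = k * A * C                  # mass-action rate: reactant times catalyst times k
        A, B = A - rate * dt, B + rate * dt
        trajectory.append((A, B, C))
    return np.array(trajectory)
```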
This choice of elementary reaction type gives us an "agent-centric" view of the system where each agent makes independent decisions about its state after an interaction with another agent. Figure 1 shows a simple CRN oscillator composed of such reactions. It is easy to transform reaction rules into an ODE-system, where each reaction decreases the concentration of its input and increases the concentration of its output. We can think of concentrations as variables, where reactions are "gates" that continuously modify the variable values. The language of chemical reactions is surprisingly versatile and powerful. For example, it is possible to simulate the function of an arbitrary Boolean circuit using a network constructed from reactions of the type described here (Soloveichik et al., 2008). Various analog circuits, such as oscillators, approximate majority computators, or even 3D renderers (Sergienko, 2020), can be constructed as well. Given the rise in popularity of artificial neural networks (ANN), it is not surprising that chemical networks were adopted to perform ANN inference. Typically, CRNs are constructed to compute results of formulas composed of traditional NN building blocks, such as matrix multiplications and element-wise non-linearities. In contrast, in this work we explore the possibility of direct application of backpropagation gradient-based optimization to find the CRN network structure and parameters for solving a particular problem. Numerical optimization has already been applied to determining the parameters of physical systems that satisfy real world measurements from various processes (Kaheman et al., 2020) (model identification). In this work we focus on synthesising _new models_ given the specification expressed with an objective function. From that perspective, contributions of this paper can be summarized as follows: * Define an efficient parameterization and a training procedure that enables differentiable optimization of CRNs having a particular structure. * Demonstrate that this procedure can be used to synthesize compact reaction networks that perform computational tasks specified by a provided objective function. ## 2 Differentiable Reaction Networks In this section, we describe a possible differentiable representation of a CRN system. Consider \(N\) components that undergo reactions of the form \(X\xrightarrow{k,Z}Y\), where \(X\), \(Y\) and \(Z\) are arbitrary (may Figure 1: Reaction Network example: three-phase oscillator. Graph (c) shows reactants as white nodes and reactions as grey nodes. Grey node labels denote the catalyst, that activates a particular reaction. Plot (d) shows the evolution of the oscillator system in case \(A_{0}=0.8\), \(B_{0}=C_{0}=0.1\), \(k_{\{0,1,2\}}=1\) even be repeating) components from these N. The total number of possible reactions (including the trivial \(X\xrightarrow{Z}X\)) is therefore \(N^{3}\). We may represent any such \(N\)-element reaction network using a 3-dimensional tensor \(T\) of shape \(N\times N\times N\), where dimensions correspond to the reaction catalyst, input and output respectively. We are going to call this the _reaction tensor_. 
For example, the following tensor represents the oscillator system shown in Figure 1: \[\begin{bmatrix}T_{0,*,*}&T_{1,*,*}&T_{2,*,*}\\ \begin{bmatrix}0&0&0\\ 0&0&0\\ k_{3}&0&-k_{3}\end{bmatrix}&\begin{bmatrix}-k_{1}&k_{1}&0\\ 0&0&0\\ 0&0&0\end{bmatrix}&\begin{bmatrix}0&0&0\\ 0&-k_{2}&k_{2}\\ 0&0&0\end{bmatrix}\end{bmatrix}\] We refer to elements of this tensor as \(T_{c,a,b}\), where each slice \(c\) corresponds to one catalyst, and each row \(a\) to one reaction input. The elements of each row express the change in the concentration of each substance caused by the reaction of a particular input-catalyst pair. For example, the row \(T_{1,0}=[-k_{1},k_{1},0]\) means that when the 1st component (B) catalyses the 0th component (A), A gets removed and B gets added with the rate \(k_{1}\), which corresponds to the reaction \(A\xrightarrow{k_{1},B}B\) from Figure 1a. Rows of \(T\) have a meaning similar to the rows of a stoichiometric matrix, but also encode reaction rates along with reactants and products. Consider a vector \(\mathbf{x}=[x_{0},...,x_{N-1}],\forall i:x_{i}\geq 0\) that encodes the current concentrations of \(N\) chemicals. We can now express the rate of change of its components over time simply as \[x_{b}^{\prime}=\sum_{a,c}x_{a}x_{c}T_{c,a,b} \tag{1}\] We need to impose some constraints on the tensor \(T\) to make sure that it represents the right type of reactions: * \(\forall a,c:\sum_{b}T_{c,a,b}=0\) -- all rows sum to zero to conserve mass; * \(\forall a,c:T_{c,a,a}\leq 0\) -- only the reactant may be consumed by the reaction; * \(\forall a,b,c:a\neq b\implies T_{c,a,b}\geq 0\) -- reaction outputs must be non-negative. Note that this formulation allows multiple possible outputs for the same input. For example, suppose that \(T_{1,0}=[-1,0.8,0.2]\). This may be interpreted as two reactions \(A\xrightarrow{0.8,B}B\) and \(A\xrightarrow{0.2,B}C\) running in parallel. We can also think of molecules as _stochastic finite state machines_ (FSMs) that make a random decision on which state to take upon interaction with another molecule. This view inspired us to use the following differentiable representation of the reaction network, which maintains the properties listed above. We construct \(T\) from a logit parameter tensor \(W\in\mathbb{R}^{N\times N\times N}\) in the following way: \[\begin{split} P_{c,a,b}&=\operatorname*{softmax}_{b}(W_{c,a,b})\\ T_{c,a,b}&=P_{c,a,b}-I_{a,b}\end{split} \tag{2}\] Rows of \(P\) are probability distributions over the resulting molecule states for each possible input-catalyst pair. The rows sum up to one, so we can subtract the identity matrix \(I\), which spans axes \(a\) and \(b\), to obtain the reaction tensor \(T\). Once we have defined the differentiable representation (2) for the coefficients of the reaction network ODE system (1) and selected the initial conditions \(\mathbf{x}(0)\), we may plug the equation into a differentiable ODE solver. Gradient backpropagation through the ODE solver may be performed either directly or by using the adjoint state method (Chen et al., 2018). In the following sections we explore a number of different optimization objectives that are expressed in terms of the behavior of the ODE system defined by the reaction network.
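A minimal NumPy sketch of the parameterization in Eqns. (1)-(2) is given below; in practice the same operations would be written in an automatic-differentiation framework so that gradients can flow through the ODE solver, and the function names are ours, not the paper's.

```python
import numpy as np

def softmax(w, axis=-1):
    e = np.exp(w - w.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def reaction_tensor(W):
    """Eqn. (2): rows of P are distributions over output states for each
    (catalyst, input) pair; subtracting the identity over the (input, output)
    axes yields a mass-conserving rate tensor T."""
    P = softmax(W, axis=-1)                    # W has shape (N, N, N): catalyst, input, output
    N = W.shape[0]
    return P - np.eye(N)[None, :, :]

def concentration_rate(x, T):
    """Eqn. (1): x'_b = sum_{a,c} x_a * x_c * T_{c,a,b}."""
    return np.einsum("a,c,cab->b", x, x, T)
```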
### Sparsity-inducing regularization It is often desirable to keep the resulting network as simple as possible. One possible definition of simplicity is the number of different reactions that are possible within a network. Our key motivations for reducing the network complexity are the feasibility of physical implementation and the interpretability of the network structure. We consider each unique combination of catalyst, reactant and product as one reaction. The maximum number of possible reactions our model allows for a system of \(N\) species is \(N^{3}-N^{2}\), where \(N^{2}\) accounts for the excluded "no-op" \(X\xrightarrow{Y}X\) reactions. In our experiments we use a few strategies to reduce the number of reactions in the CRN. We pick a threshold value \(k_{\text{min}}\) and ignore all reactions that have a smaller rate. We can do so by setting the corresponding elements of the tensor \(T\) to zero. Note that we also have to adjust the negative diagonal (\(T_{c,a,a}\)) elements to make sure that each row still has a zero sum. We call this sparsified tensor \(T^{k_{\text{min}}}\). **Regularization losses.** We use a number of additional training loss terms to steer the optimization towards reaction tensors \(T\) that have a larger number of near-zero values that can be discarded: \[L_{L1}=\frac{1}{N^{3}}\sum_{c,a,b}|T_{c,a,b}|\quad L_{H}=-\frac{1}{N^{3}}\sum_{c,a,b}P_{c,a,b}\log(P_{c,a,b})\quad L_{I}=-\frac{1}{N^{3}}\sum_{c,a,b}P_{c,a,b}^{2}\] \(L1\)-regularization (\(L_{L1}\)) is a common way to promote sparsity of optimized parameters. The corresponding loss term boils down to computing the average of the absolute values of the elements of \(T\). Another approach to regularization is inspired by the stochastic-state-machine interpretation of molecules. We would like to reduce the number of stochastic reactions, i.e. reactions which produce more than one possible output for a given reactant-catalyst pair. One way of steering optimization towards such networks is decreasing the entropy (\(L_{H}\)) of the rows of the tensor \(P\), which can be interpreted as a stochastic transition table of the FSMs that represent molecules. A similar effect can also be achieved by maximizing the so-called _informational energy_ (\(L_{I}\)) (Nielsen, 2022). The set of loss terms used and their weights vary from experiment to experiment and are described in the corresponding appendix sections. **Sparsified training.** In some experiments we observed that post-training removal of small-rate reactions from the network may lead to a substantial difference from the behaviour seen during training. For example, in the dynamics-matching experiments (Section 3), the frequency of sparsified learned oscillators diverged from the objective. This led us to the idea of accounting for sparsification during network training. We experimented with two strategies for training sparse networks. The first strategy is to use \(T^{k_{\text{min}}}\) instead of \(T\) on the forward training pass, but propagate gradients back to \(T\) as if the small-value masking didn't happen2. An alternative approach involves computing the task-specific objective function twice, using the original and sparsified networks, and adding the results together: \(L=\text{Loss}(T)+\text{Loss}(T^{k_{\text{min}}})\) Footnote 2: This is often achieved with the stop_gradient trick: \(T_{\text{min}}=T+\text{stopgrad}(T^{k_{\text{min}}}-T)\) ## 3 Waveform matching In this section we explore the capability of differentiable optimization to find reaction networks that have specific temporal dynamics. Consider a scalar function \(f(t)\) defined on the range \([0,t_{\text{max}}]\).
We would like to find a reaction network \(T\) and initial conditions \(\mathbf{x}(0)\), so that the temporal dynamics of concentration of one of the chemical components (e.g. \(x_{0}\)) matches the target function \(f\) as closely as possible. We define the objective in the following way: \[L_{f}=\frac{1}{t_{\text{max}}}\int_{0}^{t_{\text{max}}}(x_{0}(t)-f(t))^{2}dt \;\approx\;\frac{1}{n_{t}}\sum_{i=0}^{n_{t}}(x_{0}(t_{i})-f(t_{i}))^{2}\] where \(t_{i}\) values are evenly spaced over the \([0,t_{max}]\) interval. We study two different examples of target function dynamics: square wave and three peaks of decreasing intensity. Figure 2 shows target waveforms along with the behaviours of 40 independently trained networks that were using different random parameter initialization. The target loss is applied over the interval \([0,t_{max}]\). We evaluated resulting CRNs on a twice longer time interval to see how the learned behaviour generalizes outside of training time frame. We observed that the proposed procedure is capable to discover compact CRNs that demonstrate an approximation of the target dynamics in concentrations of one of the chemical components. ## 4 Functional networks In this section we explore the capacity of learned CRNs to find approximations to some simple functions. We investigate approximations to binary functions \(f:\mathbb{Z}_{2}^{i}\rightarrow\mathbb{Z}_{2}^{j}\) as well as functions of the form \(f:\mathbb{R}_{>0}^{i}\rightarrow\mathbb{R}_{>0}^{j}\). We refer readers to the supplementary materials further examples of learned functions, such as Analogue-to-Digital converters. Given both measurable and controllable quantities in CRN are real-valued, non-zero concentrations of a chemical, and we use a similar approach to Cardelli et al. (2018) to map the space of concentrations of indicator chemicals to high and low signals in a binary setting. ### Logic Gates There has long been an interest in implementing Boolean operators as reaction networks. Several successful hand-engineered implementations have been demonstrated (Soloveichik et al., 2008; Cardelli et al., 2018), with varying properties and encoding schemes. The CRN design in Cardelli et al. (2018) additionally has the key property of **reusability** - the control chemicals can be changed and the CRN responds appropriately, updating its output. This property is non-trivial as it requires a network to be able to maintain a state, as opposed to use-once circuits. We take inspiration from Cardelli et al. (2018) and demonstrate that our proposed method can learn CRNs that approximate Boolean functions, and can learn them in a **reusable** fashion. We use the same number of CRN chemicals per Boolean operator, and the same encoding scheme for inputs and outputs. Additionally, we initialize non-indicator and non-input chemicals to the same concentrations as in Cardelli et al. (2018). We note that the values of these auxiliary chemicals are Figure 2: Results of training 5-component CRNs to reproduce two different temporal target patterns. We trained 40 CRNs for each pattern using different random initialization. Most runs converged to solutions that produced reasonable approximations of the target waveform. We observed large variance in numbers of reactions constituting sparsified CRNs (\(k_{\text{min}}=10^{-3}\)). A large fraction of ”squares” target runs converged to the oscillating solution, although it was not explicitly required by the training objective. not readily included in Cardelli et al. 
(2018), but we infer them to the best of our ability from the time-concentration graphs. #### 4.1.1 Dual Rail Encoding We use dual rail encoding as in Cardelli et al. (2018). Each input and output variable \(X\) is represented by two unique complementary chemicals, \(X_{hi}\) and \(X_{lo}\), whose concentrations signal the state of variable \(X\). We consider \(X=1\) i.f.f. \(X_{hi}>=1.0-\epsilon\) and \(0<=X_{lo}<=\epsilon\), and \(\bar{X}=0\) otherwise. We choose \(\epsilon=0.1\), however in practice when designing loss functions, we encourage \(X_{hi}\) and \(X_{lo}\) to be as close as possible to one of the two desired states \((1,0)\) or \((0,1)\) at measurement time and often find that learned solutions converge to states much closer than \(\epsilon\). #### 4.1.2 Target & Training We allocate the \(N\) chemicals used in the CRN into the inputs, IN \(:=\{X_{hi},X_{lo},Y_{hi},Y_{lo}\}\), outputs OUT \(:=\{Z_{hi},Z_{lo}\}\) and \(N-6\) auxiliary chemicals AUX \(:=\{A,B,C,D...\}\). We explicitly prevent backpropagation into indices of our reaction tensor \(T\) which would consume or produce the any chemical in IN, ensuring that these act as fixed control chemicals, and can only influence the CRN dynamics as catalysts. We independently train three CRNs to learn the Boolean operators "**AND**", "**OR**" and "**XOR**". In each case, we use as many AUX chemicals as used in Cardelli et al. (2018), which is 3, 3 and 4, respectively. We train our CRN largely using the method outlined in 3, with a few caveats. For each operator, we generate a training set of initial concentrations and timed transitions of the input chemicals IN and matching desired outputs OUT. Inputs \((X,Y)\) can be one of \(\{(1,0),(0,1),(1,1),(0,0)\}\), so the set of transitions consists of \(2^{4}\) possible transitions (e.g. \((1,0)\rightarrow(1,1)\)). During training, we run the CRN for \(T=800\) time, and introduce a transition in the input after every \(T//4\), i.e. at \(T=200,400,600\). Each batch entry in our training set covers four transitions. We then impose the aforementioned waveform loss, only on the output chemicals, but over a period of \(T//8\) prior to the next transition, encouraging the CRN to converge to the correct output for the given inputs just before the next transition. We use an L1 instead of L2 loss to further encourage stability in the outputs. #### 4.1.3 Results & Verification We refer to figure 3 for a sample of the dynamics of the learned CRNs over time. AND, OR and XOR CRNs consist of 20, 16 and 14 reactions with a rate \(>0.1\), respectively, which compares favourably with the functionally equivalent hand-designed CRNs in Cardelli et al. (2018), with 7, 7 and 12 reactions, respectively. In Cardelli et al. (2018), correctness of the designed CRN is proved through a combination of informal reasoning about the circuit, simulation of the circuit using Visual GEC (Cardelli et al., 2016) under both deterministic and under stochastic conditions, as well as formally verifying the circuit using PRISM (Kwiatkowska et al., 2011). The mechanics of our learned circuits are non-trivial to reverse-engineer and formally verify in similar fashion. Instead, we perform two tests on the stability of the learned CRN. Firstly, we evaluate the behaviour of the CRN on the a modified training dataset iterated for \(T_{eval}=100*T_{train}\), with the transition points placed accordingly every \(T_{eval}/4\). 
Secondly, we iterate the CRN again for \(T_{eval}\) steps, but continuously uniformly sample the "time to next transition" from \(U_{[T_{train}//4,T_{train}//2]}\), as well as randomly sample the next input state. All our learned Boolean CRNs output the correct values (under our \(\epsilon\) definition) during these tests, suggesting convergence to a stable point. ### Seven segment display mapping One more complex logical mapping is the Seven-segment digit mapping (Figure 4), where the combinations of 4 input bits are mapped to 7 output segments activations. We define a _low_ and _high_ floating point parameters, representing the initial values of 4 input chemicals for the 0 and 1 input case respectively. For instance, the input encoding "1010" is mapped to four chemicals initialized as follows: _(high, low, high, low)_. We decided to keep the total mass of chemicals equal across different inputs. To do that, we add another input whose initial value is equal to \(1\) minus the sum of the 4 input chemical concentrations for that instance. Therefore, the resulting input encoding consists of \(n+1=5\) chemicals. The output is defined by the final value of 7 output chemical distributions (note we _do not_ request any target value for the final input chemicals). We choose to train the task with a _squared hinge loss_ on the output chemicals: \[\begin{split}\textit{TranslateAndScale}(x)&=(2x- \textit{low}-\textit{high})/(\textit{high}-\textit{low})\\ \textit{SquaredHingeLoss}(x_{i},y_{i})&=(\text{Max} (0,1-\textit{TranslateAndScale}(x_{i})\cdot y_{i}))^{2}\end{split} \tag{3}\] where the vector \(y\) has its value set to \(-1\) and \(+1\) for output values of \(0\) and \(1\) respectively. The squared Hinge loss effectively penalizes the output chemicals if they do not get lower than _low_ or higher than _high_ if the target output is 0 or 1 respectively. We choose to apply this loss on the latter half of the time unfolding (as opposed to only the final result), encouraging a more stable final configuration. Finally, we add 4 more chemicals (initialized to zero) as auxiliary channels. To evaluate whether the task is solved, we then threshold the output concentrations and consider them 0 or 1 if they are below or above this threshold. The arbitrary choice of a midpoint threshold = \((\textit{high}+\textit{low})/2\) appears to work well and with it we achieve a perfect fit on this task. ### Single-chamber winner-takes-all We qualify this task with "single-chamber" as we present a more complex case in the next section. The premise of the task, also sometimes referred to "approximate-majority", is to treat two chemicals \(A,B\) as both input and output, with initial concentrations \(A_{0},B_{0}\in[0,1]\), and to have the desired final, converged, state of our chemicals to be: \[\lim_{t\rightarrow\inf}(A_{t},B_{t})=\begin{cases}(1.0,0.0)&A_{0}>B_{0}\\ (0.0,1.0)&A_{0}<B_{0}\end{cases}\] This definition also implies convexity in the dynamics of the CRN, but we don't explicitly enforce this. Using Cardelli et al. (2018) as a heuristic for the upper bound on the number of chemicals required for such a reaction-network, we design the reaction with only one additional auxiliary chemical, initialized with a concentration of 0. We train the model by initializing it with sampled \(A_{0},B_{0}\)\(U_{[0,1]}\), and apply an L2 loss at time \(T=200\), penalising the deviation from the desired states defined above. 
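A sketch of how this training objective could be set up is shown below; `rollout`, which stands in for integrating the CRN ODE of Eqn. (1) up to time \(T\), is an assumed helper and not part of the paper's code.

```python
import numpy as np

def wta_batch(batch_size, rng):
    """Initial concentrations (A, B, one auxiliary chemical) and target final states."""
    A0 = rng.uniform(0.0, 1.0, size=batch_size)
    B0 = rng.uniform(0.0, 1.0, size=batch_size)
    aux0 = np.zeros(batch_size)                          # auxiliary chemical starts at 0
    x0 = np.stack([A0, B0, aux0], axis=1)
    target = np.stack([(A0 > B0).astype(float),          # desired final A
                       (A0 < B0).astype(float)], axis=1) # desired final B
    return x0, target

def wta_loss(x0, target, T=200):
    xT = rollout(x0, T)                                  # assumed: integrate the CRN ODE to time T
    return np.mean((xT[:, :2] - target) ** 2)            # L2 penalty on A and B only
```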
In figure 5 we show the dynamics of the reaction network, as well as it's time evolution for various initialisations of \(X_{0}\) and \(Y_{0}\), plotting the evolution of the difference between the two concentrations normalized by the magnitude of their sum at \(T=0\). We also graphically visualise the sparsified CRN. Figure 4: Seven-segment digit mapping task. The 4 input bits (red and blue squares) are mapped to 7 output gates each representing a segment (black line) in a hexadecimal display. Figure 3: Dynamics of learned CRNs approximating the Boolean operators AND, OR and XOR. We are keen to note that the sparsified version of the learned reaction network for approximate majority is **identical** to the hand-designed, formally verified one in Angluin et al. (2007). ## 5 Multi-chamber reaction-diffusion models We now introduce the concept of _membranes_ and their related diffusion of certain chemicals. Traditionally, CRNs have often had their reaction component paired with a _diffusion_ component, where chemicals on a space or through a membrane would naturally diffuse with varying rates (Turing, 1952; Kondo and Miura, 2010; Mordvintsev et al., 2021). In this paper, we focus on diffusion occurring on permeable membranes, and more specifically on _rings_ of chambers, where every chamber has a right and a left neighbour (with the exception of the case with only two chambers where there is only one neighbour). We extend our ODE system to take into account the contribution on the change of rate for any chemicals passing through membranes. We construct a diffusion trainable vector \(V\in\mathbb{R}^{N}\) (we only need one single value for each chemical to represent their diffusion through a membrane) and construct the diffusion rate vector: \(D=\mathrm{Sigmoid}(V)\). Now, we can create systems with different chambers, separated by membranes. Each chamber has their own concentration of chemicals that react among themselves. Different chambers connected by a membrane diffuse chemicals at different rates, based on the vector \(D\). The rate of change of each chemical component now becomes: \[{x_{b}^{i}}^{\prime}=\sum_{j}^{M_{i}}(x_{b}^{j}-x_{b}^{i})D_{b}+\sum_{a,c}x_{ a}x_{c}T_{c,a,b} \tag{4}\] where \(i\) is a chamber identifier and \(M_{i}\) is the set of chambers connected to i. For this task every membrane is identical, but this system can generalize to different diffusion rates for different membranes if needed. ### Multi-chamber winner takes all We revisit the winner takes all task we introduced in section 4.3 and render it a multi-chamber task. In this version, we randomly initialize a chemical A within two values _low_ and _high_ across different chambers. The task is to suppress A on all the chambers where their concentration was not the highest, and highlight A on the winner chamber. This task can also be seen as a variant of _leader election_ in anonymous rings (Xu and Jeavons, 2015), where the leader needs to suppress all other nodes, with the added complexity that a leader must be chosen based on the input configuration of a specific group of chemicals. The capacity of performing leader election is an extremely important Figure 5: **(a)** Learned single-chamber winner-takes-all CRN. Each line corresponds to the evolution of one instantiation of the chamber. Note the magnitude of the measured quantity does not converge exactly to 1.0, due to the small-magnitude secondary reactions. When initial concentrations are very close (i.e. 
the measured quantity is close to \(0.0\)) the secondary reactions cause incorrect convergence. **(b)** Graphical representation of the learned reaction network, showing only reactions rates \(k_{r}>0.05\) (n.b. depicted reactions all have \(k_{r}=1.0\)). The sparsified learned reaction network exactly matches the approximate majority network derived in Angluin et al. (2007) feature of most biological systems, as it enables differentiation of roles. For instance, analyses of _Drosophila_ have been demonstrated to perform leader election routines (Afek et al., 2011; Barad et al., 2011, 2010; Jacobsen et al., 1998). Task descriptionWe construct \(n\) chambers connected as a ring through membranes sharing the same diffusion vector \(D\). During training we set \(n=5\) exclusively and we evaluate for more out-of-training configurations. The chemical A is randomly initialized within _low_\(=0.01\) and _high_\(=0.9\) in each chamber. We also initialize three more auxiliary chemicals (B,C,D) to zero everywhere and enforce the input configuration to have a total concentration (sum of all chemicals) for each chamber to be equal to 1. We do so by adding a chemical E initialized as \(E^{i}=1.-A^{i}\) for each chamber i. We observed this initialization to be critical for a successful training of this system. We apply a squared Hinge loss (Equation 3), with lower and upper bound of _low_ and _high_ respectively, for all steps after the first \(1/3\)rd, encouraging the model to find a more stable final configuration. ResultsGiven a _winner threshold_\(t\), a batch of final configurations x of chemicals A per chamber and a batch of target winner and losers configurations y, we define _accuracy_\((t,x,y)\) as the percentage of instances \(b\) where only the winner chemical on \(y_{b}\) is above the threshold t in \(x_{b}\). Figure 5(a) shows the different accuracy results for varying t and different numbers of chambers. Losers are consistently suppressed to \(\sim 0\) for all chambers, while more chambers increase the final expected concentration of the winner. This is likely due to having a different total concentration of chemicals in the system. Table 1 shows accuracy with different thresholds and chambers. Thresholds are extracted using the eval dataset (n=10000) and then tested on a separate dataset (n=10000). The "all" column represents one threshold used for all possible numbers of chambers, averaging the resulting accuracy. Figure 5(b) shows a complete mapping of inputs to outputs for the case of two chambers. The Appendix C shows the example run and the full description of the resulting system. ## 6 Discussion and Limitations We propose a new method of designing compact sparse Chemical Reaction Networks for solving a variety of computational problems. Previous work on CRN design focuses on construction of chemical counterparts of traditional basic computational units, such as logic gates, and manually combining them into circuits. In contrast, we show that end-to-end differentiable optimization is a viable approach to the objective-driven synthesis of complete circuit. 
This may enable efficient design \begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline & & \multicolumn{6}{c}{Number of chambers} \\ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & all \\ \hline Threshold & 0.21 & 0.31 & 0.36 & 0.52 & 1.04 & 1.19 & 1.5 & 0.31 \\ Eval Accuracy (\%) & 99.95 & 99.99 & 99.87 & 99.63 & 99.10 & 97.73 & 96.01 & 97.19 \\ Test Accuracy (\%) & 99.97 & 100.00 & 99.88 & 99.68 & 98.87 & 97.63 & 95.36 & 97.27 \\ \hline \hline \end{tabular} \end{table} Table 1: Accuracies on the multi-chamber winner takes all task. Figure 6: Plot (a) shows accuracy results on an eval dataset (n=10000) for varying values of winner thresholds and number of chambers (Chn). The number of chambers used during training is 5. Plot (b) shows the complete input-output mapping of the resulting network for the case with two chambers. of reaction circuits that can be implemented on a variety of physical substrates, from molecular to community scales. We see following limitations and future research directions for this work: (A) All networks described here operate in a bulk, deterministic setting. Low molecular counts make systems stochastic and noisy, which brings both new challenges and opportunities. (B) Our sparsity regularization method removes all low-rate reactions from the system, although such reactions may sometimes be necessary for efficient implementation of the target function.
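As an illustration of the dynamics defined in Equation (4), the following minimal NumPy sketch integrates a multi-chamber system on a ring with forward Euler; the random reaction tensor, the sigmoid-derived diffusion rates and the step size are arbitrary stand-ins for trained parameters, and the non-negativity and sparsity constraints used during training are omitted.

```python
import numpy as np

def rhs(x, T, D, neighbours):
    """Right-hand side of Equation (4).

    x          -- concentrations, shape (chambers, chemicals)
    T          -- reaction tensor T[c, a, b]
    D          -- per-chemical membrane diffusion rates, shape (chemicals,)
    neighbours -- list of neighbouring-chamber indices for each chamber
    """
    # reaction term: sum_{a,c} x_a x_c T_{c,a,b}, evaluated independently in each chamber
    reaction = np.einsum('ia,ic,cab->ib', x, x, T)
    # diffusion term: sum over neighbours j of (x_b^j - x_b^i) * D_b
    diffusion = np.stack([sum((x[j] - x[i]) * D for j in nbrs)
                          for i, nbrs in enumerate(neighbours)])
    return reaction + diffusion

rng = np.random.default_rng(0)
n_chambers, n_chem = 5, 5
T = rng.normal(scale=0.1, size=(n_chem, n_chem, n_chem))   # untrained stand-in for the learned tensor
D = 1.0 / (1.0 + np.exp(-rng.normal(size=n_chem)))         # D = Sigmoid(V) for a random V
ring = [[(i - 1) % n_chambers, (i + 1) % n_chambers] for i in range(n_chambers)]

x = rng.random((n_chambers, n_chem))
for _ in range(1000):                                       # simple forward-Euler integration
    x = x + 0.01 * rhs(x, T, D, ring)
```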
2302.14176
Reinforcement Learning with Depreciating Assets
A basic assumption of traditional reinforcement learning is that the value of a reward does not change once it is received by an agent. The present work forgoes this assumption and considers the situation where the value of a reward decays proportionally to the time elapsed since it was obtained. Emphasizing the inflection point occurring at the time of payment, we use the term asset to refer to a reward that is currently in the possession of an agent. Adopting this language, we initiate the study of depreciating assets within the framework of infinite-horizon quantitative optimization. In particular, we propose a notion of asset depreciation, inspired by classical exponential discounting, where the value of an asset is scaled by a fixed discount factor at each time step after it is obtained by the agent. We formulate a Bellman-style equational characterization of optimality in this context and develop a model-free reinforcement learning approach to obtain optimal policies.
Taylor Dohmen, Ashutosh Trivedi
2023-02-27T22:28:58Z
http://arxiv.org/abs/2302.14176v1
# Reinforcement Learning with Depreciating Assets ###### Abstract A basic assumption of traditional reinforcement learning is that the value of a reward does not change once it is received by an agent. The present work forgoes this assumption and considers the situation where the value of a reward decays proportionally to the time elapsed since it was obtained. Emphasizing the inflection point occurring at the time of payment, we use the term _asset_ to refer to a reward that is currently in the possession of an agent. Adopting this language, we initiate the study of depreciating assets within the framework of infinite-horizon quantitative optimization. In particular, we propose a notion of asset depreciation, inspired by classical exponential discounting, where the value of an asset is scaled by a fixed discount factor at each time step after it is obtained by the agent. We formulate a Bellman-style equational characterization of optimality in this context and develop a model-free reinforcement learning approach to obtain optimal policies. ## 1 Introduction _Time preference_(Frederick et al., 2002; Loewenstein and Jon, 1992) refers to the tendency of rational agents to value potential _desirable outcomes_ in proportion to the expected time before such an outcome is realized. In other words, agents prefer to get a future reward sooner rather than later, all else being equal, and similarly, agents prefer to experience negative outcomes later rather than sooner. This phenomenon is typically codified in mathematical models in terms of discounting (Shapley, 1953) and has been applied to a diverse array of disciplines concerned with optimization such as economics (Heal, 2007; Philibert, 1999), game theory (Filar and Vrieze, 1996), control theory (Puterman, 1994), and reinforcement learning (Sutton and Barto, 2018). These models focus on the situation in which an agent moves through a stochastic environment in discrete time by selecting an action to perform at each time step and receiving an immediate reward based on the selected action and environmental state. In particular, we consider exponential discounting, as introduced by Shapley (1953), in which the agent carries this process on ad infinitum to generate an infinite sequence of rewards \(\langle r_{n}\rangle_{n=1}^{\infty}\) with the goal of maximizing, with respect to a discount factor \(\lambda\in(0,1)\), the discounted sum \(\sum_{n=1}^{\infty}\lambda^{n-1}r_{n}\). The discount factor is selected as a parameter and quantifies the magnitude of the agent's time preference. A notable characteristic of the aforementioned discounted optimization framework is an implicit assumption that the utility of a reward remains constant once it is obtained by a learning agent. While this seemingly innocuous supposition simplifies the model and helps to make it amenable to analysis, there are a number of scenarios where such an assumption is not appropriate. Consider, for instance, the most basic and ubiquitous of rewards used to incentivize human behaviors: money. The value of money tends to decay with time according to the rate of inflation, and the consequences of this decay are a topic of wide spread interest and intense study (Beckerman, 1991; Comley, 2015; Fergusson, 2010; Hulten and Wykoff, 1980). 
_Recognizing the fundamental role such decay has in influencing the dynamics of economic systems throughout the world, we consider its implications with respect to optimization and reinforcement learning in Markov decision processes._ ### Asset Depreciation When discussing a situation with decaying reward values, it is useful to distinguish between potential future rewards and actual rewards that have been obtained. As such, we introduce the term _asset_ to refer to a reward that has been obtained by an agent at a previous moment in time. Using this terminology, the present work may be described as an inquiry into optimization and learning under the assumption that assets _depreciate_. Depreciation, a term borrowed from the field of finance and accounting (Burt, 1972; Wright, 1964), describes exactly the phenomenon where the value of something decays with time. We propose a notion of depreciation that is inspired by traditional discounting and is based on applying the same basic principle of time preference to an agent's history in addition to its future. More precisely, we consider the situation in which an agent's behavior is evaluated with respect to an infinite sequence of cumulative accrued assets, each of which is discounted in proportion to how long ago it was obtained. That is, we propose evaluating the agent in terms of functions on the sequence of assets \[\left\langle\sum_{k=1}^{n}r_{k}\gamma^{n-k}\right\rangle_{n=1}^{\infty}\,,\] where \(\gamma\in(0,1)\) is a discount factor, rather than on the sequence of rewards \(\langle r_{n}\rangle_{n=1}^{\infty}\). To motivate the study of depreciation and illustrate its naturalness, we examine the following hypothetical case-study. **Example 1** (Used Car Dealership).: _Consider a used car dealership with a business model involving purchasing used cars in locations with favorable regional markets, driving them back to their shop, and selling them for profit in their local market. Suppose that our optimizing agent is an employee of this dealership, tasked with managing capital acquisition. More specifically, this employee's job is to decide the destination from which the next car should be purchased, whenever such a choice arises. The objective of the agent is to maximize the sum of the values of all vehicles in stock at the dealership over a discounted time-horizon for some discount factor \(\lambda\in(0,1)\). Note that the discounted time-horizon problem is equivalent to the problem of maximizing expected terminal payoff of the process given a constant probability \((1-\lambda)\) of terminating operations at any point._ _It has long been known [1, 13] that cars tend to continually depreciate in value after being sold as new, and so any reasonable model for the value of all vehicles in the inventory should incorporate some notion of asset depreciation. Suppose that another discount factor \(\gamma\in(0,1)\) captures the rate at which automobiles lose value per unit of time. Considering \(\gamma\)-depreciated rewards and \(\lambda\)-discounted horizon, the goal of our agent can be defined as a discounted depreciating optimization problem. Alternatively, one may seek to optimize the long run average (mean payoff) of \(\gamma\)-depreciated rewards._ ### Discounted Depreciating Payoff Consider the sequence \(x=(3,4,5,3,4,5,...)\) of (absolute) rewards accumulated by the agent. 
In the presence of depreciation, the cumulative asset values at various points in time follow the sequence \[3,\ (3\gamma+4),\ (3\gamma^{2}+4\gamma+5),\ (3\gamma^{3}+4\gamma^{2}+5\gamma+3),\ (3\gamma^{4}+4\gamma^{3}+5\gamma^{2}+3\gamma+4),\ldots\] For the \(\lambda\)-discounted time horizon, the value of the assets can be computed as follows: \[3+\lambda(3\gamma+4)+\lambda^{2}(3\gamma^{2}+4\gamma+5)+\lambda^{3}(3\gamma^{3}+4\gamma^{2}+5\gamma+3)+\lambda^{4}(3\gamma^{4}+4\gamma^{3}+5\gamma^{2}+3\gamma+4)+\cdots\] \[=(3+3\lambda\gamma+3\lambda^{2}\gamma^{2}+\cdots)+(4\lambda+4\lambda^{2}\gamma+4\lambda^{3}\gamma^{2}+\cdots)+(5\lambda^{2}+5\lambda^{3}\gamma+5\lambda^{4}\gamma^{2}+\cdots)+(3\lambda^{3}+3\lambda^{4}\gamma+3\lambda^{5}\gamma^{2}+\cdots)+\cdots\] \[=3(1+\lambda\gamma+\lambda^{2}\gamma^{2}+\cdots)+4\lambda(1+\lambda\gamma+\lambda^{2}\gamma^{2}+\cdots)+5\lambda^{2}(1+\lambda\gamma+\lambda^{2}\gamma^{2}+\cdots)+3\lambda^{3}(1+\lambda\gamma+\lambda^{2}\gamma^{2}+\cdots)+\cdots\] \[=\frac{3+4\lambda+5\lambda^{2}+3\lambda^{3}+\cdots}{1-\lambda\gamma}.\] Notice that this \(\gamma\)-depreciated sum is equal to the \(\lambda\)-discounted sum when immediate rewards are scaled by a factor \(\frac{1}{1-\lambda\gamma}\). We show that this is not a mere coincidence, and prove that this equality also holds for general MDPs. ### Average Depreciating Payoff Next, consider the long-run average of the depreciating asset values as the limit inferior of the sequence \[3,\ \frac{3\gamma+4}{2},\ \frac{3\gamma^{2}+4\gamma+5}{3},\ \frac{3\gamma^{3}+4\gamma^{2}+5\gamma+3}{4},\ \frac{3\gamma^{4}+4\gamma^{3}+5\gamma^{2}+3\gamma+4}{5},\ \ldots\] Based on classical Tauberian results [1], it is tempting to conjecture that the \(\lambda\)-discounted, \(\gamma\)-depreciating value converges to this mean as \(\lambda\to 1\), e.g. \[\lim_{\lambda\to 1}(1-\lambda)\frac{3+4\lambda+5\lambda^{2}}{(1-\lambda\gamma)(1-\lambda^{3})}=\lim_{\lambda\to 1}\frac{3+4\lambda+5\lambda^{2}}{(1-\lambda\gamma)(1+\lambda+\lambda^{2})}=\frac{3+4+5}{3(1-\gamma)}.\] Indeed, we prove that this conjecture holds. Contributions. The highlights of this paper are given below. * We initiate the study of discounted and average payoff optimization in the presence of depreciation dynamics. * We characterize the optimal value of the discounted depreciating payoff via Bellman-style optimality equations and use them to show that stationary deterministic policies are sufficient for achieving optimality. Moreover, our characterization enables computing the optimal value and an optimal policy in polynomial time in the planning setting. * The optimality equation also facilitates a formulation of a variant of Q-learning that is compatible with asset depreciation, thereby providing a model-free reinforcement learning approach to obtain optimal policies in the learning setting. * We show that the classical Tauberian theorem relating discounted and average objectives can be extended to the depreciating reward setting. This result allows us to establish the sufficiency of stationary deterministic policies for optimality with respect to the average depreciating payoff. Organization. We begin by introducing the necessary notation and reviewing the relevant technical background. Section 3 develops results on the discounted depreciating payoff, while Section 4 develops results for the average depreciating objective. We discuss closely related work in Section 5 and recap our contributions in the concluding section. 
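The arithmetic of this running example is easy to check numerically; the short Python sketch below (an illustration only, with \(\lambda=0.95\) and \(\gamma=0.9\) chosen arbitrarily) computes the depreciated asset sequence for the periodic reward stream 3, 4, 5, ... and compares the truncated discounted depreciating sum with the scaled discounted sum, as well as the running average of the assets with \((3+4+5)/(3(1-\gamma))\).

```python
import numpy as np

lam, gamma = 0.95, 0.9
rewards = np.tile([3.0, 4.0, 5.0], 6000)   # the reward stream 3, 4, 5, 3, 4, 5, ...
n = rewards.size

# cumulative asset values a_k = sum_{i <= k} r_i * gamma^(k - i)
assets = np.empty(n)
acc = 0.0
for k, r in enumerate(rewards):
    acc = gamma * acc + r                  # previously held assets depreciate, the new reward is added
    assets[k] = acc

discounts = lam ** np.arange(n)
print(np.dot(discounts, assets))                           # discounted depreciating sum
print(np.dot(discounts, rewards) / (1.0 - lam * gamma))    # scaled discounted sum: the two agree

running_avg = np.cumsum(assets) / np.arange(1, n + 1)
print(running_avg[-1], (3 + 4 + 5) / (3 * (1.0 - gamma)))  # both approach 4 / (1 - gamma) = 40
```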
## 2 Preliminaries Let \(\mathbb{R}\) be the set of real numbers and \(\mathbb{N}\) the set of natural numbers. For a set \(X\), we write \(|X|\) to denote its cardinality and \(\operatorname{Dist}(X)\) for the set of all probability distributions over \(X\). A point distribution over \(X\) is one that assigns probability \(1\) to a unique element of \(X\) and probability \(0\) to all others. The technical portions of the paper are carried out within the standard mathematical framework of asymptotic optimization and learning in environments modeled as finite Markov decision processes. Our presentation follows the conventions set in the standard textbooks on the optimization and learning (Feinberg and Shwartz, 2012; Filar and Vrieze, 1996; Puterman, 1994; Sutton and Barto, 1998). ### Markov Decision Processes A (finite) _Markov decision process_ (MDP) \(M\) is a tuple \((S,A,T,R)\) in which \(S\) is a finite set of states, \(A\) is a finite set of actions, \(T:(S\times A)\rightarrow\operatorname{Dist}(S)\) is a stochastic transition function specifying, for any \(s,t\in S\) and \(a\in A\) the conditional probability \(T(t\mid s,a)\) of moving to state \(t\) given that the current state is \(s\) and that action \(a\) has been chosen, and \(R:(S\times A)\rightarrow\mathbb{R}\) is a real-valued reward function mapping each state-action pair to a numerical valuation. For any function \(f:S\rightarrow\mathbb{R}\), i.e. any random variable on the state space of the MDP, we write \(\mathbb{E}_{T}\left[f(t)\mid s,a\right]\) to denote the conditional expectation \(\sum_{t\in S}f(t)T(t\mid s,a)\) of \(f\) on the successor state, given that the agent has selected action \(a\) from state \(s\). A path in \(M\) is a sequence \(s_{1}a_{1}s_{2}\cdots a_{n}s_{n+1}\) of alternating states and actions such that \(0<T(s_{k+1}\mid s_{k},a_{k})\) at every index. Let \(\mathcal{F}(M)\) denote the set of all finite paths in \(M\) and \(\mathcal{I}(M)\) denote the set of all infinite paths in \(M\). Payoffs, Policies, and Optimality.We focus on infinite duration quantitative optimization problems where an outcome may be concretized as an infinite path in the MDP. Such an outcome is evaluated relative to some mapping into the real numbers \(\mathcal{I}(M)\rightarrow\mathbb{R}\) called a payoff. A policy on \(M\) is a function \(\pi:\mathcal{F}(M)\rightarrow\operatorname{Dist}(A)\) that chooses an a distribution over the action set, given a finite path in \(M\). Fixing a policy \(\pi\) induces, for each state \(s\), a unique probability measure \(\mathbb{P}_{s}^{\pi}\) on the probability space over the Borel subsets of \(\mathcal{I}(M)\). This enables the evaluation of a policy, modulo a payoff and initial state \(s\), in expectation \(\mathbb{E}_{s}^{\pi}\). Let \(\Pi^{M}\) be the set of all policies on the MDP \(M\). A policy is optimal for a payoff if it maximizes, amongst all other policies, the expected value of that payoff, and this maximal expectation is called the value of the payoff on \(M\). Strategic Complexity.The strategic complexity of a payoff characterizes the necessary structure required for a policy to be optimal. A qualitative aspect of strategic complexity is based on whether or not there exist environments for which optimal policies are necessarily probabilistic (_mixed_). A policy is _deterministic (pure_) if returns a point distribution for every input. A policy is stationary if \(\pi(s_{1}a_{1}\cdots a_{n-1}s_{n})=\pi(s_{n})\) holds at every time \(n\). 
The class of deterministic stationary policies is of special interest since there are finitely many such policies on any finite MDP; we consider these policies as functions \(S\to A\). ### Discounted and Average Payoffs Given a path \(s_{1}a_{1}s_{2}\cdots\) in an MDP, two well-studied objectives are the discounted payoff, relative to a discount factor \(\lambda\in(0,1)\), and the average payoff, defined as \[\sum_{n=1}^{\infty}\lambda^{n-1}R(s_{n},a_{n}),\text{ and }\] (Discounted Payoff) \[\liminf_{n\rightarrow\infty}\frac{1}{n}\sum_{k=1}^{n}R(s_{k},a_{k }).\] (Average Payoff) The discounted value and average value functions are defined \[V_{\lambda}(s) =\sup_{\pi\in\Pi^{M}}\mathbb{E}^{\pi}\left[\sum_{n=1}^{\infty} \lambda^{n-1}R(s_{n},a_{n})\right],\] (Discounted Value) \[V(s) =\sup_{\pi\in\Pi^{M}}\mathbb{E}^{\pi}\left[\liminf_{n\rightarrow \infty}\sum_{k=1}^{n}\frac{R(s_{k},a_{k})}{n}\right].\] (Average Value) A stronger notion of optimality, specific to the discounted payoff, is Blackwell optimality. A policy \(\pi\) is Blackwell optimal if there exists a discount factor \(\lambda_{0}\in(0,1)\) such that \(\pi\) is optimal for the discounted payoff with any discount factor in the interval \([\lambda_{0},1)\). An alternative characterization of the discounted value is as the unique solution to the optimality equation \[V_{\lambda}(s)=\max_{a\in A}R(s,a)+\lambda\mathbb{E}_{T}\left[V_{\lambda}(t) \mid s,a\right],\] which is the starting point for establishing the following result on the complexity of discounted and average payoffs (Feinberg and Shwartz, 2012; Filar and Vrieze, 1996; Puterman, 1994). **Theorem 1**.: _Both discounted and average payoffs permit deterministic stationary optimal policies. Moreover, optimal values for both payoffs can be computed in polynomial time._ ### Reinforcement Learning Reinforcement learning (RL) (Sutton and Barto, 2018) is a sampling-based optimization paradigm based on the feedback received from the environment in the form of scalar rewards. The standard RL scenario assumes a discounted payoff, and model-free approaches typically leverage the _state-action value_ or _Q-value_: defined as the optimal value from state \(s\), given that action \(a\) has been selected, and is the solution of the equation \[Q_{\lambda}(s,a)=R(s,a)+\lambda\mathbb{E}_{T}\left[V_{\lambda}(t)\mid s,a \right].\] The Q-value provides the foundation for the classic Q-Learning algorithm (Watkins and Dayan, 1992), which learns an optimal policy by approximating \(Q_{\lambda}\) with a sequence \(Q_{\lambda}^{n}\) of maps which asymptotically converge to \(Q_{\lambda}\). In particular, \(Q_{\lambda}^{1}\) is initialized arbitrarily and then the agent explores the environment by selecting action \(a=\operatorname*{argmax}_{a\in A}Q_{\lambda}^{n}(s,a)\) from the current state \(s\) and performing the update \[Q_{\lambda}^{n+1}(s,a)\leftarrow Q_{\lambda}^{n}(s,a)+\alpha_{n}\left(R(s,a)+ \lambda V_{\lambda}^{n}(t)-Q_{\lambda}^{n}(s,a)\right), \tag{1}\] in which \(t\) is the next state as determined by the outcome of sampling the conditional distribution \(T(\cdot\mid s,a)\), the family of \(\alpha_{n}\in(0,1)\) are time-dependent parameters called learning rates, and \(V_{\lambda}^{n}(t)=\max_{a\in A}Q_{\lambda}^{n}(t,a)\). The following theorem gives a sufficient condition for asymptotic convergence of the \(Q\)-learning algorithm. 
**Theorem 2** (Watkins and Dayan [1992]).: _If every state-action pair in the environmental decision process is encountered infinitely often and the learning rates \(0\leq\alpha_{n}<1\) satisfy the Robbins-Monroe conditions \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\) and \(\sum_{n=1}^{\infty}\alpha_{n}^{2}<\infty\), then \(Q_{\lambda}^{n+1}(s,a)\to Q_{\lambda}(s,a)\) almost surely as \(n\to\infty\)._ ### Depreciating Assets We define variations on the discounted and average payoffs based on the idea that the value of an asset decays geometrically in proportion with the amount of time elapsed since it was obtained as a reward. That is, we consider the situation in which a payoff is determined not as a function of the reward sequence \(\langle R(s_{n},a_{n})\rangle_{n=1}^{\infty}\), but rather of the sequence \[\left\langle\sum_{k=1}^{n}R(s_{k},a_{k})\gamma^{n-k}\right\rangle_{n=1}^{\infty}\] of exponential recency-weighted averages of the agent's assets, where \(\gamma\in(0,1)\) is a discount factor. ## 3 Discounted Depreciating Payoff In this section, we study discounted optimization, for \(\lambda\in(0,1)\), under depreciating asset dynamics. The payoff in this setting is captured by the expression \[\sum_{n=1}^{\infty}\lambda^{n-1}\sum_{k=1}^{n}R(s_{k},a_{k})\gamma^{n-k},\ \ \ \text{(Discounted Depreciating Payoff)}\] which has a corresponding value function \[V_{\lambda}^{\gamma}(s)=\sup_{\pi\in\Pi^{M}}\mathbb{E}_{s}^{\pi}\left[\sum_{n=1}^{\infty}\lambda^{n-1}\sum_{k=1}^{n}R(s_{k},a_{k})\gamma^{n-k}\right].\] Let us now return to the used car dealership example. **Example 2** (Used Car Dealership Cont.).: _Recognizing that cars depreciate continually after their first purchase, the employee realizes that their model should incorporate a notion of asset depreciation. After a bit of market research, the employee selects another discount factor \(\gamma\in(0,1)\) to capture the rate at which automobiles typically lose value over a given time step. Using both discount factors \(\lambda\) and \(\gamma\), the employee can model the scenario as a discounted depreciating optimization problem._ _For the sake of simplicity, suppose that there are only two locations \(s_{1}\) and \(s_{2}\) from which to choose the next target market, and that the only point where the employee has more than one possible action is at the dealership \(s_{d}\) (from where they can choose action \(a_{1}\) to go to \(s_{1}\) or \(a_{2}\) to go to \(s_{2}\)). Realizing that it is unreasonable to plan without expecting unforeseen delays, the employee also introduces two parameters \(\rho_{1}\) and \(\rho_{2}\), which are success rates for buying a desired vehicle in \(s_{1}\) and \(s_{2}\), respectively. Given that the agent is in location \(s_{i}\), the rate \(\rho_{i}\) is interpreted as the probability that they find a seller and purchase a vehicle before the end of the day, and thus \(1-\rho_{i}\) is the probability that they fail to do so. This situation is represented graphically as a finite MDP in Figure 1, where actions are displayed in red, transition probabilities in blue, and immediate rewards (i.e. car values when they are stocked) in green. If an action is omitted from an edge label, then there is only one available action. If a transition probability is omitted, then the transition is deterministic, i.e. occurs with probability 1. If a reward value is omitted, then the reward obtained is 0._ _In traditional discounted optimization, the discount factor \(\lambda\) imposes a certain type of trade-off. 
Suppose, for instance, that \(\rho_{1}\) is large while \(r_{1}\) is small and that \(\rho_{2}\) is small while \(r_{2}\) is large. Then a small discount factor indicates that it may payoff more to take action \(a_{1}\) since it is likely that taking \(a_{2}\) will result in significant delays and thus diminish the value of the eventual reward \(r_{2}\). On the other hand, if the discount factor is close to 1, then it may be worth it for the agent to accept the high probability of delay since the eventual discounted value will be closer to \(r_{2}\)._ _Adding in the depreciation dynamics with discount factor \(\gamma\), the trade-off remains, but to what extent depreciation alters the dynamics of a given environment and policy is unclear. Intuition may suggest that introducing depreciation to discounted optimization should only make the risk-reward trade-off sharper, and one might further conjecture that when \(\gamma\) is close to 0, the higher decay rate of cumulative asset value should drive an agent towards riskier behavior. On the other hand, it is plausible that a depreciation factor close to one might embolden the agent towards similar risky actions because the opportunity cost of such behavior diminishes as assets are accumulated in greater quantities. As we proceed with our analysis of the discounted depreciating payoff we attempt to shed light on questions like this and get to the core of what depreciation entails in this context._ Our first main result establishes a Bellman-type equational characterization the discounted depreciating value. **Theorem 3** (Optimality Equation).: _The discounted depreciating value is the unique solution of the equation_ \[V_{\lambda}^{\gamma}(s)=\max_{a\in A}\frac{R(s,a)}{1-\lambda\gamma}+\lambda E _{T}\left[V_{\lambda}^{\gamma}(t)\left|\,s,a\right|\right]. \tag{2}\] Proof.: By splitting the term \(\lambda^{n-1}\) occurring in the definition of the discounted depreciating payoff into the product \(\lambda^{n-k}\lambda^{k-1}\) and distributing these factors into the inner summation, we obtain the expression \[\sum_{n=1}^{\infty}\sum_{k=1}^{n}\lambda^{k-1}R(s_{k},a_{k})\lambda^{n-k} \gamma^{n-k}. \tag{3}\] The next step of the proof relies on the following classical result of real analysis (c.f. Theorem 3.50 of Rudin [1976]). Figure 1: An MDP for the discounted depreciating optimization problem of the car dealership. Mertens' Theorem. _Let \(\sum_{n=1}^{\infty}x_{n}=X\) and \(\sum_{n=1}^{\infty}y_{n}=Y\) be two convergent series of real numbers. If at least one of the given series converges absolutely, then their Cauchy product converges to the product of their limits:_ \[\left(\sum_{n=1}^{\infty}x_{n}\right)\left(\sum_{n=1}^{\infty}y_{n}\right)= \sum_{n=1}^{\infty}\sum_{k=1}^{n}x_{k}y_{n-k}=XY.\] The series (3) may be factored into the Cauchy product \[\left(\sum_{n=1}^{\infty}(\lambda\gamma)^{n-1}\right)\left(\sum_{n=1}^{\infty }\lambda^{n-1}R(s_{n},a_{n})\right), \tag{4}\] and since both terms in this Cauchy product converge absolutely, Mertens' theorem applies. 
Thus, noticing that the left-hand series is geometric, the expression (4) is equivalent to \[\frac{1}{1-\lambda\gamma}\sum_{n=1}^{\infty}\lambda^{n-1}R(s_{n},a_{n}).\] Consequently, the discounted depreciating value may be written as \[\begin{split} V_{\lambda}^{\gamma}(s)&=\sup_{x\in \Pi^{M}}\mathbb{E}_{s}^{x}\left[\frac{1}{1-\lambda\gamma}\sum_{n=1}^{\infty} \lambda^{n-1}R(s_{n},a_{n})\right]\\ &=\frac{1}{1-\lambda\gamma}\sup_{x\in\Pi^{M}}\mathbb{E}_{s}^{x} \left[\sum_{n=1}^{\infty}\lambda^{n-1}R(s_{n},a_{n})\right]\\ &=\frac{V_{\lambda}(s)}{1-\lambda\gamma}.\end{split} \tag{5}\] The equational characterization of the discounted value \(V_{\lambda}\) now facilitates the derivation of the desired equational characterization of the discounted depreciating value \(V_{\lambda}^{\gamma}\) as \[\begin{split} V_{\lambda}^{\gamma}(s)&=\frac{1}{1- \lambda\gamma}\left(\max_{a\in A}R(s,a)+\lambda\mathbb{E}_{T}\left[V_{\lambda} (t)\left|\,s,a\right|\right]\right)\\ &=\max_{a\in A}\frac{R(s,a)}{1-\lambda\gamma}+\lambda\mathbb{E}_ {T}\left[V_{\lambda}^{\gamma}(t)\left|\,s,a\right|.\right.\end{split} \tag{6}\] An immediate consequence of Theorem 3 is a characterization of the strategic complexity of discounted depreciating payoffs. **Corollary 1** (Strategic Complexity).: _For any discounted depreciating payoff over any finite MDB there exists an optimal policy that is stationary and deterministic._ Theorem 3 enables a number of extensively studied algorithmic techniques to be adapted for use under the discounted depreciating payoff. In particular, the equational characterization of the discounted depreciating value implies that it is the unique fixed point of a contraction mapping (Banach, 1922), which in turn facilitates the formulation of suitable variants of planning algorithms based on foundational methods such as value iteration and linear programming. This allows us to bound the computational complexity of determining discounted depreciating values in terms of the size of the environmental MDP and the given discount factors. **Theorem 4** (Computational Complexity).: _The discounted depreciating value and a corresponding optimal policy are computable in polynomial time._ Proof.: Let \(\delta_{i,j}=\begin{cases}1&\text{if }i=j\\ 0&\text{otherwise}\end{cases}\) be the Kronecker delta. Suppose that, for each state \(s\) in the environment \(M\), we have an associated real number \(0<x_{s}\), chosen arbitrarily. The unique solution to the following linear program is the vector of values from each state of \(M\). \[\begin{split}&\text{minimize }\sum_{s\in S}x_{s}v_{s}\quad\text{ subject to}\\ &\frac{R(s,a)}{1-\lambda\gamma}\leq\sum_{t\in S}v_{t}\left( \delta_{s,t}-\frac{\lambda T(t\left|\,s,a\right|)}{1-\lambda\gamma}\right) \quad\forall(s,a)\in S\times A\end{split} \tag{7}\] From a solution \(v^{*}\) to (7), an optimal policy can be obtained as \[\pi(s)=\operatorname*{arg\,max}_{a\in A}\frac{R(s,a)}{1-\lambda\gamma}+ \lambda\mathbb{E}_{T}\left[v_{t}^{*}\left|\,s,a\right|\right].\] Alternatively, an optimal policy may be derived from the solution to the dual linear program given as follows. 
\[\begin{split}&\text{maximize }(\sum_{(s,a)\in S\times A}\frac{R(s,a)}{1- \lambda\gamma}y_{s,a}\quad\text{subject to}\\ & x_{s}=\sum_{(t,a)\in S\times A}\left(\delta_{s,t}-\frac{\lambda T (t\left|\,s,a\right|)}{1-\lambda\gamma}\right)\qquad\qquad\qquad\qquad\qquad \forall s\in S\\ & 0\leq y_{s,a}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\forall(s,a)\in S\times A\end{split} \tag{8}\] In particular, if \(y^{*}\) is a solution to (8), then any policy \(\pi\) for which the inequality \(0<y_{s,\pi(t)}^{*}\) holds at every state is optimal. The correctness of these linear programs follows from the proof of Theorem 3. Since linear programs can be solved polynomial time, the theorem follows. Theorem 3 allows the formulation of an associated Q-value \[Q_{\lambda}^{\gamma}(s,a)=\frac{R(s,a)}{1-\lambda\gamma}+\lambda\mathbb{E}_{T} \left[V_{\lambda}^{\gamma}(t)\left|\,s,a\right|,\right.\] which may be used to construct a Q-learning iteration scheme for discounted depreciating payoffs as \[Q_{\lambda}^{\gamma,n+1}(s,a)\text{--}Q_{\lambda}^{\gamma,n}(s,a)+\alpha_{n} \left(\frac{R(s,a)}{1-\lambda\gamma}+\lambda V_{\lambda}^{\gamma,n}(t)-Q_{ \lambda}^{\gamma,n}(s,a)\right). \tag{9}\] **Theorem 5**.: _If each state-action pair of the environment is encountered infinitely often and the learning rates satisfy the Robbins-Monroe convergence criteria_ \[\sum_{n=0}^{\infty}\alpha_{n}=\infty\quad\text{and}\quad\sum_{n=0}^{\infty}a_{ n}^{2}<\infty,\] _then iterating (9) converges almost surely to the discounted depreciating \(Q\)-value as \(n\to\infty\):_ \[\lim_{n\to\infty}Q_{\lambda}^{\gamma,n}=Q_{\lambda}^{\gamma}.\] Proof.: Equations (5) and (6) show that the optimality equation for the discounted depreciating value reduces to the optimality equation for the discounted value, modulo a multiplicative factor dependent on \(\lambda\) and \(\gamma\). It therefore follows that discounted depreciating Q-learning, via iteration of (9), converges in the limit to the optimal \(Q_{\lambda}^{\gamma}\) under the same conditions that standard discounted Q-learning, via iteration of (1), converges in the limit to the optimal \(Q_{\lambda}\). Hence, we conclude that discounted depreciating Q-learning asymptotically converges given that each state-action pair is encountered infinitely often and that the convergence conditions in the theorem statement are satisfied by the learning rates. ### Discussion Besides the technical implications of Theorem 3, its proof provides some insight about the interplay between discounting and depreciation. A foundational result [1] in the theory of infinite-horizon optimization establishes that over a common MDP the discounted value asymptotically approaches the average value, up to a multiplicative factor of \((1-\lambda)\), as \(\lambda\) approaches \(1\) from below: \[\lim_{\lambda\to 1}(1-\lambda)\,V_{\lambda}=V.\] Following this approach, we consider the asymptotic behavior of the discounted depreciating value when taking similar limits of the discount factors. Using the identity \(V_{\lambda}^{\gamma}=\frac{V_{\lambda}}{1-\lambda\gamma}\) from equation (5) as the starting point for taking these limits yields the equations \[\lim_{\lambda\to 1}(1-\lambda)\,V_{\lambda}^{\gamma} =\frac{V}{1-\gamma}, \tag{10}\] \[\lim_{\gamma\to 1}V_{\lambda}^{\gamma} =\frac{V_{\lambda}}{1-\lambda},\] (11) \[\lim_{\gamma\to 0}V_{\lambda}^{\gamma} =V_{\lambda}. 
\tag{12}\] The relationships described by equations (12) and (11), illustrated by Figure 2, are justified conceptually by a simple interpretation that is helpful for building intuition around the behavior of the discounted depreciating payoff. One can think of the standard discounted payoff as a special case of the discounted depreciating payoff where \(\gamma=0\). That is, the optimizing agent working towards maximizing a discounted payoff does not consider the value of their assets whatsoever at any point in time; the only quantities of concern from their perspective are the incoming stream of rewards. Interpreting \(\gamma\) as a measure of the agent's memory of past outcomes, it follows naturally that the discounted depreciating payoff reduces to the discounted payoff when the agent has no recollection whatsoever. Connecting this notion back to depreciation, it can be argued that, from the agent's perspective, externally driven depreciation of assets is morally equivalent to an internally driven perception of depreciation based on an imperfect recollection of past events. Conversely, an agent with a perfect memory operating under a discounted payoff would end up maximizing this payoff on the sequence of cumulative assets \(\left\langle\sum_{k=1}^{n}R(s_{k},a_{k})\right\rangle_{n=1}^{\infty}\) rather than the sequence \(\langle R(s_{n},a_{n})\rangle_{n=1}^{\infty}\) of immediate rewards. Assuming positive immediate rewards, this results in a greater value than would be obtained on the reward sequence itself, as evidenced by the plot in Figure 2. As a consequence of the contraction property resulting from the standard discounting, the overall sum converges in spite of the fact that the cumulative asset stream may not be bounded. ## 4 Average Depreciating Payoff Let us now consider the asymptotic average evaluation criterion, given that assets depreciate. The payoff of an outcome in this context is defined as \[\liminf_{n\to\infty}\sum_{k=1}^{n}\sum_{i=1}^{k}\frac{R(s_{i},a_{i})\gamma^{k -i}}{n},\quad\text{(Average Depreciating Payoff)}\] and the associated average depreciating value function is \[V^{\gamma}(s)=\sup_{\pi\in\Pi^{N}}\mathbb{E}_{s}^{\pi}\left[\liminf_{n\to \infty}\sum_{k=1}^{n}\frac{R(s_{i},a_{i})\gamma^{k-i}}{n}\right].\] Our main result in this section asymptotically relates the average depreciating value and the discounted depreciating value. **Theorem 6** (Tauberian Theorem).: _The limit of discounted depreciating value as \(\lambda\to 1\) from below, scaled by \((1-\lambda)\), converges to the average depreciating value:_ \[\lim_{\lambda\to 1}(1-\lambda)\,V_{\lambda}^{\gamma}=V^{\gamma}.\] The proof of Theorem 6 uses the following pair of lemmas. **Lemma 1**.: _For any finite path in the environmental MDP,_ \[\sum_{k=1}^{n}\sum_{i=1}^{k}\frac{R(s_{i},a_{i})\gamma^{k-i}}{n}=\sum_{k=1}^{ n}\frac{R(s_{k},a_{k})(1-\gamma^{n+1-k})}{n(1-\gamma)}. \tag{13}\] Proof.: We proceed by induction on \(n\). Base case.Suppose that \(n=1\). Then both expressions occurring in (13) evaluate to \(R(s_{1},a_{1})\). Inductive case.Suppose that (13) holds for \(n-1\). 
By splitting the summation on the left-hand side of (13), we obtain the expression \[\sum_{k=1}^{n-1}\sum_{i=1}^{k}\frac{R(s_{i},a_{i})\gamma^{k-i}}{n}+\sum_{k=1}^{n}\frac{R(s_{k},a_{k})\gamma^{n-k}}{n}.\] Factoring \(\frac{n-1}{n}\) from the double summation in this expression yields \[\frac{n-1}{n}\sum_{k=1}^{n-1}\sum_{i=1}^{k}\frac{R(s_{i},a_{i})\gamma^{k-i}}{n-1}+\sum_{k=1}^{n}\frac{R(s_{k},a_{k})\gamma^{n-k}}{n}.\] Now, applying the inductive hypothesis, this may be rewritten as \[\frac{n-1}{n}\sum_{k=1}^{n-1}\frac{R(s_{k},a_{k})(1-\gamma^{n-k})}{(n-1)(1-\gamma)}+\sum_{k=1}^{n}\frac{R(s_{k},a_{k})\gamma^{n-k}}{n}.\] Factoring out \(\frac{1}{n(1-\gamma)}\) from the entire expression, we get \[\frac{\sum\limits_{k=1}^{n-1}R(s_{k},a_{k})(1-\gamma^{n-k})+(1-\gamma)\sum\limits_{k=1}^{n}R(s_{k},a_{k})\gamma^{n-k}}{n(1-\gamma)}.\] Distributing through the numerator results in the expression \[\frac{\sum\limits_{k=1}^{n-1}\left(R(s_{k},a_{k})-R(s_{k},a_{k})\gamma^{n-k}\right)+\sum\limits_{k=1}^{n}\left(R(s_{k},a_{k})\gamma^{n-k}-R(s_{k},a_{k})\gamma^{n+1-k}\right)}{n(1-\gamma)},\] and removing those terms that cancel additively yields \[\frac{\sum\limits_{k=1}^{n}R(s_{k},a_{k})-\sum\limits_{k=1}^{n}R(s_{k},a_{k})\gamma^{n+1-k}}{n(1-\gamma)}.\] Finally, we obtain (13) by factoring the numerator one last time: \[\sum\limits_{k=1}^{n}\frac{R(s_{k},a_{k})(1-\gamma^{n+1-k})}{n(1-\gamma)},\] thereby proving that if (13) holds for paths of length \(n-1\), then it also holds for paths of length \(n\). **Lemma 2**.: _For any infinite path in the environmental MDP,_ \[\lim_{n\to\infty}\sum_{k=1}^{n}\frac{R(s_{k},a_{k})\gamma^{n+1-k}}{n(1-\gamma)}=0.\] Proof.: Factoring out the constant term in the denominator of the left-hand side of the claimed equation, we obtain the equivalent expression \[\frac{1}{1-\gamma}\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}R(s_{k},a_{k})\gamma^{n-k}.\] Since the environmental MDP is assumed to be finite, there are finitely many possible reward values and we can bound the summation in the above expression as \[\frac{r_{\downarrow}(1-\gamma^{n})}{1-\gamma}\leq\sum_{k=1}^{n}R(s_{k},a_{k})\gamma^{n-k}\leq\frac{r_{\uparrow}(1-\gamma^{n})}{1-\gamma},\] where \(r_{\downarrow}=\min_{(s,a)\in S\times A}R(s,a)\) and \(r_{\uparrow}=\max_{(s,a)\in S\times A}R(s,a)\). Lastly, noticing that \[\lim_{n\to\infty}\frac{r_{\downarrow}(1-\gamma^{n})}{n(1-\gamma)}=\lim_{n\to\infty}\frac{r_{\uparrow}(1-\gamma^{n})}{n(1-\gamma)}=0,\] it follows that \[\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}R(s_{k},a_{k})\gamma^{n-k}=0.\] Now we are in a position to prove Theorem 6. Proof of Theorem 6.: In light of equation (10), it is sufficient to prove the identity \(V^{\gamma}=\frac{V}{1-\gamma}\). 
Applying Lemma 1, the average depreciating payoff may be rewritten as \[\liminf_{n\to\infty}\sum_{k=1}^{n}\frac{R(s_{k},a_{k})(1-\gamma^{n+1-k})}{n(1 -\gamma)}.\] Distributing the product in the numerator and then breaking the summation into a difference of summations yields the expression \[\liminf_{n\to\infty}\left(\sum_{k=1}^{n}\frac{R(s_{k},a_{k})}{n(1-\gamma)}- \sum_{k=1}^{n}\frac{R(s_{k},a_{k})\gamma^{n+1-k}}{n(1-\gamma)}\right).\] By Lemma 2, the right-hand term in this difference tends to \(0\) as \(n\to\infty\), and so the above expression is equivalent to \[\liminf_{n\to\infty}\sum_{k=1}^{n}\frac{R(s_{k},a_{k})}{n(1-\gamma)}.\] Factoring the constant term in the denominator out, the remaining limit-term is exactly the definition of the average payoff, and thus we conclude, for any state \(s\), that \[V^{\gamma}(s)=\frac{V(s)}{1-\gamma}.\] As a direct consequence of Theorem 6, there exists a Blackwell optimal policy that is optimal for \(V_{\lambda}^{\gamma}\) when \(\lambda\) is sufficiently close to \(1\), that is also optimal for \(V^{\gamma}\). **Corollary 2**.: _There exists a discount factor \(\lambda_{0}\in(0,1)\) and a policy \(\pi\) such that, for all \(\lambda\in\{\lambda_{0},1\}\) and every state \(s\), it holds that_ \[V_{\lambda}^{\gamma}(s) =\mathbb{E}_{s}^{\pi}\left[\sum_{n=1}^{\infty}\lambda^{n-1}\sum_{ k=1}^{n}R(s_{k},a_{k})\gamma^{n-k}\right],\] \[V^{\gamma}(s) =\mathbb{E}_{s}^{\pi}\left[\liminf_{n\to\infty}\frac{1}{n}\sum_{ k=1}^{n}\sum_{l=1}^{k}R(s_{l},a_{l})\gamma^{k-l}\right].\] In turn, this implies the following result on the strategic complexity for the average depreciating payoff. **Corollary 3** (Strategic Complexity).: _For any average depreciating payoff over any finite MDP, there exists an optimal policy that is stationary and deterministic._ ## 5 Related Work Discounted and average payoffs have played central roles in the theory of optimal control and reinforcement learning. A multitude of deep results exist connecting these objectives (Andersson and Miltersen, 2009; Bewley and Kohlberg, 1976, 1978; Chatterjee and Majumdar, 2012; Chatterjee et al., 2011; Mertens and Neyman, 1981; Ziliotto, 2016, 2018) in addition to an extensive body of work on algorithms for related optimization problems and their complexity (Chatterjee and Ibsen-Jensen, 2015; Chatterjee et al., 2008; Filar and Schultz, 1986; Raghavan and Filar, 1991; Raghavan and Syed, 2003). The value for the depreciating assets is defined as a past discounted sum of rewards. Past discounted sums for finite sequences were studied in the context of optimization (Alur et al., 2012) and are closely related to exponential recency weighted average, a technique used in nonstationary multi-armed bandit problems (Sutton and Barto, 2018) to estimate the average reward of different actions by giving more weight to recent outcomes. However, to the best of our knowledge, depreciating assets have not been formally studied as a payoff function. Discounted objectives have found significant applications in areas of program verification and synthesis (Cerny et al., 2011; de Alfaro et al., 2003). Although the idea of past operators is quite old (Lichtenstein et al., 1985), relatively recently a number of classical formalisms including temporal logics such as LTL and CTL and the modal \(\mu\)-calculus have been extended with past-tense operators and with discounted quantitative semantics (Almagoor et al., 2014, 2016; de Alfaro et al., 2005; Littman et al., 2017). 
A particularly significant result (Markey, 2003) around LTL with classical boolean semantics is that, while LTL with past operators is no more expressive than standard LTL, it is exponentially more succinct. It remains open whether this type of relationship holds for other logics and their extensions by past operators when interpreted with discounted quantitative semantics (Almagoor et al., 2016). ## 6 Conclusion In the stochastic optimal control and reinforcement learning setting the agents select their actions to maximize a discounted payoff associated with the resulting sequence of scalar rewards. This interaction models the way dopamine driven organisms maximize their reward sequence based on their capability to delay gratification (discounting). While this paradigm provides a natural model in the context of streams of immediate rewards, when the valuations and objectives are defined in terms of assets that depreciate, the problem cannot be directly modeled in the classic framework. We initiated the study of optimization and learning for the depreciating assets, and showed a surprising connection between these problems and traditional discounted problems. Our result enables solving optimization problems under depreciation dynamics by tweaking the algorithmic infrastructure that has been extensively developed over the last several decades for classic optimization problems. We believe that depreciating assets may provide a useful abstraction to a number of related problems. The following points sketch some of these directions and state several problems that remain open. * Regret minimization (Cesa-Bianchi and Lugosi, 2006) is a popular criterion in the setting of online learning where a decision-maker chooses her actions so as to minimize the average regret--the difference between the realized reward and the reward that could have been achieved. We posit that imperfect decision makers may view their regret in a depreciated sense, since a suboptimal action in the recent past tends to cause more regret than an equally suboptimal action in the distant past. We hope that the results of this work spur further interest in developing foundations of past-discounted characterizations of regret in online learning and optimization. * In solving multi-agent optimization problems, a practical assumption involves bounding the capability of any adversary by assuming that they have a limited memory of the history of interaction, and this can be modeled via a discounting of past outcomes. From our results it follows that two-player zero-sum games with depreciation dynamics under both discounted and average payoffs can be reduced to classic optimization games modulo some scaling of the immediate rewards. * The notion of state-based discount factors has been studied in the context of classic optimization and learning. Is it possible to extend the results of this paper to the setting with state-dependent depreciation factors? This result does not directly follow from the tools developed in this paper, and it remains an open problem. * Continuous-time MDPs provide a dense-time analog of discrete-time MDPs and optimization and RL algorithms for such systems are well understood. Is it possible to solve optimization and learning for CTMDPs with depreciating assets?
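To make the learning rule in (9) concrete, the following minimal Python sketch runs tabular depreciating Q-learning on a small randomly generated MDP and compares the learned table against the prediction of Theorem 3, namely that the depreciating Q-values are the ordinary discounted Q-values scaled by \(\frac{1}{1-\lambda\gamma}\); the MDP, the uniformly random behaviour policy and the learning-rate schedule are arbitrary choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 3, 2

# a small random environment: T[s, a, t] is the transition kernel, R[s, a] the reward function
T = rng.random((n_states, n_actions, n_states))
T /= T.sum(axis=2, keepdims=True)
R = rng.random((n_states, n_actions))

lam, gamma = 0.9, 0.8                       # discount factor lambda and depreciation factor gamma
Q = np.zeros((n_states, n_actions))
visits = np.zeros((n_states, n_actions))

s = 0
for _ in range(200_000):
    a = rng.integers(n_actions)             # uniformly random behaviour policy (off-policy learning)
    t = rng.choice(n_states, p=T[s, a])
    visits[s, a] += 1
    alpha = 1.0 / visits[s, a] ** 0.7       # learning rates satisfying the Robbins-Monroe conditions
    # depreciating Q-learning update (9): the immediate reward is scaled by 1/(1 - lam*gamma)
    Q[s, a] += alpha * (R[s, a] / (1.0 - lam * gamma) + lam * Q[t].max() - Q[s, a])
    s = t

# reference values: ordinary discounted Q-values (value iteration), scaled by 1/(1 - lam*gamma)
V = np.zeros(n_states)
for _ in range(10_000):
    V = (R + lam * T @ V).max(axis=1)
Q_ref = (R + lam * T @ V) / (1.0 - lam * gamma)
print(np.abs(Q - Q_ref).max())              # the residual shrinks as the number of steps grows
```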
2310.04174
Primordial Black Holes without fine-tuning from a light stochastic spectator field
We investigate a mechanism of primordial black hole (PBH) formation that avoids any dependence on specific inflationary features or exotic physics. In this scenario, the required large curvature fluctuations leading to PBH formation are generated after inflation by the quantum fluctuations of a light stochastic spectator field during inflation, when this field transiently dominates the energy density. We calculate the dynamics of such a spectator field during and after inflation, the distribution of induced curvature perturbations and their non-Gaussian tails leading to the copious production of PBHs. For a plateau-like potential, this scenario produces an extended PBH mass distribution with a peak at the solar-mass scale when one takes into account the effects of the thermal history. What is remarkable in this scenario is the absence of parameter fine-tuning. Instead, it invokes an anthropic selection over all the realizations of PBH abundances predicted by the field stochasticity. This scenario offers a novel perspective for the formation of PBHs with minimal ingredients and without the need of fine-tuning. It is amenable to observational tests, notably with the gravitational-wave observations of black hole mergers and of a background at nanoHertz frequency, as recently observed by pulsar timing arrays.
Ioanna Stamou, Sebastien Clesse
2023-10-06T11:43:56Z
http://arxiv.org/abs/2310.04174v2
# Primordial Black Holes without fine-tuning ###### Abstract We investigate a mechanism of primordial black hole (PBH) formation that avoids any dependence on specific inflationary features or exotic physics. In this scenario, the required large curvature fluctuations leading to PBH formation are generated after inflation by the quantum fluctuations of a light stochastic spectator field during inflation, when this field transiently dominates the energy density. We calculate the dynamics of such a spectator field during and after inflation, the distribution of induced curvature perturbations and their non-Gaussian tails leading to the copious production of PBHs. For a plateau-like potential, this scenario produces an extended PBH mass distribution with a peak at the solar-mass scale when one takes into account the effects of the thermal history. What is remarkable in this scenario is the absence of parameter fine-tuning. Instead, it invokes an anthropic selection over all the realizations of PBH abundances predicted by the field stochasticity. This scenario offers a novel perspective for the formation of PBHs with minimal ingredients and without the need of fine-tuning. It is amenable to observational tests, notably with the gravitational-wave observations of black hole mergers and of a background at nanoHertz frequency, as recently observed by pulsar timing arrays. ## I Introduction Taking advantage of the absence of detection of new particles such a weakly interacting massive particles, in accelerators and in direct and indirect detection experiments, primordial black holes (PBHs) are nowadays considered as one leading candidate to explain the dark matter in the Universe. Contrary to dark matter particles, the existence of PBHs is supported by a series of observations, reviewed in [1; 2; 3; 4] and including the gravitational waves (GW) from compact binary coalescences observed by the Ligo/VIRGO/Kagra (LVK) collaboration [5; 6; 7; 8; 9; 10], a GW background at nanoHertz frequency detected with pulsar timing arrays (PTA) [11; 12; 13; 14; 15; 16], the size and mass-to-light ratio of ultra-faint dwarf galaxies, several microlensing candidates, spatial correlations in source-subtracted cosmic infrared and X-ray backgrounds, the existence of supermassive black holes at high redshifts (see [4] and references therein). These observational clues are however not unambiguous and could have other astrophysical origins. In addition, there are also numerous constraints on the abundance of PBHs, see e.g. [17] for a recent review, sometimes in apparent conflict with some of those hints. Furthermore, it is worth noticing that any observational evidence or constraint is still subject to large uncertainties or model dependence. It is therefore very difficult to prove the existence of PBHs and, if they exist, to infer their total contribution to the dark matter. There is so far only one almost unambiguous way to prove the existence of PBHs that is accessible with the current generation of instruments: detecting a subsolar-mass black hole in a compact binary coalescence. Recently a few intriguing subsolar-mass triggers have been reported in GW observations [18; 19]. For instance, SSM170401 prefers a subsolar-mass black hole secondary component if interpreted as a GW signal [20]. Overall, the search for PBHs and their properties is a very active and exciting area of research, with many implications for our understanding of the nature of dark matter and of the physics at play in the early Universe. 
PBHs are thought to have formed from the collapse of regions of high density contrast in the early Universe. An important criticism of the majority of PBH scenarios comes from the difficulty to produce them without invoking strong parameter fine-tuning [21] and very specific models of the early Universe, such as transient inflationary features in the primordial power spectrum or new phase transitions (see e.g. [22] for a review). For instance, the mechanism of PBH formation may involve the amplification of quantum fluctuations during inflation. A lot of PBH models rely on this idea but they require a strong enhancement of the primordial power spectrum at small scales. Such a feature is not natural in the vast majority of single-field slow-roll inflation models. It typically requires an extremely flat region of the scalar field potential over a tiny field range, leading to a so-called transient phase of _ultra-slow-roll_. In addition, in most models the abundance of PBHs depends exponentially on the amplitude of those fluctuations, leading to an additional layer of fine-tuning for the model parameters [21; 23]. In this work we explore a mechanism of PBH production based on a light quantum stochastic spectator scalar field during inflation. By definition, a spectator field is a hypothetical scalar field, not involved in the inflationary expansion of the early Universe. Inflation therefore does not play a direct role in the PBH production, and vice-versa. Because the field is very light, the exact shape of its potential is also irrelevant for the dynamics of its quantum fluctuations during inflation, which adds to the genericity of the scenario and allows PBH formation with relatively minimal assumptions and no strong dependence on potential parameters. It can
2310.10129
Minimal Timelike Surfaces in the Lorentz-Minkowski 3-space and Their Canonical Parameters
We study minimal timelike surfaces in $\mathbb R^3_1$ using a special Weierstrass-type formula in terms of holomorphic functions defined in the algebra of the double (split-complex) numbers. We present a method of obtaining an equation of a minimal timelike surface in terms of canonical parameters, which play a role similar to the role of the natural parameters of curves in $\mathbb R^3$. Having one holomorphic function that generates a minimal timelike surface, we find all holomorphic functions that generate the same surface. In this way we give a correspondence between a minimal timelike surface and a class of holomorphic functions. As an application, we prove that the Enneper surfaces are the only minimal timelike surfaces in $\mathbb R^3_1$ with polynomial parametrization of degree 3 in isothermal parameters.
Ognian Kassabov, Velichka Milousheva
2023-10-16T07:13:38Z
http://arxiv.org/abs/2310.10129v1
# Minimal timelike surfaces in the Lorentz-Minkowski 3-space and their canonical parameters ###### Abstract. We study minimal timelike surfaces in \(\mathbb{R}^{3}_{1}\) using a special Weierstrass-type formula in terms of holomorphic functions defined in the algebra of the double (split-complex) numbers. We present a method of obtaining an equation of a minimal timelike surface in terms of canonical parameters, which play a role similar to the role of the natural parameters of curves in \(\mathbb{R}^{3}\). Having one holomorphic function that generates a minimal timelike surface, we find all holomorphic functions that generate the same surface. In this way we give a correspondence between a minimal timelike surface and a class of holomorphic functions. As an application, we prove that the Enneper surfaces are the only minimal timelike surfaces in \(\mathbb{R}^{3}_{1}\) with polynomial parametrization of degree 3 in isothermal parameters. Key words and phrases:Timelike surfaces, canonical parameters, Weierstrass formula. 2020 _Mathematics Subject Classification_: 53A10; 53B30; 53C50. ## 1. Introduction The study of minimal surfaces is one of the main topics in classical differential geometry which goes back to the 18th century. Lagrange initiated in 1760 the study of minimal surfaces in Euclidean 3-space and found the minimal surface equation when he looked for a necessary condition for minimizing the area functional. He showed that a minimal surface parametrized as a graphic \(x=(u,v,\varphi(u,v))\) satisfies the following equation, known nowadays as the Lagrange's equation, 1762: \[(1+\varphi_{v}^{2})\varphi_{uu}-2\varphi_{u}\varphi_{v}\varphi_{uv}+(1+\varphi _{u}^{2})\varphi_{vv}=0.\] The link between curvature and minimal surfaces was made by Meusnier in 1776 who proved that the Lagrange's equation implies that the mean curvature is zero everywhere on a minimal surface. Usually, minimal surfaces are defined as surfaces with zero mean curvature, but they are also characterized as surfaces of minimal surface area for given boundary conditions, as a critical point of the area functional, or as a graphic of the solution of a differential equation. The Weierstrass representation formula (1866) describes minimal surfaces in terms of two holomorphic functions \(f(z)\) and \(g(z)\) as follows [18]: \[\Psi(z)=\Re\int_{z_{0}}^{z}\,\left(\frac{1}{2}f(z)(1-g^{2}(z)),\frac{i}{2}f(z )(1+g^{2}(z)),f(z)g(z)\right)\,dz.\] The theory of minimal surfaces in real space forms have been attracting the attention of many mathematicians for more than two centuries and have inspired many authors to study minimal surfaces in other ambient spaces. In the last years, great attention is paid to Lorentz surfaces in pseudo-Euclidean spaces, since pseudo-Riemannian geometry has many important applications in Physics, especially in problems related to General Relativity. However, the local geometry of surfaces in the Lorentz-Minkowski space \(\mathbb{R}^{3}_{1}\) is much more complicated than that in the Euclidean space \(\mathbb{R}^{3}\), since in \(\mathbb{R}^{3}_{1}\) the vectors have different ###### Contents * 1 Introduction * 2 Preliminaries * 3 The \(\mathbb{R}^{3 are the complex numbers and the complex functions (on the field of complex numbers), see e.g. [11]. These functions are also convenient in the study of spacelike surfaces in \(\mathbb{R}^{3}_{1}\) with vanishing mean curvature - the maximal surfaces. 
In the present paper, we are interested in minimal timelike surfaces in \(\mathbb{R}^{3}_{1}\), so we apply the theory of functions on the algebra of double numbers. We use also the special Weierstrass formula of G. Ganchev, proposed in [7], where a new approach to timelike surfaces is given and a determination of a minimal timelike surface via canonical parameters is established. We find a method of obtaining an equation of a minimal timelike surface in canonical parameters. Having one holomorphic function (defined on a domain of the plane of double numbers) that generates a minimal timelike surface, we find all holomorphic functions that generate the same surface. Thus we obtain a correspondence between a minimal timelike surface and a class of holomorphic functions. As an application, we prove that the Enneper surfaces are the only minimal timelike surfaces in \(\mathbb{R}^{3}_{1}\) with polynomial parametrization of degree \(3\) in isothermal parameters. ## 2. Preliminaries We deal with the \(3\)-dimensional Lorentz-Minkowski space \(\mathbb{R}^{3}_{1}\), endowed with the standard flat metric of signature \((2,1)\): \[\langle x,y\rangle=-x_{1}y_{1}+x_{2}y_{2}+x_{3}y_{3}\.\] The considerations in this paper are local and all functions are supposed to be of class \(C^{\infty}\). A regular surface \(S\) in \(\mathbb{R}^{3}_{1}\) is said to be: - _timelike_, if the restriction of \(\langle.,.\rangle\) to each tangent space of \(S\) is indefinite; - _spacelike_, if the restriction of \(\langle.,.\rangle\) to each tangent space of \(S\) is positive definite; - _lightlike_, if the restriction of \(\langle.,.\rangle\) to each tangent space of \(S\) is degenerate. In what follows, we suppose that the surface \(S\) is timelike and is defined by the parametric equation \[\mathbf{x}=\mathbf{x}(u,v)=(x_{1}(u,v),x_{2}(u,v),x_{3}(u,v)),\qquad(u,v)\in U \subset\mathbb{R}^{2}.\] We denote the derivatives of the vector function \(\mathbf{x}=\mathbf{x}(u,v)\) by \[\mathbf{x}_{u}=\frac{\partial\mathbf{x}}{\partial u},\qquad\mathbf{x}_{v}= \frac{\partial\mathbf{x}}{\partial v},\qquad\mathbf{x}_{uv}=\frac{\partial^{ 2}\mathbf{x}}{\partial u\partial v},\...\] The coefficients of the first fundamental form are given by \[E=\langle\mathbf{x}_{u},\mathbf{x}_{u}\rangle,\qquad F=\langle\mathbf{x}_{u}, \mathbf{x}_{v}\rangle,\qquad G=\langle\mathbf{x}_{v},\mathbf{x}_{v}\rangle.\] Denote by \(\mathbf{U}\) the unit normal to the surface, i.e. \[\mathbf{U}=\frac{\mathbf{x}_{u}\times\mathbf{x}_{v}}{|\mathbf{x}_{u}\times \mathbf{x}_{v}|}.\] Then, the coefficients of the second fundamental form are \[L=\langle\mathbf{U},\mathbf{x}_{uu}\rangle,\qquad M=\langle\mathbf{U}, \mathbf{x}_{uv}\rangle,\qquad N=\langle\mathbf{U},\mathbf{x}_{vv}\rangle.\] The Gauss curvarture and the mean curvature of \(S\) are defined respectively by \[K=\frac{LN-M^{2}}{EG-F^{2}},\qquad H=\frac{EN-2FM+GL}{2(EG-F^{2})}.\] The surface \(S\) is said to be _minimal_ if the mean curvature vanishes identically. Probably, the most used parameters of a surface are the isothermal parameters. The coefficients of the first fundamental form of a timelike surface in \(\mathbb{R}^{3}_{1}\) parametrized in terms of isothermal parameters satisfy \(E=-G\), \(F=0\). In the study of minimal timelike surfaces in \(\mathbb{R}^{3}_{1}\) we use the algebra \(\mathbb{D}\) of the double numbers, which is determined in the following way: \(\mathbb{D}=\{a+{\rm j}b:\ a,b\in\mathbb{R},\ {\rm j}^{2}=1\}\), where \({\rm j}\) commutes with the elements of \(\mathbb{R}\). 
For the element \(z=a+{\rm j}b\) of \(\mathbb{D}\) we have \(|z|^{2}=z\bar{z}=(a+{\rm j}b)(a-{\rm j}b)=a^{2}-b^{2}\). This shows that \(\mathbb{D}\) is the hyperbolic analogue of the algebra of complex numbers \(\mathbb{C}\) and reflects the Lorentz geometry. The algebra of double numbers is used essentially in paper [9] in the study of the Lorentz surfaces in \(\mathbb{R}^{4}_{2}\). Let \(f(z)\) and \(g(z)\) be two holomorphic functions defined in a domain of \(\mathbb{D}\). Consider the Weierstrass curve, defined by \[\Psi(z)=\int_{z_{0}}^{z}\,\left(-\frac{1}{2}\,f(z)(1+g^{2}(z)),\frac{{\rm j}}{2}\,f(z)(1-g^{2}(z)),f(z)g(z)\right)\,dz.\] The real and the "imaginary" parts \({\bf x}(u,v)\) and \({\bf y}(u,v)\) define two minimal timelike surfaces of Gauss curvature \(K<0\) and \(-K\), respectively. For example, with the functions \(f(z)=1,\ g(z)=z\) we obtain the classical Enneper surface of negative Gauss curvature \[{\bf x}(u,v)=\left(-\frac{u}{6}(u^{2}+3v^{2}+3),-\frac{v}{6}(3u^{2}+v^{2}-3),\frac{1}{2}(u^{2}+v^{2})\right)\] as the real part of the Weierstrass curve and the classical Enneper surface of positive Gauss curvature \[{\bf y}(u,v)=\left(-\frac{v}{6}(3u^{2}+v^{2}+3),-\frac{u}{6}(u^{2}+3v^{2}-3),uv\right)\] as the "imaginary" part of the Weierstrass curve. Conversely, every minimal timelike surface can be obtained at least locally in this way. Note, however, that a minimal timelike surface can be generated via the Weierstrass formula by different pairs of holomorphic functions on the algebra of double numbers.

Figure 1. Enneper surfaces.

In his study of minimal timelike surfaces in \(\mathbb{R}^{3}_{1}\), G. Ganchev [7] specialized the Weierstrass formula by introducing special isothermal parameters, called _canonical_. If the surface is parametrized with respect to canonical parameters, then the coefficients of the first and second fundamental forms are \[-E=G=\frac{1}{\sqrt{-K}}>0,\qquad F=0,\] \[L=-1,\qquad M=0,\qquad N=-1, \tag{2.1}\] in the case of minimal surfaces of negative Gauss curvature \(K\), and \[E=-G=\frac{1}{\sqrt{K}}>0,\qquad F=0,\] \[L=0,\qquad M=1,\qquad N=0, \tag{2.2}\] in the case of minimal surfaces of positive Gauss curvature \(K\). Note that, because of (2.1), the parametric lines are principal in the case of minimal surfaces with \(K<0\). Analogously, because of (2.2), the parametric lines are asymptotic in the case of minimal surfaces with \(K>0\). The idea of Ganchev leads to the special Weierstrass curve \[\Phi(z)=\int_{z_{0}}^{z}\,\left(-\frac{1}{2}\,\frac{1+g^{2}(z)}{g^{\prime}(z)},\frac{\mathrm{j}}{2}\,\frac{1-g^{2}(z)}{g^{\prime}(z)},\frac{g(z)}{g^{\prime}(z)}\right)\,dz. \tag{2.3}\] The real (resp. the "imaginary") part of this curve is a minimal timelike surface with canonical parametrization and negative (resp. positive) Gauss curvature. We shall also use the following theorem: **Theorem A [7].**_If a timelike surface in \(\mathbb{R}^{3}_{1}\) with non-vanishing Gauss curvature is parametrized by canonical parameters \((u,v)\), then the Gauss curvature \(K\) satisfies the equation_ \[(\ln\sqrt{-K})_{uu}-(\ln\sqrt{-K})_{vv}=2\sqrt{-K},\] _in the case \(K<0\), and_ \[(\ln\sqrt{K})_{uu}-(\ln\sqrt{K})_{vv}=2\sqrt{K},\] _in the case \(K>0\). 
Conversely, for any solution \(K(u,v)\) of any of these equations there exists a_ **unique** _(up to position in the space) minimal timelike surface of Gauss curvature \(K(u,v)\), \((u,v)\) being canonical parameters._ The canonical parameters \((u,v)\) are determined uniquely up to the following transformations [7]: \[\begin{split} u&=\varepsilon\bar{u}+A\\ v&=\varepsilon\bar{v}+B\end{split}\qquad \varepsilon=\pm 1\,\ A=const,\ B=const. \tag{2.4}\] The idea of canonical parameters is further developed for the class of timelike surfaces in \(\mathbb{R}^{n}_{1}\), see [8]. ## 3. Transformation of the isothermal parameters to canonical ones Suppose the surface \(S\) is defined as the real part of the Weierstrass curve \[\Psi(z)=\int_{z_{0}}^{z}\,\left(-\frac{1}{2}\,f(z)(1+g^{2}(z)),\frac{\mathrm{ j}}{2}\,f(z)(1-g^{2}(z)),f(z)g(z)\right)\,dz. \tag{3.1}\] We look for a transformation \(z=z(w)\) such that the curve \(\Phi(w)=\Psi(z(w))\) has the form \[\Phi(w)=\int_{w_{0}}^{w}\left(-\frac{1}{2}\,\frac{1+\tilde{g}^{2}(w)}{\tilde{g}^{ \prime}(w)},\frac{\mathrm{j}}{2}\,\frac{(1-\tilde{g}^{2}(w))}{\tilde{g}^{ \prime}(w)},\frac{\tilde{g}(w)}{\tilde{g}^{\prime}(w)}\right)\,dw\] for some holomorphic function \(\tilde{g}(w)\). The real part of this curve will be a canonical representation of the given surface \(S\). The equality \(\Psi(z(w))=\Phi(w)\) implies \(\Psi^{\prime}(z(w))z^{\prime}(w)=\Phi^{\prime}(w)\). Hence, it is easy to derive \[f(z(w))z^{\prime}(w)=\frac{1}{\tilde{g}^{\prime}(w)}\,\qquad\qquad g(z(w))= \tilde{g}(w). \tag{3.2}\] The last equality implies \[\tilde{g}^{\prime}(w)=g^{\prime}(z(w))z^{\prime}(w)\] and using the first equality of (3.2) we obtain \[(z^{\prime}(w))^{2}=\frac{1}{f(z(w))g^{\prime}(z(w))}. \tag{3.3}\] Now, we know also the function \(\tilde{g}(w)=g(z(w))\) that generates the surface in canonical parameters. Similar considerations can be done in the case the surface is defined as the "imaginary" part of the Weierstrass curve determined by (3.1). So, we can state the following result: **Theorem 3.1**.: _Let the minimal timelike surface \(S\) be defined by the real or "imaginary" part of (3.1). Any solution to differential equation (3.3) defines a transformation of the isothermal parameters of \(S\) to canonical ones. Moreover, the function \(\tilde{g}(w)\) that defines \(S\) via formula (2.3) is given by \(\tilde{g}(w)=g(z(w))\)._ As a consequence, we may obtain also relations (2.4) between two different pairs of canonical parameters. As an application of Theorem 3.1, consider the minimal surfaces \(S\) (of negative Gauss curvature) generated by the functions \[f(z)=a\,\qquad g(z)=bz+c\] via the Weierstrass formula, \(a,b,c\) being double numbers, \(a\neq 0\), \(b\neq 0\), \(ab\neq 0\). 
Equation (3.3) takes the form \[(z^{\prime}(w))^{2}=\frac{1}{ab}\] and has the following solution \[z(w)=\pm\frac{w}{\sqrt{a}\sqrt{b}}+const.\] According to (2.4) and Theorem 3.1, we may replace \(z\) in \(g(z)\) with \(\ \frac{z}{\sqrt{a}\sqrt{b}}-\frac{c}{b}\) and we will obtain a parametrization of the surface \(S\) in canonical parameters via formula (2.3) and the function \[\tilde{g}(z)=g\left(\frac{z}{\sqrt{a}\sqrt{b}}-\frac{c}{b}\right)=\frac{\sqrt{b}}{\sqrt{a}}\,z.\] The Gauss curvature is given by the following formula: \[K=-\frac{16\left|\frac{b}{a}\right|^{2}}{\left(1-\left|\frac{b}{a}\right|\left(u^{2}-v^{2}\right)\right)^{4}}.\] This result shows that the surface \(S_{0}\) of negative Gauss curvature generated via the Weierstrass formula by the functions \[f(z)=\frac{|a|}{|b|}\,\qquad g(z)=z,\] has the same Gauss curvature in canonical parameters. Hence, due to Theorem A we may identify \(S\) with \(S_{0}\). On the other hand, the Weierstrass formula implies that \(S_{0}\) is homothetic to the standard timelike Enneper surface (\(a=b=1\)) with \(K<0\). So, as in the case of minimal surfaces in the Euclidean space [3], we have **Corollary 3.2**.: _The minimal timelike surface generated by the pair of linear functions \(f(z)=a,\,g(z)=bz+c\) via the Weierstrass formula coincides with the Enneper surface up to position in the space and homothety._ ## 4. Holomorphic functions generating a minimal timelike surface As we said before, a minimal timelike surface is generated by different pairs of holomorphic functions via the Weierstrass formula. For example, the Enneper surface of negative (resp. positive) Gauss curvature is the real (resp. "imaginary") part of the curve defined via the Weierstrass formula by the pair of functions \[f(z)=1;\qquad g(z)=z, \tag{4.1}\] but also by \[f(z)=e^{z};\qquad g(z)=e^{z}, \tag{4.2}\] and, of course, by many others. So, the following natural question arises: under what conditions do two pairs of holomorphic functions give rise to one and the same minimal timelike surface via the Weierstrass representation? It is not difficult to prove the following: **Proposition 4.1**.: _Suppose the pairs \((\tilde{f}(z),\tilde{g}(z))\) and \((f(w),g(w))\) generate two minimal timelike surfaces via the Weierstrass formula. Then, these surfaces coincide (up to translation) if and only if there exists a function \(w=w(z)\), such that_ \[\tilde{g}(z)=g(w(z))\qquad\text{and}\qquad\tilde{f}(z)=f(w(z))w^{\prime}(z).\] For the two pairs (4.1) and (4.2), that generate the Enneper surface, the function \(w(z)=e^{z}\) transfers the first pair into the second one. Similarly, the following question related to formula (2.3) arises: what is the relation between the functions that generate a minimal timelike surface in canonical parameters? A result in this direction is given by the following theorem. **Theorem 4.2**.: _Let the holomorphic function \(g(z)\) (defined on a domain of \(\mathbb{D}\)) generate a minimal timelike surface \(S\) in canonical parameters, i.e. via formula (2.3). Then, for an arbitrary real number \(\varphi\) and an arbitrary double number \(\alpha\), by the transformations_ \[\tilde{g}(z)=\pm e^{\varphi j}\frac{\alpha+g(z)}{1+\bar{\alpha}g(z)};\qquad\qquad\tilde{g}(z)=\pm e^{\varphi j}\frac{1}{f(z)}, \tag{4.3}\] _we obtain the same (up to position in the space) surface in canonical parameters. 
Conversely, any function that generates (up to position) the surface \(S\) in canonical parameters may be obtained in this way._ **Proof.** Let us consider the first transformation. Denote by \(S\) the minimal timelike surface of negative Gauss curvature, generated via formula (2.3) by the function \(g(z)\) and let \(\Psi(z)\) be the corresponding curve. Analogously, we define \(\widetilde{S}\) and \(\widetilde{\Psi}(z)\). We may prove that \(S\) and \(\widetilde{S}\) coincide (up to position) by a direct computation of their Gauss curvatures using the formula \[K=-\frac{16|g^{\prime}|^{4}}{(1-|g|^{2})^{4}}\] and applying Theorem A. Now we give another proof, thus clarifying the relation between \(S\), \(\widetilde{S}\) and transformation (4.3). We have \[\Psi^{\prime}(z)=\left(-\frac{1+g^{2}(z)}{2g^{\prime}(z)},\mathrm{j}\frac{1-g^{2}(z)}{2g^{\prime}(z)},\frac{g(z)}{g^{\prime}(z)}\right),\] \[\widetilde{\Psi}^{\prime}(z)=\left(-\frac{1+\widetilde{g}^{2}(z)}{2\widetilde{g^{\prime}}(z)},\mathrm{j}\frac{1-\widetilde{g}^{2}(z)}{2\widetilde{g^{\prime}}(z)},\frac{\widetilde{g}(z)}{\widetilde{g^{\prime}}(z)}\right).\] Let \(\alpha=a+\mathrm{j}b\), \(a,b\in\mathbb{R}\). Define the SO(1,2)-matrices \[A=\left(\begin{array}{ccc}\cosh\varphi&\sinh\varphi&0\\ \sinh\varphi&\cosh\varphi&0\\ 0&0&1\end{array}\right);\qquad\qquad B=\left(\begin{array}{ccc}\frac{1+a^{2}+b^{2}}{1-a^{2}+b^{2}}&\frac{-2ab}{1-a^{2}+b^{2}}&\frac{-2a}{1-a^{2}+b^{2}}\\ \frac{2ab}{1-a^{2}+b^{2}}&\frac{1-a^{2}-b^{2}}{1-a^{2}+b^{2}}&\frac{-2b}{1-a^{2}+b^{2}}\\ \frac{-2a}{1-a^{2}+b^{2}}&\frac{2b}{1-a^{2}+b^{2}}&\frac{1+a^{2}-b^{2}}{1-a^{2}+b^{2}}\end{array}\right).\] A straightforward verification shows that \[A\,B\,\Psi^{\prime}(z)=\widetilde{\Psi}^{\prime}(z).\] The last equality implies that up to translation \[A\,B\,\mathbf{x}(u,v)=\tilde{\mathbf{x}}(u,v).\] Hence, the considered transformation of type (4.3) of the function \(g(z)\) corresponds to a motion of the surface \(S\). Conversely, it is clear that any surface that coincides (up to position) with \(S\) may be obtained from \(S\) using as above two \(SO(1,2)\) matrices and a translation. \(\square\) As an application of Theorem 4.2, we may prove that any minimal timelike surface \(S\) which has a polynomial parametrization of degree 3 in isothermal coordinates is (up to position and homothety) an Enneper surface. Namely, we have the following result: **Theorem 4.3**.: _Let the minimal timelike surface \(S\) of negative Gauss curvature have a polynomial parametrization of degree 3 in isothermal parameters. Then, up to position in space and homothety, \(S\) is (a part of) the Enneper surface of negative curvature._ **Proof.** Suppose that the surface is defined by \[S\ :\ \ {\bf x}={\bf x}(u,v).\] Similarly to the case of surfaces in the Euclidean space (see e.g. section 22.4 in [11]), \(S\) is the real part of the Weierstrass curve obtained by substituting the double number variables \(\dfrac{z}{2}\) and \(\dfrac{z}{2{\rm j}}\) formally in the places of the real variables \(u,v\): \[\Psi(z)=2{\bf x}\left(\dfrac{z}{2},\dfrac{z}{2{\rm j}}\right)-{\bf x}(0,0).\] Using a translation (if necessary) we may assume that \({\bf x}(0,0)=0\). Since the curve \(\Psi(z)\) is a cubic polynomial, then \[\Psi^{\prime}(z)=(\phi_{1}(z),\phi_{2}(z),\phi_{3}(z))=\left(-\dfrac{f(z)}{2}(1+g^{2}(z)),\dfrac{{\rm j}}{2}f(z)(1-g^{2}(z)),f(z)g(z)\right)\] for some functions \(f(z)\) and \(g(z)\), and the functions \(\phi_{i}(z)\), \(i=1,2,3\), are polynomials of degrees at most \(2\). 
Moreover, at least one of them is of degree exactly \(2\). Hence, the same is true for the following three functions \[f(z)=-\phi_{1}+{\rm j}\phi_{2};\qquad f(z)g^{2}(z)=-\phi_{1}-{\rm j}\phi_{2};\qquad f(z)g(z)=\phi_{3}. \tag{4.4}\] So, \(f(z)\) is a polynomial of degree at most \(2\). From the third equality of (4.4) we have \(g=\dfrac{\phi_{3}}{f}\). Since \(\phi_{3}\) and \(f\) are polynomials, we may write \(g(z)\) in the form \[g(z)=\dfrac{P(z)}{Q(z)}\,\] where the polynomials \(P(z)\) and \(Q(z)\) have no common zeros. If we assume that \(Q(z)\) is a constant, then \(g(z)\) is a polynomial and, having in mind (4.4), we get \(f(z)=const\), \(g(z)=cz+d\). If we assume that \(Q(z)\) is not a constant, then from the second equality of (4.4) we get \[f(z)\dfrac{P^{2}(z)}{Q^{2}(z)}=-\phi_{1}-{\rm j}\phi_{2},\] which is a polynomial and hence \[f(z)=\pm(az+b)^{2}=\pm Q^{2}(z).\] Up to symmetry of the surface, we assume that \[f(z)=(az+b)^{2},\qquad Q(z)=az+b.\] Now, \(\phi_{3}=f(z)g(z)=(az+b)P(z)\) and since it is of degree at most \(2\), then \(P(z)\) is of degree at most \(1\), i.e. \(P(z)=cz+d\). Hence, we conclude that, up to homothety of the surface, \(f(z)\) and \(g(z)\) have the form \[f(z)=(az+b)^{2};\qquad g(z)=\dfrac{cz+d}{az+b}.\] On the other hand, the Enneper surface is generated in canonical parameters by the pair of functions \(f_{1}(z)=1\), \(g_{1}(z)=z\) and, due to Theorem 4.2, also by the functions \[g_{2}(z)=e^{\varphi{\rm j}}\dfrac{\alpha+z}{1+\bar{\alpha}z};\qquad\qquad f_{2}(z)=\dfrac{1}{g_{2}^{\prime}(z)}=e^{-\varphi{\rm j}}\dfrac{(1+\bar{\alpha}z)^{2}}{|\alpha|^{2}-1}.\] We can change the parameter \(z\) by \[\frac{(a\alpha e^{\varphi{\rm j}}-c)z+\alpha be^{\varphi{\rm j}}-d}{(\bar{\alpha}c-ae^{\varphi{\rm j}})z+\bar{\alpha}d-be^{\varphi{\rm j}}}\] and then the generating functions take the form: \[f_{3}(z)=\frac{e^{\varphi{\rm j}}(1-|\alpha|^{2})(az+b)^{2}}{\Big{(}(\bar{\alpha}c-ae^{\varphi{\rm j}})z+\bar{\alpha}d-be^{\varphi{\rm j}}\Big{)}^{2}};\qquad\qquad g_{3}(z)=\frac{cz+d}{az+b}.\] Of course, in the last expressions the parameters are not canonical. Note that we may choose arbitrary \(\varphi\) and \(\alpha\), so we put \[e^{\varphi{\rm j}}=\frac{\dfrac{c^{2}}{(bc-ad)^{2}}}{\left|\dfrac{c^{2}}{(bc-ad)^{2}}\right|};\qquad\qquad\bar{\alpha}=\frac{ae^{\varphi{\rm j}}}{c}.\] Then, \(f_{3}(z)\) becomes proportional (with real coefficient) to \(f(z)\), thus proving the assertion. Note that \(bc-ad\) cannot be zero (if we assume that \(bc-ad=0\), then the surface is planar, which is not our case). _Remark 4.1_.: The same proposition holds for minimal timelike surfaces of positive Gauss curvature. **Acknowledgments:** The authors are partially supported by the National Science Fund, Ministry of Education and Science of Bulgaria under contract KP-06-N52/3.
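As a closing cross-check of the formulas used above, the following sympy sketch (ours, not part of the paper) expands the Weierstrass curve of Section 2 for \(f(z)=1\), \(g(z)=z\) over the double numbers, recovers the two Enneper parametrizations \(\mathbf{x}(u,v)\) and \(\mathbf{y}(u,v)\), and verifies the isothermal relations \(E=-G\), \(F=0\) for the metric \(\langle x,y\rangle=-x_{1}y_{1}+x_{2}y_{2}+x_{3}y_{3}\).

```python
import sympy as sp

u, v, J = sp.symbols('u v J')  # J plays the role of j, reduced below via J**2 = 1
z = u + J * v

def split(expr):
    """Reduce a polynomial in J modulo J**2 - 1 and return (real part, j-part)."""
    r = sp.rem(sp.expand(expr), J**2 - 1, J)
    return r.coeff(J, 0), r.coeff(J, 1)

# Antiderivative of the Weierstrass integrand for f = 1, g = z (base point z0 = 0):
# Psi = ( -(z + z^3/3)/2 ,  j (z - z^3/3)/2 ,  z^2/2 )
Psi = [-(z + z**3 / 3) / 2, J * (z - z**3 / 3) / 2, z**2 / 2]
x = [split(c)[0] for c in Psi]   # real part: Enneper surface with K < 0
y = [split(c)[1] for c in Psi]   # "imaginary" part: Enneper surface with K > 0

x_paper = [-u*(u**2 + 3*v**2 + 3)/6, -v*(3*u**2 + v**2 - 3)/6, (u**2 + v**2)/2]
y_paper = [-v*(3*u**2 + v**2 + 3)/6, -u*(u**2 + 3*v**2 - 3)/6, u*v]
assert all(sp.simplify(a - b) == 0 for a, b in zip(x, x_paper))
assert all(sp.simplify(a - b) == 0 for a, b in zip(y, y_paper))

# Isothermal relations E = -G, F = 0 with the signature (2,1) metric
def lorentz(p, q):
    return -p[0]*q[0] + p[1]*q[1] + p[2]*q[2]

xu = [sp.diff(c, u) for c in x]
xv = [sp.diff(c, v) for c in x]
E, F, G = lorentz(xu, xu), lorentz(xu, xv), lorentz(xv, xv)
assert sp.simplify(E + G) == 0 and sp.simplify(F) == 0
print("Enneper parametrizations and isothermal relations verified.")
```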
2302.07644
Schrödinger symmetry of Schwarzschild-(A)dS black hole mechanics
We show that the dynamics of Schwarzschild-(A)dS black holes admits a symmetry under the 2d Schr\"odinger group, whatever the sign or value of the cosmological constant. This is achieved by reformulating the spherically-symmetric reduction of general relativity as a 2d mechanical system with a non-trivial potential controlled by the cosmological constant, and explicitly identifying the conserved charges for black hole mechanics. We expect the Schr\"odinger symmetry to drive the dynamics of quantum Schwarzschild-(A)dS black holes. This suggests that Schr\"odinger-preserving non-linear deformations (of the Gross-Pitaevskii type) should capture universal quantum gravity corrections to the black hole geometry. Such a scenario could be realized in condensed matter analogue models.
Jibril Ben Achour, Etera R. Livine, Daniele Oriti
2023-02-15T13:25:19Z
http://arxiv.org/abs/2302.07644v3
# Schrodinger symmetry of Schwarzschild-(A)dS black hole mechanics ###### Abstract We show that the dynamics of Schwarzschild-(A)dS black holes admits a symmetry under the 2d Schrodinger group, whatever the sign or value of the cosmological constant. This is achieved by reformulating the spherically-symmetric reduction of general relativity as a 2d mechanical system with a non-trivial potential controlled by the cosmological constant, and explicitly identifying the conserved charges for black hole mechanics. We expect the Schrodinger symmetry to drive the dynamics of quantum Schwarzschild-(A)dS black holes. This suggests that Schrodinger-preserving non-linear deformations (of the Gross-Pitaevskii type) should capture universal quantum gravity corrections to the black hole geometry. Such a scenario could be realized in condensed matter analogue models. ## Introduction Black holes are iconic predictions of General Relativity which stand as a fantastic window to unravel the fundamental structure of spacetime. Indeed, the laws of black hole mechanics and their thermodynamical interpretation have revealed that they are equipped with an entropy and a temperature [1; 2]. It follows that black holes can be understood as many-body systems built from the collective behavior of (still unknown) microscopic degrees of freedom. Such a thermodynamical point of view on gravitational systems has since been widely extended, to cosmological spacetimes, causal diamonds and light-cone geometries. The key challenges in completing this picture are, on the one hand, to identify the nature of these microscopic degrees of freedom and, on the other hand, to understand the emergence of classical geometries from such a microscopic description. While there might be different ways to encode the microscopic degrees of freedom depending on the chosen model or theory, one expects their dynamics, and thus the emergence of spacetime in the continuum hydrodynamical approximation, to be governed by universal symmetries. Dualities between gravitational and condensed matter systems, for which the mean-field approximation methods are well under control, provide a powerful avenue to shed light on these issues. Such a mapping naturally emerged in the non-relativistic regime of holographic gauge/gravity dualities such as the AdS/CFT correspondence. In view of the prominent role played by the Schrodinger equation and its non-linear extensions in non-relativistic physics, an important effort has been devoted to constructing cold-atom/gravity correspondences based on the Schrodinger group [3; 4]. Concretely, non-relativistic holography relates manifolds with Schrodinger isometries to non-relativistic CFTs living on their boundary [5; 6; 7]. Condensed matter systems enjoying such non-relativistic conformal symmetry are characterized by an anisotropic scaling invariance of the spacetime coordinates of the form \[t\rightarrow\lambda t\;,\qquad x^{i}\rightarrow\lambda^{z}x^{i} \tag{1}\] where \(z=2\) is the critical exponent. Such invariance appears in a variety of contexts, from strongly correlated fermions, vortices and monopoles to compressible fluid mechanics and Bose-Einstein condensates. In particular, this conformal symmetry is realized for suitable non-linear Schrodinger equations describing ultracold atomic gases, such as the Gross-Pitaevskii condensate and the Tonks-Girardeau gas [8; 9]. 
While the construction of dualities between such condensed matter systems and gravity has mostly been investigated in the framework of non-relativistic holography, it seems that dictionaries between non-linear Schrodinger dynamics and gravity could be identified based directly on the shared symmetries of the two classes of systems. The goal of this short paper is to develop this storyline for Schwarzschild-(A)dS black holes. Concretely, we show that the spherically-symmetric stationary reduction of general relativity, which can more descriptively be called Schwarzschild-(A)dS black hole mechanics, admits a symmetry under the 2d Schrodinger group, whatever the sign and value of the cosmological constant. This is achieved by explicitly identifying the conserved charges generating this symmetry. This symmetry should a priori be conserved when quantizing, in particular when considering quantum gravity corrections to the black hole geometry. This should set a strong criterion to discriminate between regularized black hole metric proposals in quantum gravity phenomenology. For instance, assuming that quantum Schwarzschild-(A)dS black holes could be generally modeled as a non-linear extension of 2d quantum mechanics with a self-interaction between black hole quanta, preserving the Schrodinger group symmetry fixes the self-interaction term to be in \(\psi^{4}\), thus implying a universal UV behavior of quantum black holes and the existence of a dictionary between black hole quantum mechanics and the Gross-Pitaevskii equation. This scenario is especially interesting with respect to the possibility of imagining a new type of analogue quantum black hole system, e.g. with Bose-Einstein condensates, based on an exact mapping between dynamical conserved charges rather than on mimicking the Schwarzschild spacetime metric, as for sonic black holes. This possibility could then be extended to a large class of cosmological dynamics following the symmetry and conserved charge analysis of [10; 11; 12]. We start by reviewing the Schrodinger symmetry of classical mechanics, which encodes its invariance under Galilean and conformal transformations, and showing that it is indeed preserved under quantization. The Casimirs of the Schrodinger group, initially vanishing at the classical level, acquire non-zero values in quantum mechanics and reflect the extra degrees of freedom represented by the wave-function dressing the classical system. We then move to Schwarzschild-(A)dS black holes. The spherically-symmetric reduction of general relativity can be written as a mechanical system, with a non-trivial potential given by the cosmological constant term, and we show that this potential does not spoil the invariance under the Schrodinger symmetry. Finally, we argue that this should be a key symmetry, to be preserved, for quantum black holes and we discuss its relevance for quantum gravity. ### Schrodinger symmetry and Galilean relativity Let us start by reviewing the algebra of conserved charges for the classical mechanics of a free particle in \(d\) spatial dimensions, driven by the action: \[S[t,x^{a}]=\frac{m}{2}\int\mathrm{d}t\,\dot{x}^{a}\dot{x}_{a}\,, \tag{2}\] where \(m\) is the particle's mass and the index \(a\) runs from \(1\) to \(d\). 
The canonical analysis defines the conjugate momentum and Poisson bracket, \[p_{a}=m\dot{x}_{a}\,,\quad\{x^{a},p_{b}\}=\delta_{b}^{a}\,, \tag{3}\] and the Legendre transform gives the Hamiltonian, \[S[t,x^{a}]=\int\mathrm{d}t\,\left[p_{a}\dot{x}^{a}-H\right]\quad\text{with }H=\frac{1}{2m}p_{a}p^{a}\,. \tag{4}\] By Noether's theorem, symmetries are generated by conserved charges. In general, those conserved charges can depend explicitly on time and satisfy \[\mathrm{d}_{t}\mathcal{O}=\partial_{t}\mathcal{O}+\{\mathcal{O},H\}=0\,. \tag{5}\] The algebra of conserved charges for the free particle is well known. It leads to the Schrodinger algebra, which reflects the free particle's invariance under the Galilean transformations and conformal transformations. This construction is crucial, because this is the maximal symmetry preserved by the quantization. In more detail, a first set of conserved charges consists in the momentum \(p_{a}\), the Galilean boost generator \(b_{a}\) and the angular momentum \(j_{ab}\), \[b_{a}=\frac{1}{m}\big{[}mx_{a}-tp_{a}\big{]}\,,\quad j_{ab}=x_{a}p_{b}-x_{b}p_{a}\,, \tag{6}\] which satisfy the Galilean algebra \[\{p_{a},p_{b}\}=\{b_{a},b_{b}\}=0\,,\quad\{b_{a},p_{b}\}=\delta_{ab}\,, \tag{7}\] \[\{j_{ab},p_{c}\}=\delta_{ac}p_{b}-\delta_{bc}p_{a}\,,\quad\{j_{ab},b_{c}\}=\delta_{ac}b_{b}-\delta_{bc}b_{a}\,,\] \[\{j_{ab},j_{cd}\}=\delta_{ac}j_{bd}-\delta_{bc}j_{ad}-\delta_{ad}j_{bc}+\delta_{bd}j_{ac}\,.\] The momentum \(p_{a}\) generates the symmetry under space translations \(x^{a}\mapsto x^{a}+w^{a}\), while the angular momentum \(j_{ab}\) generates the symmetry under \(\mathrm{SO}(d)\) space rotations. The vector \(b_{a}\) depends explicitly on the time \(t\); it is an evolving constant of motion, indicating the initial condition (at \(t=0\)) for the particle position. It can be interpreted as an extra component of the angular momentum with respect to a pair of conjugate variables \((x^{0},p_{0})=(t,m)\). It generates the symmetry under translation by a fixed speed, \[x^{a}\mapsto x^{a}+v^{a}t\,,\qquad p^{a}\mapsto p^{a}+mv^{a}\,. \tag{8}\] Together, \((p_{a},b_{a},j_{ab})\) encode the Galilean relativity of the free classical particle. To these, we add three other conserved charges \(q_{\mu}\), defined as \[q_{+} =mH\,, \tag{9}\] \[2q_{0} =D-2Ht\,,\] \[2mq_{-} =mx^{a}x_{a}-2tD+2t^{2}H\,,\] where we have introduced the dilatation generator \(D=x^{a}p_{a}\). These three observables form a \(\mathfrak{sl}(2,\mathbb{R})\) Lie algebra, \[\{q_{0},q_{\pm}\}=\pm q_{\pm}\,,\quad\{q_{+},q_{-}\}=-2q_{0}\,, \tag{10}\] and generate the conformal symmetry of the free particle: \(q_{+}\propto H\) generates time translations, \(q_{0}\) is the initial condition for \(D\) and generates inverse rescalings of the position and momentum, and finally \(q_{-}\) gives the initial condition for the squared distance \(x^{2}\) and generates special conformal transformations. This conformal symmetry is a universal feature of mechanical systems, leading for instance to the conformal structure of the Hydrogen atom spectrum (e.g. [13]). 
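These statements are straightforward to check symbolically. The following sympy sketch (ours, not part of the paper) builds the charges for \(d=2\), verifies the \(\mathfrak{sl}(2,\mathbb{R})\) brackets (10), a few of the mixed brackets, and the conservation law \(\partial_{t}\mathcal{O}+\{\mathcal{O},H\}=0\) for every charge.

```python
import sympy as sp

x1, x2, p1, p2, t, m = sp.symbols('x1 x2 p1 p2 t m', real=True)
X, P = (x1, x2), (p1, p2)

def pb(A, B):
    """Canonical Poisson bracket on the d = 2 phase space."""
    return sum(sp.diff(A, q) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, q)
               for q, p in zip(X, P))

H  = (p1**2 + p2**2) / (2 * m)
D  = x1 * p1 + x2 * p2
qp = m * H                                            # q_+
q0 = (D - 2 * H * t) / 2                              # q_0
qm = (m * (x1**2 + x2**2) - 2 * t * D + 2 * t**2 * H) / (2 * m)   # q_-
b1, b2 = x1 - t * p1 / m, x2 - t * p2 / m             # Galilean boosts
j = x1 * p2 - x2 * p1                                 # angular momentum

# sl(2,R) brackets (10)
assert sp.simplify(pb(q0, qp) - qp) == 0
assert sp.simplify(pb(q0, qm) + qm) == 0
assert sp.simplify(pb(qp, qm) + 2 * q0) == 0

# a sample of the mixed brackets (11)
assert sp.simplify(pb(q0, p1) - p1 / 2) == 0
assert sp.simplify(pb(qm, p1) - b1) == 0
assert sp.simplify(pb(qp, b1) + p1) == 0

# conservation: d_t O = dO/dt + {O, H} = 0
for O in (p1, p2, b1, b2, j, qp, q0, qm):
    assert sp.simplify(sp.diff(O, t) + pb(O, H)) == 0
print("Schrodinger charge algebra of the free particle verified.")
```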
The \(\mathfrak{sl}(2,\mathbb{R})\) does not commute with the Galilean sector; the non-vanishing brackets are: \[\{q_{0},p_{a}\}=+\tfrac{1}{2}p_{a}\,,\quad\{q_{-},p_{a}\}=+b_{a}\,, \tag{11}\] \[\{q_{0},b_{a}\}=-\tfrac{1}{2}b_{a}\,,\quad\{q_{+},b_{a}\}=-p_{a}\,,\] Putting all the conserved charges together, this algebra is known as the d-dimensional Schrodinger algebra \(\mathfrak{sh}(d)\), \[\mathfrak{sh}(d)=(\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(d))\oplus_{s}(\mathbb{R}^{d}\oplus\mathbb{R}^{d})\,, \tag{12}\] where \(\oplus_{s}\) denotes a semi-direct sum, where the \(\mathfrak{sl}(2,\mathbb{R})\) sector generated by the \(q\)'s and the \(\mathfrak{so}(d)\) sector generated by the \(j\)'s act non-trivially on the \(\mathbb{R}^{d}\oplus\mathbb{R}^{d}\) sector consisting in the \(p\)'s and \(b\)'s. Once exponentiated, these charges give the Schrodinger symmetry group, \[\mathrm{Sh}(d)=(\mathrm{SL}(2,\mathbb{R})\times\mathrm{SO}(d))\ltimes(\mathbb{R}^{d}\times\mathbb{R}^{d})\,. \tag{13}\] This is the key symmetry group of mechanics preserved by quantization. An important remark is that, while there are \(2d\) independent variables in the phase space, given by the pairs \((x^{a},p_{a})\), we have identified \(3+d(d-1)/2+2d\) conserved charges. This means that these constants of motion are clearly redundant and that there exist relations between them. These relations are nevertheless not linear, and it is important to keep in mind that a non-linear combination of Lie algebra generators lies by definition outside that Lie algebra: the symmetry transformations generated by a conserved charge or a power of that charge are a priori not the same. Let us focus here on the two-dimensional case \(d=2\). A more systematic treatment for arbitrary dimension can be found in [14]. For \(d=2\), the angular momentum has a single component \(j\equiv j_{12}\). A first relation expresses it in terms of the two pairs of constants of motion \((b^{a},p_{a})\), \[{\cal C}_{2}\equiv b\wedge p-j=0\ \ \mbox{with}\ \ b\wedge p=\left(b_{1}p_{2}-b_{2}p_{1}\right), \tag{14}\] which reflects that the \(b_{a}\)'s are simply the evolving constants of motion for the positions \(x^{a}\). This actually is the quadratic Casimir of the Schrodinger algebra: it commutes with all the Schrodinger charges, and thus is invariant under translations, boosts, rotations and conformal transformations. Another set of conditions resulting from the expressions of the charges in terms of \(x\)'s and \(p\)'s gives the conformal charges in terms of the boost charges and momenta: \[q_{+}=\frac{p^{2}}{2}\,,\quad q_{0}=\frac{b^{a}p_{a}}{2}\,,\quad q_{-}=\frac{b^{2}}{2}\,. \tag{15}\] But these relations are not invariant under conformal transformations. Another important relation is the balance equation giving the \(\mathfrak{sl}(2,\mathbb{R})\) Casimir in terms of the angular momentum: \[q_{+}q_{-}-q_{0}^{2}=\tfrac{1}{4}j^{2}\,, \tag{16}\] but it is not invariant under translations or boosts. It is nevertheless possible to repackage these relations in terms of the cubic Casimir of the Schrodinger algebra, \[{\cal C}_{3} \equiv q_{0}^{2}-q_{+}q_{-}+\tfrac{1}{4}j^{2}\] \[+\tfrac{b^{2}}{2}q_{+}+\tfrac{p^{2}}{2}q_{-}-b^{a}p_{a}q_{0}-\tfrac{b\wedge p}{2}j=0\,,\] which is appropriately invariant under all Schrodinger symmetries. 
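The vanishing of both Casimirs for the classical \(d=2\) particle can also be checked in a few lines of sympy (ours, with the same symbol conventions as above).

```python
import sympy as sp

x1, x2, p1, p2, t, m = sp.symbols('x1 x2 p1 p2 t m', real=True)
H  = (p1**2 + p2**2) / (2 * m)
D  = x1 * p1 + x2 * p2
qp, q0 = m * H, (D - 2 * H * t) / 2
qm = (m * (x1**2 + x2**2) - 2 * t * D + 2 * t**2 * H) / (2 * m)
b1, b2 = x1 - t * p1 / m, x2 - t * p2 / m
j = x1 * p2 - x2 * p1

bp   = b1 * p1 + b2 * p2        # b . p
bwp  = b1 * p2 - b2 * p1        # b ^ p
bb   = b1**2 + b2**2            # b^2
pp   = p1**2 + p2**2            # p^2

# quadratic Casimir (14):  C2 = b ^ p - j = 0
assert sp.simplify(bwp - j) == 0
# balance equation (16):  q+ q- - q0^2 = j^2 / 4
assert sp.simplify(qp * qm - q0**2 - j**2 / 4) == 0
# cubic Casimir (17):  C3 = 0
C3 = q0**2 - qp*qm + j**2/4 + bb*qp/2 + pp*qm/2 - bp*q0 - bwp*j/2
assert sp.simplify(C3) == 0
print("C2 = C3 = 0 for the classical particle, as stated.")
```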
Although the Schrodinger symmetry algebra is preserved by the quantization, and even characterizes the quantization procedure, these relations and vanishing Casimir conditions, \({\cal C}_{2}={\cal C}_{3}=0\), are not valid at the quantum level anymore. Their non-zero values actually encode the dressing of the classical particle with quantum fluctuations and reveal the infinite tower of new degrees of freedom when upgrading the classical variables \((x^{a},p_{a})\) to a wave-function \(\Psi(x^{a})\). The goal of the present letter is to show that the dynamics of (spherically symmetric) black holes in general relativity is also driven by the same Schrodinger symmetry charges, as pointed out in [10], to extend those previous results to include a non-vanishing cosmological constant, and to discuss its role in describing quantum black holes. ## II Symmetry of Quantum Mechanics Before moving on to black holes, we discuss the fate of the Schrodinger symmetry in standard non-relativistic quantum mechanics. We consider the free Schrodinger system in \(d\) spatial dimensions defined by the field theory Lagrangian: \[S[\Psi,\bar{\Psi}]=\int\mathrm{d}t\mathrm{d}^{d}x\,\left[i\hbar\bar{\Psi}\partial_{t}\Psi-\frac{\hbar^{2}}{2m}\partial_{a}\Psi\partial^{a}\bar{\Psi}\right]\,. \tag{18}\] The resulting field equation is the Schrodinger equation: \[i\partial_{t}\Psi=-\frac{\hbar}{2m}\partial_{a}\partial^{a}\Psi\,, \tag{19}\] which gives the equation of motion for the wave-function \(\Psi\) in the \(x\)-polarization. The canonical analysis of this action gives the pair of conjugate variables, \[\{\Psi(x),\bar{\Psi}(y)\}=\tfrac{1}{i\hbar}\,\delta^{(d)}(x-y)\,, \tag{20}\] and the field theory Hamiltonian, \[H=-\frac{\hbar^{2}}{2m}\int\mathrm{d}^{d}x\,\bar{\Psi}\partial_{a}\partial^{a}\Psi\,. \tag{21}\] We introduce the probability integral \(n=\int\mathrm{d}^{d}x\,\bar{\Psi}\Psi\), also understood as the number of particles, and the average position and momentum, \[X^{a}=\int\mathrm{d}^{d}x\,\bar{\Psi}x^{a}\Psi\,,\quad P_{a}=-i\hbar\int\mathrm{d}^{d}x\,\bar{\Psi}\partial_{a}\Psi\,, \tag{22}\] as well as the quadratic moments of the wave function, \[J_{ab} = -i\hbar\int\mathrm{d}^{d}x\,\bar{\Psi}\left(x_{a}\partial_{b}-x_{b}\partial_{a}\right)\Psi\,, \tag{23}\] \[D = \frac{-i\hbar}{2}\int\mathrm{d}^{d}x\,\bar{\Psi}\left(x^{a}\partial_{a}+\partial_{a}x^{a}\right)\Psi\,,\] (24) \[{\cal X} = \int\mathrm{d}^{d}x\,\bar{\Psi}x^{a}x_{a}\Psi\,. \tag{25}\] The angular momentum \(J_{ab}\), the expectation value \(D\) of the dilatation generator \(\vec{x}\cdot\vec{p}\) and the position uncertainty \({\cal X}\) characterize the shape of the wave packet. The integrals, \(n\), \(P_{a}\) and \(J_{ab}\), have vanishing Poisson brackets with the Hamiltonian, and are thus constants of motion, \(\{n,H\}=\{P_{a},H\}=\{J_{ab},H\}=0\). As for classical mechanics, we introduce the evolving position observable: \[B_{a}=X_{a}-\tfrac{t}{m}P_{a}\,,\quad\mathrm{d}_{t}B_{a}=\partial_{t}B_{a}+\{B_{a},H\}=0\,. \tag{26}\] We compute the Poisson brackets between those observables, \[\{B_{a},P_{b}\} = \delta_{ab}n\,, \tag{27}\] \[\{J_{ab},P_{c}\} = \delta_{ac}P_{b}-\delta_{bc}P_{a}\,,\] \[\{J_{ab},B_{c}\} = \delta_{ac}B_{b}-\delta_{bc}B_{a}\,,\] \[\{J_{ab},J_{cd}\} = \delta_{ac}J_{bd}-\delta_{bc}J_{ad}-\delta_{ad}J_{bc}+\delta_{bd}J_{ac}\,,\] which form a centrally extended Galilean algebra, with the number of particles \(n\) as the central charge. 
We complete this set of conserved charges with the constants of motion encoding the evolution of the quadratic quantum uncertainty: \[Q_{+} = mH\,, \tag{28}\] \[2Q_{0} = D-2Ht\,,\] \[2mQ_{-} = m{\cal X}-2tD+2t^{2}H\,.\] The evolving constants of motion \(Q_{0}\) and \(Q_{-}\) are the initial conditions at \(t=0\), respectively for the observable \(D\) and the position spread \({\cal X}\). Their explicit time dependence exactly compensates their non-vanishing brackets with the Hamiltonian. As expected, these form a \(\mathfrak{sl}(2,\mathbb{R})\) algebra, \[\{Q_{0},Q_{\pm}\}=\pm Q_{\pm}\,,\quad\{Q_{+},Q_{-}\}=-2Q_{0}\,, \tag{29}\] whose Casimir is \({\cal C}_{\mathfrak{sl}}=Q_{0}^{2}-Q_{+}Q_{-}\). This is the quadratic uncertainty algebra of [15]. The remaining non-vanishing brackets are given by \[\{Q_{0},P_{a}\}=+\tfrac{1}{2}P_{a}\,,\quad\{Q_{-},P_{a}\}=+B_{a}\,, \tag{30}\] \[\{Q_{0},B_{a}\}=-\tfrac{1}{2}B_{a}\,,\quad\{Q_{+},B_{a}\}=-P_{a}\,.\] We recognize the same d-dimensional Schrodinger algebra \(\mathfrak{sh}(d)\) as for classical mechanics, \[\mathfrak{sh}(d)=(\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(d))\oplus_{s}\left(\mathbb{R}^{d}\oplus\mathbb{R}^{d}\right). \tag{31}\] The important difference with classical mechanics is that the Schrodinger Casimirs do not vanish anymore. This reveals a tower of extra degrees of freedom. Indeed, the Schrodinger charges for the classical particle could all be written as polynomials in the canonical position and momentum. This is no longer the case in quantum mechanics. The wave-function \(\Psi\) contains infinitely more information than the classical position and momentum: the charges \((J_{ij},Q_{0},Q_{\pm})\) are now independent of the linear observables \((P_{i},B_{i})\) and encode the shape of the wave-packet; they are legitimate degrees of freedom, representing the quantum fluctuations on top of the classical motion. To be more precise, we can look into the \(d=2\) case. A non-zero quadratic Casimir reveals an extra contribution to the angular momentum, \[{\cal C}_{2}=\langle\hat{x}_{1}\rangle\langle\hat{p}_{2}\rangle-\langle\hat{x}_{2}\rangle\langle\hat{p}_{1}\rangle-n\langle J_{12}\rangle\,\neq 0\,, \tag{32}\] which actually means that the quantum state \(\Psi\) carries non-trivial correlation and entanglement between the two directions \(x_{1}\) and \(x_{2}\). Similarly, the cubic Casimir \({\cal C}_{3}\) relates the \(\mathfrak{sl}_{2}\) Casimir for the conformal symmetry to the Galilean generators. The fact that it does not vanish anymore, and that it can take arbitrary values, reflects that the (quadratic) quantum uncertainty - the spread of the wave packet - measured by the \(Q\)'s can evolve independently from the classical degrees of freedom \(X^{a},P_{a}\). From this perspective, non-zero values of the Schrodinger Casimirs, \({\cal C}_{2}\neq 0\), \({\cal C}_{3}\neq 0\), are witnesses of the quantumness of the system. Once exponentiated, these conserved charges generate symmetries of the system according to Noether's theorem. This gives the Schrodinger group, \[\mathrm{Sh}(d)=\left(\mathrm{SL}(2,\mathbb{R})\times\mathrm{SO}(d)\right)\ltimes(\mathbb{R}^{d}\times\mathbb{R}^{d})\,, \tag{33}\] identified as the maximal symmetry group of the free Schrodinger equation by Niederer in [16]. We catalogue in Table 1 the various symmetry transformations. While phase multiplication, translations and boosts are usual transformations, it is instructive to take a closer look at the conformal transformations. 
Indeed, these are not mere rescalings. They are non-trivial symmetry transformations, creating a complex phase factor, affecting the complex width of Gaussian wave-packets, thus leading to physical effects. More precisely, these are given by time reparameterization, with a non-trivial rescaling of the space coordinates, following e.g. [17], \[t\mapsto\tilde{t}=f(t)\,,\quad x_{a}\mapsto\tilde{x}_{a}=\dot{f}(t)^{\frac{1}{2}}x_{a}\,, \tag{34}\] and both a conformal rescaling and a non-trivial phase for the wave-function, \[\Psi\mapsto\widetilde{\Psi}(\tilde{t},\tilde{x}_{a})=\dot{f}(t)^{-\frac{d}{4}}\,e^{i\frac{m}{4}\frac{\ddot{f}}{\dot{f}}x_{a}x^{a}}\,\Psi(t,x_{a})\,, \tag{35}\] which leads to the following transformation of the action, \[S[\tilde{t},\tilde{x},\widetilde{\Psi}]=S[t,x,\Psi]-\frac{m}{4}\int\mathrm{Sch}[f](t)x_{a}x^{a}\Psi\bar{\Psi}\,, \tag{36}\] with the Schwarzian derivative of the reparametrization function, \[\mathrm{Sch}[f]=\dot{h}-\frac{1}{2}h^{2}\,,\quad\mathrm{with}\ h=\ddot{f}/\dot{f}\,. \tag{37}\] This is a symmetry as soon as the Schwarzian derivative vanishes, i.e. when \(f\) is a Moebius transformation, \[\mathrm{Sch}[f]=0\Leftrightarrow f(t)=\frac{\alpha t+\beta}{\gamma t+\delta}\,. \tag{38}\] This is the \(\mathrm{SL}(2,\mathbb{R})\) symmetry group generated by the three conserved charges \(Q_{0}\) and \(Q_{\pm}\), as can be directly checked by looking at infinitesimal Moebius transformations. The purpose of the present work is to show that this Schrodinger symmetry also controls black hole dynamics in general relativity. This underlines the universality of the Schrodinger charges, but also provides a direct bridge between black hole mechanics and quantum mechanics, which should shed clarifying light on the quantization of black holes.

| charge | symmetry |
| --- | --- |
| \(n\) | phase transformation |
| \(P_{a}\) | space translations |
| \(B_{a}\) | Galilean boosts |
| \(Q_{+}\propto H\) | time translation |
| \(Q_{0}\) | time dilatation |
| \(Q_{-}\) | special conformal |

Table 1: Schrödinger conserved charges

## III Schwarzschild-(A)dS Black Hole Mechanics We now turn to the main proof-of-concept model for general relativity, namely the eternal Schwarzschild-(A)dS black hole. The action driving the dynamics of the geometry of the black hole is obtained by symmetry reduction and gauge-fixing from the vacuum Einstein-Hilbert-\(\Lambda\) action \[S[g]=\frac{1}{\ell_{P}^{2}}\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\left[\mathcal{R}-2\Lambda\right], \tag{39}\] where \(\ell_{P}\) is the Planck length. Boundary terms do not play any relevant role in the present analysis. We consider a static spherically symmetric manifold \(\mathcal{M}=\mathbb{R}\times\Sigma_{\epsilon}\) with line element \[\mathrm{d}s^{2}=\epsilon\left(-N^{2}(r)\mathrm{d}r^{2}+\gamma_{tt}(r)\mathrm{d}t^{2}\right)+\gamma_{\theta\theta}(r)\mathrm{d}\Omega^{2}\,, \tag{40}\] where \(\gamma_{ij}(r)\) is the induced metric on the constant \(r\) hypersurfaces \(\Sigma_{\epsilon}\), and \(\mathrm{d}\Omega^{2}=\mathrm{d}\theta^{2}+\sin^{2}\theta\mathrm{d}\varphi^{2}\) is the standard 2-metric on the angular sector. The parameter \(\epsilon=\pm 1\) allows us to deal with both the interior and the exterior of the black hole using the same formalism. Our conventions are naturally adapted to the case \(\epsilon=+\) corresponding to the black hole interior: the coordinate \(r\) is time-like, and the radial metric component \(N(r)\) plays the role of the lapse between hypersurfaces. 
The case \(\epsilon=-\) corresponds to the exterior region of the black hole where \(r\) is a space-like coordinate and \(t\) is time-like. We decompose the metric components as \[\gamma_{tt}:=2\beta(r)/\alpha(r)\;,\qquad\gamma_{\theta\theta}:=\ell_{s}^{2}\alpha(r)\;, \tag{41}\] where we introduce a fiducial length scale \(\ell_{s}\) defining the dimensionful unit for the 2-sphere radius. Evaluating the full Einstein-Hilbert-\(\Lambda\) action on this metric ansatz gives the reduced action encoding the dynamics of the black hole geometry [10; 18]: \[S_{\epsilon}[\alpha,\beta]=\epsilon c\ell_{P}\int\mathrm{d}\tau\left[\frac{\epsilon}{\ell_{s}^{2}}-\frac{\epsilon\alpha}{\ell_{\Lambda}^{2}}+\frac{\beta\dot{\alpha}^{2}-2\alpha\dot{\alpha}\dot{\beta}}{2\alpha^{2}}\right], \tag{42}\] where we have introduced a field-rescaled radial coordinate \(\tau\) defined by: \[\mathrm{d}\tau=\sqrt{\frac{2\beta}{\alpha}}N(r)\mathrm{d}r\,, \tag{43}\] and the dot denotes the derivative with respect to \(\tau\). The length scale \(\ell_{\Lambda}=1/\sqrt{\Lambda}\) encodes the cosmological constant. The dimensionless constant \(c\) comes from restricting the range of spatial integration to a bounded region of the hypersurface \(\Sigma_{\epsilon}\). Indeed, the metric being homogeneous, the integration over the non-compact 3-manifold automatically yields an infinite result. This is naturally resolved by introducing an infra-red cut-off \(\ell_{0}\) for the coordinate \(t\). This gives: \[c=\frac{1}{\ell_{p}^{3}}\int_{t_{1}}^{t_{f}}\mathrm{d}t\oint\ell_{s}^{2}\mathrm{d}\Omega=\frac{\ell_{0}\ell_{s}^{2}}{\ell_{p}^{3}}\,, \tag{44}\] as the ratio between the IR scale and the UV scale of the system. The lapse \(N(r)\) has been completely absorbed in the definition of the radial coordinate \(\tau\). We can safely proceed to describing the system's phase space and evolution with respect to this coordinate. This is equivalent to gauge-fixing the lapse to \(N=\sqrt{\alpha/2\beta}\). We must nevertheless retain the equation of motion corresponding to lapse variations \(\delta N\), which implies that the Hamiltonian vanishes, as customary for relativistic systems. Solving the field equations gives the metric \[\mathrm{d}s^{2}=-\epsilon\frac{\alpha}{2\beta}\mathrm{d}\tau^{2}+\epsilon\frac{2\beta}{\alpha}\mathrm{d}t^{2}+\ell_{s}^{2}\alpha\mathrm{d}\Omega^{2}\,, \tag{45}\] with \(\alpha=k^{2}(\tau-\tau_{0})^{2}\) and \[-2\epsilon\beta=\frac{1}{\ell_{s}^{2}}(\tau-\tau_{0})(\tau-\tau_{1})-\frac{k^{2}}{3\ell_{\Lambda}^{2}}(\tau-\tau_{0})^{4}\,, \tag{46}\] where \(\tau_{0}\), \(\tau_{1}\) and \(k\) are constants of integration. Rescaling the coordinates as \(r=k\ell_{s}(\tau-\tau_{0})\) and \(\tilde{t}=t/k\ell_{s}\), we recover the Schwarzschild-(A)dS solutions, \[\mathrm{d}s^{2}=-f(r)\mathrm{d}\tilde{t}^{2}+f(r)^{-1}\mathrm{d}r^{2}+r^{2}\mathrm{d}\Omega^{2}\,, \tag{47}\] with the metric component, \[f(r)=1-\frac{\ell_{M}}{r}-\frac{r^{2}}{3\ell_{\Lambda}^{2}}\quad\text{with }\ell_{M}=k\ell_{s}(\tau_{1}-\tau_{0})\,. \tag{48}\] The constants of integration \(\tau_{0}\), \(\tau_{1}\), \(k\) and the IR regularization scale \(\ell_{s}\) are combined together into the single physical parameter \(\ell_{M}\), which gives the Schwarzschild mass of the black hole. In order to study the symmetries of black hole mechanics, it is convenient to switch to its phase space description. 
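Before moving to the phase space, the change of variables taking (45)-(46) to the familiar form (47)-(48) can be checked symbolically. The sympy sketch below is ours (not part of the original paper) and runs the check for both signs \(\epsilon=\pm 1\).

```python
import sympy as sp

tau, tau0, tau1, k, ls, lam, r = sp.symbols(
    'tau tau_0 tau_1 k ell_s ell_Lambda r', positive=True)

for eps in (+1, -1):
    alpha = k**2 * (tau - tau0)**2
    # from  -2 eps beta = (tau-tau0)(tau-tau1)/ls^2 - k^2 (tau-tau0)^4 / (3 lam^2)
    beta = -sp.Rational(1, 2) * eps * ((tau - tau0) * (tau - tau1) / ls**2
                                       - k**2 * (tau - tau0)**4 / (3 * lam**2))
    lM = k * ls * (tau1 - tau0)                 # Schwarzschild mass scale (48)
    f  = 1 - lM / r - r**2 / (3 * lam**2)
    sub = {tau: tau0 + r / (k * ls)}            # r = k ls (tau - tau0)

    # dt~^2 coefficient:  eps (2 beta / alpha) (k ls)^2  =  -f(r)
    gtt = (eps * 2 * beta / alpha * (k * ls)**2).subs(sub)
    assert sp.simplify(gtt + f) == 0
    # dr^2 coefficient:  -eps (alpha / 2 beta) / (k ls)^2  =  1 / f(r)
    grr = (-eps * alpha / (2 * beta) / (k * ls)**2).subs(sub)
    assert sp.simplify(grr - 1 / f) == 0
    # angular part:  ls^2 alpha = r^2
    assert sp.simplify((ls**2 * alpha).subs(sub) - r**2) == 0

print("Metric (45)-(46) reproduces the Schwarzschild-(A)dS form (47)-(48).")
```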
We compute the canonical momenta: \[p_{\alpha}=\frac{\epsilon c\ell_{P}}{\alpha^{2}}(\beta\dot{\alpha}-\alpha\dot{\beta})\;,\qquad p_{\beta}=-\epsilon c\ell_{P}\frac{\dot{\alpha}}{\alpha}\;, \tag{49}\] forming the canonical pairs \(\{\alpha,p_{\alpha}\}=\{\beta,p_{\beta}\}=1\). The Hamiltonian reads \[\mathcal{H}=\mathbf{H}^{(\Lambda)}-\frac{c\ell_{P}}{\ell_{s}^{2}}\quad\text{with}\;\;\mathbf{H}^{(\Lambda)}=\mathbf{H}^{(0)}+\frac{c\ell_{P}}{\ell_{\Lambda}^{2}}\alpha\;, \tag{50}\] \[\text{and}\quad\mathbf{H}^{(0)}=-\frac{1}{\epsilon c\ell_{P}}\left[\alpha p_{\alpha}p_{\beta}+\frac{1}{2}\beta p_{\beta}^{2}\right]\;. \tag{51}\] Remember that we need to impose that the Hamiltonian vanishes, \(\mathcal{H}=0\). This Hamiltonian constraint consists in a kinetic term \(\mathbf{H}^{(0)}\), a potential term whose coupling is the cosmological constant, and a constant shift. This constant shift depends on the IR/UV ratio \(c\). It is crucial, since it changes the on-shell value of \(\mathbf{H}^{(\Lambda)}\). Now that the dynamics of black holes has been formulated as a mechanical system, let us show that it admits a symmetry group isomorphic to the Schrodinger group. ## IV Schrodinger charges for Black Holes As static spherically symmetric metrics in general relativity have been recast as a mechanical system with two degrees of freedom, we would expect a symmetry under the \(d=2\) Schrodinger group if it were a free system. The potential actually vanishes when the cosmological constant is set to \(0\), or equivalently when the cosmological scale is sent to infinity, \(\ell_{\Lambda}\to+\infty\). In that case, we naturally identify Schrodinger charges. Below, we further show that, surprisingly, the cosmological potential does not spoil this symmetry, and so the Schrodinger group still drives the black hole dynamics whatever the value of \(\Lambda\). Let us start with the case \(\ell_{\Lambda}\to+\infty\), corresponding to a vanishing cosmological constant \(\Lambda=0\) and asymptotically flat Schwarzschild black holes. Symmetries are generated by conserved charges \(\mathcal{O}\), here satisfying \[\mathrm{d}_{\tau}\mathcal{O}=\partial_{\tau}\mathcal{O}+\{\mathcal{O},\mathbf{H}^{(0)}\}=0\,. \tag{52}\] Time-independent charges, i.e. with \(\partial_{\tau}\mathcal{O}=\{\mathcal{O},\mathbf{H}^{(0)}\}=0\), correspond to conformal Killing vectors in the field configuration space \((\alpha,\beta)\), while explicitly time-dependent charges, i.e. \(\partial_{\tau}\mathcal{O}\neq 0\), correspond to conformal Killing vectors in an extended field configuration space given by the Eisenhart-Duval lift Eisenhart and Duval (1988). This general approach was pushed forward in Eisenhart and Duval (1988) to investigate symmetries of gravitational mini-superspaces. Here, we identify translation and boost charges: \[\begin{split} P_{+}&=\sqrt{\alpha}p_{\alpha}+\frac{\beta p_{\beta}}{2\sqrt{\alpha}}\,,\quad c\ell_{P}B_{+}=\epsilon c\ell_{P}\frac{\beta}{\sqrt{\alpha}}+\tau P_{+}\,,\\ P_{-}&=\sqrt{\alpha}p_{\beta}\,,\qquad\qquad c\ell_{P}B_{-}=\epsilon c\ell_{P}2\sqrt{\alpha}+\tau P_{-}\,.\end{split} \tag{53}\] They form a closed Lie algebra with the charge \(J=2\alpha p_{\alpha}\): \[\begin{split}\{P_{-},P_{+}\}&=0\,,\qquad\{B_{-},B_{+}\}=0\,,\\ \{B_{\pm},P_{\pm}\}&=0\,,\qquad\{B_{\pm},P_{\mp}\}=\epsilon\,,\\ \{J,B_{\pm}\}&=\pm B_{\pm}\,,\quad\{J,P_{\pm}\}=\pm P_{\pm}\,,\end{split} \tag{54}\] where \(J\) generates rotations. We recognize the algebra of Galilean symmetries in two dimensions. 
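The brackets (54) and the conservation of the charges (53) under \(\mathbf{H}^{(0)}\) can be verified directly. The following sympy sketch is ours (not part of the original paper); it uses the same notation as the text.

```python
import sympy as sp

al, be, pal, pbe, tau, c, lP = sp.symbols(
    'alpha beta p_alpha p_beta tau c ell_P', positive=True)

def pbk(A, B):
    """Poisson bracket on the (alpha, p_alpha), (beta, p_beta) phase space."""
    return (sp.diff(A, al) * sp.diff(B, pal) - sp.diff(A, pal) * sp.diff(B, al)
            + sp.diff(A, be) * sp.diff(B, pbe) - sp.diff(A, pbe) * sp.diff(B, be))

for e in (1, -1):                                    # epsilon = +1 (interior), -1 (exterior)
    H0 = -(al * pal * pbe + be * pbe**2 / 2) / (e * c * lP)
    Pp = sp.sqrt(al) * pal + be * pbe / (2 * sp.sqrt(al))
    Pm = sp.sqrt(al) * pbe
    Bp = e * be / sp.sqrt(al) + tau * Pp / (c * lP)
    Bm = e * 2 * sp.sqrt(al) + tau * Pm / (c * lP)
    J  = 2 * al * pal

    # conservation: d_tau O = dO/dtau + {O, H0} = 0
    for O in (Pp, Pm, Bp, Bm, J):
        assert sp.simplify(sp.diff(O, tau) + pbk(O, H0)) == 0

    # algebra (54)
    assert sp.simplify(pbk(Pm, Pp)) == 0 and sp.simplify(pbk(Bm, Bp)) == 0
    assert sp.simplify(pbk(Bp, Pp)) == 0 and sp.simplify(pbk(Bm, Pm)) == 0
    assert sp.simplify(pbk(Bp, Pm) - e) == 0 and sp.simplify(pbk(Bm, Pp) - e) == 0
    assert sp.simplify(pbk(J, Pp) - Pp) == 0 and sp.simplify(pbk(J, Pm) + Pm) == 0
    assert sp.simplify(pbk(J, Bp) - Bp) == 0 and sp.simplify(pbk(J, Bm) + Bm) == 0

print("Galilean sector (53)-(54) of the black hole Schrodinger algebra verified.")
```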
Further introducing the dilatation generator \(D=(\alpha p_{\alpha}+\beta p_{\beta})\), we complete this set of conserved charges with the following observables, \[Q_{+}=c\ell_{P}\mathbf{H}^{(0)}\,,\quad Q_{0}=D-\tau\mathbf{H}^{(0)}\,, \tag{55}\] \[c\ell_{P}Q_{-}=-2\epsilon c\ell_{P}\beta-2\tau D+\tau^{2}\mathbf{H}^{(0)}\,,\] which form a \(\mathfrak{sl}(2,\mathbb{R})\) Lie algebra, \[\{Q_{0},Q_{\pm}\}=\pm Q_{\pm}\,,\qquad\{Q_{+},Q_{-}\}=-2Q_{0} \tag{56}\] The two sectors are coupled by non-vanishing Poisson brackets: \[\begin{split}\{Q_{0},P_{\pm}\}&=\frac{1}{2}P_{\pm}\,,\quad\{Q_{0},B_{\pm}\}=-\frac{1}{2}B_{\pm}\,,\\ \{Q_{-},P_{\pm}\}&=-B_{\pm}\,,\quad\{Q_{+},B_{\pm}\}=P_{\pm}\,,\end{split} \tag{57}\] leading to the 2d centrally extended Schrodinger algebra \(\mathfrak{sh}(2)=(\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(2))\oplus_{s}(\mathbb{R}^{2}\oplus\mathbb{R}^{2})\). Its quadratic and cubic Casimir both vanish, as expected in classical mechanics: \[\mathcal{C}_{2}=P_{+}B_{-}-P_{-}B_{+}-\epsilon J=0\,, \tag{58}\] \[\mathcal{C}_{3} =Q_{0}^{2}-Q_{+}Q_{-}-\frac{1}{4}J^{2}-\epsilon B_{+}B_{-}Q_{+}-\epsilon P_{+}P_{-}Q_{-}\] \[\quad-\epsilon(B_{-}P_{+}+B_{+}P_{-})Q_{0}+\tfrac{\epsilon}{2}(B_{-}P_{+}-B_{+}P_{-})J=0\,.\] The latter is the Schrodinger-invariant expression of the balance equation for the \(\mathfrak{sl}_{2}\) Casimir, \[Q_{0}^{2}-Q_{+}Q_{-}=\frac{1}{4}J^{2}\,. \tag{59}\] It is interesting to notice that the evolving position observables \(B_{\pm}\) allow us to define a canonical transformation to phase space coordinates that diagonalize the kinetic Hamiltonian. Indeed, we read position coordinates from \(B_{\pm}(\tau=0)\): \[X_{+}=\beta/\sqrt{\alpha}\,,\quad X_{-}=2\sqrt{\alpha}\,,\quad\{X_{\mp},P_{\pm}\}=1\,. \tag{60}\] Now the Hamiltonian takes a very simple form, \[\mathbf{H}^{(\Lambda)} =-\frac{\epsilon}{c\ell_{P}}P_{-}P_{+}+\frac{c\ell_{P}}{4\ell_{\Lambda}^{2}}X_{-}^{2} \tag{61}\] \[=\frac{\epsilon}{2c\ell_{P}}(P_{2}^{2}-P_{1}^{2})+\frac{c\ell_{P}}{8\ell_{\Lambda}^{2}}(X_{1}+X_{2})^{2}\,,\] where we have introduced \[P_{\pm}=\frac{P_{1}\pm P_{2}}{\sqrt{2}}\,,\qquad X_{\pm}=\frac{X_{1}\mp X_{2}}{\sqrt{2}}\,. \tag{62}\] This clarifies the mapping of black hole mechanics onto the \(d=2\) particle, with the awkward sign switch in the kinetic term, here \((P_{2}^{2}-P_{1}^{2})\) instead of \((P_{2}^{2}+P_{1}^{2})\). This sign is a central feature of general relativity. It signals the gravitational instability (due to the conformal factor) that leads to gravitational collapse, black holes and cosmological expansion. The black hole phase space IR/UV ratio \(c\) plays the role of the 2d particle mass. Keep in mind that the black hole mass is a variable in black hole mechanics. It is a property of the chosen classical solution. More precisely, it is actually a conserved quantity, which we express in terms of the Schrodinger charges below in (66). The cosmological constant creates a quadratic trapping potential for the center-of-mass of the system. As this is a quadratic potential, it seems that one could absorb it in a redefinition of the momenta. This is indeed what happens, as we show below. This is our main result. Indeed, turning on the cosmological constant \(\Lambda\neq 0\), we find, quite remarkably, that the Schrodinger algebra is preserved. The conserved charges are mildly modified. 
Explicitly, while \(P_{-}\) and \(B_{-}\) do not acquire corrections, the other translation and boost charges become \[P_{+}^{(\Lambda)} =P_{+}-\epsilon\frac{c^{2}\ell_{P}^{2}}{\ell_{\Lambda}^{2}}\frac{\sqrt{\alpha}}{p_{\beta}}\,, \tag{63}\] \[B_{+}^{(\Lambda)} =\frac{\beta^{(\Lambda)}}{\sqrt{\alpha}}+\epsilon\frac{\tau}{c\ell_{P}}P_{+}^{(\Lambda)}\,,\] \[J^{(\Lambda)} =J-\epsilon\frac{4c^{2}\ell_{P}^{2}}{3\ell_{\Lambda}^{2}}\frac{\alpha}{p_{\beta}}\,.\] The conformal sector is similarly modified, \[Q_{+}^{(\Lambda)}=c\ell_{P}{\bf H}^{(\Lambda)}\,,\quad Q_{0}^{(\Lambda)}=D^{(\Lambda)}-\tau{\bf H}^{(\Lambda)}\,, \tag{64}\] \[c\ell_{P}Q_{-}^{(\Lambda)}=-2\epsilon c\ell_{P}\beta^{(\Lambda)}-2\tau D^{(\Lambda)}+\tau^{2}{\bf H}^{(\Lambda)}\,,\] with the following \(\Lambda\)-corrections: \[\beta^{(\Lambda)}=\beta-\epsilon\frac{2c^{2}\ell_{P}^{2}}{3\ell_{\Lambda}^{2}}\frac{\alpha}{p_{\beta}^{2}}\,,\quad D^{(\Lambda)}=D-\epsilon\frac{4c^{2}\ell_{P}^{2}}{3\ell_{\Lambda}^{2}}\frac{\alpha}{p_{\beta}}\,. \tag{65}\] The new conserved charges satisfy \(\partial_{\tau}{\cal O}+\{{\cal O},{\bf H}^{(\Lambda)}\}=0\), and the Hamiltonian simply reads \(c\ell_{P}{\bf H}^{(\Lambda)}=-\epsilon P_{-}P_{+}^{(\Lambda)}\). We get the same Lie algebra as for the \(\Lambda=0\) case. It follows that the mechanics of Schwarzschild-(A)dS black holes is also invariant under the non-relativistic conformal Schrodinger symmetry. This result parallels the fact that the Schrodinger symmetry for 1d classical mechanics is preserved for two specific potentials: the harmonic potential and the inverse square potential (whose quantization was studied in [20]). From that point of view, the Schwarzschild-(A)dS black hole mechanics can be viewed as an extension of the flat Schwarzschild black hole mechanics, similar to the extension of the free particle to the harmonic oscillator (with a positive or negative pulsation). We can compute the value of those observables on classical solutions. In particular, we get: \[J^{(\Lambda)}=\frac{c\ell_{P}}{\ell_{s}^{2}}(\tau_{1}-\tau_{0})\,,\quad P_{-}=-\epsilon 2c\ell_{P}k\,,\] which allows us to identify the black hole mass as a conserved charge: \[\ell_{M}=-\epsilon\frac{2\ell_{s}^{3}}{c^{2}\ell_{P}^{2}}J^{(\Lambda)}P_{-}\,. \tag{66}\] An important remark is that the cosmological constant \(\ell_{\Lambda}\) never appears in the on-shell values of the Schrodinger charges. For instance, the cosmological constant does not change the Schrodinger Casimirs \({\cal C}_{2}={\cal C}_{3}=0\). In fact, \(\Lambda\) shifts the definition of the conserved charges but does not affect at all the Schrodinger symmetry. Let us insist that these are not space-time isometries or diffeomorphisms, but non-trivial symmetries of general relativity under transformations acting on the space of metrics. Here, we have found that the cosmological constant does not affect the symmetry of general relativity, at least in the spherically symmetric sector. From the point of view of symmetries, \(\Lambda\) will appear back when breaking the Schrodinger symmetry, for instance by introducing an "observer". This can be simply achieved by going beyond the gravitational sector and looking at the dynamics of matter fields coupled to the geometry, in which case the cosmological constant will certainly modify the dynamics and symmetries of the matter field evolution. 
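The \(\Lambda\)-corrected charges can be checked in the same way as the \(\Lambda=0\) case. The sympy sketch below (ours) verifies that \(P_{-}\), \(P_{+}^{(\Lambda)}\), \(J^{(\Lambda)}\) and \(B_{+}^{(\Lambda)}\) are conserved under \(\mathbf{H}^{(\Lambda)}\), and that the Hamiltonian factorizes as \(c\ell_{P}\mathbf{H}^{(\Lambda)}=-\epsilon P_{-}P_{+}^{(\Lambda)}\).

```python
import sympy as sp

al, be, pal, pbe, tau, c, lP, lLam = sp.symbols(
    'alpha beta p_alpha p_beta tau c ell_P ell_Lambda', positive=True)

def pbk(A, B):
    return (sp.diff(A, al) * sp.diff(B, pal) - sp.diff(A, pal) * sp.diff(B, al)
            + sp.diff(A, be) * sp.diff(B, pbe) - sp.diff(A, pbe) * sp.diff(B, be))

for e in (1, -1):
    H0   = -(al * pal * pbe + be * pbe**2 / 2) / (e * c * lP)
    HLam = H0 + c * lP * al / lLam**2
    Pp   = sp.sqrt(al) * pal + be * pbe / (2 * sp.sqrt(al))
    Pm   = sp.sqrt(al) * pbe
    J    = 2 * al * pal

    # Lambda-corrected charges (63) and (65)
    PpL   = Pp - e * c**2 * lP**2 / lLam**2 * sp.sqrt(al) / pbe
    JL    = J - e * 4 * c**2 * lP**2 / (3 * lLam**2) * al / pbe
    betaL = be - e * 2 * c**2 * lP**2 / (3 * lLam**2) * al / pbe**2
    BpL   = betaL / sp.sqrt(al) + e * tau * PpL / (c * lP)

    # conservation under the full Hamiltonian
    for O in (Pm, PpL, JL, BpL):
        assert sp.simplify(sp.diff(O, tau) + pbk(O, HLam)) == 0

    # factorization:  c lP H^(Lambda) = -eps P_- P_+^(Lambda)
    assert sp.simplify(c * lP * HLam + e * Pm * PpL) == 0

print("Lambda-corrected Schrodinger charges verified.")
```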
## V Discussion & Prospects We have shown that the dynamics of stationary spherically-symmetric metrics in general relativity can be formulated as a two-dimensional mechanical system with a non-trivial potential whose coupling constant is the cosmological constant. We call this model black hole mechanics. Keep in mind that the evolution parameter here is the radial coordinate, which is space-like outside the black hole and time-like in the interior region. This allowed us to show that the black hole mechanics is invariant under the \(d=2\) Schrodinger group. This invariance holds both for the interior and the exterior regions of the black hole. Moreover, it holds whatever the value of the cosmological constant \(\Lambda\). The symmetry transformations act on the phase space of geometries and are not mere space-time transformations. They change the black hole mass \(\ell_{M}\) and the singularity position \(\tau_{0}\), as well as the IR regularization scale \(\ell_{s}\), while leaving the equations of motion invariant. Since the Schrodinger group is the key (maximal) symmetry of classical mechanics which is preserved under quantization, it is natural to expect quantum black holes to retain this symmetry. Breaking this symmetry when quantizing black holes would definitely signal a strong deviation with respect to the standard quantization logic and would reveal some important hidden physical ingredients in the description of black holes in general relativity. Digging deeper in this direction, here we have taken the perspective of considering quantum mechanics as a field extension of classical mechanics. Putting aside conceptual issues (e.g. the measurement problem and collapse of the wave-function), quantum mechanics is mathematically formulated as a description of the dynamics of the wave-function: classical positions and momenta, evolving in time, are replaced by a wave-function, considered as a space-time field, interpreted as a dressed classical object with classical positions and momenta, plus extra degrees of freedom representing the shape fluctuations of the wave packet. From this viewpoint of quantization as field extension and turning to black holes, there are actually two natural field extensions of black hole mechanics: * On the one hand, it is natural to quantize black hole mechanics and lift a classical black hole metric to a wave-function with a fuzzy mass and a fuzzy singularity. Let us underline that this does not mean relaxing the hypothesis of stationarity or spherical symmetry: we describe quantum superpositions of spherically symmetric metrics. This goes in the same direction as the line of research on effective black hole metrics taking into account quantum gravity corrections and attempting to solve the singularity problem without introducing anisotropy or leaving spherical symmetry, e.g. [21; 22; 23]. Our analysis means that preserving the Schrodinger symmetry should be crucial to this approach (see e.g. [24] using the conformal symmetry to constrain regularized black hole metrics in effective quantum gravity models). * On the other hand, the natural field theory of black hole mechanics is general relativity, which reestablishes inhomogeneities and anisotropies on top of the spherically symmetric background and describes their dynamics. 
From this perspective, general relativity is to be interpreted more as the non-perturbative field theory of black hole excitations, instead of its usual interpretation as the field theory encoding the non-linear properties of gravitational waves. More precisely, general relativity would lead to a non-perturbative hydrodynamic description of the black hole microstates, with the black hole sector identified by the Schrodinger symmetry we have found here. It would then be natural to understand whether general relativity is invariant under an extension of the Schrodinger symmetry group. Let us underline that we expect that these symmetries would not be space-time diffeomorphisms, but non-trivial transformations on the phase space of geometries. Interestingly, it has been recently shown that the static perturbations of the Schwarzschild and Kerr black holes relevant to compute the Love numbers are also governed by a Schrodinger symmetry [25]. It would be interesting to further understand how this symmetry for perturbations can be related to the background symmetry discussed here. From a more general perspective, it would be enlightening to compare the Schrodinger charges derived here to the existing extended BMS charges and \(w_{1+\infty}\) charge algebra for asymptotically flat space-time as derived in e.g. [26; 27; 28; 29]. For both field theory extensions of black hole mechanics, we expect the Schrodinger Casimirs, \(\mathcal{C}_{2}\) and \(\mathcal{C}_{3}\), not to vanish anymore, and to reflect the extra structures and degrees of freedom dressing the black hole evolution. If both quantum black holes and asymptotically flat general relativity turn out to preserve the Schrodinger symmetry, it should definitely be a _key symmetry of quantum gravity_. As a direct application of the present work, we would like to point out that the Schrodinger symmetry already selects quantum corrections to black hole dynamics. Indeed, the Schrodinger charge algebra also holds for suitable non-linear extensions of the Schrodinger dynamics. In quantum mechanics, atom-atom microscopic interactions can be taken into account by introducing a potential \(\mathcal{V}[\Psi,\bar{\Psi}]\), which encodes the self-interaction of the wave-function fluctuations and excitations, e.g. [8; 9]. We expect that, similarly, quantum gravity will lead to a self-interaction between the quanta of geometry forming the black hole. Remarkably, depending on the spatial dimension \(d\), one can show that the Schrodinger charge algebra is preserved for suitable self-interactions. Since such a potential is homogeneous, it does not affect the symmetry under phase transformations, translations and boosts. One easily checks that the conformal symmetry (35) is preserved for \(\mathcal{V}\propto|\Psi|^{2n}\) when \(d(n-1)=2\). This is the case for the Gross-Pitaevskii equation in \(d=2\) dimensions with a quartic potential \(\mathcal{V}\propto|\Psi|^{4}\), or the Tonks-Girardeau equation for \(d=1\) with the self-interaction potential \(\mathcal{V}\propto|\Psi|^{6}\) leading to the quintic non-linear Schrodinger equation. This shows that there exists a non-trivial set of UV-corrected quantum dynamics protected by the Schrodinger symmetry. Applying this idea to quantum black holes suggests considering such symmetry-protected non-linear extensions of the Wheeler-DeWitt equation for investigating the dynamics of the black hole wave function.
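The condition \(d(n-1)=2\) follows from a short scaling argument, sketched here in standard conventions for the Schrodinger field action (for illustration only; the normalization of \(\Psi\) is chosen so that the free action is scale-invariant). Under the dilatation \[t\mapsto\lambda^{2}t\,,\qquad x^{i}\mapsto\lambda x^{i}\,,\qquad\Psi\mapsto\lambda^{-d/2}\Psi\,,\] the measure scales as \(\mathrm{d}t\,\mathrm{d}^{d}x\mapsto\lambda^{d+2}\mathrm{d}t\,\mathrm{d}^{d}x\), while both \(i\bar{\Psi}\partial_{t}\Psi\) and \(|\vec{\nabla}\Psi|^{2}\) scale as \(\lambda^{-(d+2)}\), so the free action is invariant. The self-interaction \(|\Psi|^{2n}\) scales as \(\lambda^{-dn}\), so its contribution to the action scales as \(\lambda^{d+2-dn}\) and is invariant precisely when \(dn=d+2\), i.e. \(d(n-1)=2\), recovering the \(|\Psi|^{4}\) potential in \(d=2\) and the \(|\Psi|^{6}\) potential in \(d=1\).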
It would be enlightening to understand which kind of quantum gravity models or scenarios generates such quantum black hole dynamics. We would like to conclude with the remark that, using the phase-amplitude factorization of the wave-function \(\Psi=\sqrt{\rho}e^{i\theta}\), the Schrodinger equation and its non-linear extension can be understood as a Navier-Stokes equation for compressible fluid dynamics, leading to the hydrodynamic reformulation of quantum mechanics, e.g. [30]. Since black hole mechanics is invariant under the \(d=2\) Schrodinger group, this means that we get an intriguing mapping between black hole quantum mechanics and two-dimensional hydrodynamics. It is tempting to speculate that this could be related to a fluid dynamics for gravitational quanta on the black hole horizon considered as a 2d membrane (as in the corner dynamics for general relativity [31]). A more down-to-earth expectation is that this mapping provides a promising avenue to reformulate quantum black holes as many-body Schrodinger systems. In fact, it opens the door to the possibility of a new class of analogue condensed matter models for black holes, cosmology and quantum gravity phenomenology, based on an exact mapping of symmetries, conserved charges and dynamics, instead of focusing on shaping and manufacturing equivalents of space-time metrics. _Acknowledgements._ The work of J. Ben Achour is supported by the Alexander von Humboldt foundation and the Sir John Templeton foundation.
2306.04862
A Systematic Literature Review on Client Selection in Federated Learning
With the arising concerns of privacy within machine learning, federated learning (FL) was invented in 2017, in which the clients, such as mobile devices, train a model and send the update to the centralized server. Choosing clients randomly for FL can harm learning performance due to different reasons. Many studies have proposed approaches to address the challenges of client selection of FL. However, no systematic literature review (SLR) on this topic existed. This SLR investigates the state of the art of client selection in FL and answers the challenges, solutions, and metrics to evaluate the solutions. We systematically reviewed 47 primary studies. The main challenges found in client selection are heterogeneity, resource allocation, communication costs, and fairness. The client selection schemes aim to improve the original random selection algorithm by focusing on one or several of the aforementioned challenges. The most common metric used is testing accuracy versus communication rounds, as testing accuracy measures the successfulness of the learning and preferably in as few communication rounds as possible, as they are very expensive. Although several possible improvements can be made with the current state of client selection, the most beneficial ones are evaluating the impact of unsuccessful clients and gaining a more theoretical understanding of the impact of fairness in FL.
Carl Smestad, Jingyue Li
2023-06-08T01:26:22Z
http://arxiv.org/abs/2306.04862v1
# A Systematic Literature Review on Client Selection in Federated Learning ###### Abstract. With the arising concerns of privacy within machine learning, federated learning (FL) was invented in 2017, in which the clients, such as mobile devices, train a model and send the update to the centralized server. Choosing clients randomly for FL can harm learning performance due to different reasons. Many studies have proposed approaches to address the challenges of client selection of FL. However, no systematic literature review (SLR) on this topic existed. This SLR investigates the state of the art of client selection in FL and answers the challenges, solutions, and metrics to evaluate the solutions. We systematically reviewed 47 primary studies. The main challenges found in client selection are heterogeneity, resource allocation, communication costs, and fairness. The client selection schemes aim to improve the original random selection algorithm by focusing on one or several of the aforementioned challenges. The most common metric used is testing accuracy versus communication rounds, as testing accuracy measures the successfulness of the learning and preferably in as few communication rounds as possible, as they are very expensive. Although several possible improvements can be made with the current state of client selection, the most beneficial ones are evaluating the impact of unsuccessful clients and gaining a more theoretical understanding of the impact of fairness in FL. systematic literature review, software metric, federated learning, client selection, neural network
## 1. Introduction Backward- and forward-snowballing was performed on a set of six papers, resulting in 47 primary studies to review after the quality assessment and study selection. The contributions of this SLR are as follows. * It summarizes the main challenges in terms of client selection for FL. The main challenges are heterogeneity, resource allocation, communication costs, and fairness. * It summarizes the important metrics for measuring client selection in regard to the main challenges. The most commonly used metrics are testing accuracy and communication rounds. * It discusses possible future work within the field of client selection for FL. The rest of the paper is organized as follows. The related work is presented in section 2. The research methodology and implementation are presented in section 3. Section 4 shows the results of this SLR, and section 5 discusses the results. Lastly, section 6 concludes the study and proposes future work. ## 2. Related Work There are several literature reviews and surveys related to FL. Hou et al. (2018) performed an SLR of blockchain-based FL and specialized in the architectures and applications. They identified four security issues in FL which motivate the use of blockchain.
The study mentioned the Internet of Things (IoT), medicine, and the Internet of Vehicles (IoV) as promising fields for application but did not mention client selection. Pfitzner et al. (Pfitzner et al., 2017) conducted an SLR of FL in a medical context. They focused on the areas that were promising for digital health applications. Antunes et al. (Antunes et al., 2018) did an SLR of FL for healthcare and focused on the architecture and remaining issues regarding applying FL to electronic health records (EHR). Both Pfitzner et al. (Pfitzner et al., 2017) and Antunes et al. (Antunes et al., 2018) focused on the security perspective and did not summarize client selection issues. Lo et al. (Lo et al., 2018) performed an SLR of FL from a software engineering perspective. They focused on what FL is, the different applications, general challenges, and how they are addressed. The five most common challenges were communication efficiency, statistical heterogeneity, system heterogeneity, data security, and client device security. The study noticed that client selection is mostly server-based but did not discuss it further. Liu et al. (Liu et al., 2018) conducted an SLR of FL but from a model quality perspective. The study presents several algorithm types, such as neural networks, decision trees, etc., with corresponding client-side algorithms but does not consider client selection. Shaheen et al. (Shaheen et al., 2018) investigated the applications, challenges and research trends of FL. The study reviewed 105 research studies and discovered that the most promising application is within the healthcare domain. They reported data imbalance, system heterogeneity, expensive communication, privacy concerns, statistical heterogeneity, and resource allocation as the main challenges of implementing FL. However, they did not relate any of these challenges to client selection. Witt et al. (Witt et al., 2018) wrote an SLR of FL from the incentivization methods perspective. This study also discusses blockchain as a possible improvement but does not mention client selection outside the scope of blockchain. Hosseinzadeh et al. (Hosseinzadeh et al., 2018) did an SLR of FL with emphasis on IoT, focusing on the evaluation factors and the future and open challenges of FL-based IoT. The study mentions a possible client selection method but does not focus on the topic. Lo et al. (Lo et al., 2018) reviewed the different architectural patterns to design FL systems. The study reports 15 architectural patterns, of which one is the client selector. The study provides a high-level overview of possible solutions, such as resource-based, data-based, and performance-based client selection, as well as some of the benefits and drawbacks of the pattern. Abreha et al. (Abreha et al., 2018) systematically surveyed FL in edge computing. The survey reports the main challenges as communication cost, reliability, privacy, and administrative policies. It also discusses client selection to a small degree by mentioning existing studies on the topic. Ali et al. (Ali et al., 2018) conducted an SLR of incentive-driven FL and the associated security challenges. Some incentive mechanisms include auction theory and blockchain, but the study does not touch on the topic of client selection and possibly how to incentivize clients. Ma et al. (Ma et al., 2018) reviewed the state-of-the-art in solving non-independent and identically distributed (non-IID) data in FL and addressed future trends for the topic.
When the datasets are not independent and identically distributed, it leads to weaker correlations and dependencies because samples of the datasets do not have the same probability distribution. Non-IID data is one of the largest challenges in FL, and the study discusses ways to improve it through, e.g., data enhancements and data selection. One of these methods is client selection, but the survey does not go more into depth than linking to relevant papers. ## 3. Research Design and Implementation To summarize the state of the art of client selection in FL and to answer our research questions, we performed a systematic literature review based upon the guidelines (Wohlin, 2018) and (Shi et al., 2018). ### Search Strategy Generally, the SLR approach for generating a search strategy is to break down the research questions into searchable terms and generate a list of synonyms, abbreviations, and alternative spellings. As a vast number of studies exist on the topic of FL, this process became unmanageable. Thus, the strategy used in this paper is based on the guidelines for snowballing in SLR by Wohlin (Shi et al., 2018), as shown in Figure 1, which includes the following main steps: * Step 1: Generate a start set of studies (including only papers that will be a part of the final analysis) * Step 2: Perform backward- and forward snowballing * Step 3: Decide to include or exclude the study * Step 4: Iterate until no new papers are found To start the snowballing procedure, a starting set was needed. Google Scholar was used to generate this starting set by using relevant terms such as "Federated Learning" and "Client Selection in Federated Learning." The results are listed below. * "Communication-Efficient Learning of Deep Networks from Decentralized Data" (Krishnan et al., 2017) * "Client Selection for Federated Learning with Heterogeneous Resources in Mobile Edge" (Krishnan et al., 2017) * "Client selection and bandwidth allocation in wireless federated learning networks: A long-term perspective" (Krishnan et al., 2017) * "Federated Learning in a Medical Context: A Systematic Literature Review" (Krishnan et al., 2017) * "A Systematic Literature Review of Blockchain-based Federated Learning: Architectures, Applications and Issues" (Krishnan et al., 2017) * "A state-of-the-art survey on solving non-IID data in Federated Learning" (Krishnan et al., 2017) When starting to perform forward and backward snowballing on the starting set, it was apparent that there were too many papers to add, as (Krishnan et al., 2017) is the first paper on federated learning and is cited by almost every relevant paper in the field. The paper provided a name for the devices in FL, namely "clients." By investigating several studies, it was clear that, despite FL being a young field within machine learning, a consensus existed on using the term "Client Selection" for choosing the appropriate devices. Thus, a search with that substring was conducted on Google Scholar using the "cited by" feature to choose the most relevant studies. ### Study Selection and Quality Assessment We defined inclusion and exclusion criteria to identify primary studies.
For a paper to be included, it has to fulfil all the following inclusion criteria: * Written in English * Published after 2017, because 2017 is the year federated learning originated (Krishnan et al., 2017) * Discusses client selection in Federated Learning * Peer-reviewed According to (Krishnan et al., 2017), quality can be seen as the extent to which the study minimizes bias and maximizes internal and external validity. Table 1 shows the different quality assessment criteria for empirical and non-empirical sources (Krishnan et al., 2017). For each selected paper, we assessed its quality according to the quality assessment criteria and awarded one point for _yes_ and zero points for _no_. We awarded half a point if it was uncertain whether or not the study fulfilled the criterion. Then an average was generated for each paper. A paper has to have an average of 0.5 or more to be accepted as a primary study. By applying the selection and quality assessment criteria, a total of 47 papers were chosen as primary studies for data extraction and synthesis. ### Data Synthesis Data synthesis involves collating and summarizing the results of the primary studies (Krishnan et al., 2017). SLRs within IT and software engineering are generally qualitative in nature. Based on the overview of data synthesis provided by (Krishnan et al., 2017), we synthesize the data in a spreadsheet, where the common themes, patterns, and findings across the extracted information can be viewed. For each RQ, relevant data were extracted and put into their respective columns according to the research question. Lastly, a list was manually generated based on the challenges and themes that were created for answering the research questions. The data synthesis process was recorded and is available at 1. Footnote 1: [https://docs.google.com/spreadsheets/d/1jGpkb/OcXazrRrcR_RdshmTshX0NNDCIXU9Wgw3SECAiw/edit/usp-sharing](https://docs.google.com/spreadsheets/d/1jGpkb/OcXazrRrcR_RdshmTshX0NNDCIXU9Wgw3SECAiw/edit/usp-sharing) ## 4. Research Results This section presents the results of each research question. ### RQ1: What are the main challenges in client selection? Results show that 23 studies tried to improve upon heterogeneity, 13 studies revolved around resource allocation, eight studies focused on communication costs, and three studies had fairness as the main challenge. The distribution of the challenges can be seen in Figure 2. Several studies report more than one challenge, but each study has been assigned to the challenge it focuses on most. Figure 1. Illustration of snowballing in SLR (Krishnan et al., 2017) Figure 2. Distribution of challenges reported from the primary studies ### Heterogeneity In FL, the training is executed on the clients' local devices. This will result in differences between the clients as they will have different datasets and availability. This is the most common challenge found in FL, and McMahan et al. (2017) reported heterogeneity as the main challenge. Almost half of the primary studies tried to improve it through different measures. Ma et al. (2017) conducted a state-of-the-art survey on solving non-IID data in FL and concluded that data heterogeneity could be divided into the following categories: feature distribution skew, label distribution skew, same label (different features), same feature (different labels) and quantity skew. Cho et al.
(2010) report that heterogeneity also might arise due to partial client participation, as only a small fraction of client nodes participate in each round of training. If the client selection algorithm selects an improper subset of clients with poor-quality data, this will result in an inefficiently trained model (Ma et al., 2017). Ma et al. (2017) reported label distribution skew as one of the most significant parameters which lead to performance degradation, while Rai et al. (2017) reported skewed data as one of the most critical factors. Zhang et al. (2017) reported that heterogeneity / Non-IID might bring the biases of some clients into the model training and cause accuracy degradation. This claim is supported by Zhang et al. (2017), who claim an urgent need for client selection strategies that promise data unbiasedness in FL. Li et al. (2017) analyzed the limitations of the state-of-the-art client selection in regard to heterogeneity and concluded that due to under-exploited statistical- and system efficiency, not all the model updates would contribute to the model training equally. As various clients have diverse data sizes and importance, uploading unimportant updates significantly degrades the system's efficiency. According to Li et al. (2017), a significant problem with utilizing FL with IoT is that the local data of sensors are constantly changing. This will have a similar effect as device failures and might lead to skewed distributed data, which leads to model degradation. Label noise, which arises naturally, might also exist on some clients. This will lead to unnecessary information being exchanged (Zheng et al., 2017). To summarize, the key findings for the challenge of heterogeneity are as follows. * 48.93% of the studies reported heterogeneity as the main challenge for FL. * It might result in an inefficiently trained model as well as performance and accuracy degradation. * Heterogeneity might increase biases and unnecessary exchange of information. ### Resource Allocation Resource allocation was the second most common problem in the primary studies. This is due to several reasons, but the main one is that the training process becomes inefficient when some clients have limited computational resources (Xu and Wang, 2017). Xu and Wang (2017) state that a considerable challenge in resource allocation is that learning rounds are temporally interdependent and have varying significance toward the final learning outcome. According to Yu et al. (2017), it is unnecessary to select more clients than needed, and it is beneficial to have fewer clients. Still, the challenge consists of the trade-off between the number of clients, energy consumption, and resource allocation. Furthermore, within hierarchical federated learning (HFL), unique challenges exist, such as clients sometimes being inaccessible to the edge servers. Due to differences in resources and hardware specifications, the "straggler effect" is bound to happen (Zhang et al., 2017). Zhang et al. (2017) stated that clients are constrained by their own energy and computational resources, which may reduce the efficiency of ML training tasks. This is because the training and transmission of large models are very energy-consuming and might be difficult on low-energy edge devices. During training, there might be changes in client resources due to volatility of client population, client data, and training status (Zhang et al., 2017).
The topic of energy consumption within FL is important, as training and transmission of large models are energy-consuming, while edge devices generally have little energy. Zeng et al. (2017) propose a client selection policy of giving the lowest priority to clients with poor communication capacity and a bad channel. To summarize, the key findings for the challenge of resource allocation are as follows. * 27.65% of the studies reported resource allocation as the main challenge of FL. * The training process becomes inefficient when some clients have limited computational resources. * Training and transmission of large models are very energy-consuming and difficult for low-energy devices. \begin{table} \begin{tabular}{l c c} **Quality Criteria** & **Empirical** & **Non-empirical** \\ \hline Was the motivation for the study provided? & X & X \\ Is the relevance to the industry discussed? & X & X \\ Are the most important sources linked to or discussed? & X & X \\ Is the aim (e.g., objectives, research goal) reported? & X & X \\ Was the research method or design described? & X & \\ Were any threats to validity clearly stated? & X & \\ \end{tabular} \end{table} Table 1. Quality assessment criteria based on (Zheng et al., 2017) ### Communication costs The third most common problem was the communication costs in FL. The communication cost is significant, as every time the global model updates, it needs to receive the local updates from all the selected clients. According to Tan et al. (2017), the communication power required to reach convergence makes up a large portion of the cost. One of the challenges is that a client with low computing power might not return the local model update on time, leading to a long convergence time (Levevic et al., 2017). Studies (Zhou et al., 2017; Li et al., 2018) state that the trade-off between communication costs and accuracy is a challenge. Asad et al. (2017) state that another challenge is the long distance between the different clients and the global server, which results in increased bandwidth usage. By default, FL is done synchronously. This implies that a round of communication / global model updates is only executed once every client has uploaded their model. This leads to an effect known as the straggler effect, where the system is only as fast as the slowest link (Zhou et al., 2017). This issue is also addressed by Qu et al. (2018). Another fundamental challenge with communication costs is the energy usage of clients in FL. As vast amounts of data are generated from mobile and edge devices, these devices are energy-restricted. It is imperative to improve the energy efficiency of the systems (Zhou et al., 2017). According to Deng et al. (2018), clients' hardware conditions and data resources can vary significantly, which might lead to negative performance. To summarize, the key findings for the challenge of communication costs are as follows. * 17.02% of the studies reported communication costs as the main challenge of FL. * Clients with low computing power or slow connections will lead to long convergence times. As FL is done synchronously, this implies that the learning is as fast as the slowest client. * The possibly long distance between clients and servers will result in increased bandwidth usage. ### Fairness The last common problem encountered was fairness. Only three studies reported it as the main challenge which they tried to solve. However, fairness is a researched topic within several similar fields, such as Resource Allocation (RA) and ML.
In the context of resource allocation, the problem is defined as allocating a scarce shared resource among many users. For machine learning, it is typically defined as the protection of some specific attribute(s) by, e.g., preprocessing the data to remove information about the protected attribute (Levic et al., 2017). In the context of FL, if the client selection algorithm always selects the fastest devices, it might boost the training process. However, as stated by Huang et al. (2019): "_But clients with low priority are simply being deprived of chances to participate at the same time, which we refer to it as an unfair selection._" It might result in undesirable effects, such as omitting some portions of data. Also, if there are less data involved, data diversity will not be guaranteed and might hurt the performance of model training. Jee Cho et al. (2018) state that by focusing on improving fairness, the uniformity of performance across clients will be improved as well. Li et al. (2019) define fairness in FL as follows: **Definition 1** (_Fairness of performance distribution_).: For trained models \(w\) and \(\tilde{w}\), \(w\) provides a more fair solution to the federated learning objective (1) than model \(\tilde{w}\), if the performance of model \(w\) on the \(m\) devices, \(\{a_{1},\dots,a_{m}\}\), is more uniform than the performance of model \(\tilde{w}\) on the \(m\) devices. Note: Decoupling is the main benefit of FL. The FL algorithms may involve hundreds to millions of remote devices learning locally by minimizing the objective function \(f(w)\) (1) (Li et al., 2019): \[\min_{w}f(w)=\sum_{k=1}^{m}p_{k}F_{k}(w) \tag{1}\] where \(m\) is the total number of devices, \(p_{k}\geq 0\), \(\sum_{k}p_{k}=1\), and the local objectives \(F_{k}\) can be defined by empirical risks over local data. Through this definition, it becomes apparent that learned models which might be biased towards devices with large numbers of data points or commonly occurring devices are unfair. According to Ma et al. (2019), differences in data distribution and uncertainty in data quality are challenging in FL, and data selection might exacerbate the unfairness of FL. There are several different methods for prioritizing clients. If one selects all the "fast" devices, it might result in faster training but will deprive slower clients of the chance to participate. If the selection is one-sided, it will bring negative side effects, such as neutralizing some portions of data (Li et al., 2019). In addition, clients may not provide honest results through various attacks, such as Byzantine attacks, which minimizes the effect of actual results of honest clients and reduces fairness (Li et al., 2019). To summarize, the key findings for the challenge of fairness are as follows. * 6.38% of the studies reported fairness as the main challenge of FL. * Selecting only the fastest clients might result in an unfair selection, as slower clients are deprived of the chance to participate. * An unfair selection might lead to heavy biases as some portions of the data are neutralized. ### RQ2: How are clients selected in federated learning? The different solutions are presented in this subsection and divided into their respective challenges. A summary of the findings is shown in Table 2. #### 4.6.1. Heterogeneity The most common approach to address this issue is to try to select a subset of clients who together give a more homogeneous dataset (Li et al., 2019). Ma et al.
(2019) performed a state-of-the-art survey on solving non-IID data in FL and mentioned (Li et al., 2019) as a possible solution through client selection. They proposed selecting clients with small data heterogeneity based on Thompson sampling. Abdulrahman et al. (2018) suggested a similar algorithm of selecting a subset of clients who together form a homogeneous subset. Zhang et al. (2019) proposed to measure the degrees of non-IID data present in each client and then select the clients with the lowest degrees. Li et al. (2019) and Saha et al. (2019) had similar ideas but suggested a more holistic approach by also including the system heterogeneity (e.g., resources) as well. Lin et al. (2019) propose to dynamically update the selection weights according to the impact of the client's data. Clustered Federated Learning (CFL) was introduced as an efficient scheme to balance out the non-IID data, and Albaseer et al. (2018) suggest leveraging the devices' heterogeneity to schedule them based on round latency and bandwidth to select clients. According to Li et al. (Li et al., 2017), this type of approach works well within IoT due to the advantage of naturally clustered factory devices. Lee et al. (Lee et al., 2017) also find clusters of clients who together have near IID data through being distribution-aware. In order to address the issue of label distribution skew, Ma et al. (Ma et al., 2018) suggested a method that checks the similarity between the aggregated data distribution of the selected clients and the global data distribution. Rai et al. (Rai et al., 2019) suggest giving each client an irrelevance score which improves the data distribution skewness. Cao et al. (Cao et al., 2019) have an interesting approach to clustering clients by grouping them according to classes of data and then randomly selecting one client within every group. Another promising approach suggested by Balakrishnan et al. (Balakrishnan et al., 2019) is to introduce diversity into client selection by measuring how a subset of clients can represent the whole when aggregated on the server for each communication round. Generally, the studies try to keep an unbiased client selection in order to promote fairness. However, Cho et al. (Cho et al., 2019) report that biasing the client selection towards choosing clients with higher local losses resulted in an improvement in the partial client participation problem. Abdulrahman et al. (Abdulrahman et al., 2019) addressed the same problem but proposed a multicriteria-based approach to predict whether clients are capable of performing the FL task. Other studies, such as (Wang et al., 2019), suggest strengthening client selection with cryptographic methods such as homomorphic encryption (HE). Pang et al. (Pang et al., 2019) bring forward the idea of selecting clients at different global iterations to guarantee the completion of the FL job. Lastly, Guo et al. (Guo et al., 2019) take into account both model weight divergence and local model training loss for selecting clients. #### 4.6.2. Resource Allocation In order to mitigate the effect of some clients having limited resources, Nishio and Yonetani (Nishio and Yonetani, 2019) suggest an algorithm that manages clients based on their resource conditions, thus allowing as many client updates as possible.
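To make this kind of resource-conditioned selection concrete, the following minimal sketch (illustrative only: the client attributes, the cost model, and the deadline are assumptions and do not reproduce the algorithm of any particular primary study) admits every client whose estimated training-plus-upload time fits within a round deadline, fastest first, so that as many updates as possible arrive on time.

```python
import random

def estimated_round_time(client):
    # Crude per-client cost model: local training time plus model upload time.
    train_time = client["num_samples"] / client["compute_speed"]
    upload_time = client["model_size_mb"] / client["bandwidth_mbps"]
    return train_time + upload_time

def select_clients(clients, deadline):
    """Admit every client expected to finish within the round deadline."""
    ranked = sorted(clients, key=estimated_round_time)
    return [c for c in ranked if estimated_round_time(c) <= deadline]

# Toy population with heterogeneous resources (all values are made up).
random.seed(42)
clients = [
    {
        "id": i,
        "num_samples": random.randint(100, 1000),
        "compute_speed": random.uniform(50, 500),   # samples per second
        "model_size_mb": 10.0,
        "bandwidth_mbps": random.uniform(1, 20),
    }
    for i in range(100)
]

selected = select_clients(clients, deadline=5.0)
print(f"{len(selected)} of {len(clients)} clients can finish within the deadline")
```

Schemes in the primary studies additionally account for factors such as bandwidth allocation, channel state, or energy budgets, as discussed below.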
Xu and Wang (Xu and Wang, 2019) create an algorithm that utilizes bandwidth allocation under long-term client energy constraints by using available wireless channel information in order to improve resource allocation. To deal with the resource allocation problem, Yu et al. (Yu et al., 2019) suggest maximizing the number of clients while minimizing the energy consumption by the clients by allocating a set amount of resources in terms of CPU and transmission power. Within HFL, Qu et al. (Qu et al., 2019) propose a client selection scheme with a network operator that learns the number of successful participating clients while dealing with a limited resource budget. Similarly, (Liu et al., 2019; Wang et al., 2019; Wang et al., 2019) suggested evaluating the learning quality of clients on a limited resource budget and then selecting the best clients. Shi et al. (Rai et al., 2019) suggest that clients should be selected by considering and quantifying factors such as the relative impact of clients' data and resource differences and then selecting the clients with the most significant score. Another method to deal with resource allocation is to focus on minimizing energy consumption and training delays in order to encourage more clients to participate in model updating. This may be done through reinforcement learning that learns to select the best subset of clients (Wang et al., 2019). Du et al. (Du et al., 2019) propose an algorithm that utilizes fuzzy logic by considering the number of local data, computing capability, and network resources of each client. #### 4.6.3. Communication Costs As communication cost is a vital challenge in FL, many attempts have been made to improve it. Ko et al. (Ko et al., 2019) developed a joint client selection algorithm that selects appropriate devices and allocates suitable amounts of resources to reduce convergence time due to high communication costs. Hossinzadeh et al. (Hossinzadeh et al., 2019) suggested a distributed client selection algorithm where the client devices participate in aggregation, resulting in lower communication costs while maintaining a low loss. Li et al. (Li et al., 2019) had a similar approach where they selected a subset of clients to participate in each round of training, and the remaining clients did not have to do any training, resulting in lower usage of both computing and communication resources. Another solution is proposed by Asad et al. (Asad et al., 2019), where there is a 3-way hierarchical framework to improve communication efficiency. It creates a cluster head that is responsible for communication with the global server, and local devices communicate with the cluster head. This will lead to model downloading and uploading requiring less bandwidth due to the short distances from the source to the destination. To tackle the energy consumption challenge, Zeng et al. (Zeng et al., 2019) suggested only selecting the clients who provide significant information in each round. This would enable them to select fewer clients and end up with lower total energy consumption. In order to avoid the "straggler effect" introduced through synchronous FL, Zhu et al. (Zhu et al., 2019) suggest an asynchronous approach where the server does not have to wait for all clients to be finished with their training. Tan et al. (Tan et al., 2019) proposed to utilize stochastic integer programming that selects clients in a reputation-aware manner. #### 4.6.4. Fairness Huang et al.
(Huang et al., 2019) promote a fairness-guaranteed client selection algorithm. They conclude that the final accuracy may increase by focusing on fairness but might sacrifice training efficiency. Whereas Jee Cho et al. (Cho et al., 2019) suggest improving fairness through biased client selection by selecting the ones with higher local loss. Wan et al. (Wan et al., 2019) propose to select the most honest and useful clients by utilizing a multi-armed bandit approach, resulting in dishonest clients being filtered out. ### RQ3: Which metrics are important for measuring client selection? The relevant metrics regarding client selection entirely depend on the problem the study is trying to improve upon. The different key metrics for each of the main challenges in client selection are summarized in Table 3. #### 4.7.1. Heterogeneity The most common metric used is measuring the test accuracy against the number of communication rounds. Out of the 20 studies which reported heterogeneity as the biggest challenge, 14 used this metric to measure the success of their client selection. This metric was also utilized by the original FL paper (Zheng et al., 2019) and is directly comparable to the standard within regular machine learning, where "Test Accuracy vs. Epoch" is very commonly seen. The main difference stems from FL having many clients send their model updates to a global server and then aggregate them. In that regard, a communication round corresponds to one epoch of the global server. Studies (Bou et al., 2017; Zhang et al., 2018) included a similar metric: the number of communication rounds up to a given threshold accuracy. This approach's main benefit is that it focuses more on minimizing the number of communication rounds, which are very costly in FL. Lastly, Abdulrahman et al. (2018) looked into how many selected clients are able to finish training without dropping out. #### 4.7.2. Resource Allocation For the challenge of resource allocation, the most common metric seen is also "Testing Accuracy vs. Communication Rounds." This is as expected, as it directly measures how well the FL-algorithm performs. Some studies supplement it with other metrics such as energy, delay, and client consumption (Zhou et al., 2018). For mobile edge computing (MEC) systems, the energy is the basis of the client training model, and delay determines the iteration speed and convergence speed of the global model. #### 4.7.3. Communication costs As already stated in section 4.4, the cost of communication between clients and the global server is one of the most expensive parts of FL. Thus, utilizing the right metrics to validate the reduced cost is vital. The typical "Testing Accuracy vs. Communication Rounds" is commonly seen in the studies, as higher testing accuracy in fewer communication rounds will lead to lower costs. Another beneficial metric is convergence time and latency, reported by Ko et al. (2018), as reducing the time spent in communication will lead to lower costs. Furthermore, Tan et al. (Tan et al., 2018) introduced the cost of hiring clients as an essential metric, as it was simply overlooked in existing studies and contributed a large part of the overall costs. #### 4.7.4. Fairness Other than the already discussed "testing accuracy vs. communication rounds" metric, Huang et al. (2018) utilized different metrics for measuring improved fairness. For instance, they included metrics such as the availability of the client and mathematically measured the long-term fairness through constraints. 
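To illustrate how the dominant metric is typically reported, the following self-contained sketch (a toy simulation with placeholder data and a scalar stand-in for the model; it does not implement any particular study's method) records global test accuracy after every communication round, together with the spread of per-client accuracy as a rough proxy for the uniformity notion used in the fairness definition above.

```python
import random
import statistics

random.seed(0)
NUM_CLIENTS, ROUNDS, PER_ROUND = 50, 20, 10

# Toy setup: each client's data is summarized by a local mean; the "global
# model" is a single scalar, and "accuracy" is simply closeness to a target.
client_means = [random.gauss(0.0, 1.0) for _ in range(NUM_CLIENTS)]
client_sizes = [random.randint(50, 500) for _ in range(NUM_CLIENTS)]
target = sum(m * n for m, n in zip(client_means, client_sizes)) / sum(client_sizes)

def accuracy(w, reference):
    return max(0.0, 1.0 - abs(w - reference))

w_global, history = 5.0, []
for rnd in range(1, ROUNDS + 1):
    selected = random.sample(range(NUM_CLIENTS), PER_ROUND)
    # Each selected client takes a local step toward its own data.
    local_models = [w_global + 0.5 * (client_means[k] - w_global) for k in selected]
    sizes = [client_sizes[k] for k in selected]
    # FedAvg-style aggregation: data-size-weighted average of the local models.
    w_global = sum(u * n for u, n in zip(local_models, sizes)) / sum(sizes)
    test_acc = accuracy(w_global, target)
    per_client_spread = statistics.pstdev(accuracy(w_global, m) for m in client_means)
    history.append((rnd, test_acc, per_client_spread))

for rnd, acc, spread in history:
    print(f"round {rnd:2d}  test accuracy {acc:.3f}  per-client spread {spread:.3f}")
```

Plotting the recorded (round, accuracy) pairs yields the usual "testing accuracy vs. communication rounds" curve, while the per-client spread gives one simple handle on the uniformity of performance across clients.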
### RQ4: What can be improved with the current client selection? There are a lot of improvements that can be made with the current client selection. As decentralized learning is still relatively young, there is room for improvement within all discussed challenges in this SLR. #### 4.8.1. Heterogeneity For hierarchical federated learning (HFL), Albaseer et al. (2017) suggested looking into finding the optimal thresholds for splitting clusters of clients, which certainly would improve the communication efficiency of the learning network. #### 4.8.2. Resource Allocation The primary studies reported several measures for possible future work which seem exciting and beneficial for the current client selection schemes. Xu and Wang (2018) stated that selecting clients as late as possible improves the efficiency of the client selection, but there is a lack of theoretical and practical research on the topic. The most commonly suggested improvement was reported by three of the studies (Zhou et al., 2018; Zhang et al., 2018; Zhang et al., 2018). They suggested looking into the effect of unsuccessful clients (or free-riders) and how to quantify the impact. These types of clients bring a lot of overhead costs into the learning network, and exploring the effects and solutions to those would undoubtedly improve the current client selection. None of the primary studies focused mainly on the effect of unsuccessful clients. However, studies (Zhou et al., 2018; Zhang et al., 2018) focused on optimizing client selection for volatile FL. This volatility stems from the dynamics of clients' data and the unreliable nature of clients (e.g., unintentional shutdown and network instability). Therefore, some work has been done on the topic, but there is certainly a gap that could be addressed further. Those studies focus much more on the client's ability to enter and leave training rather than the effect of unsuccessful clients. \begin{table} \begin{tabular}{c|l} **Challenge** & **Solution(s)** \\ \hline \multirow{4}{*}{Heterogeneity} & - Select subset of clients to make up homogeneous dataset \\ & - Measure degrees of non-IID data and select lowest values \\ & - Balance out non-IID data through clustered FL \\ & - Give clients an irrelevance score and base selection on that \\ & - Select a subset of clients who represent the entire set \\ & - Utilize cryptography and weight divergence \\ \hline \multirow{4}{*}{Resource Allocation} & - Base selection on resource conditions \\ & - Maximize the number of clients by minimizing energy consumption \\ & - Encourage clients to participate in model updating \\ & - Utilize fuzzy logic by considering several resource factors \\ \hline \multirow{4}{*}{Communication Costs} & - Joint client selection algorithm to reduce convergence time \\ & - Distributed client selection where the clients decide to participate \\ & - Only active clients should perform training \\ & - 3-way hierarchical framework to improve efficiency \\ & - Select the client with the most significant information each round \\ & - Asynchronous FL \\ \hline \multirow{4}{*}{Fairness} & - Fairness-guaranteed client selection algorithm \\ & - Improve fairness through biased client selection \\ & - Select honest clients \\ \end{tabular} \end{table} Table 2. Solutions compared to challenges #### 4.8.3. Communication Costs Ko et al. (Ko et al., 2018) discussed the possibility of creating an incentive mechanism to encourage clients to contribute more computing power to FL.
So far, there are no incentives for the client devices to allocate more resources to learning than necessary. Thus, giving them some sort of incentive mechanism would increase computational power and alleviate the problem of resource allocation. Perhaps it would make it easier to create a more homogeneous resource distribution amongst the clients. #### 4.8.4. Fairness Even though only two studies reported fairness as the main challenge, studies, such as (Krizhevsky et al., 2014; Krizhevsky et al., 2015; Krizhevsky et al., 2015), mention it as a possibly important factor that could promise a higher accuracy. Others mentioned it as a possible future direction for their work. For instance, Shi et al. (Shi et al., 2017) reported that fairness might play an essential role in FL training and that studying it in a volatile context would be beneficial. As already discussed in section 4.6, studies already focus on fairness in client selection, but there is still a knowledge gap within the topic. Huang et al. (Huang et al., 2018) looked into the trade-off between fairness and training accuracy but concluded that they could not quantify the relationship and that looking further into analyzing the fairness factor for FL would be worthy of investigation. ## 5. Discussion This section discusses how the SLR compares to the related work as well as the limitations of the study. ### Comparison to Related Work To our knowledge, there currently does not exist any SLR focusing solely on client selection. The previous work has focused either on general FL challenges or the application of FL. Therefore, the main benefit of this SLR is its focus on FL from the perspective of client selection. However, there are a lot of similarities between the related work and this review, as they all encompass the challenges within FL. The value of this review to the industry is as a reference for the different client selection techniques and how they impact overall learning. There is also value in viewing possible future directions for client selection when looking into what can be improved. This review has found a couple of areas that researchers may look more into from the perspective of client selection. Firstly, there is a vast number of different client selection schemes proposed for FL, which all claim to outperform the state-of-the-art random selection. It would be beneficial to compare these selection schemes with possible application cases in order to form an improved state-of-the-art solution for client selection. Secondly, the topic of fairness is not thoroughly explored. Several studies mention fairness as an important factor, but there does not exist much research on the topic of exploring the trade-offs and benefits of focusing on it. Although FL is a relatively new field within machine learning, it already shows promising prospects within several application domains, such as healthcare, natural language processing, smart cities and IoT. For certain industries, it might be more straightforward to implement a well-functioning system as the developers know the types of devices on which the algorithm will be implemented, but this is not the case for applications such as IoT and edge computing. In those fields, the developers do not necessarily know much about the client devices which will perform the learning, thus making it much more difficult to tackle the several challenges reported by the related work and found in this SLR.
Client selection is an integral part of a well-functioning FL system, as it may be utilized to mitigate the challenges of heterogeneity, resource allocation, communication costs, and fairness. Despite the previous and related work conducted on the topic, there is no de facto standard for the client selection algorithm within any application of FL. Even within a subset of any challenge, such as the issue of clients dropping out during training, multiple possible solutions exist, such as asynchronous FL, partial aggregation of dropped-out clients, and resource-aware FL. Within each category, there exist many algorithms to tackle the challenge through client selection, which shows the importance of exploring the topic further and possibly finding the best approach. For academia and industry, this SLR may assist in several ways. Firstly, it can be used as a reference guide for the most prominent existing challenges and their consequences for learning. Secondly, for each given challenge, the SLR presents several different possible existing solutions to tackle it. This is especially valuable to the industry when deciding to implement an FL system and deciding whether or not their ecosystem is well-suited for it. The SLR also provides guidelines to mitigate some of the challenges. \begin{table} \begin{tabular}{c|l} **Challenge** & **Metric(s)** \\ \hline \multirow{3}{*}{Heterogeneity} & - Testing accuracy vs communication rounds \\ & - Communication rounds until threshold accuracy \\ & - Number of selected clients able to finish training \\ \hline \multirow{2}{*}{Resource Allocation} & - Testing accuracy vs communication rounds \\ & - Energy, delay, and client consumption \\ \hline \multirow{3}{*}{Communication Costs} & - Testing accuracy vs communication rounds \\ & - Convergence time vs latency \\ & - Cost of hiring clients \\ \hline \multirow{3}{*}{Fairness} & - Testing accuracy vs communication rounds \\ & - Availability of clients \\ \cline{1-1} & - Long-term fairness constraints \\ \end{tabular} \end{table} Table 3. Metrics compared to challenges ### Limitations Although the guidelines for systematic reviews by (Kang et al., 2022) were followed, several points could have been improved. We might have missed some primary studies in the study search stage because there were so many studies on the topic of FL. For instance, performing the forward snowballing procedure on the original FL paper by McMahan et al. (Mahan et al., 2022) resulted in around 7000 studies. Even though there is plenty of academic research on the topic, we did not look into any grey literature as a possible source. There may certainly exist many exciting discussions and ideas on FL which are not discussed in academic journals but in blogs and newspapers. We might also have excluded papers relevant to the study during the paper selection process. To mitigate this risk, the papers' inclusion and exclusion were cross-checked and agreed upon by both authors. ## 6. Conclusions and Future Work We performed an SLR and summarized the challenges, solutions, metrics for evaluating the solutions, and possible future work of client selection in FL. Information from 47 primary studies was analyzed and synthesized. This study is, as far as the authors are aware, the only SLR focusing solely on client selection in FL. The SLR highlights several possible future research challenges we want to focus on. The most beneficial ones concern the impact of unsuccessful clients and fairness. 
Addressing either of those challenges could benefit FL, as training efficiency might increase and communication costs would be reduced. Since communication cost is one of the most significant problems in FL, reducing it would be particularly beneficial.
2302.08577
For Generated Text, Is NLI-Neutral Text the Best Text?
We explore incorporating natural language inference (NLI) into the text generative pipeline by using a pre-trained NLI model to assess whether a generated sentence entails, contradicts, or is neutral to the prompt and preceding text. First, we show that the NLI task is predictive of generation errors made by GPT-3. We use these results to develop an NLI-informed generation procedure for GPT-J. Then, we evaluate these generations by obtaining human annotations on error types and overall quality. We find that an NLI strategy of maximizing entailment improves text generation when the nucleus sampling randomness parameter value is high, while one which maximizes contradiction is in fact productive when the parameter value is low. Overall, though, we demonstrate that an NLI strategy of maximizing the neutral class provides the highest quality of generated text (significantly better than the vanilla generations), regardless of parameter value.
Michail Mersinias, Kyle Mahowald
2023-02-16T20:46:36Z
http://arxiv.org/abs/2302.08577v3
# Keep it Neutral: Using Natural Language Inference to Improve Generation ###### Abstract We explore incorporating natural language inference (NLI) into the text generative pipeline by using a pre-trained NLI model to assess whether a generated sentence entails, contradicts, or is neutral to the prompt and preceding text. First, we show that the NLI task is predictive of generation errors made by GPT-3. We use these results to develop an NLI-informed generation procedure for GPT-J. Then, we evaluate these generations by obtaining human annotations on error types and overall quality. We find that an NLI strategy of maximizing entailment improves text generation when the nucleus sampling randomness parameter value is high, while one which maximizes contradiction is in fact productive when the parameter value is low. Overall, though, we demonstrate that an NLI strategy of maximizing the neutral class provides the highest quality of generated text (significantly better than the vanilla generations), regardless of parameter value. ## 1 Introduction Large Language Models (LLMs) like GPT-3 can now generate fluid and, at times, humanlike text Brown et al. (2020); Dou et al. (2022). But even top-performing models are still more likely to produce text that is incoherent, self-contradictory, off-prompt, or redundant compared to human text. Dou et al. (2022) found that annotators marked about 11% of GPT-3 tokens as part of self-contradictory text spans, compared to around 6% for humans. And they marked about 20% of GPT-3 tokens as part of redundant spans, compared to just about 10% for humans.1 Footnote 1: Our code and results are available at [https://anonymous.4open.science/r/nli_text_generation](https://anonymous.4open.science/r/nli_text_generation). A possible reason for this is that, whereas models rely on statistical regularities to generate text, humans have the ability to engage in careful, logical reasoning. Nye et al. (2021) like this to the System 1 and System 2 idea in cognitive science, whereby humans are posited to have both a fast cognitive system for making quick intuitive judgments as well as a slower, more thoughtful system Kahneman (2011). For instance, System 1 might be used for recognizing a friend's face, whereas System 2 is used for solving SAT questions. This dichotomy is also relevant in language. Imagine encountering the text: "As a Miami attorney, I help local residents file their state income tax returns". It's grammatical and reasonable at a glance. But, if you reflect on it and draw on some world knowledge, you might realize that Miami is in Florida and Florida does not have a state income tax, and so the statement is less sensible than it seemed. Perhaps it is this lack of ability to edit utterances post-hoc (as opposed to just predicting the next word) that makes language models susceptible to incoherence or redundancy. Nye et al. (2021) use this insight to develop a system that combines LLM generation with a symbolic reasoning model, essentially using the LLM as a fast generation system and then using the symbolic reasoner to refine and rerank the output. While it is costly to build rich world-reasoning models, there has been an enormous amount of Figure 1: Average holistic ratings for generations from vanilla GPT-J (control), vs. NLI Strategies of maximizing for neutral, contradiction, or entailment, for 2 different choices of parameter values. 
Neutral performs best in all cases (significantly better than control), but maximizing contradictions is better than the control when randomness is low, and maximizing entailment is better than the control when randomness is high. work on Natural Language Inference (NLI) tasks and models for solving them (e.g., Conneau et al., 2018; Williams et al., 2018; Poliak et al., 2018; Nie et al., 2020; Bowman et al., 2015). The NLI task, as traditionally framed, takes a premise and a hypothesis and queries whether the hypothesis is entailed by the premise, contradicts the premise, or is neutral with respect to the premise (Nie et al., 2020; Bowman et al., 2015). We propose that we can use these models, and the extensive body of work around them, in order to guide model generation--essentially using NLI ratings as a way of reranking (Shen et al., 2019; Holtzman et al., 2018) the "fast" output of an LLM, using a well-established logical inference task. In this vein, Holtzman et al. (2018) used an NLI approach, alongside other approaches, to try to get RNNs to generate more coherent text. Specifically, they focused on selecting sentences that maximize the probability of being neutral to what preceded them, but did not conduct a thorough evaluation of whether this was the best strategy. While they showed that their models helped generations overall, the NLI part showed mixed results and, on its own, did not help much. Should they instead have maximized entailments? There is intuitive appeal to that idea, since it would guarantee that the text is coherent. But, since entailments are logically implied by what precedes them, maximizing entailment might lead to increased redundancy. (See Merrill et al. 2022 for proof of how text generated by a perfectly informative agent can be used to learn semantic relationships between sentences.) In this work, we evaluate several strategies, not just the neutral one. Moreover, the state of natural language generation has dramatically improved since the RNNs in Holtzman et al. (2018), and so the ways in which models fail differ from the state-of-the-art in 2018. To that end, we investigate how NLI can inform text generation, by first systematically investigating whether an NLI task can predict the kinds of errors made by GPT-3 in the publicly-available-for-research Scarecrow dataset (Dou et al., 2022). Since generated text has different failure modes depending on the parameters (i.e., for the nucleus sampling parameter \(p\), text generated with high values is more likely to be incoherent/off-prompt, text generated with lower values is more likely to be redundant), we pay particular attention to how the NLI task interacts with the nucleus sampling parameter. We use the results of this analysis to motivate a number of possible ways in which the NLI task can be used to improve generated outputs. Comparing across conditions, we generate new text using GPT-J (Wang and Komatsuzaki, 2021) and evaluate it on a subset of the Scarecrow criteria. We show that using an NLI filter can improve GPT-J generations. ## 2 Analysis of GPT-3 Text Through Natural Language Inference First, we conduct a systematic exploration on a subset of the Scarecrow (Dou et al., 2022) dataset (GPT-3 generations with a static temperature parameter value of 1, and either a high or low nucleus sampling parameter \(p\) as described). 
Each dataset entry provides the prompt, the generated text and a list of errors identified by human annotators. \begin{table} \begin{tabular}{r r r r} \hline \hline \(p\) (nucleus sampling param) & CON & ENT & NEU \\ \hline .40 & 3.44 & 12.93 & 83.62 \\ .96 & 12.41 & 1.37 & 86.20 \\ \hline \hline \end{tabular} \end{table} Table 1: For high and low \(p\) parameters in Scarecrow, breakdown of NLI classes for generated text. Neutral is by far the most common in both settings, but entailment is more common than contradiction when randomness is low, and _vice versa_ when randomness is high. \begin{table} \begin{tabular}{r||r|r|r||r|r|r} \hline \hline & \multicolumn{3}{c||}{Low p (0.4)} & \multicolumn{3}{c}{High p (0.96)} \\ \cline{2-7} & **CON** & **ENT** & **NEU** & **CON** & **ENT** & **NEU** \\ \hline All & 3.44 & 12.93 & 83.62 & 12.41 & 1.37 & 86.20 \\ \hline CO & 1.66 & 3.33 & 95.00 & 7.85 & 0.52 & 91.62 \\ \hline OP & 12.50 & 0.00 & 87.50 & 23.07 & 0.00 & 76.92 \\ \hline SC & 25.00 & 0.00 & 75.00 & 20.00 & 0.00 & 80.00 \\ \hline IN & 0.00 & 0.00 & 0.00 & 31.81 & 4.54 & 63.63 \\ \hline RD & 2.22 & 28.88 & 68.88 & 16.66 & 6.66 & 76.66 \\ \hline \hline \end{tabular} \end{table} Table 2: Distribution of spans in Scarecrow text marked with each error type (CO: correct, OP: off-prompt, SC: self-contradiction, IN: incoherent, RD: redundant) by at least half of annotators, broken down by NLI class. Figure 2: Proportion of erroneous examples in Scarecrow per error type for high and low \(p\) parameter. For our analysis and evaluation, we focus on "language errors" in Scarecrow, specifically the categories off-prompt (OP), self-contradiction (SC), incoherent (IN), and redundant (RD) error types, as assigned by at least 50% of human annotators. For attaining NLI ratings, we use an off-the-shelf pre-trained BART-large-mnli model for natural language inference. We treat the Scarecrow prompt as the premise and the GPT-3-generated text as the hypothesis and run the NLI task. The distribution of NLI ratings appears in Table 1. We present the distribution of error types across each NLI class in Table 2. It shows that Correct (CO) segments are very likely to be neutral (95% of the time for low randomness, 92% of the time for high randomness). But, when there is a redundancy (RD) error, the probability of entailment goes way up: to 29% in the low randomness condition and 7% in the high randomness condition (although in all cases, the neutral class remains dominant). When there are off-prompt or self-contradictory errors, in both settings, the probability of the contradiction class increases sharply. As shown in Figure 3, we computed Spearman rank correlations between the proportion of text marked with each error type and the NLI probability assigned to each category (entailment/neutral/contradiction). In the low randomness setting, the contradiction NLI class was significantly associated with more self-contradictions and the entailment category with fewer self-contradictions but more redundancy, and the neutral category with less redundancy. In the high randomness setting, the contradiction NLI class was significantly associated with more off-prompt and incoherent errors, entailments with more off-prompt errors, and neutral with _fewer_ off-prompt and incoherent errors. All other correlations were not significant at \(p<.05\). Overall, we observe that a low randomness setting leads to a higher probability of text classified as entailment, and that such text is also more likely to be redundant. 
In contrast, a high randomness setting leads to a higher probability of text classified as contradiction, which is also more likely to be off-prompt or incoherent. In both settings, text with no errors is significantly more likely to be classified as neutral--lending support to the idea that the neutral class is preferable. ## 3 Realtime NLI to Improve Generation **Method.** Motivated by this finding, we propose a novel approach for overcoming the issues present in the generated text of LLMs in order to improve its quality by incorporating _natural language inference_ in the text generation pipeline. For text generation, we use the open-source GPT-J (Wang and Komatsuzaki, 2021), while for NLI we use a pre-trained BART-large-mnli model, as above. Using a random subset of 50 prompts contained in the Scarecrow dataset, we generate a continuation for each of 2 nucleus sampling parameters \(p\) (0.4 or 0.96) x 4 conditions (one for vanilla GPT with no NLI constraints, and one for each of our three NLI Strategies as described below), for a total of 8 continuations for each of the 50 prompts (n=400 total). For the vanilla GPT-J setting, we generate using the relevant parameter for nucleus sampling (0.4 or 0.96), with a max length of 256 tokens including the initial prompt. For the NLI Strategies, we first generate a continuation with a maximum length of 128 tokens including the initial prompt. Then, the chosen NLI Strategy is applied to each candidate sentence of the generated text. The candidate sentence is treated as the "hypothesis", while each sentence of the prompt is used as the "premise". The candidate sentence is appended to the continuation (and therefore becomes part of the hypothesis for subsequent candidates) _iff_ it satisfies the chosen NLI Strategy for every sentence in the preceding text. Otherwise, it is discarded along with the remaining candidate sentences. Figure 3: For high and low \(p\) (randomness) parameters in Scarecrow, rank correlation between proportion of text showing an error type (y-axis) and the probability of the given NLI class (x-axis). The NLI Strategy conditions are: * **ENT**: _P(entailment) > P(contradiction)_ * **CON**: _P(contradiction) > P(entailment)_ * **NEU**: _P(neutral) > 0.85_ For instance, if we are applying the **NEU** strategy, we reject a sentence assigned neutral probability < 0.85, relative to _any_ sentence in the prompt _or any_ previously generated sentence appended to the continuation. We also reject all sentences after it. The process is repeated until there are seven _consecutive_ failed attempts to expand the continuation or until the continuation exceeds 256 characters and has 3 or more sentences. In order to ensure sufficient length of generated text for our evaluation, we restarted the process from the initial prompt for any examples whose produced text included fewer than 2 sentences. After running this process twice, in all cases, at least one sentence was generated that passed the NLI check. See Appendix A for details on the compute set-up. Two annotators (both undergraduate students at our institution, paid $15 per hour and informed of how the data would be used) evaluated the generations using the Scarecrow annotation framework, as well as a 1-5 holistic rating of overall generation quality (see Appendix B for guidelines). 
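As a rough sketch of the sentence-level check underlying the procedure just described, the snippet below scores a candidate sentence against the prompt and the previously accepted sentences with an off-the-shelf MNLI model and applies the ENT/CON/NEU acceptance rules listed above. The generation step, sentence splitting, retry limits, and helper names are simplified assumptions on our part; only the acceptance conditions mirror the strategies above.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NLI_NAME = "facebook/bart-large-mnli"
nli_tok = AutoTokenizer.from_pretrained(NLI_NAME)
nli_model = AutoModelForSequenceClassification.from_pretrained(NLI_NAME).eval()


def nli_probs(premise: str, hypothesis: str) -> dict:
    """Return {'contradiction': p, 'neutral': p, 'entailment': p} for one pair."""
    inputs = nli_tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(nli_model(**inputs).logits, dim=-1)[0]
    # Read the label order from the checkpoint config rather than assuming it.
    return {nli_model.config.id2label[i].lower(): probs[i].item()
            for i in range(probs.shape[-1])}


def satisfies(strategy: str, premise: str, hypothesis: str) -> bool:
    p = nli_probs(premise, hypothesis)
    if strategy == "ENT":
        return p["entailment"] > p["contradiction"]
    if strategy == "CON":
        return p["contradiction"] > p["entailment"]
    if strategy == "NEU":
        return p["neutral"] > 0.85
    raise ValueError(f"unknown strategy: {strategy}")


def accept_candidate(candidate: str, context_sentences, strategy: str) -> bool:
    # A candidate is kept only if it satisfies the strategy against every
    # sentence of the prompt and every previously accepted sentence.
    return all(satisfies(strategy, prev, candidate) for prev in context_sentences)
```

In use, sentences produced by the generator would be streamed through `accept_candidate` one at a time, appending those that pass and discarding the rest, with the stopping rules described above.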
Because some of the prompts were either erroneously not annotated by both annotators or included sensitive content that we removed from our final data set, we ended up with 326 examples with ratings by both annotators and used those for analysis. **Results.** Focusing first on the average holistic rating assigned to the generation, we observe that maximizing neutral NLI status improves generation. We ran a regression predicting average holistic rating for a prompt, based on the NLI Strategy used, with the control (vanilla GPT-J) as the baseline. When \(p\) was 0.4 (low randomness), **NEU** (the neutral strategy) was rated significantly higher than the control (\(\beta=.54,p<.05\)), as was the case in the high randomness condition (\(\beta=.70,p<.01\)). As shown in Figure 1, when randomness is low, the **CON** (contradiction strategy) outperforms the control and **ENT** (entailment strategy) underperforms it, but neither significantly so. In the high randomness condition, **ENT** is significantly better than the control (\(\beta=.49,p<.01\)) but still worse than neutral, while **CON** performs similarly to control. To better understand the source of the difference, we considered the specific error annotations, as shown in Figure 4. We treated the control as the baseline and ran regressions iteratively for error types and the randomness parameter \(p\). For parameter \(p=0.96\) (high randomness), relative to control, **NEU** showed significantly fewer off-prompt errors (\(p<.001\)) and incoherent errors (\(p<.05\)). For parameter \(p=0.40\) (low randomness), relative to control, **NEU** did not show significant differences but was particularly better on redundancy errors. When randomness is high, we also observe that **ENT** is significantly better on off-prompt errors but significantly _worse_ on redundancy errors. When randomness is low, **CON** is significantly worse for off-prompt errors. Figure 4: For our text generation task, the average human annotator ratings for each of 4 Scarecrow error types, broken up by whether we use vanilla GPT-J output (control), maximize neutral NLI relationships in generated text, maximize entailments, or maximize contradictions. Maximizing neutral is best overall, but maximizing entailment is better than maximizing contradiction when randomness is high and _vice versa_ when randomness is low. ## 4 Conclusion Using NLI as a mechanism for choosing among possible generations can be a productive way to imbue text generation systems with something more like System 2-style reasoning. In particular, maximizing neutral sentences seems most promising. But it also seems that, in cases where one wants to generate text with a high randomness parameter, maximizing entailments could be productive. Similarly, in a low randomness setting, maximizing contradictions could actually make the text better by avoiding redundancy. The NLI task is particularly valuable for this purpose because of its extremely wide use and abundance of pre-trained models. ## Limitations First, while our method incorporates NLI into the text generation process in a real-time and zero-shot fashion, there is still the issue of computational efficiency. Specifically, the NLI Strategies that maximize entailment and contradiction often require multiple generations to produce a sentence which passes their respective NLI checks. Because LLM text generation is already slow for some use cases, this process may cause a bottleneck. Second, as Nye et al. (2021) show, there is much the NLI task does not capture. 
NLI tasks capture only a fraction of the possible real-world use cases that one might want to use for guiding generation. Future work might explore using additional kinds of tasks, with the caveat that increasing the complexity of the task could slow generation down even more. Finally, we tested the generation procedure on only GPT-J, but are open to the possibility that more sophisticated models (especially those like ChatGPT that already include human feedback) might already do better at some of the errors identified in Scarecrow, and so could benefit less from our procedure. ## Acknowledgements We thank Clara Meister and Jessy Li for helpful conversations. K.M. acknowledges funding from NSF Grant 2104995.
2301.11576
Empirical process sampled along a stationary process
Let $(X_{\underline{\ell}})_{\underline{\ell} \in \mathbb Z^d}$ be a real random field (r.f.) indexed by $\mathbb Z^d$ with common probability distribution function $F$. Let $(z_k)_{k=0}^\infty$ be a sequence in $\mathbb Z^d$. The empirical process obtained by sampling the random field along $(z_k)$ is $\sum_{k=0}^{n-1} [{\bf 1}_{X_{z_k} \leq s}- F(s)]$. We give conditions on $(z_k)$ implying the Glivenko-Cantelli theorem for the empirical process sampled along $(z_k)$ in different cases (independent, associated or weakly correlated random variables). We consider also the functional central limit theorem when the $X_{\underline{\ell}}$'s are i.i.d. These conditions are examined when $(z_k)$ is provided by an auxiliary stationary process in the framework of ``random ergodic theorems''.
Guy Cohen, Jean-Pierre Conze
2023-01-27T07:51:07Z
http://arxiv.org/abs/2301.11576v1
# Empirical process ###### Abstract. Let \((X_{\underline{\ell}})_{\underline{\ell}\in\mathbb{Z}^{d}}\) be a real random field (r.f.) indexed by \(\mathbb{Z}^{d}\) with common probability distribution function \(F\). Let \((z_{k})_{k=0}^{\infty}\) be a sequence in \(\mathbb{Z}^{d}\). The empirical process obtained by sampling the random field along \((z_{k})\) is \(\sum_{k=0}^{n-1}[\mathbf{1}_{X_{z_{k}}\leq s}-F(s)]\). We give conditions on \((z_{k})\) implying the Glivenko-Cantelli theorem for the empirical process sampled along \((z_{k})\) in different cases (independent, associated or weakly correlated random variables). We consider also the functional central limit theorem when the \(X_{\underline{\ell}}\)'s are i.i.d. These conditions are examined when \((z_{k})\) is provided by an auxiliary stationary process in the framework of "random ergodic theorems". Key words and phrases:Empirical process, sampling along a stationary process, local times, Glivenko-Cantelli theorem, functional central limit theorem, random walks 2010 Mathematics Subject Classification: Primary: 60F05, 28D05, 22D40, 60G50; Secondary: 47B15, 37A25, 37A30 ###### Contents * 1 General results on the empirical process along a sub-sequence * 1.1 Preliminaries * 1.2 A Glivenko-Cantelli type theorem * 1.3 A sufficient condition for a FCLT for the sampled empirical process * 2 Local times for ergodic sums * 2.1 Auxiliary general results * 2.2 Non centered cocycles * 2.3 Counterexamples * 3 Examples * 3.1 Random walks * 3.2 Extensions of the r.w. case * 3.3 Step functions over rotations ## 1. Introduction For a sequence \((X_{k})\) of real i.i.d. random variables with common probability distribution function \(F\), the empirical process is defined by \(\sum_{k=0}^{n-1}\left[\mathbf{1}_{X_{k}\leq s}-F(s)\right]\). Recall two classical results. (A) the Glivenko-Cantelli theorem: _a.s. the sequence of empirical distribution functions \(F_{n}(s):=\frac{1}{n}\sum_{k=0}^{n-1}\mathbf{1}_{X_{k}\leq s}\) converges uniformly to \(F\), i.e. \(\sup_{s}|F_{n}(s)-F(s)|\to 0\);_ (B) a functional central limit theorem (FCLT): if the r.v.s \(X_{k}\) have a common distribution \(F\) over \([0,1]\), then _the process \(\frac{1}{\sqrt{n}}\sum_{k=0}^{n-1}\left[\mathbf{1}_{X_{k}\leq s}-F(s)\right]\) converges weakly to a Brownian bridge in the space of cadlag functions on \([0,1]\)._ In this paper we study the extension of these results when the process is sampled along a subsequence, analogously to what is done for limit theorems in random scenery. In the sequel, for \(d\geq 1\), \((X_{\underline{\ell}})_{\underline{\ell}\in\mathbb{Z}^{d}}\) will be a real random field (r.f.) indexed by \(\mathbb{Z}^{d}\) defined on a probability space \((\Omega,\mathcal{F},\mathbb{P})\) with common probability distribution function \(F\). The expectation on \((\Omega,\mathbb{P})\) is denoted by \(\mathbb{E}\). We consider in particular the case of a r.f. of i.i.d. r.v.'s or of stationary associated r.v.'s. Let \((z_{k})_{k=0}^{\infty}\) be a sequence in \(\mathbb{Z}^{d}\). The process obtained by sampling the random field along \((z_{k})\) is \(W_{n}(s):=\sum_{k=0}^{n-1}[\mathbf{1}_{X_{z_{k}}\leq s}-F(s)]\). We will call \(W_{n}(s)\) "empirical process sampled along \((z_{k})\)", or simply "sampled empirical process". A general question is whether the above results (A), (B) extend to the sampled empirical process \(W_{n}(s)\), in particular when \((z_{k})\) is given by another stationary process with values in \(\mathbb{Z}^{d}\). 
In Section 1, we give conditions on \((z_{k})\) implying that (A) and (B) are still valid for an empirical process sampled along \((z_{k})\) in different cases: independent, associated or weakly correlated random variables. The conditions are expressed in terms of the following quantities associated to the sequence \((z_{k})\) in \(\mathbb{Z}^{d}\): local time, maximal local time and number of self-intersections (up to time \(n\)) defined, for \(n\geq 1\), by \[\begin{split}& N_{n}(\underline{\ell}):=\#\{0\leq k\leq n-1:\ z_{k}=\underline{\ell}\},\\ & M_{n}:=\max_{\underline{\ell}}N_{n}(\underline{\ell}),\ V_{n}:= \#\{0\leq j,k\leq n-1:\,z_{j}=z_{k}\}.\end{split} \tag{1}\] They satisfy \(\sum_{\underline{\ell}}N_{n}(\underline{\ell})=n\) and \(n\leq V_{n}=\sum_{\underline{\ell}}N_{n}^{2}(\underline{\ell})\leq nM_{n}\leq n^ {2}\). In the other sections, \((z_{k})\) is given by a stationary process (or equivalently by the sequence \((S_{k}f(x))_{k\geq 1}\) of ergodic sums of a function \(f\) over a dynamical system). The conditions found in Section 1 lead to study the local times, maximum number of visits, number of self-intersections for the sequence \((S_{k}f(x))\). General remarks are presented in Section 2. Then in Section 3, we consider two families of examples: random walks and some ergodic sums over a rotation. The Glivenko-Cantelli theorem along ergodic sums (extension of (A)) is strongly related to random ergodic theorems, in particular to results in [23] and [25]. This is discussed in the last Section 4. Finally let us mention the quenched FCLT for the 2-parameters process \[W_{n}(s,t):=\sum_{k=0}^{[nt]-1}\left[\mathbf{1}_{X_{Z_{k}(x)}\leq s}-F(s) \right]\text{, }(s,t)\in[0,1]^{2}.\] When \((X_{\underline{\ell}})\) is a r.f. of i.i.d. r.v.'s indexed by \(\mathbb{Z}^{2}\) and when the sampling is provided by a 2-dimension centered random walk \((Z_{k})\) with a moment of order 2, the weak convergence for a.e. \(x\) toward a Kiefer-Muller process can be shown. This will be the content of a forthcoming paper. ### Acknowledgements Part of this research was done during visits of the first author to the IRMAR at the University of Rennes 1 and of the second author to the Center for Advanced Studies in Mathematics at Ben Gurion University. The authors are grateful to their hosts for their support. ## 1. **General results on the empirical process along a sub-sequence** ### Preliminaries In this subsection, results on the empirical process along a sub-sequence are shown for independent variables, as well for some of them for wider classes (associated, PDQ and weakly correlated random variables). We start by recalling some notions and auxiliary results. _1) Associated variables_ **Definition** (cf. [17]): A finite set of real random variables \(\mathbf{T}=(T_{1},T_{2},\ldots,T_{n})\) is said to be _associated_ if \(\text{Cov}[f(\mathbf{T}),g(\mathbf{T})]\geq 0\), for every coordinate-wise non-decreasing functions \(f=f(x_{1},...,x_{n})\) and \(g=g(x_{1},...,x_{n})\) for which \(\mathbb{E}[f(\mathbf{T})]\), \(\mathbb{E}[g(\mathbf{T})]\), \(\mathbb{E}[f(\mathbf{T})\,g(\mathbf{T})]\) exist. An infinite set of random variables is associated if any finite subset of it is associated. Association of random variables is preserved under taking subsets and forming unions of independent sets (of associated random variables). In particular a family of independent variables is associated. Clearly, orthogonal associated random variables are independent. 
Examples of (non independent) stationary associated processes with absolutely summable series of correlations are provided by some Ising models. References to such examples of stationary \(\mathbb{Z}^{d}\) random fields which satisfies the FKG inequalities and with absolutely summable correlations can be found in Newman's paper [26]. Notice that the FKG inequalities expresses the association property of the r.v.'s. 2) _PQD variables_ Two r.v.'s \(X,Y\) are called (cf. [24]) _positively quadrant dependent (PQD)_ if, \[\mathbb{P}(X>x,Y>y)\geq\mathbb{P}(X>x)\,\mathbb{P}(Y>y),\forall x,y\in\mathbb{R}.\] The property is preserved by centering. Any pairwise associated r.v.'s are pairwise PQD. Pairwise independent random variables are pairwise PQD associated. Two random variables \(X\) and \(Y\) are PQD if and only if for every non-decreasing functions \(f\) and \(g\), \(\text{Cov}(f(X),g(Y))\geq 0\) (whenever the covariance exists) ([17, Theorem 4.4]). 3) We will use the following results: _a) Maximal inequality of Newman and Wright_[27, Inequality (12)]: _If \((W_{i})\) is a sequence of centered associated, square integrable random variables, it holds:_ \[\mathbb{P}(\max_{1\leq j\leq n}|\sum_{i=1}^{j}W_{i}|\geq\lambda\,\|\sum_{i=1}^{ n}W_{i}\|_{2})\leq 2\mathbb{P}(|\sum_{i=1}^{n}W_{i}|\geq(\lambda-\sqrt{2})\,\| \sum_{i=1}^{n}W_{i}\|_{2}),\forall\lambda\geq 0. \tag{2}\] _b) Hoeffding's identity_ (see [2, Theorem 3.1]) _Let \(X,Y\) be random variables with finite second moments. For any absolutely continuous functions \(f,g\) on \(\mathbb{R}\), such that \(\mathbb{E}[f^{2}(X)+g^{2}(Y)]<\infty\), it holds_ \[\text{Cov}(f(X),g(Y))=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f^{\prime }(x)g^{\prime}(y)[\mathbb{P}(X>x,Y>y)-\mathbb{P}(X>x)\mathbb{P}(Y>y)]dxdy.\] _In particular, if \(X,Y\) are PDQ random variables, if \(|f^{\prime}|,|g^{\prime}|\leq M\) a.e., we have_ \[|\text{Cov}(f(X),g(Y))|\leq M^{2}\text{Cov}(X,Y).\] 4) Uniformity in the analogues of Glivenko-Cantelli theorem will follow from the lemma: **Lemma 1.1**.: _[_7_, Lemma, p 140]_ _Let \(F_{n}\), \(F\) be a family of right continuous distributions on \(\mathbb{R}\). Assume that, for each point \(x\) in a dense countable set \(Q\subset\mathbb{R}\), we have \(F_{n}(x)\to F(x)\). Let \(J\) be the set of jumps of \(F\) and assume that \(F_{n}(x)-F_{n}(x^{-})\to F(x)-F(x^{-})\) for every \(x\in J\). Then \(F_{n}(x)\to F(x)\) uniformly in \(\mathbb{R}\)._ **A strong law of large numbers** First we state a law of large numbers for bounded r.v.'s valid under weak hypotheses. Let \((U_{\underline{\ell}})_{\underline{\ell}\in\mathbb{Z}^{d}}\) be a r.f. indexed by \(\mathbb{Z}^{d}\) of square integrable r.v's on a probability space \((\Omega,\mathcal{F},\mathbb{P})\). Let \((z_{k})_{k\geq 0}\) be a sequence in \(\mathbb{Z}^{d}\), \(d\geq 1\), with numbers of self-intersections \(V_{n},n\geq 1\). The partial sums along \((z_{k})\) are denoted by \(S_{n}:=\sum_{k=0}^{n-1}U_{z_{k}}\). 
By the Cauchy-Schwarz inequality, if \(\sum_{\underline{\ell}}\sup_{\underline{\ell}}|\langle U_{\underline{\tau}+ \underline{\ell}},U_{\underline{\tau}}\rangle|<+\infty\), if holds for a finite constant \(C_{0}\): \[\|\sum_{i=0}^{n-1}U_{z_{i}}\|_{2}^{2}=\sum_{\underline{\ell}}\sum_{\underline {\underline{r}}}N_{n}(\underline{r}+\underline{\ell})N_{n}(\underline{r}) \langle U_{\underline{\tau}+\underline{\ell}},U_{\underline{\tau}}\rangle \leq V_{n}\sum_{\underline{\ell}}\sup_{\underline{r}}|\langle U_{\underline{ \tau}+\underline{\ell}},U_{\underline{\tau}}\rangle|=C_{0}V_{n}. \tag{3}\] In particular if the r.f. is stationary and the series of correlations is absolutely summable (i.e., \(\sum_{\underline{\ell}\in\mathbb{Z}^{d}}|\langle X_{\underline{0}},X_{ \underline{\ell}}\rangle|<+\infty\)), then the spectral density of the r.f. exists and is the continuous non-negative function \(\rho\) on \(\mathbb{T}^{d}\) with Fourier coefficients \(\int_{\mathbb{T}^{d}}e^{2\pi i\langle\underline{\ell},\underline{\ell} \rangle}\,\rho(\underline{\ell})\,d\underline{\ell}=\langle X_{\underline{0}}, X_{\underline{\ell}}\rangle\) and it holds: \[\|S_{n}\|_{2}^{2}=\|\sum_{i=0}^{n-1}U_{z_{i}}\|_{2}^{2}\leq V_{n}\sum_{ \underline{\ell}}|\langle U_{\underline{\ell}},U_{\underline{0}}\rangle|. \tag{4}\] **Proposition 1.2**.: _Suppose the r.v.'s \(U_{\underline{\ell}}\) on \((\Omega,\mathbb{P})\) centered and uniformly bounded by the same constant \(K\), \(\|U_{\underline{\ell}}\|_{\infty}\leq K,\forall\underline{\ell}\). Assume that \((z_{k})\) is such that_ \[V_{n}\leq C_{1}\frac{n^{2}}{(\log n)^{\beta}},\mbox{ for constants }C_{1},\beta. \tag{5}\] _1) Then, if \(\beta>1\) and \(\sum_{\underline{\ell}\in\mathbb{Z}^{d}}\sup_{r\in\mathbb{Z}^{d}}| \langle U_{\underline{\tau}+\underline{\ell}},U_{\underline{\tau}}\rangle|<+\infty\), 2) or if \(\beta>\zeta\) for some \(\zeta\in[1,2]\) and the r.f. \((U_{\underline{\ell}})\) is stationary with \(\sum_{\underline{\ell}\in\mathbb{Z}^{d}}|\langle U_{\underline{\ell}},U_{ \underline{0}}\rangle|^{\zeta}<\infty\), the (strong) LLN holds: \(\frac{S_{n}(\omega)}{n}\to 0\), for \(\mathbb{P}\)-a.e \(\omega\)._ _Proof_. 1) For convenience, if \(t\) is in \(\mathbb{R}^{+}\), we define \(S_{t}\) as \(S_{[t]}\). From (3) it follows \[\int(\frac{|S_{n}|}{n})^{2}\,d\mathbb{P}\leq C_{0}\frac{V_{n}}{n^{2}}\leq C_{0} C_{1}\frac{1}{(\log n)^{\beta}}.\] Therefore, putting \(\beta=1+\eta\) and \(\alpha=1-\eta/2\) (which implies \(\alpha\beta>1\)) we have \[\sum_{k}\int(\frac{|S_{2k^{\alpha}}|}{2^{k^{\alpha}}})^{2}\,d\mathbb{P}\leq C_ {0}C_{1}\sum_{k}\frac{1}{(\log 2^{k^{\alpha}})^{\beta}}=C^{\prime}\sum_{k}\frac{1}{k^{ \alpha\beta}}<+\infty;\] hence: \(\lim_{k\to+\infty}\frac{S_{2k^{\alpha}}}{2^{k^{\alpha}}}=0\), a.e. For \(n\geq 1\), let \(k_{n}\) be such that \(2^{(k_{n})^{\alpha}}\leq n<2^{(k_{n}+1)^{\alpha}}\) (that is: \(k_{n}=[(\log_{2}n)^{1/\alpha}]\)). We put \(q_{n}:=2^{(k_{n}+1)^{\alpha}}-2^{(k_{n})^{\alpha}}\) and \(p_{n}=n-2^{(k_{n})^{\alpha}}\leq q_{n}\). For \(q_{n}\), the following estimate holds: \(q_{n}=2^{(k_{n})^{\alpha}}(2^{(k_{n}+1)^{\alpha}-(k_{n})^{\alpha}}-1)\sim C^{\prime \prime}\frac{2^{(k_{n})^{\alpha}}}{(k_{n})^{1-\alpha}}\). 
Using the uniform boundedness of the r.v.'s, we can write: \[|\frac{S_{n}}{n}-\frac{S_{2^{(k_{n})^{\alpha}}}}{2^{(k_{n})^{ \alpha}}}|=|\frac{S_{2^{(k_{n})^{\alpha}}}+\sum_{i=2^{(k_{n})^{\alpha}}}^{2^{( k_{n})^{\alpha}}}U_{z_{i}}}{2^{(k_{n})^{\alpha}}+p_{n}}-\frac{S_{2^{(k_{n})^{ \alpha}}}}{2^{(k_{n})^{\alpha}}}|=|\frac{2^{(k_{n})^{\alpha}}\,\sum_{i=2^{(k_{ n})^{\alpha}}}^{2^{(k_{n})^{\alpha}}+p_{n}}U_{z_{i}}-p_{n}S_{2^{(k_{n})^{ \alpha}}}}{2^{(k_{n})^{\alpha}}(2^{(k_{n})^{\alpha}}+p_{n})}|\] \[\leq\frac{2^{(k_{n})^{\alpha}}\,\sum_{i=2^{(k_{n})^{\alpha}}}^{2^ {(k_{n})^{\alpha}}+p_{n}}|U_{z_{i}}|+p_{n}|S_{2^{(k_{n})^{\alpha}}}|}{2^{(k_{n })^{\alpha}}(2^{(k_{n})^{\alpha}}+p_{n})}\leq\frac{2^{(k_{n})^{\alpha}}\,\sum_ {i=2^{(k_{n})^{\alpha}}}^{2^{(k_{n})^{\alpha}}+q_{n}}|U_{z_{i}}|+q_{n}|S_{2^{( k_{n})^{\alpha}}}|}{2^{(k_{n})^{\alpha}}(2^{(k_{n})^{\alpha}})}\] \[\leq\frac{q_{n}K2^{(k_{n})^{\alpha}}+q_{n}|S_{2^{(k_{n})^{\alpha} }}|}{2^{(k_{n})^{\alpha}}(2^{(k_{n})^{\alpha}})}=\frac{q_{n}}{2^{(k_{n})^{ \alpha}}}(K+\,\frac{|S_{2^{(k_{n})^{\alpha}}}|}{2^{(k_{n})^{\alpha}}}).\leq \frac{C}{2(k_{n})^{1-\alpha}}(K+\,\frac{|S_{2^{(k_{n})^{\alpha}}}|}{2^{(k_{n}) ^{\alpha}}})\to 0.\] 2) We consider now the stationary case. Since \(\zeta=1\) is special case of 1), we assume \(\zeta\in]1,2]\). We put \(\beta=\zeta+\eta\), where \(\eta\) is \(>0\) in view of the hypothesis. First, suppose that \(\zeta=2\). Then under the hypothesis, the r.f. has a spectral measure \(\nu_{\varphi}\) absolutely continuous with respect to the Lebesgue measure \(\lambda\) on the torus with a density \(\rho\in L^{2}(d\underline{t})\) given by the Fourier series \(\rho(\underline{t})=\sum_{\underline{\ell}\in\mathbb{Z}^{d}}\langle U_{ \underline{\ell}},U_{\underline{0}}\rangle\,\mathrm{e}^{2i\pi\langle \underline{\ell},\underline{t}\rangle}\). Using the inequality \(\lambda\{\rho>M_{n}\}\leq M_{n}^{-2}\,\|\rho\|_{2}^{2}\), we can write: \[\frac{\|S_{n}\|_{2}^{2}}{n^{2}}=\frac{1}{n^{2}}\,\int_{\mathbb{T }^{d}}|\sum_{j=0}^{n-1}e^{2\pi i\langle z_{j},\underline{t}\rangle}|^{2}\,d \nu_{\varphi}(\underline{t})\leq\frac{M_{n}}{n^{2}}\,\int_{\mathbb{T}^{d}}| \sum_{j=0}^{n-1}e^{2\pi i\langle z_{j},\underline{t}\rangle}|^{2}\,d \underline{t}+\int_{\rho>M_{n}}\,\rho\,d\underline{t}\] \[\leq M_{n}\frac{V_{n}}{n^{2}}+(\lambda\{\rho>M_{n}\})^{\frac{1}{2}}\| \rho\|_{2}\leq M_{n}\frac{V_{n}}{n^{2}}+M_{n}^{-1}\,\|\rho\|_{2}^{2}.\] Taking \(M_{n}=(\log n)^{1+\frac{1}{2}\eta}\), we obtain the bound \[\frac{1}{n^{2}}\,\|\sum_{j=0}^{n-1}U_{z_{j}}\|_{2}^{2}\,\leq\frac{C}{(\log n)^ {1+\frac{1}{2}\eta}}\] and then we finish the proof as in 1). Now, suppose that \(\sum_{\underline{\ell}\in\mathbb{Z}^{d}}|\langle U_{ \underline{\ell}},U_{\underline{0}}\rangle|^{\zeta}<\infty\) with \(1<\zeta<2\). The spectral density \(\rho\) exists and is in \(L^{2}(\lambda)\), since \(\sum_{\underline{\ell}\in\mathbb{Z}^{d}}|\langle U_{ \underline{\ell}},U_{\underline{0}}\rangle|^{2}<\infty\). Moreover it belongs to \(L^{\zeta^{\prime}}(\lambda)\) where \(\zeta,\zeta^{\prime}\) are conjugate exponents (see: [31], p. 102, or [21] Th. 31.22), and it satisfies: \[\|\rho\|_{\zeta^{\prime}}\leq(\sum_{\underline{\ell}\in\mathbb{Z}^{d}}|\langle U _{\underline{\ell}},U_{0}\rangle|^{\zeta})^{1/\zeta}.\] Holder's inequality implies: \(\int_{\rho>M_{n}}\,\rho\,d\underline{t}\leq(\lambda\{\rho>M_{n}\})^{1/\zeta} \|\rho\|_{\zeta^{\prime}}\). 
As \[\lambda\{\rho>M_{n}\}\leq M_{n}^{-\zeta^{\prime}}\int\rho^{\zeta^{\prime}}\,d \underline{t}=M_{n}^{-\zeta^{\prime}}\|\rho\|_{\zeta^{\prime}}^{\zeta^{\prime}},\] it follows: \[\int_{\rho>M_{n}}\,\rho\,d\underline{t}\leq M_{n}^{-\zeta^{\prime}/\zeta}\|\rho\|_ {\zeta^{\prime}}^{1+\zeta^{\prime}/\zeta}.\] Therefore, we obtain \[\frac{1}{n^{2}}\,\int_{\mathbb{T}^{d}}|\sum_{j=0}^{n-1}e^{2\pi i\langle z_{j}, \underline{t}\rangle}|^{2}\,d\nu_{\varphi}(\underline{t})\leq M_{n}\frac{V_{n }}{n^{2}}+\int_{\rho>M_{n}}\,\rho\,d\underline{t}\leq M_{n}\frac{V_{n}}{n^{2} }+M_{n}^{-\zeta^{\prime}/\zeta}\|\rho\|_{\zeta^{\prime}}^{1+\zeta^{\prime}/ \zeta}.\] Now we take \(M_{n}\) such that : \(M_{n}/(\log n)^{\beta}=M_{n}^{-\zeta^{\prime}/\zeta}\), i.e. \(M_{n}=(\log n)^{\beta/\zeta^{\prime}}\). We get \[\frac{1}{n^{2}}\,\|\sum_{j=0}^{n-1}U_{z_{j}}\|_{2}^{2}\,\leq\frac{C}{(\log n)^ {\beta(1-1/\zeta^{\prime})}}=\frac{C}{(\log n)^{\beta/\zeta}}=\frac{C}{(\log n )^{1+\eta/\zeta}}\text{ with }\eta>0,\] and the end of the proof is as above. **Remarks 1.3**.: 1) Let us give an example of a non stationary r.f. \((U_{\underline{\ell}})\) which satisfies Condition 1) of the previous proposition. We take \((U_{\underline{\ell}}=V_{\underline{\ell}}W_{\underline{\ell}},\underline{ \ell}\in\mathbb{Z}^{d})\), where \((V_{\underline{\ell}})\) and \((W_{\underline{\ell}})\) are two r.f.'s independent from each other, with \((V_{\underline{\ell}})\) centered stationary and such that \(\sum_{\underline{\ell}\in\mathbb{Z}^{d}}|\langle V_{\underline{\ell}},V_{ \underline{0}}\rangle|<\infty\), and \((W_{\underline{\ell}})\) satisfying \(\sup_{\underline{\ell},\underline{p}}|\langle W_{\underline{\ell}+\underline{ p}},W_{\underline{\ell}}\rangle|<\infty\). The r.f. \((W_{\underline{\ell}})\) can be viewed as a (multiplicative) noise (which can be non stationary) independent from the r.f. \((U_{\underline{\ell}})\). Clearly the condition in 1) is satisfied. 2) For a stationary r.f. \((U_{\underline{\ell}})\) with a bounded spectral density (but with a series of correlations which may be not absolutely summable), then like in 1) the condition \(\beta>1\) is sufficient for the conclusion of the theorem. Now, we give a pointwise bound for the sampled sums, first for i.i.d. r.v.'s, then for a stationary random field \((U_{\underline{\ell}})_{\underline{\ell}\in\mathbb{Z}^{d}}\) of associated r.v.'s. **Proposition 1.4**.: _1) Suppose that the r.v.'s \(U_{\underline{\ell}},\underline{\ell}\in\mathbb{Z}^{d}\), are i.i.d., centered, uniformly bounded by a constant \(K\), \(\|U_{\underline{0}}\|_{\infty}\leq K\), and that \(\mathbb{E}|U_{0}|^{2}=1\). Then it holds_ \[\limsup_{n}\frac{|S_{n}|}{\sqrt{V_{n}}\,(2\log\log n)^{\frac{1}{2}}}\leq K,\, \mathbb{P}\text{-}a.e. \tag{6}\] _If \(V_{n}=o(n^{2}\,(\log\log n)^{-1})\), then \(\lim_{n}\frac{S_{n}}{n}=0\), \(\mathbb{P}\)-a.e._ _2) Suppose the random field stationary and the r.v.'s \(U_{\underline{\ell}}\) centered associated._ _a) For all_ \(\varepsilon>0\)_, it holds, with_ \(\sigma_{n}:=\|\sum_{i=0}^{n-1}U_{z_{i}}\|_{2}\)_:_ \[\limsup_{n}\frac{|S_{n}|}{\sigma_{n}\,(\log\sigma_{n})^{\frac{1}{2}+ \varepsilon}}\leq 1,\,\mathbb{P}\text{-}a.e. \tag{7}\] _b) If moreover the r.f. has a summable series of correlations, then, for all_ \(\varepsilon>0\)_,_ \[|S_{n}|=O(\sqrt{V_{n}}\,(\log n)^{\frac{1}{2}+\varepsilon}),\,\mathbb{P}\text{-}a.e. \tag{8}\] _If \(V_{n}\leq Cn^{2}\,(\log n)^{-(1+\eta)})\) for some constants \(C,\eta>0\), then \(\lim_{n}\frac{S_{n}}{n}=0\), \(\mathbb{P}\)-a.e._ _Proof_. 
_A)_ Recall that \(\sigma_{n}=\|\sum_{i=0}^{n-1}U_{z_{i}}\|_{2}\). In case 2) we may assume \(\|U_{\underline{0}}\|_{2}=1\), and then in all cases \(\sigma_{n}\leq n\) and by association \(\sigma_{n}\geq n^{\frac{1}{2}}\). We have in case 1) \(\sigma_{n}=\sqrt{V_{n}}\) and in case 2b), for associated variables, by (4): \(\sigma_{n}\leq(\sum_{\underline{p}}\langle U_{\underline{p}},U_{\underline{0}} \rangle)^{\frac{1}{2}}\,\sqrt{V_{n}}\). By association, \(\sigma_{n}\) is non-decreasing and tends to infinity. For \(\rho>1\), let \(n_{k}=n_{k}(\rho)\) be a strictly increasing sequence of integers such that \(\rho^{k}<\sigma_{n_{k}}\leq\rho^{k+1}\). Since \(1\leq\sigma_{k+1}^{2}-\sigma_{k}^{2}\leq 1+2k\), such a sequence exists after a certain rank. By the choice of \((n_{k})\) we have \[\rho^{k}<\sigma_{n_{k}}\leq\rho^{k+1}<\sigma_{n_{k+1}}\leq\rho^{k+2}. \tag{9}\] Moreover, we have \(\sigma_{n_{k+1}}/\sigma_{n_{k}}\leq\rho^{2}\) and, since \(\sigma_{n}\leq n\), \(n_{k}\geq\rho^{k}\). Let \((\lambda_{n})\) be a non decreasing sequence of positive numbers such that \[\lambda_{n_{k}}>\sqrt{2},\ \limsup_{k}\lambda_{n_{k+1}}/\lambda_{n_{k} }\leq 1, \tag{10}\] \[\sum_{k}\mathbb{P}\big{(}\big{|}\sum_{i=0}^{n_{k}-1}U_{z_{i}} \big{|}\geq(\lambda_{n_{k}}-\sqrt{2})\,\|\sum_{i=0}^{n_{k}-1}U_{z_{i}}\|_{2} \big{)}<\infty.\] By the previous inequalities and by Newman-Wright's inequality (2) for the sequence of centered associated random variables 1\((W_{i})=(U_{z_{i}})\), we have Footnote 1: as it is a subset of a set of associated r.v.’s \[\sum_{k}\mathbb{P}(\max_{0\leq j\leq n_{k}-1}\big{|}\sum_{i=0}^{j}U_{z_{i}} \big{|}\geq\lambda_{n_{k}}\,\|\sum_{i=0}^{n_{k}-1}U_{z_{i}}\|_{2})\leq 2\sum_{k} \mathbb{P}(|\sum_{j=0}^{n_{k}-1}U_{z_{j}}|\geq(\lambda_{n_{k}}-\sqrt{2})\|\sum _{j=0}^{n_{k}-1}U_{z_{j}}\|_{2})<+\infty.\] By the Borel-Cantelli lemma, it follows: \[\limsup_{k}\frac{\max_{0\leq j\leq n_{k+1}-1}\big{|}\sum_{i=0}^{j}U_{z_{i}} \big{|}}{\lambda_{n_{k+1}}\,\sigma_{n_{k+1}}}\leq 1,\mathbb{P}\text{-a.e.}\] Hence \(\mathbb{P}\)-a.e. \[\limsup_{k}\frac{\max_{0\leq j<n_{k+1}-1}|\sum_{i=0}^{j}U_{z_{i}} \big{|}}{\lambda_{n_{k}}\sigma_{n_{k}}}\leq\limsup_{k}\bigl{(}\frac{\lambda_{ n_{k+1}}}{\lambda_{n_{k}}}\frac{\sigma_{n_{k+1}}}{\sigma_{n_{k}}}\bigr{)}\leq\rho^{2}. \tag{11}\] Observe that, if \(|S_{i}|>\rho^{2}\lambda_{i}\sigma_{i}\), for some \(i\in[n_{k},n_{k+1}[\), then \(\max_{0\leq j<n_{k+1}}|S_{j}|>\rho^{2}\lambda_{n_{k}}\sigma_{n_{k}}\). This shows: \[\{|S_{n}|>\rho^{2}\lambda_{n}\sigma_{n},\,\text{i.o.}\}\subset\{\max_{0\leq j <n_{k+1}}|S_{j}|>\rho^{2}\lambda_{n_{k}}\sigma_{n_{k}},\,\text{i.o.}\}.\] By this inclusion and (11) it follows: \(\limsup_{n}\frac{|\sum_{i=0}^{n-1}U_{z_{i}}|}{\lambda_{n}\sigma_{n}}\leq \rho^{2}\), \(\mathbb{P}\)-a.e. Taking \(\rho=\rho_{n}\) with \(\rho_{n}\downarrow 1\), we obtain \[\limsup_{n}\frac{|\sum_{i=0}^{n-1}U_{z_{i}}|}{\lambda_{n}\sigma_{n}}\leq 1, \,\mathbb{P}\text{-a.e.} \tag{12}\] _B) Choice of a sequence \((\lambda_{k})\) such that (10) is satisfied._ ### Case 1) Suppose that the \(U_{k}\)'s are i.i.d. r.v.'s. Recall that if \((W_{j},j\geq 1)\) are centered bounded sequence of independent random variables on \((\Omega,\mathbb{P})\), for any finite sum of the \(W_{j}\)'s it holds by Hoeffding's inequality for differences of martingale ([20]), for every \(\varepsilon>0\): \[\mathbb{P}(|\sum_{j}W_{j}|>\varepsilon)\leq 2\exp(-\frac{1}{2}\frac{ \varepsilon^{2}}{\sum_{j}\|W_{j}\|_{\infty}^{2}}). 
\tag{13}\] We apply it to the family \((N_{n}(\underline{\ell})U_{\underline{\ell}},\,\underline{\ell}\in\mathbb{Z} ^{d})\). From the hypotheses, we have: \[\sum_{\underline{\ell}}\|N_{n}(\underline{\ell})U_{\underline{\ell}}\|_{ \infty}^{2}\leq K^{2}\sum_{\underline{\ell}}N_{n}^{2}(\underline{\ell})=K^{2} V_{n}.\] With \(\varepsilon=(\lambda-\sqrt{2})\sqrt{V_{n}}\), (13) implies: \[\mathbb{P}\big{(}\big{|}\sum_{\ell}N_{n}(\ell)\,U_{\underline{ \ell}}\big{|}\geq(\lambda-\sqrt{2})\sqrt{V_{n}}\big{)}\] \[\leq 2\exp\big{(}-\frac{1}{2}(\lambda-\sqrt{2})^{2}\frac{V_{n}}{ K^{2}V_{n}}\big{)}=2\exp\big{(}-\frac{1}{2K^{2}}(\lambda-\sqrt{2})^{2}\big{)}.\] Let \(c,\delta\) be such that \(c>\delta>K^{2}\). In the previous inequality, we take \[\lambda=\lambda_{n}=(2c\log\log n)^{\frac{1}{2}}.\] Let \(k(c,\delta)\) be such that \(\lambda_{n_{k}}>\sqrt{2}\) and \(c(1-\frac{2}{\sqrt{c\log\log n_{k}}})\geq\delta>1\), for \(k\geq k(c,\delta)\). We have: \[\sum_{k=k(c,\delta)}^{\infty}\mathbb{P}\big{(}\big{|}\sum_{i=1}^ {n_{k}-1}U_{z_{i}}\big{|}\geq\,(\lambda_{n_{k}}-\sqrt{2})\,\|\sum_{i=1}^{n_{k} -1}U_{z_{i}}\|_{2}\big{)}\leq 2\sum_{k=k(c,\delta)}^{\infty}\exp\big{(}-\frac{1}{2 K^{2}}(\lambda_{n_{k}}-\sqrt{2})^{2}\big{)}\] \[\leq\frac{2}{\exp K^{2}}\sum_{k=k(c,\delta)}^{\infty}\exp\big{(} -\frac{c}{K^{2}}\log\log n_{k})(1-\frac{2}{\sqrt{c\log\log n_{k}}})\big{)}\] \[\leq\frac{2}{\exp K^{2}}\sum_{k=k(c,\delta)}^{\infty}\frac{1}{(k \log\rho)^{\frac{\delta}{K^{2}}}}<\infty.\] Now we can apply (12). It follows: \[\limsup_{n}\frac{|\sum_{i=0}^{n-1}U_{z_{i}}|}{\sqrt{2c(\log\log n)V_{n}}}\leq 1,\mathbb{P}\text{-a.e.}\] Taking \(c=c_{n}\) with \(c_{n}\downarrow K^{2}\), we get (6). ### Case 2) For general associated r.v.'s, we use simply that \(\mathbb{P}\big{(}\big{|}\sum_{i=0}^{n-1}U_{z_{i}}\big{|}\geq\,\lambda\,\|\sum_ {i=0}^{n-1}U_{z_{i}}\|_{2}\big{)}\leq\frac{1}{\lambda^{2}}\). We take \(\lambda_{n}=(\log\sigma_{n})^{\frac{1}{2}+\varepsilon}\), with \(\varepsilon>0\). By (9) we have \(\lambda_{n_{k}}\geq(k\log\rho)^{\frac{1}{2}+\varepsilon}\), and therefore, for a constant \(C_{1}\): \(\sum_{k}\frac{1}{\lambda_{n_{k}}^{2}}\leq C_{1}\sum_{k}k^{-(1+2\varepsilon)}<+\infty\); hence condition (10). Moreover we have \(k\log\rho\leq\log\sigma_{n_{k}}\leq\log\sigma_{n_{k+1}}\leq(k+2)\log\rho\); hence \[\frac{\lambda_{n_{k+1}}}{\lambda_{n_{k}}}=\big{(}\frac{\log\sigma_{n_{k+1}}}{ \log\sigma_{n_{k}}}\big{)}^{\frac{1}{2}+\varepsilon}\leq(1+\frac{2}{k})^{\frac {1}{2}+\varepsilon}\to 1.\] By (12), this proves (7) in 2a) For case 2b) we have \(\sigma_{n}^{2}\leq V_{n}\sum_{\underline{p}}\langle U_{\underline{p}},U_{ \underline{0}}\rangle\) and \(\sigma_{n}\leq n\), hence it yields (8). The last conclusion in case 2b) is now clear. Remark that it follows also from Proposition 1.2. ### A Glivenko-Cantelli type theorem #### Empirical process Let us consider a random field of r.v.'s \((X_{\underline{\ell}},\underline{\ell}\in\mathbb{Z}^{d})\) on \((\Omega,\mathcal{F},\mathbb{P})\) with common distribution function \(F\). Let \((z_{k})\subset\mathbb{Z}^{d}\) be a sequence with self-intersections \((V_{n})\). 
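Although the examples are treated analytically in Section 3, a purely illustrative numerical sketch (not part of the paper's material; the simulation and all names are ours) may help fix intuition for the quantities \(N_{n}(\underline{\ell})\), \(M_{n}\) and \(V_{n}\) of (1) that govern the statements below, estimated here for a simple random walk on \(\mathbb{Z}^{d}\), the main example of Section 3.1.

```python
import random
from collections import Counter


def local_time_stats(d: int = 2, n: int = 10_000, seed: int = 0):
    """Simulate n steps z_0,...,z_{n-1} of a simple random walk on Z^d and
    return (M_n, V_n): the maximal local time and the number of
    self-intersections, with V_n = sum_l N_n(l)^2."""
    rng = random.Random(seed)
    pos = tuple([0] * d)
    counts = Counter()
    for _ in range(n):
        counts[pos] += 1  # N_n(l): number of visits to site l before time n
        axis = rng.randrange(d)
        step = rng.choice((-1, 1))
        pos = tuple(p + (step if i == axis else 0) for i, p in enumerate(pos))
    return max(counts.values()), sum(c * c for c in counts.values())


if __name__ == "__main__":
    for d in (1, 2, 3):
        M_n, V_n = local_time_stats(d=d)
        print(f"d={d}: M_n={M_n}, V_n={V_n}")
```

For a transient walk (\(d\geq 3\)) one expects \(V_{n}\) to grow only linearly in \(n\), so conditions such as \(V_{n}=o(n^{2}(\log\log n)^{-1})\) or \(V_{n}=O(n^{2}\log^{-(1+\eta)}n)\) appearing below are comfortably met, while the one-dimensional walk produces the largest local times and self-intersection counts among these examples.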
_Notation._ We say that \((X_{\underline{\ell}},\underline{\ell}\in\mathbb{Z}^{d})\) satisfies a Glivenko-Cantelli theorem along a sequence \((z_{k})\) in \(\mathbb{Z}^{d}\) if \[\lim_{n}\sup_{s}|\frac{1}{n}\sum_{k=1}^{n}\mathbf{1}_{(-\infty,s]}(X_{z_{k}}( \omega))-F(s)|=0,\ \text{for}\ \mathbb{P}\text{-a.e.}\omega.\] We show now a Glivenko-Cantelli theorem along a sequence \((z_{k})\) under various hypotheses on \((z_{k})\) and on \((X_{\underline{\ell}})\) (mixing, i.i.d., associated or PQD). Le \((X_{\underline{\ell}},\underline{\ell}\in\mathbb{Z}^{d})\) be a r.f. Denoting by \(\sigma(X_{\underline{\ell}})\) the \(\sigma\)-algebra generated by the random variable \(X_{\underline{\ell}}\), we define a coefficient of mixing by \[\gamma(\underline{\ell}):=\sup_{r\in\mathbb{Z}^{d}}\sup_{A\in\sigma(X_{ \underline{\ell}}),\,B\in\sigma(X_{\underline{\ell}+\underline{r}})}|\mathbb{ P}(A\cap B)-\mathbb{P}(A)\mathbb{P}(B)|. \tag{14}\] Observe that for every \(s,t\in\mathbb{R}\) it holds: \[\sup_{\underline{r}\in\mathbb{Z}^{d}}|\langle 1_{X_{\underline{r}}\leq s}- \mathbb{P}(X_{\underline{r}}\leq s),1_{X_{\underline{\ell}+\underline{r}}\leq t }-\mathbb{P}(X_{\underline{\ell}+\underline{r}}\leq t)\rangle|\leq\gamma( \underline{\ell}),\forall\underline{\ell}\in\mathbb{Z}^{d}. \tag{15}\] By (15) and Proposition 1.2, we get: **Theorem 1.5**.: _Let \((z_{k})\) be such that \(V_{n}\leq C_{1}\frac{n^{2}}{(\log n)^{\beta}}\), for constants \(C_{1}>0,\beta\). If \(\sum_{\underline{\ell}\in\mathbb{Z}^{d}}\gamma(\underline{\ell})<+\infty\) and \(\beta>1\), or if the r.f. is stationary and \(\sum_{\underline{\ell}\in\mathbb{Z}^{d}}\gamma(\underline{\ell})^{\zeta}<+\infty\), for some \(\zeta\in[1,2]\) and \(\beta>\zeta\), then \((X_{\underline{\ell}},\underline{\ell}\in\mathbb{Z}^{d})\) satisfies a Glivenko-Cantelli theorem along \((z_{k})\)._ Using Proposition 1.4, we consider now the i.i.d. and associated cases. **Theorem 1.6**.: _a) If \((X_{\underline{\ell}})_{\underline{\ell}\in\mathbb{Z}^{d}}\) is a r.f. of i.i.d. r.v.'s, then under the condition \(V_{n}=o(n^{2}\,(\log\log n)^{-1})\) it satisfies a Glivenko-Cantelli theorem along \((z_{k})\). b) If \((X_{\underline{\ell}})_{\underline{\ell}\in\mathbb{Z}^{d}}\) is a strictly stationary r.f. of associated r.v.'s such that \(\sum_{\underline{\ell}}\langle X_{\underline{\ell}},X_{\underline{0}}\rangle\) converges, then, under the condition \(V_{n}=O(n^{2}\log^{-(1+\eta)}n)\) for some \(\eta>0\), for a.e. \(\omega\), we _have for each continuity point \(s\) of \(F\):_ \[\lim_{n}\frac{1}{n}\sum_{k=0}^{n-1}\mathbf{1}_{(-\infty,s]}(X_{z_{k}}(\omega))=F(s). \tag{16}\] _If \(F\) is continuous, the convergence is uniform in \(s\)._ _Proof_. a) Denote by \(F_{n}(s)(\omega)\) the means \(\frac{1}{n}\sum_{k=0}^{n-1}\mathbf{1}_{(-\infty,s]}(X_{z_{k}}(\omega))\). Let \(Q\) be a dense countable set of continuity points of \(F\). For every \(s\in Q\), by the assumption on \(V_{n}\) and Proposition 1.4, there is a null set \(N(s)\) such that, for a sequence \(\varepsilon_{n}\) tending to \(0\), for every \(\omega\not\in N(s)\), \[|F_{n}(s)(\omega)-F(s)|\leq\varepsilon_{n}(V_{n}\log\log n)^{-\frac{1}{2}}| \sum_{k=0}^{n-1}\big{(}\mathbf{1}_{(-\infty,s]}(X_{z_{k}})-F(s)\big{)}|\to 0.\] Then \(F_{n}(s)(\omega)\to F(s)\) for every \(\omega\) outside the null set \(N:=\cup_{s\in Q}N(s)\) and for \(s\in Q\). Similarly by Proposition 1.4, for every \(s\) in the set \(J\) of jumps of \(F\), we have \(F_{n}(s)(\omega)-F_{n}(s^{-})(\omega)\to F(s)-F(s^{-})\) a.e. 
As \(J\) is countable, this convergence holds for every \(s\in J\) and \(\omega\not\in\tilde{N}\), where \(\tilde{N}\) is a null set. Outside the null set \(N\cup\tilde{N}\), Lemma 1.1 applied with \(Q\) and \(J\) implies the result. b) We consider now the case of a strictly stationary random field of associated r.v.'s. Let \(s\) be a continuity point of the common distribution \(F\). For every \(\epsilon>0\) there exists \(\delta>0\), such that \(F(s+\delta)-F(s-\delta)\leq\epsilon\). As in [30], for \(\delta>0\) and \(s\), define the approximated step function \(h_{\delta,s}\) by \(h_{\delta,s}(x)=0\), if \(x\leq s-\delta\) and \(h_{\delta,s}(x)=1+\frac{x-s}{\delta}\) if \(s-\delta\leq x\leq s\), otherwise, \(h_{\delta,s}(x)=1\). It is a non decreasing continuous function with \(h^{\prime}_{\delta,s}(x)=1/\delta\) for \(s-\delta<x<s\). It follows from the above Hoeffding's identity applied to this approximated step function (see [2]): \[\mathrm{C}ov(h_{\delta,s}(X_{\underline{\ell}}),h_{\delta,s}(X_{ \underline{0}})) \leq\delta^{-2}\langle X_{\underline{\ell}},X_{\underline{0}}\rangle,\] \[\mathrm{C}ov(h_{\delta,s+\delta}(X_{\underline{\ell}}),h_{ \delta,s+\delta}(X_{\underline{0}})) \leq\delta^{-2}\langle X_{\underline{\ell}},X_{\underline{0}}\rangle.\] By association and non decreasing, \(\big{(}h_{\delta,s}(X_{\underline{\ell}})\big{)}\) as well as \(\big{(}h_{\delta,s+\delta}(X_{\underline{\ell}})\big{)}\) are stationary r.f.s of associated r.v.'s, and we may apply Proposition 1.4 to their centered versions (also associated). The condition simply reads, for \(\tau=s,s+\delta\): \[\sum_{\underline{\ell}}\mathrm{C}ov(h_{\delta,\tau}(X_{\underline{\ell}}),h_ {\delta,\tau}(X_{\underline{0}}))\leq\delta^{-2}\sum_{\underline{\ell}} \langle X_{\underline{\ell}},X_{\underline{0}}\rangle<\infty.\] We put \(\overline{S}_{n}=\sum_{k=0}^{n-1}h_{\delta,s}(X_{z_{k}})\) and \(\underline{S}_{n}=\sum_{k=0}^{n-1}h_{\delta,s+\delta}(X_{z_{k}})\). By \(h_{\delta,s+\delta}(x)\leq\mathbf{1}_{\{x>s\}}\leq h_{\delta,s}(x)\), it holds \(\underline{S}_{n}\leq\sum_{k=0}^{n-1}\mathbf{1}_{(s,\infty)}(X_{z_{k}})\leq \overline{S}_{n}\). Hence by Proposition 1.4, we have almost everywhere \(\frac{1}{n}\underline{S}_{n}\to\mathbb{E}[h_{\delta,s+\delta}(X_{\underline{0}})]\) and \(\frac{1}{n}\overline{S}_{n}\to\mathbb{E}[h_{\delta,s}(X_{\underline{0}})]\). Since \[\mathbb{E}[h_{\delta,s}(X_{\underline{0}})]\leq F(s)-F(s-\delta)+1-F(s)\leq \epsilon+1-F(s),\] \[\mathbb{E}[h_{\delta,s+\delta}(X_{\underline{0}})]\geq 1-F(s+\delta)=1-F(s)-(F (s+\delta)-F(s))\geq 1-F(s)-\epsilon,\] we conclude \[1-F(s)-\epsilon\leq\liminf_{n}\frac{1}{n}\sum_{k=0}^{n-1}\mathbf{1}_{(s,\infty)}( X_{z_{k}})\leq\limsup_{n}\frac{1}{n}\sum_{k=0}^{n-1}\mathbf{1}_{(s,\infty)}(X_{z_{k} })\leq 1-F(s)+\epsilon.\] Subtracting the \(1\)'s and taking \(\epsilon\to 0\), we get (16). _PQD variables._ The result shown for associated variables can be extended to the class of PDQ variables, but with a stronger condition on the local times of the sequence \((z_{k})\). **Proposition 1.7**.: _Let \((U_{\underline{\ell}})\) be a centered stationary random field of pairwise PQD r.v.'s such that \(\sum_{\underline{\ell}}\langle U_{\underline{\ell}},U_{\underline{0}}\rangle\) converges. Let \((z_{k})\) be a sequence of points with maximal local times sequence \((M_{n})\). If \(\sum_{n\geq 1}\frac{M_{n}}{n^{2}}<+\infty\), then \(\frac{1}{n}(U_{z_{0}}+\cdots+U_{z_{n-1}})\) converges a.e. 
to \(0\)._ _Proof._ We apply the following result of [4]: let \((Y_{j}:j\geq 1)\) be a sequence of pairwise centered PQD r.v.'s with finite variance. If \(\sum_{j\geq 1}j^{-2}\mathrm{Cov}(Y_{j},\sum_{i=1}^{j}Y_{i})\) converges and \(\sup_{j}\mathbb{E}|Y_{j}|<\infty\), then \(n^{-1}\sum_{i=1}^{n}Y_{i}\to 0\) a.e. Taking for \(Y_{j}\) the (still) pairwise PQD r.v.'s \(U_{z_{j}}\), we get the result, since \(\mathrm{Cov}(U_{z_{j}},U_{z_{1}}+\cdots+U_{z_{j}})\leq M_{j}\sum_{\underline{\ell}}\langle U_{\underline{0}},U_{\underline{\ell}}\rangle\). Now, we consider the empirical distribution. **Theorem 1.8**.: _Let \((X_{\underline{\ell}})\) be a centered strictly stationary random field of pairwise PQD r.v.'s with distribution function \(F\) such that \(\sum_{\underline{\ell}}\langle X_{\underline{\ell}},X_{\underline{0}}\rangle\) converges. Let \((z_{k})\) be a sequence of points with maximal local times sequence \((M_{n})\). If \(\sum_{n\geq 1}\frac{M_{n}}{n^{2}}<+\infty\), then for each continuity point \(s\) of \(F\), we have for a.e. \(\omega\): \(\lim_{n}\frac{1}{n}\sum_{k=0}^{n-1}\mathbf{1}_{(-\infty,s]}(X_{z_{k}}(\omega))=F(s)\)._ _In particular, if \(F\) is continuous, the above convergence is uniform over \(s\)._ _Proof._ The r.f.s \(h_{\delta,s}(X_{\underline{\ell}})\) and \(h_{\delta,s+\delta}(X_{\underline{\ell}})\) are still stationary pairwise PQD. The proof is analogous to the proof of Theorem 1.6. For the last statement, we use Lemma 1.1. _Remark._ If \(M_{n}=O(n\,(\log n)^{-(1+\eta)})\), then \(V_{n}=O(n^{2}\,(\log n)^{-(1+\eta)})\). If \(V_{n}\leq C\frac{n^{2}}{(\log n)^{\beta}}\), with \(\beta>2\), then \(\sum_{n\geq 1}\frac{M_{n}}{n^{2}}<+\infty\). As shown in Section 3, \(\sum_{n\geq 1}\frac{M_{n}}{n^{2}}\) converges when the sampling is done along random walks, but diverges in some examples of sampling along "deterministic" random walks. ### A sufficient condition for a FCLT for the sampled empirical process After a Glivenko-Cantelli theorem for sampled empirical processes, we consider now the Functional Central Limit Theorem (FCLT). Let \((z_{k})\) be in \(\mathbb{Z}^{d},d\geq 1\), with the associated quantities \(N_{n}(\underline{\ell})\), \(M_{n}\) and \(V_{n}\) defined by (1). Before restricting to a r.f. of i.i.d. r.v.'s, first we examine the variance in the more general situation where the series of correlations is absolutely summable. _Kernel associated to a sequence \((z_{k})\) and variance._ Let \(K_{n}\) be the kernel (which is a real even function on \(\mathbb{T}^{d}\) depending on \(n\geq 0\)) defined by the equivalent formulas: \[K_{n}(\underline{t})=|\sum_{k=0}^{n-1}e^{2\pi i\langle z_{k},\underline{t}\rangle}|^{2}=n+2\sum_{k=1}^{n-1}\sum_{j=0}^{n-k-1}\cos(2\pi\langle z_{k+j}-z_{j},\underline{t}\rangle)=|\sum_{\underline{\ell}\in\mathbb{Z}^{d}}N_{n}(\underline{\ell})\,e^{2\pi i\langle\underline{\ell},\underline{t}\rangle}|^{2}\] \[=n+2\sum_{\underline{\ell}}\bigl{(}\sum_{k=1}^{n-1}\sum_{j=0}^{n-k-1}1_{z_{k+j}-z_{j}=\underline{\ell}}\bigr{)}\cos(2\pi\langle\underline{\ell},\underline{t}\rangle). \tag{17}\] If \((X_{\underline{\ell}},\underline{\ell}\in\mathbb{Z}^{d})\) is a stationary r.f.
such that \(\sum_{\underline{\ell}\in\mathbb{Z}^{d}}|\langle X_{\underline{\ell}},X_{ \underline{0}}\rangle|<+\infty\), the spectral density \(\rho\) is continuous and we have: \[\int|\sum_{k=0}^{n-1}X_{z_{k}}|^{2}d\mathbb{P}=\int_{\mathbb{T}^{d}}K_{n}( \underline{t})\,\rho(\underline{t})\,d\underline{t}\leq\|\rho\|_{\infty}V_{n }\leq(\sum_{\underline{\ell}\in\mathbb{Z}^{d}}|\langle X_{\underline{\ell}},X_ {\underline{0}}\rangle|)V_{n}.\] One can ask if there is an asymptotic variance, i.e., a limit for the normalised quantity \(V_{n}^{-1}\int|\sum_{k=0}^{n-1}X_{z_{k}}|^{2}d\mathbb{P}\) which is bounded if the series of correlations is absolutely summable. The existence of asymptotic variance is shown in [11] in the case of summation along a random walk. We will come back to the question of its positivity for transient random walks in Subsection 3.1. ### Functional Central limit Theorem in the i.i.d. case The following result gives a sufficient condition for a Functional Central limit Theorem (FCLT) along a sequence \((z_{k})\) in the i.i.d. case. The standard Brownian bridge process \(W^{0}(s)\) is the centered Gaussian process \(W^{0}(s):=W(s)-sW(1)\) in \(C(0,1)\), where \(W(s)\) is the Wiener process. It has the properties \(W^{0}(0)=W^{0}(1)=0\) and \(\mathbb{E}[W^{0}(s_{1})W^{0}(s_{2})]=s_{1}\wedge s_{2}-s_{1}s_{2}\). Let \((X_{\underline{k}})_{\underline{k}\in\mathbb{Z}^{d}}\) be i.i.d. random variables with common probability distribution \(F\) in \([0,1]\). We put \(W^{0}_{F}=W^{0}\circ F\). Let \(Y_{n}(s)\) be the random element in \(D[0,1]\) defined by \[Y_{n}(s)=\frac{1}{\sqrt{V_{n}}}\sum_{k=0}^{n-1}\left[\mathbf{1}_{X_{z_{k}} \leq s}-F(s)\right]=\frac{1}{\sqrt{V_{n}}}\sum_{\underline{\ell}\in\mathbb{Z} ^{d}}N_{n}(\ell)\left[\mathbf{1}_{X_{\underline{\ell}}\leq s}-F(s)\right].\] **Theorem 1.9**.: \(Y_{n}(s)\rightarrow_{D[0,1]}W^{0}_{F}(s)\)_, if \((z_{k})\) satisfies the condition_ \[\lim_{n}\frac{M_{n}^{2}}{V_{n}}=0, \tag{18}\] _Proof_. The result follows from the Cramer-Wold device, if we prove convergence of the finite dimensional distributions and tightness. The variance is \[\mathbb{E}[Y_{n}(s)]^{2}=\frac{1}{V_{n}}\,\sum_{\underline{\ell}}\,N_{n}^{2}( \ell)\,\mathbb{E}[\mathbf{1}_{X_{\underline{\ell}}\leq s}-F(s)]^{2}=\sigma^{2 }(s)=F(s)(1-F(s)). \tag{19}\] 1) _Finite dimensional distributions._ The convergence follows from Lindeberg's theorem for triangular arrays of independent random variables as in [3, thm 7.2]. The Lindeberg's condition for the triangular array of independent r.v.'s \(\big{(}\frac{N_{n}(\ell)[\mathbf{1}_{X_{\underline{\ell}}\leq s}-F(s)]}{\sqrt{V_ {n}}}\big{)}_{\underline{\ell},n}\) follows from \[\frac{1}{V_{n}}\sum_{\underline{\ell}}\int_{\{N_{n}(\underline{ \ell})|\mathbf{1}_{X_{\underline{\ell}}\leq s}-F(s)|\geq\varepsilon\sqrt{V_{n }}\}}N_{n}^{2}(\underline{\ell})\,|\mathbf{1}_{X_{\underline{\ell}}\leq s}-F(s )|^{2}d\mathbb{P}\] \[\leq \frac{1}{V_{n}}\sum_{\underline{\ell}}N_{n}^{2}(\underline{\ell} )\int_{\{\sup_{\underline{\ell}}N_{n}(\underline{\ell})|\mathbf{1}_{X_{ \underline{0}}\leq s}-F(s)|\geq\varepsilon\sqrt{V_{n}}\}}\,|\mathbf{1}_{X_{ \underline{0}}\leq s}-F(s)|^{2}d\mathbb{P}\to 0,\] for every \(\varepsilon>0\), since \(V_{n}=\sum_{\underline{\ell}}N_{n}^{2}(\underline{\ell})\) and \(\frac{\sup_{\underline{\ell}}N_{n}(\underline{\ell})}{\sqrt{V_ {n}}}\to 0\), by assumption. 
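(As a sanity check of the variance identity (19), one can simulate \(Y_{n}(s)\) for a sampling sequence with repeated sites. The sketch below takes \(z_{k}=\lfloor k/2\rfloor\) in \(\mathbb{Z}\), so that each visited site has \(N_{n}(\underline{\ell})=2\), \(V_{n}=2n\) and \(M_{n}^{2}/V_{n}\to 0\), together with an i.i.d. uniform field, so that \(F(s)=s\); both choices are illustrative ones of ours and only probe the normalisation by \(\sqrt{V_{n}}\), not the full functional convergence.)

```python
import numpy as np

rng = np.random.default_rng(1)

def Y_n(field, s, sites, V_n):
    """Normalised sampled empirical process at level s for an i.i.d. Uniform(0,1)
    field (so F(s) = s):  (1/sqrt(V_n)) * sum_k [1_{X_{z_k} <= s} - F(s)]."""
    vals = field[sites]                      # X_{z_0}, ..., X_{z_{n-1}}, with repetitions
    return ((vals <= s).astype(float) - s).sum() / np.sqrt(V_n)

n = 2000                                     # number of sampling times (even)
sites = np.repeat(np.arange(n // 2), 2)      # z_k = floor(k/2): each site visited twice
V_n = 2 * n                                  # V_n = sum_l N_n(l)^2 = (n/2) * 2^2,  M_n = 2
s = 0.3

reps = 5000
samples = np.array([Y_n(rng.random(n // 2), s, sites, V_n) for _ in range(reps)])
print("empirical variance of Y_n(s):", samples.var())
print("F(s)(1 - F(s))              :", s * (1 - s))   # identity (19)
```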
For the correlation of the process taken at \(s_{1}\) and \(s_{2}\), it holds by independence: \[\mathbb{E}[Y_{n}(s_{1})Y_{n}(s_{2})] = \frac{1}{V_{n}}\sum_{\underline{\ell}_{1},\underline{\ell}_{2}}N _{n}(\underline{\ell}_{1})N_{n}(\underline{\ell}_{2})\mathbb{E}[(\mathbf{1}_{ X_{\underline{\ell}_{1}}\leq s_{1}}-F(s_{1}))(\mathbf{1}_{X_{\underline{\ell}_{2}} \leq s_{2}}-F(s_{2}))]\] \[= \frac{1}{V_{n}}\sum_{\ell}N_{n}^{2}(\underline{\ell})(F(s_{1} \wedge s_{2})-F(s_{1})F(s_{2}))=F(s_{1}\wedge s_{2})-F(s_{1})F(s_{2}).\] This proves the convergence in distribution: \(Y_{n}(s)\to W_{F}^{0}(s)\) for every \(s\). Let us show now the convergence of the finite dimensional distributions. Starting with the asymptotic distribution of \(aY_{n}(s_{1})+bY_{n}(s_{2})\), by the above computation, we have \[\mathbb{E}[(aY_{n}(s_{1})+bY_{n}(s_{2}))^{2}]= \tag{20}\] \[a^{2}F(s_{1})(1-F(s_{1}))+b^{2}F(s_{2})(1-F(s_{2}))+2ab(F(s_{1} \wedge s_{2})-F(s_{1})F(s_{2})).\] As above, it is easily seen that Lindeberg's condition is satisfied for the triangular array defined by \(aY_{n}(s_{1})+bY_{n}(s_{2})\). It means that the asymptotic distribution of \(aY_{n}(s_{1})+bY_{n}(s_{2})\) is centered Gaussian with variance as computed above. Note that \(\mathbb{E}[(aW^{0}(s_{1})+bW^{0}(s_{2}))^{2}]\) is also given by (20) above. Similarly, for every \(s_{1}\leq\cdots\leq s_{r}\), it holds \[(Y_{n}(s_{1}),\cdots,Y_{n}(s_{r}))\to_{dist}(W_{F}^{0}(s_{1}),\ldots,W_{F}^{0 }(s_{r})).\] _Tightness._ First we suppose \(F\) continuous. Following the method of [3], it is enough to show that for \(s\leq t\leq r\), uniformly in \(n\), \[\mathbb{E}[(Y_{n}(t)-Y_{n}(s))^{2}(Y_{n}(r)-Y_{n}(t))^{2}]\leq C(F(r)-F(s))^{ 2}.\] Putting \(F(u,v):=F(v)-F(u)\), \(f(\underline{\ell},u,v):=\mathbf{1}_{u<X_{\underline{\ell}}\leq v}-F(u,v)\), for \(u<v\), we compute \[\mathbb{E}[(Y_{n}(t)-Y_{n}(s))^{2}(Y_{n}(r)-Y_{n}(t))^{2}]=\frac{1}{V_{n}^{2}} \mathbb{E}\big{[}\big{(}\sum_{\underline{\ell}}N_{n}(\underline{\ell})f( \underline{\ell},s,t)\big{)}^{2}\big{(}\sum_{\underline{\ell}}\,N_{n}( \underline{\ell})f(\underline{\ell},t,r)\big{)}^{2}\big{]}.\] By expansion and independence, the above expression is sum of three types of terms: \[\frac{1}{V_{n}^{2}}\sum_{\underline{\ell}}N_{n}^{4}(\underline{\ell})\,[A],\, \frac{1}{V_{n}^{2}}\sum_{\underline{\ell}_{1},\underline{\ell}_{2}}N_{n}^{2}( \underline{\ell}_{1})N_{n}^{2}(\underline{\ell}_{2})\,[B],\,\frac{1}{V_{n}^{2}} \sum_{\underline{\ell}_{1}\neq\underline{\ell}_{2}}N_{n}^{2}(\underline{\ell}_ {1})N_{n}^{2}(\underline{\ell}_{2})\,[C],\] with \(A=\mathbb{E}[f^{2}(\underline{\ell},s,t)f^{2}(\underline{\ell},t,r)],B= \mathbb{E}[f^{2}(\underline{\ell}_{1},s,t)]\,\mathbb{E}[f^{2}(\underline{\ell }_{2},t,r)],\) \[C=\mathbb{E}[f(\underline{\ell}_{1},s,t)f(\underline{\ell}_{1},t,r))]\, \mathbb{E}[f(\underline{\ell}_{2},s,t)f(\underline{\ell}_{2},t,r))].\] By stationarity and since the intervals do not overlap, we have \[A =F(s,t)F^{2}(t,r)+F^{2}(s,t)F(t,r)-3F^{2}(s,t)F^{2}(t,r),\] \[B =F(s,t)(1-F(s,t))\cdot F(t,r)(1-F(t,r)),\,C=F^{2}(s,t)F^{2}(t,r).\] Since \(0\leq F(s,t),F(t,r),F(s,r)\leq 1\) and \(F(s,t),F(t,r)\leq F(s,r)\), it follows \[A\leq 2F^{3}(s,r)\leq 2F^{2}(s,r),\,B\leq F^{2}(s,r),\,\,C\leq F^{4}(s,r) \leq F^{2}(s,r).\] Recall that \(V_{n}=\sum_{\underline{\ell}}N_{n}^{2}(\underline{\ell})\). 
Using \(\|\cdot\|_{\ell_{4}}\leq\|\cdot\|_{\ell_{2}}\) for the bound of the first term, we have for some fixed constant \(C>0\): \[\mathbb{E}[(Y_{n}(t)-Y_{n}(s))^{2}(Y_{n}(r)-Y_{n}(t))^{2}]\leq C(F(r)-F(s))^{2},\,\,\forall n.\] Hence by [3, Theorem 15.6], for non decreasing continuous \(F\), the sequence of processes \((Y_{n}(s))\) is tight in \(D(0,1)\). This proves that, if \(F\) is continuous, then \(Y_{n}\to_{D(0,1)}(W^{0}\circ F)\). Now, for a general \(F\) a classical method is to use a generalized inverse. Let us describe it briefly. We consider first the uniform empirical process. Let \((\zeta_{k})\) be uniformly distributed i.i.d. r.v.'s. Denote the empirical process along \((z_{k})\) with respect to \((\zeta_{k})\) by \(U_{n}(s)\). By applying what we have just proved for a continuous distribution, \(U_{n}(s)\to_{D(0,1)}W^{0}(s)\). Now let \(F^{-1}(t):=\inf\{s:t\leq F(s)\}\). We get \(\mathbb{P}(F^{-1}(\zeta_{0})\leq s)=\mathbb{P}(X_{0}\leq s)=F(s)\), so \(Y_{n}(s)=_{dist.}U_{n}(F(s))\). We may then proceed as in Billingsley ([3, Theorem 5.1]) to deduce the FCLT for \(Y_{n}(s)\) with \(W^{0}(F(s))\) as limit. ## 2. **Local times for ergodic sums** In the previous section about limit theorems for the empirical process sampled along \((z_{k})\), we have found sufficient conditions on the quantities \(V_{n}\) and \(M_{n}\) associated to \((z_{k})\). When \((z_{k})\) is given by a "cocycle", \(z_{k}=S_{k}f(x)\), one can ask if these conditions are satisfied. We start with some general facts and construct counterexamples for which condition (18) is not satisfied. In the next section, we will discuss two very different examples of cocycles: first the case of random walks, then "stationary random walks" generated by a rotation. ### Auxiliary general results First we introduce some notation and make general remarks. **Notation 2.1**.: Let \((X,\mathcal{B},\mu)\) be a probability space and \(T\) a measure preserving transformation acting on \(X\) such that the dynamical system \((X,\mathcal{B},\mu,T)\) is ergodic. Let \(f\) be a measurable function on \(X\) with values in \(\mathbb{Z}^{d}\), \(d\geq 1\). Its ergodic sums generated by the iteration of \(T\), denoted by \(f_{k}\) (or \(S_{k}f\)), are \[f_{k}(x):=\sum_{j=0}^{k-1}f(T^{j}x),k\geq 1,\ f_{0}(x)=0.\] The sequence \((f_{k}(x),k\geq 1)\) can be viewed as a "stationary random walk" defined on \((X,\mathcal{B},\mu)\). It will be called a "cocycle" and denoted by \((\mu,T,f)\) or simply \((T,f)\). For \(x\in X\), we put (cf. (1)) \(N_{0}(x,\underline{\ell})=0\) and, for \(n\geq 1\), \[N_{n}(T,f,x,\underline{\ell}) := \#\{1\leq k\leq n:\ f_{k}(x)=\underline{\ell}\},\,\underline{\ell }\in\mathbb{Z}^{d},\] \[M_{n}(T,f,x) := \max_{\underline{\ell}\in\mathbb{Z}^{d}}N_{n}(T,f,x,\underline{ \ell}),\] \[V_{n}(T,f,x) := \#\{1\leq j,k\leq n:\,f_{j}(x)=f_{k}(x)\}=\sum_{\underline{\ell }\in\mathbb{Z}^{d}}\,N_{n}^{2}(x,\underline{\ell}).\] Most of the time, we will omit \(T\) and \(f\) in the notation and write simply \(N_{n}(x,\underline{\ell})\), \(M_{n}(x)\), \(V_{n}(x)\). We have \(\sum_{\underline{\ell}}N_{n}(x,\underline{\ell})=n\) and \(n\leq V_{n}(x)\leq n\,M_{n}(x)\). _A question is to know if the following conditions hold for a.e. \(x\):_ \[V_{n}(x)=o(n^{2}\,(\log\log n)^{-1})\ \text{or}\ V_{n}(x)\leq C_{1} \frac{n^{2}}{(\log n)^{\beta}},\ \text{with}\ \beta>1, \tag{22}\] \[\lim_{n}\frac{M_{n}^{2}(x)}{V_{n}(x)}=0. 
\tag{21}\] For a random walk this is related to a question studied in [16] and later in [15]: How many times does the walk revisit the most frequently visited site in the first \(n\) steps? _Cylinder map._ We denote by \(\tilde{T}_{f}\) the map (sometimes called cylinder map) \((x,\underline{\ell})\to(Tx,\underline{\ell}+f(x))\) acting on \(X\times\mathbb{Z}^{d}\), endowed with the infinite invariant measure \(\tilde{\mu}\) defined as the product of \(\mu\) by the counting measure on \(\mathbb{Z}^{d}\). For \(\varphi:X\times\mathbb{Z}^{d}\to\mathbb{R}\) the ergodic sums for the cylinder map are \[\tilde{S}_{n}\varphi(x,\underline{\ell}):=\sum_{k=0}^{n-1}\varphi(\tilde{T}_{ f}^{k}(x,\underline{\ell}))=\sum_{k=0}^{n-1}\varphi(T^{k}x,\underline{\ell}+f_{k}(x )).\] With \(\varphi_{0}:=\mathbf{1}_{X\times\{\underline{0}\}}\), it holds \[\tilde{S}_{n}\varphi_{0}(x,-\underline{\ell})=\sum_{k=0}^{n-1}\mathbf{1}_{X \times\{\underline{0}\}}(T^{k}x,-\underline{\ell}+f_{k}(x))=\#\{0\leq k\leq n- 1:\,f_{k}(x)=\underline{\ell}\}.\] Therefore, \(\tilde{S}_{n}\varphi_{0}(x,-\underline{\ell})=N_{n}(\underline{\ell})(x)\). _Recurrence/transience._ It can be shown that a cocycle \((\mu,T,f)\) (over an ergodic dynamical system) is either _recurrent_ or _transient_. For \(f\) with values in \(\mathbb{Z}^{d}\), it means that either \(S_{k}f(x)=0\) infinitely often for a.e. \(x\), or \(S_{k}f(x)=0\) finitely often for a.e. \(x\). In the latter case, we have \(\lim_{k}|S_{k}f(x)|=+\infty\), \(\mu\)-a.e. Let \(\mathcal{R}_{n}(x)=\{\underline{\ell}\in\mathbb{Z}^{d}:\ f_{k}(x)=\underline{ \ell}\mbox{ for some }k\leq n\}\) be the "range" of the cocycle, i.e., the set of points visited by \(f_{k}(x)\) up to time \(n\). In [14] the following result is shown (for the general case of a cocycle with values in a countable group): let \(G\) be a countable group and \(f:X\to G\). If the cocycle \((T,f)\) is recurrent, then \(\mbox{Card}(\mathcal{R}_{n}(x))=o(n)\) for a.e. \(x\). If it is transient, there exists \(c>0\) such that \(\mbox{Card}(\mathcal{R}_{n}(x))\sim c\,n\) for a.e. \(x\). Using the lemma below, this implies for a.e. \(x\): \[\liminf_{n}\frac{V_{n}(x)}{n}>0\mbox{ in the transient case, }\frac{V_{n}(x)}{n}\to+\infty\mbox{ in the recurrent case.} \tag{23}\] To show (23) we use the following inequality valid for a general sequence \((z_{k})\): **Lemma 2.2**.: _If \(\mathcal{A}\) is a non empty subset in \(\mathbb{Z}^{d}\), we have:_ \[V_{n}\geq\frac{\left(\sum_{k=0}^{n-1}1_{z_{k}\,\in\mathcal{A}}\right)^{2}}{ \mbox{Card}(\mathcal{A})}. \tag{24}\] _Proof._ Cauchy-Schwarz inequality implies: \[\sum_{k=0}^{n-1}1_{z_{k}\,\in\mathcal{A}}=\sum_{\underline{\ell}\in\mathcal{A} }\sum_{k=0}^{n-1}1_{z_{k}\,=\underline{\ell}}\leq(\sum_{\underline{\ell}\in A }(\sum_{k=0}^{n-1}1_{z_{k}\,=\underline{\ell}})^{2})^{\frac{1}{2}}\,(\mbox{ Card}(\mathcal{A}))^{\frac{1}{2}}\leq V_{n}^{\frac{1}{2}}\,\,(\mbox{Card}( \mathcal{A}))^{\frac{1}{2}}.\qed\] If \(z_{k}=S_{k}f(x)\), this show (23). Indeed by taking \(\mathcal{A}=\mathcal{R}_{n}(x)\) we get \[V_{n}(x)\geq\frac{n^{2}}{\mbox{Card}(\mathcal{R}_{n}(x))}. 
\tag{25}\] **Lemma 2.3**.: _The following formulas hold for quantities defined in 2.1._ \[V_{n}(x) = n+2\sum_{k=1}^{n-1}\,\sum_{j=0}^{n-k-1}\,(1_{f_{k}(T^{j}x)= \underline{0}}), \tag{27}\] \[= 2[N_{n-1}(Tx,0)+N_{n-2}(T^{2}x,0)+...+N_{1}(T^{n-1}x,0)]+n,\ n\geq 2,\] (28) \[M_{n}(x) = \max[N_{n}(x,0),1+\max_{1\leq k\leq n-1}N_{n-k}(T^{k}x,0)]\leq 1+ \max_{0\leq k\leq n-1}N_{n}(T^{k}x,0),\] (29) \[= M_{n-1}(Tx)+1_{\underline{\ell}(n-1,Tx)=\underline{0}}\leq M_{n- 1}(Tx)+1. \tag{26}\] _Proof._ a) From \(f_{k}(x)=f(x)+f_{k-1}(Tx)\), \(k\geq 1\), it follows \[N_{n}(x,\underline{\ell})=N_{n-1}(Tx,\underline{\ell}-f(x))+1_{f(x)=\underline{ \ell}},\,n\geq 1. \tag{30}\] Therefore we have: \[\sum_{\underline{\ell}\in\mathbb{Z}^{d}}N_{n}^{2}(x,\underline{\ell} )=\sum_{\underline{\ell}\in\mathbb{Z}^{d}}[N_{n-1}(Tx,\underline{\ell}-f(x))+1_{f (x)=\underline{\ell}}]^{2}=\sum_{\underline{\ell}\in\mathbb{Z}^{d}}[N_{n-1}(Tx,\underline{\ell})+1_{\underline{\ell}=\underline{0}}]^{2}\] \[=\sum_{\underline{\ell}\neq 0}[N_{n-1}(Tx,\underline{\ell})]^{2}+[N_{n- 1}(Tx,\underline{0})+1]^{2}=\sum_{\underline{\ell}}[N_{n-1}(Tx,\underline{ \ell})]^{2}+2N_{n-1}(Tx,\underline{0})+1.\] Hence the relation \[V_{n}(x)=V_{n-1}(Tx)+2N_{n-1}(Tx,\underline{0})+1. \tag{31}\] We have \(V_{1}(x)=1\) and by the previous relation we get by induction (26) and (27). b) For \(x\in X\), let \(\underline{\ell}(n,x)\) (a most visited site by \(S_{k}(x)\) up to time \(n\)) be defined by \[\underline{\ell}(n,x) := \underline{0},\text{ if }N_{n}(x,\underline{0})\geq N_{n}(x, \underline{\ell}),\text{ for all }\underline{\ell}\neq 0,\] \[\text{ else } := \underline{\ell}_{1},\text{ if }\ell_{1}\text{ is such that }M_{n}(x)=N_{n}(x,\underline{\ell}_{1})>N_{n}(x, \underline{0}).\] Let \(p_{n}(x)\in[1,n]\) be the first visit of \(S_{k}(x)\) to \(\underline{\ell}(n,x)\) for \(k=1,...,n\). By definition \(M_{n}(x)=N_{n}(x,\underline{\ell}(n,x))\). We have \(M_{n}(x)=N_{n}(x,0)\) if \(\underline{\ell}(n,x)=0\), else \(M_{n}(x)=N_{n-p_{n}(x)}(T^{p_{n}(x)}x,0)+1\), by the cocycle relation \(S_{p_{n}(x)+k}(x)-S_{p_{n}(x)}(x)=S_{k}(T^{p_{n}(x)}x)\). This implies: \[M_{n}(x)\leq\max[N_{n}(x,0),N_{n-p_{n}(x)}(T^{p_{n}(x)}x,0)+1]\leq\max[N_{n}(x,0),\max_{1\leq k\leq n}N_{n-k}(T^{k}x,0)+1].\] It follows (noticing that \(N_{0}(x,0)=0\)): \[M_{n}(x)\leq 1+\max_{0\leq k\leq n-1}N_{n-k}(T^{k}x,0)\leq 1+\max_{0\leq k \leq n-1}N_{n}(T^{k}x,0). \tag{32}\] This shows (28). c) Observe also the following relation: by (30) we have: \[M_{n}(x) = \sup_{\underline{\ell}}[N_{n-1}(Tx,\underline{\ell}-f(x))+1_{ \underline{\ell}-f(x)=\underline{0}}]=\sup_{\underline{\ell}}[N_{n-1}(Tx, \underline{\ell})+1_{\underline{\ell}=\underline{0}}]\] \[= \max\,[\sup_{\underline{\ell}\neq\underline{0}}N_{n-1}(Tx, \underline{\ell}),\,N_{n-1}(Tx,\underline{0})+1].\] If \(\underline{\ell}(n-1,Tx)=\underline{0}\), then \(N_{n-1}(Tx,\underline{0})\geq\sup_{\underline{\ell}\neq\underline{0}}N_{n-1} (Tx,\underline{\ell})\). If \(\underline{\ell}(n-1,Tx)\neq\underline{0}\), then \(N_{n-1}(Tx,\underline{0})<\sup_{\underline{\ell}\neq\underline{0}}N_{n-1}(Tx,\underline{\ell})\). This shows (29). **Remark 2.4**.: By (28), if \(K_{n}\) is a uniform bound over \(x\) of \(N_{n}(x,\underline{0})\), then \(M_{n}(x)\leq K_{n}\). Likewise, if \(N_{n}(x,\underline{0})\leq K_{n}\), for a.e. \(x\), then \(M_{n}(x)\leq K_{n}\), for a.e. \(x\). Now we show that the set of \(x\in X\) such that \(\lim_{n}\frac{M_{n}^{2}(x)}{V_{n}(x)}=0\) has measure \(0\) or \(1\). 
**Lemma 2.5**.: _It holds: \(\lim_{n}\big{[}\frac{M_{n}^{2}(x)}{V_{n}(x)}-\frac{M_{n}^{2}(Tx)}{V_{n}(Tx)} \big{]}=0\)._ _If \(T\) is ergodic, there is a constant \(\gamma\in[0,1]\) such that \(\limsup_{n}\frac{M_{n}^{2}(x)}{V_{n}(x)}=\gamma\) for a.e. \(x\)._ Proof.: We use (31) and (29). Putting \(\varepsilon=1_{\underline{\ell}(n-1,Tx)=\underline{0}}\), we have: \[|\frac{M_{n}^{2}(x)}{V_{n}(x)}-\frac{M_{n-1}^{2}(Tx)}{V_{n-1}(Tx)}|= |\frac{M_{n-1}^{2}(Tx)+\varepsilon(2M_{n-1}(Tx)+1)}{V_{n-1}(Tx)+2N_{n-1}(Tx, \underline{0})+1}-\frac{M_{n-1}^{2}(Tx)}{V_{n-1}(Tx)}|\] \[=|\frac{\varepsilon(2M_{n-1}(Tx)+1)}{V_{n}(x)}-\frac{(2N_{n-1}(Tx, \underline{0})+1)}{V_{n}(x)}\,\frac{M_{n-1}^{2}(Tx)}{V_{n-1}(Tx)}|\] \[\leq\frac{2M_{n-1}(Tx)+1}{V_{n}(x)}+\frac{2N_{n-1}(Tx, \underline{0})+1}{V_{n}(x)}\leq\frac{4M_{n-1}(Tx)}{V_{n}(x)}+\frac{2}{V_{n}( x)}\leq\frac{4}{\sqrt{n}}+\frac{2}{V_{n}(x)}.\] For the last inequality we use that either \(M_{n}(x)\geq\sqrt{n}\), hence \(\frac{M_{n}(x)}{V_{n}(x)}\leq\frac{1}{M_{n}(x)}\leq\frac{1}{\sqrt{n}}\), or \(M_{n}(x)<\sqrt{n}\), hence \(\frac{M_{n}(x)}{V_{n}(x)}\leq\frac{\sqrt{n}}{n}=\frac{1}{\sqrt{n}}\). Therefore, \[|\frac{M_{n}^{2}(x)}{V_{n}(x)}-\frac{M_{n-1}^{2}(Tx)}{V_{n-1}(Tx)}|\to 0. \tag{33}\] Observe now that \[M_{n}(x)=M_{n-1}(x)+\varepsilon_{n},\text{ where }\varepsilon_{n}=0\text{ or }=1,\] and \(\varepsilon_{n}=1\) if and only if there is \(\underline{\ell}_{n}\) such that \[M_{n-1}(x)=\#\{1\leq k\leq n-1:\ f_{k}(x)=\underline{\ell}_{n}\}\text{ and }f_{n}(x)=\underline{\ell}_{n}.\] We have \[M_{n}^{2}(x)=M_{n-1}^{2}(x)+c_{n},\text{ with }c_{n}=\varepsilon_{n}(1+2M_{n-1}( x))\] and \(N_{n}(x,\underline{\ell})=N_{n-1}(x,\underline{\ell})+\varepsilon_{n}^{\prime}( \underline{\ell})\), with \(\varepsilon_{n}^{\prime}(\underline{\ell})=1_{f_{n}(x)=\underline{\ell}}\) and \(\sum_{\underline{\ell}\in\mathbb{Z}^{d}}\,\varepsilon_{n}^{\prime}( \underline{\ell})=1\). Therefore, \[V_{n}(x)=\sum_{\underline{\ell}\in\mathbb{Z}^{d}}\,(N_{n-1}(x, \underline{\ell})+\varepsilon_{n}^{\prime}(\underline{\ell}))^{2}=V_{n-1}(x)+ 2\sum_{\underline{\ell}\in\mathbb{Z}^{d}}\,\varepsilon_{n}^{\prime}( \underline{\ell})N_{n}(x,\underline{\ell}))+1,\] \[0\leq V_{n}(x)=V_{n-1}(x)+d_{n},\text{ with }d_{n}\leq 2M_{n}(x)+1.\] From (33) and the convergence above, it follows \(\lim_{n}\left[\frac{M_{n}^{2}(x)}{V_{n}(x)}-\frac{M_{n}^{2}(Tx)}{V_{n}(Tx)} \right]=0\). By ergodicity of \(T\), this shows the lemma ### Case of a coboundary The case when \(f\) is coboundary degenerates. Indeed, the following holds: **Proposition 2.6**.: _If \(f\) is a coboundary we have: a) \(\liminf_{n}\frac{M_{n}(x)}{n}>0\), for a.e. \(x\); b) there is a constant \(\beta>0\) such that \(\frac{1}{n^{2}}V_{n}(x)\to\beta\), for a.e. \(x\); c) for a.e. \(x\), \(\liminf_{n}\frac{M_{n}^{2}(x)}{V_{n}(x)}>0\)._ _Proof_. Suppose that \(f\) is coboundary, \(f=T\Phi-\Phi\). Since \(f\) has values in \(\mathbb{Z}^{d}\) and \(T\) is ergodic, for all component \(\Phi_{j}\) of \(\Phi\), \(e^{2\pi i\Phi_{j}}\) is a constant. It follows that \(\Phi\) has also its values in \(\mathbb{Z}^{d}\) up to an additive constant and we can assume that \(\Phi\) has values in \(\mathbb{Z}^{d}\). a) We have \(\liminf_{n}\frac{M_{n}(x)}{n}\geq\liminf_{n}\frac{1}{n}N_{n}(x,\underline{0})>0\), for a.e. \(x\). The positivity results the following simple argument: For \(R\geq 1\), let \(A_{R}\) denote the set \(\cup_{\underline{\ell}:\|\underline{\ell}\|\leq R}(\Phi=\underline{\ell})\). 
Since, for each \(\underline{\ell}\), by Birkhoff's theorem, \(\lim_{n}\frac{1}{n}\sum_{0\leq k\leq n-1}1_{\Phi(T^{k}x)=\underline{\ell}}= \mu(\Phi=\underline{\ell})\), it holds \[\frac{1}{n}N_{n}(x,\underline{0})\geq\sum_{\underline{\ell}\in A_{R}}1_{( \Phi=\underline{\ell})}(x)\,\frac{1}{n}\sum_{k=0}^{n-1}1_{(\Phi=\underline{ \ell})}(T^{k}x)\to\sum_{\underline{\ell}\in A_{R}}1_{(\Phi=\underline{\ell})}( x)\,\mu(\Phi=\underline{\ell}).\] Therefore, for every \(R\geq 1\), \(\liminf_{n}\frac{M_{n}(x)}{n}\geq\liminf_{n}\frac{N_{n}(x,0)}{n}\geq\sum_{ \underline{\ell}\in A_{R}}1_{(\Phi=\underline{\ell})}(x)\,\mu(\Phi=\underline{ \ell})\), and the limit when \(R\to\infty\) at right is \(>0\), for a.e. \(x\). b) For \(V_{n}\), we have: \[V_{n}(f,x) =\sum_{\underline{\ell}\in\mathbb{Z}^{d}}N_{n}^{2}(x,\underline{ \ell})=\sum_{\underline{\ell}\in\mathbb{Z}^{d}}\,\#\{0\leq k\leq n-1:\ \Phi(T^{k}x)-\Phi(x)=\underline{\ell}\}^{2}\] \[=\sum_{\underline{\ell}\in\mathbb{Z}^{d}}\,\#\{0\leq k\leq n-1:\ \Phi(T^{k}x)=\underline{\ell}\}^{2}=\sum_{\underline{\ell}\in\mathbb{Z}^{d}} \big{(}\sum_{0\leq k\leq n-1}1_{\Phi(T^{k}x)=\underline{\ell}}\big{)}^{2},\] hence: \(\frac{1}{n^{2}}\sum_{\underline{\ell}\in A_{R}}\big{(}\sum_{0\leq k \leq n-1}1_{\Phi(T^{k}x)=\underline{\ell}}\big{)}^{2}=\sum_{\underline{\ell} \in A_{R}}\big{(}\frac{1}{n}\sum_{0\leq k\leq n-1}1_{\Phi(T^{k}x)=\underline{ \ell}}\big{)}^{2}\to\sum_{\underline{\ell}\in A_{R}}(\mu(\Phi=\underline{\ell }))^{2}\). This implies, for every \(R\geq 1\), \[\liminf_{n}\frac{1}{n^{2}}\sum_{\underline{\ell}\in\mathbb{Z}^{d}}\big{(}\sum _{0\leq k\leq n-1}1_{\Phi(T^{k}x)=\underline{\ell}}\big{)}^{2}\geq\lim_{n} \frac{1}{n^{2}}\sum_{\underline{\ell}\in A_{R}}\big{(}\sum_{0\leq k\leq n-1}1 _{\Phi(T^{k}x)=\underline{\ell}}\big{)}^{2}=\sum_{\underline{\ell}\in A_{R}}( \mu(\Phi=\underline{\ell}))^{2}.\] It follows: \(\liminf_{n}\frac{1}{n^{2}}\sum_{\underline{\ell}\in\mathbb{Z}^{d}}\big{(} \sum_{0\leq k\leq n-1}1_{\Phi(T^{k}x)=\underline{\ell}}\big{)}^{2}\geq\sum_{ \underline{\ell}\in\mathbb{Z}^{d}}\,\mu(\Phi=\underline{\ell})^{2}\). For the complementary of \(A_{R}\), it holds: \[\sum_{\underline{\ell}:\|\underline{\ell}\|>R}\big{(}\sum_{0\leq k <n}1_{\Phi(T^{k}x)=\underline{\ell}}\big{)}^{2}=\sum_{0\leq j,k<n}\sum_{ \underline{\ell}:\|\underline{\ell}\|>R}1_{\Phi(T^{j}x)=\underline{\ell}}1_{ \Phi(T^{k}x)=\underline{\ell}}\] \[\leq\sum_{0\leq j,k<n}(\sum_{\underline{\ell}:\|\underline{\ell} \|>R}1_{\Phi(T^{j}x)=\underline{\ell}})\,(\sum_{\underline{\ell}:\|\underline{ \ell}\|>R}1_{\Phi(T^{k}x)=\underline{\ell}})\leq\sum_{0\leq j,k<n}1_{A_{R}^{c}( T^{j}x)}1_{A_{R}^{c}(T^{k}x)}=\big{(}\sum_{0\leq k<n}1_{A_{R}^{c}(T^{k}x)} \big{)}^{2}.\] It follows for the upper bound: \[\limsup_{n}\frac{1}{n^{2}}\sum_{\underline{\ell}\in\mathbb{Z}^{d}} \big{(}\sum_{0\leq k\leq n-1}1_{\Phi(T^{k}x)=\underline{\ell}}\big{)}^{2}\] \[\leq\lim_{n}\sum_{\underline{\ell}\in A_{R}}\big{(}\frac{1}{n} \sum_{0\leq k\leq n-1}1_{\Phi(T^{k}x)=\underline{\ell}}\big{)}^{2}+\lim_{n} \bigl{(}\frac{1}{n}\sum_{0\leq k<n}1_{A_{R}^{c}(T^{k}x)}\big{)}^{2}\] \[=\sum_{\underline{\ell}\in A_{R}}(\mu(\Phi=\underline{\ell}))^{2}+ \mu(A_{R}^{c})^{2}\underset{R\to\infty}{\rightarrow}\sum_{\underline{\ell}\in \mathbb{Z}^{d}}\,\mu(\Phi=\underline{\ell})^{2}.\] This shows b) with \(\beta=\sum_{\underline{\ell}\in\mathbb{Z}^{d}}\,\mu(\Phi=\underline{\ell})^{2}>0\). c) Follows from a) and b). **Proposition 2.7**.: _There is a constant \(\beta\geq 0\) such that, for a.e. \(x\), \(\lim_{n}\frac{V_{n}(x)}{n^{2}}=\beta\). 
We have \(\beta>0\) if and only if the cocycle \((T,f)\) is a coboundary._ _Proof_. The case of a coboundary follows from Proposition 2.6. Suppose now that the cocycle is not a coboundary. From (26), we can write \[\begin{array}{ll}\frac{V_{n}(x)}{n^{2}}&=\frac{1}{n}+\frac{2}{n }\sum_{k=1}^{n-1}\,\frac{1}{n}\sum_{j=0}^{n-k-1}(1_{f_{k}(T^{j}x)=\underline{0 }})\\ &\leq\frac{1}{n}+\frac{2}{n}\sum_{k=1}^{n-1}\,\frac{1}{n}\sum_{j=0 }^{n-1}(1_{f_{k}(T^{j}x)=\underline{0}})=\frac{1}{n}+\frac{2}{n}\sum_{j=1}^{n- 1}\,\frac{N_{n}(T^{j}x,0)}{n}.\end{array}\] We will show that \(\frac{1}{n}\sum_{j=0}^{n-1}\,\frac{N_{n}(T^{j}x,0)}{n}\) tend to \(0\) a.e. By the ergodic theorem of Dunford and Schwarz (in the space of infinite measure \(X\times\mathbb{Z}\)) applied to \(\tilde{T}_{f}\) and \(\phi_{0}=\mathbf{1}_{X\times\{0\}}\), which is bounded and in \(L^{p}(X\times\mathbb{Z})\), for every \(p\geq 1\), we get a function \(\tilde{\phi}_{0}(x)\) which is \(\tilde{T}_{f}\)-invariant and in \(L^{1}(X\times\mathbb{Z})\) and \[\lim_{n}\frac{N_{n}(x,0)}{n}=\tilde{\phi}_{0}(x),\,\,\text{a.s.}\] As \(f\) is not a coboundary, \(\tilde{\phi}_{0}\) is zero a.e. (cf. for instance [12].) Observe that \(\|\sup_{n\geq L}\frac{N_{n}(x,0)}{n}\|_{2}\to 0\), as \(L\) goes to \(+\infty\). Indeed, for every \(0<\varepsilon\leq 1\), letting \(A_{\varepsilon,L}:=\{x:\sup_{n\geq L}\frac{N_{n}(x,0)}{n}>\varepsilon\}\), we have \(\mu(A_{\varepsilon,L})\to 0\), when \(L\to+\infty\). Since \(\frac{N_{n}(x,0)}{n}\leq 1\), it follows, for \(L\) big enough: \[\int\bigl{(}\sup_{n\geq L}(\frac{N_{n}(x,0)}{n})\bigr{)}^{2}\,d\mu\leq \varepsilon^{2}+\mu(A_{\varepsilon,L})\leq 2\varepsilon.\] We put \(\Lambda_{n}(x):=\sup_{s\geq n}\frac{N_{s}(x,0)}{s}\). By the previous observation, we have \(\lim_{n}\|\Lambda_{n}\|_{2}=0\). Let us consider the following maximal function for the action of \(T\): \[\tilde{\Lambda}_{n}(x)=\sup_{1\leq r<\infty}\frac{1}{r}\sum_{j=0}^{r-1}\Lambda _{n}(T^{j}x)=\sup_{1\leq r<\infty}\frac{1}{r}\sum_{j=0}^{r-1}\sup_{s\geq n} \frac{N_{s}(T^{j}x,0)}{s}. \tag{34}\] From a classical maximal inequality, we have \(\|\tilde{\Lambda}_{n}\|_{2}\leq 2\|\Lambda_{n}\|_{2}\to 0\). Observe also that, from the definition of \(\tilde{\Lambda}_{n}\) in (34), the following inequalities hold: \[\tilde{\Lambda}_{n}(x)\geq\sup_{r,s\geq n}\frac{1}{r}\sum_{j=0}^{r-1}\,\frac{ N_{s}(T^{j}x,0)}{s}\geq\frac{1}{n}\sum_{j=0}^{n-1}\,\frac{N_{n}(T^{j}x,0)}{n}.\] The sequence \(\,\sup_{r,s\geq n}\frac{1}{r}\sum_{j=0}^{r-1}\,\frac{N_{s}(T^{j}x,0)}{s}\) is non negative and decreasing. Since \(\|\tilde{\Lambda}_{n}\|_{2}\to 0\), the \(L_{2}\)-norm of its limit in \((X,\mu)\) is zero. The result follows. **Remark 2.8**.: (see also section 4 and [25]) Let \((U_{\underline{\ell}})_{\underline{\ell}\in\mathbb{Z}^{d}}\) be a r.f. of square integrable r.v.'s on a probability space \((\Omega,\mathcal{F},\mathbb{P})\) stationary in the weak sense and such that \(\sum_{\underline{\ell}}|\langle U_{\underline{\ell}},U_{\underline{0}} \rangle|<+\infty\). By (4) and Proposition 2.7, if \(f\) is not a coboundary, it holds \[\frac{1}{n^{2}}\,\|\sum_{k=0}^{n-1}U_{f_{k}(x)}\|_{2}^{2}\,\leq C\,\frac{V_{n} (x)}{n^{2}}\to 0,\,\,\mbox{for $\mu$-a.e. $x$.}\] Another result of norm convergence whose proof is like the proof of Proposition 1.2 is the following. Suppose that the r.f. is stationary. Let \(\varphi\) be an observable on the dynamical system \((\Omega,\mathbb{P},\theta)\) with a spectral measure \(\nu_{\varphi}\). 
We have: \[\int_{\Omega}|\sum_{j=0}^{n-1}\varphi\circ\theta^{z_{j}}|^{2}\,d\mathbb{P}= \int_{\mathbb{T}^{1}}|\sum_{j=0}^{n-1}e^{2\pi iz_{j}t}|^{2}\,d\nu_{\varphi}(t).\] Assume that \(\nu_{\varphi}\) is absolutely continuous with respect to the Lebesgue measure on the torus, and let \(\rho\in L^{1}(d\underline{t})\) such that \(d\nu_{\varphi}(\underline{t})=\rho(\underline{t})d\underline{t}\). For \(\varepsilon>0\) there is \(M\) such that \(\int_{\rho>M}\,\rho\,d\underline{t}<\varepsilon\). We have \[\frac{1}{n^{2}}\,\int_{\mathbb{T}^{d}}|\sum_{j=0}^{n-1}e^{2\pi i \langle z_{j},\underline{\ell}\rangle}|^{2}\,d\nu_{\varphi}(\underline{t}) \leq \frac{M}{n^{2}}\,\int_{\mathbb{T}^{d}}|\sum_{j=0}^{n-1}e^{2\pi i \langle z_{j},\underline{\ell}\rangle}|^{2}\,d\underline{t}+\int_{\rho>M}\, \rho\,d\underline{t}\leq M\frac{V_{n}}{n^{2}}+\varepsilon.\] This shows that \(\frac{V_{n}}{n^{2}}\to 0\) implies \(\frac{1}{n^{2}}\,\int_{\Omega}|\sum_{j=0}^{n-1}\varphi\circ\theta^{z_{j}}|^{ 2}\,d\mathbb{P}\to 0\). This is satisfied by every \(\varphi\in L^{2}(\mathbb{P})\), if the dynamical system has a Lebesgue spectrum. In particular, taking \(z_{k}=f_{k}(x)\), by Proposition 2.7, if \(f\) is not a coboundary, it holds \[\frac{1}{n^{2}}\,\int_{\Omega}|\sum_{j=0}^{n-1}\varphi(\theta^{f_{j}(x)}\omega )|^{2}\,d\mathbb{P}(\omega)\to 0,\,\,\mbox{for a.e. $x$.}\] When the spectral density is square integrable, as we have seen in Proposition 1.2, the pointwise convergence holds under quantitative hypothesis on the sequence \((z_{k})\). ### Non centered cocycles In an ergodic dynamical system \((X,\mu,T)\), if \(f:X\to\mathbb{R}\) is an integrable function with \(\mu(f)>0\), by the ergodic theorem for the ergodic sums \(S_{n}^{T}f(x)=\sum_{k=0}^{n-1}f(T^{k}x)\), it holds for a.e. \(x\): \(\lim_{n}\frac{1}{n}S_{n}f(x)>0\) and therefore \(\lim_{n}S_{n}^{T}f(x)=+\infty\). If \(f\) has values in \(\mathbb{Z}\), as the process \(S_{n}^{T}f(x)\) visits finitely often each site, one can think there is a chance that the following condition is satisfied: \[\lim_{n}\frac{M_{n}^{2}(T,f,x)}{V_{n}(T,f,x)}=0. \tag{35}\] A case where (35) is satisfied is the following: let \(X\) be a topological compact space, \(T:X\to X\) a continuous map, which is uniquely ergodic with \(\mu\) as unique invariant measure. Let \(f:X\to\mathbb{Z}\) be an integrable function such that \(\mu(f)\neq 0\). Assume \(f\) to be Riemann-integrable (i.e. such that, for every \(\varepsilon>0\), there are two continuous functions \(\psi_{0},\psi_{1}\) with \(\psi_{0}\leq f\leq\psi_{1}\) and \(\mu(\psi_{1}-\psi_{0})\leq\varepsilon\)). Then, the ergodic means of \(f\) converge uniformly, and this implies the existence of \(N\) such that \(\frac{1}{n}|S_{n}^{T}f(x)|\geq\frac{1}{2}|\mu(f)|>0\) for \(n\geq N\) and every \(x\). It follows that the number of visits of \(S_{n}^{T}f(x)\) to \(0\) is \(\leq N\), for every \(x\). By remark 2.4, \(M_{n}(x)\leq N\), for every \(x\), and a fortiori (35) is satisfied. _Nevertheless, we will see that (35) may fail in non uniform cases: there are dynamical systems and sets \(B\) of positive measure such that, for \(f=1_{B}\),_ \[\limsup_{n}\frac{M_{n}^{2}(T,f,x)}{V_{n}(T,f,x)}=1. \tag{36}\] ### Counterexamples In this subsection, we construct a transient counterexample, and also a recurrent counterexample with a function \(f\) of null integral such that (36) is satisfied. 
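For orientation, the ratio appearing in (35) and (36) is easy to evaluate numerically for an explicit cocycle. The sketch below does this for the centered step cocycle \(f=1_{[0,\frac{1}{2})}-1_{[\frac{1}{2},1)}\) over an irrational rotation (the base cocycle reused in Subsection 3.3); the rotation number and the starting point are illustrative choices of ours, and the script only reports the ratio \(M_{n}^{2}(x)/V_{n}(x)\), making no claim about its limit. The point of the constructions below is that a suitable roof function forces this ratio arbitrarily close to \(1\).

```python
from collections import Counter

def ratio_M2_over_V(x, alpha, n):
    """M_n^2(x) / V_n(x) for the ergodic sums of f = 1_[0,1/2) - 1_[1/2,1)
    over the rotation by alpha (quantities of Notation 2.1)."""
    counts = Counter()
    s, y = 0, x
    for _ in range(n):                 # ergodic sums f_1(x), ..., f_n(x)
        s += 1 if y < 0.5 else -1
        counts[s] += 1                 # N_n(x, s) incremented at the visited site
        y = (y + alpha) % 1.0
    M_n = max(counts.values())                     # maximal local time
    V_n = sum(c * c for c in counts.values())      # number of self-intersections
    return M_n ** 2 / V_n

alpha = (5 ** 0.5 - 1) / 2                         # golden rotation (illustrative choice)
for n in [10**3, 10**4, 10**5]:
    print(n, ratio_M2_over_V(x=0.1234, alpha=alpha, n=n))
```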
To construct these counterexamples, we start by considering a general ergodic dynamical system \((X,\mu,T)\) and a measurable set \(B\subset X\) of positive measure. Let \(T_{B}\) be the induced map on \(B\), \(R(x)=R^{B}(x)=\inf\{k\geq 1:T^{k}x\in B\}\) the first return time of \(x\) in \(B\) and \(R_{n}(x)=R_{n}^{B}(x):=\sum_{k=0}^{n-1}R(T_{B}^{k}x)\) the \(n\)-th return time of \(x\) in \(B\). We take \(x\in B\). If \(f\) is a function such that \(f=0\) outside \(B\), the position of the sums up to time \(R_{n-1}(x)\) are the positions of the ergodic sums \(S_{n}^{T_{B}}f\) for the induced map up to time \(n\), that is: \[\{f(x),f(x)+f(T_{B}x),...,f(x)+f(T_{B}x)+...+f(T_{B}^{n-1}x)\}.\] For a site \(\ell\), the number of visits up to time \(R_{n-1}(x)\) of the ergodic sums for \(T\) is \[N_{R_{n-1}(x)}(x,\ell)=\sum_{k=0}^{n-1}R^{B}(T_{B}^{k}x)\,1_{S_{k}^{T_{B}}f(x )=\ell}\] and therefore \[V_{R_{n-1}(x)}(T,x)=\sum_{\ell}[\sum_{k=0}^{n-1}R^{B}(T_{B}^{k}x)\,1_{S_{k}^{T _{B}}f(x)=\ell}]^{2}. \tag{37}\] _Case \(f=1_{B}\)._ Clearly \(\sum_{k=0}^{n-1}f(T_{B}^{k}x)=n\). For the map \(T\), the ergodic sums of \(f\) are incremented by \(1\) when and only when the iterates \(T^{j}x\) visit the set B. Otherwise, they stay fixed. The times of visits in \(B\), for \(x\in B\), are \(0,R(x),R(x)+R(T_{B}x),...\). We have: \[\mbox{for }x\in B,\ \ \sum_{j=0}^{R_{n-1}(x)+t}f(T^{j}x)=n,\ \mbox{for }t=0,...,R_{n}(x)-R_{n-1}(x)-1.\] For \(N_{n}(T,x,\ell)=N_{n}(T,f,x,\ell)\), it holds: \[N_{n}(T,x,\ell) = 0,\ \mbox{if }n<R_{\ell}(x),\] \[= t,\ \mbox{if }n=R_{\ell}(x)+t,\ \mbox{with }0\leq t<R_{\ell+1}(x)-R_{ \ell}(x),\] \[= R_{\ell+1}(x)-R_{\ell}(x)=R(T_{B}^{\ell}x),\ \mbox{if }n\geq R_{\ell+1}(x).\] For \(L\geq 1\), we have for the time preceding the \(L\)-th return to the basis for \(f=1_{B}\): \[M_{R_{L}(x)-1}(T,f,x)=\max_{\ell\leq L}R(T_{B}^{\ell}x),\ V_{R_{L}(x)-1}(T,f,x )=\sum_{\ell\leq L}R^{2}(T_{B}^{\ell}x). \tag{38}\] In order to compute an explicit example, it is easier to start from a given map \(S\) and construct a special flow \(T\) over this map. Let \(\varphi:X\to\mathbb{N}\) be integrable and \(\geq 1\). The (discrete time) special map \(T=T_{\varphi}\) is defined \[\mbox{on }\tilde{X}:=\{(x,k),\ x\in X,k=0,...,\varphi(x)-1\} \subset X\times\mathbb{R},\] \[\mbox{by }T(x,k):=(x,k+1),\ \mbox{if }0\leq k<\varphi(x)-1,\ :=(Sx,0),\ \mbox{if }k= \varphi(x)-1.\] Let \(\tilde{\mu}\) be the probability measure defined on \(\tilde{X}\) by \(\tilde{\mu}(A\times\{k\})=\mu(\varphi)^{-1}\,\mu(A)\), for \(k\geq 0\) and \(A\subset\{x:k\leq\varphi(x)-1\}\). It is \(T_{\varphi}\)-invariant. The space \(X\) can be identified with the subset \(B=\{(x,0),x\in X\}\) of \(\tilde{X}\) with normalized measure. The set \(B\) is the basis and \(\varphi-1\) the roof function of the special map \(T_{\varphi}\). As for the map \(S\) we will take an ergodic rotation, the special flow \(T_{\varphi}\) will be also ergodic for the measure \(\tilde{\mu}\) on \(\tilde{X}\). Observe that the recurrence time \(R(x)=R^{B}(x)\) for the special flow in the basis \(B\) is \(\varphi(x)\) and the \(n\)-th return time of \(x\) in \(B\) is \(R_{n}(x)=R_{n}^{B}(x)=\sum_{k=0}^{n-1}\varphi(S^{k}x)\). For \(S\), let us take a rotation \(S=S_{\alpha}\) on \(X=\mathbb{T}/\mathbb{Z}\) by \(\alpha\bmod 1\), where \(\alpha\) is irrational. We denote by \(q_{n}\) the denominators of \(\alpha\). 
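The special map \(T_{\varphi}\) and the counts in (38) translate directly into a short computation. The sketch below iterates \(T_{\varphi}\) over a rotation with a bounded toy roof function chosen by us (not the roof function constructed next) and records \(M_{n}\) and \(V_{n}\) for the cocycle generated by \(f=1_{B}\), starting from a point of the basis; it is only meant to make the bookkeeping of Notation 2.1 concrete.

```python
from collections import Counter

ALPHA = (5 ** 0.5 - 1) / 2                # rotation number, an illustrative choice

def roof(y):
    """Toy integer roof function phi >= 1 (not the roof function built below)."""
    return 4 if y < 0.1 else 1

def basis_visit_counts(x, n):
    """Iterate the special map T_phi over S_alpha from (x, 0) in the basis B and
    return M_n, V_n for the cocycle generated by f = 1_B (Notation 2.1)."""
    counts = Counter()
    y, level, s = x, 0, 0
    for _ in range(n):
        s += 1 if level == 0 else 0       # ergodic sum of f = 1_B along the orbit
        counts[s] += 1
        if level < roof(y) - 1:           # climb towards the roof ...
            level += 1
        else:                             # ... or return to the basis and rotate
            y, level = (y + ALPHA) % 1.0, 0
    M_n = max(counts.values())
    V_n = sum(c * c for c in counts.values())
    return M_n, V_n

print(basis_visit_counts(x=0.2024, n=10**5))
```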
We will construct the measure preserving transformation which is the special flow (with discrete time) over \(S_{\alpha}\) with a roof function \(\varphi\) such that, for cocycle generated by \(1_{B}\) in the system \((\tilde{X},\tilde{\mu},T)\), \(\liminf_{n}\frac{V_{n}(T,x)}{M_{n}^{2}(T,x)}=1\). We will use the next lemma with \(p=p_{n},q=q_{n}\), the numerators and denominators of \(\alpha\). **Lemma 2.9**.: _Let \(p,q\geq 1\), \((p,q)=1\), be such that \(|\alpha-p/q|<1/q^{2}\). For every \(x\), there is a value \(0\leq i<q\) such that \(x+i\alpha\bmod 1\in[0,2/q]\)._ _More generally, for every interval \(I\) of length \(2/q\), for every \(x\), there is a value \(0\leq i<q\) such that \(x+i\alpha\bmod 1\in I\)._ Proof.: It is well known that there is exactly one value of \(j\alpha\bmod 1\), for \(0\leq j<q\), in each interval \([\frac{\ell}{q},\frac{\ell+1}{q}[\), \(\ell=0,...,q-1\). Let us recall a proof. For \(j=0\), \(j\alpha\in[0,1/q[\). The map \(j\to\ell_{j}=jp\bmod q\), which is injective, is a permutation of the set \(\{1,...,q-1\}\) onto itself. We have \(\alpha=p/q+\gamma\), with \(|\gamma|<1/q^{2}\). Assuming \(\gamma>0\), it follows: \(j\alpha\bmod 1\in[\frac{\ell_{j}}{q},\frac{\ell_{j}}{q}+\frac{j}{q^{2}}]\subset[ \frac{\ell_{j}}{q},\frac{\ell_{j}+1}{q}[\), for \(j=1,...,q-1\). The case \(\gamma<0\) is treated the same way. Now let us prove the first point. Let \(x\) be in \([0,1[\). There is \(i_{0}\in\{0,...,q-1\}\) such that \(x=\frac{i_{0}}{q}+\theta\), with \(0\leq\theta<1/q\). By the claim, there is \(i\in[0,q[\) such that \(i\alpha\bmod 1\in[\frac{q-i_{0}}{q},\frac{q-i_{0}+1}{q}]\). Hence \(x+i\alpha\bmod 1\in[\theta,\frac{1}{q}+\theta]\subset[0,\frac{2}{q}]\). Let \((\lambda_{n})\) be an increasing sequence of positive integers which will be subjected below to growth conditions. First we assume that it satisfies the condition: \[q_{\lambda_{n+1}}\geq 3q_{\lambda_{n}},\forall n\geq 1. \tag{39}\] Denote by \(J_{n}\) the interval \(J_{n}=[\frac{3}{q_{\lambda_{n+1}}},\frac{3}{q_{\lambda_{n}}}]\). For the roof function, we take, with \(\varepsilon_{n}=\frac{1}{n^{2}}\), \[\varphi=1+\sum_{n\geq 1}[\varepsilon_{n}q_{\lambda_{n}}]1_{J_{n}}.\] The function \(\varphi\) is integrable: \(\int\varphi d\mu\leq 1+3\sum_{n}\varepsilon_{n}\). Observe also that, by (39), the length of \(J_{n}\) is \(>2/q_{\lambda_{n}}\) and that \((\varepsilon_{n}q_{\lambda_{n}})\) is not decreasing for \(n\geq 2\). Let \(x\) be in the basis. By construction, the orbit of \(x\) under the iteration of \(T_{\varphi}\) is that of the rotation \(S_{\alpha}\) until it enters the set \(B^{c}\), complementary of \(B\) at some time. Then it stays in this set, until it reaches the roof and comes down to the basis. Then the dynamic is that of the rotation, until again \(S_{\alpha}^{j}x\) falls in the set \(\varphi>1\) and so on. Let \(W_{n}(x)\) be the first visit of \(S^{j}x\) in \(J_{n}\). By lemma 2.9, we have \(W_{n}(x)\leq q_{\lambda_{n}}\). Now we choose \(f\) to get a transient counterexample and a recurrent one. _Transient counterexample._ We take \(f=1\) on the basis and \(0\) outside. The sequence \((\lambda_{n})\) is taken such that \[q_{\lambda_{n}}\geq n\,(q_{\lambda_{n-1}})^{2},n\geq 1. 
\tag{40}\] By (38), we obtain (recall that now \(T_{B}\), the induced map in the basis \(B\), is the rotation \(S=S_{\alpha}\) and \(R(T_{B}^{j}x)=\varphi(S_{\alpha}^{j}x)\)): \[M_{R_{W_{n}(x)}(x)-1}(T,x) = \max_{j\leq W_{n}(x)}\varphi(S^{j}x), \tag{42}\] \[V_{R_{W_{n}(x)}(x)-1}(T,x) = \sum_{j\leq W_{n}(x)}\varphi^{2}(S^{j}x). \tag{41}\] In the above formula, \(\varphi(S^{j}x)\) is either \(1\) or (for some \(k\leq n-1\)) \(1+\lfloor\varepsilon_{k}q_{\lambda_{k}}\rfloor\leq 1+\varepsilon_{n-1}q_{ \lambda_{n-1}}\), excepted for the last term which is \(1+\lfloor\varepsilon_{n}q_{\lambda_{n}}\rfloor\). The maximum in (41) (given by the first visit to \(J_{n}\)) is \(1+\lfloor\varepsilon_{n}q_{\lambda_{n}}\rfloor\geq\varepsilon_{n}q_{\lambda_{n}}\). As we have seen, this first visit for the iterates \(S^{j}x\) occurs at a time \(\leq q_{\lambda_{n}}\). It follows by (40): \[\frac{V_{R_{W_{n}(x)}(x)-1}(T,x)}{M_{R_{W_{n}(x)}(x)-1}^{2}(T,x)} \leq q_{\lambda_{n}}\frac{(\varepsilon_{n-1}\,q_{\lambda_{n-1}})^{2} }{(\varepsilon_{n}\,q_{\lambda_{n}}-1)^{2}}+1\leq(\frac{\varepsilon_{n-1}}{ \varepsilon_{n}})^{2}\,\frac{(q_{\lambda_{n-1}})^{2}}{q_{\lambda_{n}}}\frac{1} {(1-(\varepsilon_{n}\,q_{\lambda_{n}})^{-1})^{2}}+1\] \[\leq 2\,(\frac{n}{n-1})^{2}\,\frac{(q_{\lambda_{n-1}})^{2}}{q_{ \lambda_{n}}}+1\leq\frac{4}{n}+1,\mbox{ for $n$ big enough}.\] This shows: \(\limsup_{n}\frac{M_{n}^{2}(T,f,x)}{V_{n}(T,f,x)}=1\). The result is proved for \(x\) in the basis \(B\), but is satisfied for a.e. \(x\in\tilde{X}\), since \(\limsup_{n}\frac{M_{n}^{2}(T,f,x)}{V_{n}(T,f,x)}\) is a.e. constant by ergodicity of the special flow and Lemma 2.5. Remark that \(S_{k}f(x)\to+\infty\) for every point \(x\).The sequence \((N_{n}(x,0))\) is bounded for every \(x\), but not uniformly in \(x\). _Recurrent counterexample_. In order to obtain a recurrent counterexample, we now use a special cocycle over a rotation by \(\alpha\) (with \(\alpha\) bpq) studied later (see Subsection 3.3). Let \(f\) defined on the basis by \(f(x)=1_{[0,\frac{1}{2}[}(x)-1_{[\frac{1}{2},1]}(x)\) and \(0\) outside, and \(S_{k}f(x)=\sum_{i=0}^{k-1}f(x+i\alpha\bmod 1)\). By (37), we have \[V_{R_{n-1}(x)}(T,f,x) = \sum_{\ell}[\sum_{k=0}^{n-1}\varphi(x+k\alpha)\,1_{S_{k}f(x)=\ell }]^{2}\] \[= \sum_{\ell}[\sum_{k=0}^{n-1}(1+\sum_{j}\varepsilon_{j}q_{\lambda _{j}}1_{J_{j}}(x+k\alpha))\,1_{S_{k}f(x)=\ell}]^{2}.\] Observe that for a constant \(C\), \(1+\sum_{j<n}\varepsilon_{j}q_{\lambda_{j}}1_{J_{j}}(x+k\alpha))\leq Cq_{ \lambda_{n-1}}\). Using the bounds for the special function \(f\) and \(\alpha\) bpq, this implies: \[V_{R_{W_{n-1}(x)}}(T,f,x) \leq \sum_{\ell}[\sum_{k=0}^{q_{\lambda_{n}}}(1+\sum_{j<n}\varepsilon_ {j}q_{\lambda_{j}}1_{J_{j}}(x+k\alpha))\,1_{S_{k}f(x)=\ell}]^{2}\] \[\leq C^{2}\sum_{\ell}[\sum_{k=0}^{q_{\lambda_{n}}}q_{\lambda_{n-1}}\,1 _{S_{k}f(x)=\ell}]^{2}\] \[\leq C^{2}q_{\lambda_{n-1}}^{2}\,\sum_{\ell}[\sum_{k=0}^{q_{\lambda_{ n}}}\,1_{S_{k}f(x)=\ell}]^{2}\leq C^{2}q_{\lambda_{n-1}}^{2}\,q_{\lambda_{n}}^{2}/ \sqrt{\log q_{\lambda_{n}}}.\] Put \(L_{n}=S_{W_{n}(x)}f(x)\) for the site visited by the cocycle when \(S^{j}x\) enters \(J_{n}\). 
We have \[M_{R_{W_{n}(x)}}(T,f,x)\geq N_{R_{W_{n}(x)}}(T,f,x,L_{n}(x))=\varepsilon_{n}q_{\lambda_{n}}.\] Hence: \[0\leq\frac{V_{R_{W_{n}(x)}}(T,f,x)}{M_{R_{W_{n}(x)}}^{2}(T,f,x)}-1\leq C^{2}\frac{q_{\lambda_{n-1}}^{2}\,q_{\lambda_{n}}^{2}}{\sqrt{\log q_{\lambda_{n}}}}\frac{1}{\varepsilon_{n}^{2}q_{\lambda_{n}}^{2}}=C^{2}\frac{n^{4}q_{\lambda_{n-1}}^{2}}{\sqrt{\log q_{\lambda_{n}}}}.\] Now, we choose a growth condition on \((\lambda_{n})\) stronger than (40), such that the above bound tends to \(0\). This shows the result for \(x\) in the basis, hence on the whole space using again Lemma 2.5. ## 3. **Examples** In general, for a dynamical system \((X,\mu,T)\) and a cocycle \((T,f)\), it seems difficult to get a precise estimate of the quantities \(N_{n}(x,\underline{\ell}),M_{n}(x),V_{n}(x)\). In this section we present two types of cocycles for which this is possible, first in the case of strong stochastic properties, in particular for the classical case of random walks, then when they are generated by step functions over rotations. ### Random walks _1-dimensional cocycle satisfying the LIL._ We start with a remark on the law of iterated logarithm (LIL). Suppose that \((T,f)\) is a 1-dimensional cocycle which satisfies the LIL. Then for a constant \(c_{1}>0\), for a.e. \(x\), the inequality \(|f_{n}(x)|>c_{1}\,(n\,\ln\ln n)^{\frac{1}{2}}\) is satisfied only for finitely many values of \(n\). This implies that, for a.e. \(x\), there is \(N(x)\) such that \(|f_{n}(x)|\leq(c_{1}\,n\,\ln\ln n)^{\frac{1}{2}}\), for \(n\geq N(x)\); so that, for \(N(x)\leq k<n\), \(|f_{k}(x)|\leq(c_{1}\,k\,\ln\ln k)^{\frac{1}{2}}\leq(c_{1}\,n\,\ln\ln n)^{\frac{1}{2}}\). Therefore we have \(\mbox{Card}(\mathcal{R}_{n}(x))\leq c_{2}(x)\,(n\,\ln\ln n)^{\frac{1}{2}}\), with an a.e. finite constant \(c_{2}(x)\). In dimension 1, by (25), we get that for a.e. \(x\) there is \(C(x)>0\) such that \[V_{n}(x)\geq C(x)\,n^{\frac{3}{2}}\,(\ln\ln n)^{-\frac{1}{2}}.\] The case where a LIL is valid includes the case of a 1-dimensional centered r.w. with finite variance, but also the class of cocycles for which a martingale method can be used. _Random walks._ Now we consider sequences given by a random walk. For random walks in \(\mathbb{Z}^{d}\), the quantities \(V_{n}(x),M_{n}(x)\) have been studied in many papers since the 50's. \(M_{n}(x)\) is called "maximal multiplicity of points on a random walk" by Erdos and Taylor [16]. Below, we give a brief survey of several results for r.w.s. First we recall some definitions. Let \((\zeta_{i})_{i\geq 0}\) be a sequence of i.i.d. random vectors on a probability space \((X,\,\mu)\) with values in \(\mathbb{Z}^{d}\) and common probability distribution \(\nu\). The associated _random walk_ (r.w.) \(Z=(Z_{n})\) in \(\mathbb{Z}^{d}\) starting from \(\underline{0}\) is defined by \(Z_{0}:=\underline{0}\), \[Z_{n}:=\zeta_{0}+...+\zeta_{n-1},\ n\geq 1.\] A r.w. can be seen as a special case of a cocycle. Indeed, the r.v.'s \(\zeta_{i}\) can be viewed as the coordinate maps on \((X,\,\mu)\) obtained as \((\mathbb{Z}^{d})^{\mathbb{Z}}\) equipped with the product measure \(\nu^{\otimes\mathbb{Z}}\) and with the shift \(T\) acting on the coordinates. We have \(\zeta_{i}=\zeta_{0}\circ T^{i}\) and the cocycle relation \(Z_{n+n^{\prime}}=Z_{n}+Z_{n^{\prime}}\circ T^{n},\forall n,n^{\prime}\geq 0\). Let \(\mathcal{S}:=\{\underline{\ell}\in\mathbb{Z}^{d}:\mathbb{P}(\zeta_{0}=\underline{\ell})>0\}\) be the support of \(\nu\) and \(L\) the sub-lattice of \(\mathbb{Z}^{d}\) generated by \(\mathcal{S}\).
Let \(D\) be the sub-lattice of \(\mathbb{Z}^{d}\) generated by \(\{\underline{\ell}-\underline{\ell}^{\prime}:\ \underline{\ell},\underline{\ell}^{\prime}\in\mathcal{S}\}\). For simplicity (and without loss of generality) in what follows we will assume that the random walk \(Z\) is _aperiodic_ \((L=\mathbb{Z}^{d})\). We also exclude the "deterministic" case (i.e., when \(\mathbb{P}(\zeta_{0}=\underline{\ell})=1\) for some \(\underline{\ell}\in\mathbb{Z}^{d}\)) in dimension \(1\) (the deterministic case in higher dimension is excluded by aperiodicity). Notice that all the pointwise limits or bounds mentioned now for random walks are _a.s._ statements. These bounds will show that conditions (22), (21) are satisfied by \(V_{n}(x),M_{n}(x)\) a.s. for random walks under mild assumptions. _Recurrence/transience_. Recall that a r.w. \(Z=(Z_{n})\) is recurrent if \(\sum_{n=1}^{\infty}\mu(Z_{n}=\underline{0})=+\infty\) and otherwise transient. Recurrence occurs if and only if \(\mu(Z_{n}=\underline{0}\) infinitely often\()=1\), and transience if and only if \(\mu(Z_{n}=\underline{0}\) infinitely often\()=0\) (cf. [8], [9]). For an aperiodic r.w. \(Z\) in dimension \(d\) with a moment of order \(2\) (for \(d=1\), a moment of order \(1\) suffices), for \(d=1,2\), \(Z\) is recurrent if and only if it is centered. For \(d\geq 3\), it is always transient. _Variance._ Let \((X_{\underline{\ell}},\underline{\ell}\in\mathbb{Z}^{d})\) be a stationary centered r.f. with summable correlation and spectral density \(\rho\). We have \[\frac{1}{n}\|\sum_{k=0}^{n-1}X_{Z_{k}(x)}\|_{2}^{2}=\int_{\mathbb{T}^{d}}\frac{1}{n}|\sum_{k=0}^{n-1}e^{2\pi i\langle Z_{k}(x),\underline{t}\rangle}|^{2}\,\rho(\underline{t})\,d\underline{t}=\int_{\mathbb{T}^{d}}\frac{1}{n}K_{n}(x,\underline{t})\,\rho(\underline{t})\,d\underline{t},\] where, using (17) with \(z_{k}=Z_{k}(x)\) and \(Z_{k}(x)-Z_{j}(x)=Z_{k}(T^{j}x)\), \(\frac{1}{n}K_{n}\) reads \[\frac{1}{n}K_{n}(x,\underline{t})=1+2\sum_{\underline{\ell}}\bigl{(}\sum_{k=1}^{n-1}\,\frac{1}{n}\sum_{j=0}^{n-k-1}1_{Z_{k}(T^{j}x)=\underline{\ell}}\bigr{)}\,\cos(2\pi\langle\underline{\ell},\underline{t}\rangle). \tag{43}\] As already recalled, the existence of the asymptotic variance \(\lim_{n}V_{n}(x)^{-1}\int|\sum_{k=0}^{n-1}X_{Z_{k}(x)}|^{2}d\mu\) has been shown in [11] and the positivity of the limit has been discussed. The asymptotic variance may be zero in case a coboundary condition is satisfied. An interesting situation is that of the sums along a transient (non deterministic) r.w., where the asymptotic variance is always \(>0\). Below we will briefly recall a proof. ### Transient case For a transient random walk we use the following general result (Lemma 3.14 in [11]): **Lemma 3.1**.: _If \((X,\mu,T)\) is an ergodic dynamical system and \((\varphi_{k})_{k\geq 1}\) a sequence of functions in \(L^{1}(X,\mu)\) such that \(\sum_{k\geq 1}\|\varphi_{k}\|_{1}<\infty\), then_ \[\lim_{n}\frac{1}{n}\sum_{k=1}^{n-1}\sum_{j=0}^{n-k-1}\varphi_{k}(T^{j}x)=\sum_{k=1}^{\infty}\int\varphi_{k}\,d\mu,\mbox{ for a.e. }x. \tag{44}\] Therefore, for a transient random walk, we obtain for \(\mu\)-a.e. \(x\): \[\lim_{n}\frac{V_{n}(x)}{n}=1+2\lim_{n}\sum_{k=1}^{n-1}\,\frac{1}{n}\sum_{j=0}^{n-k-1}1_{Z_{k}(T^{j}x)=\underline{0}}=1+2\sum_{k=1}^{\infty}\mu(Z_{k}=\underline{0})<+\infty\] and the normalisation for the variance is by \(n\) up to a finite constant factor. _Variance in the non deterministic transient case_. Now we recall the proof of the positivity of the asymptotic variance.
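(Before the proof, a small numerical aside: the limit \(\lim_{n}V_{n}(x)/n=1+2\sum_{k\geq 1}\mu(Z_{k}=\underline{0})\) obtained above is easy to observe by simulation. The sketch below uses the simple symmetric random walk on \(\mathbb{Z}^{3}\), an illustrative choice, and compares \(V_{n}(x)/n\) along one trajectory with a Monte Carlo estimate of \(1+2\,\mathbb{E}[\#\{1\leq k\leq n:Z_{k}=\underline{0}\}]\).)

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)
STEPS = np.vstack([np.eye(3, dtype=int), -np.eye(3, dtype=int)])   # +/- e_1, e_2, e_3

def walk(n):
    """Positions Z_1, ..., Z_n of the simple symmetric random walk on Z^3."""
    return np.cumsum(STEPS[rng.integers(0, 6, size=n)], axis=0)

n = 10**5
Z = walk(n)
V_n = sum(c * c for c in Counter(map(tuple, Z)).values())   # self-intersections of one path
print("V_n / n along one trajectory  :", V_n / n)

# Monte Carlo estimate of 1 + 2 * sum_{k<=n} P(Z_k = 0) via the mean number of returns
trials = 200
mean_returns = np.mean([np.count_nonzero(~walk(n).any(axis=1)) for _ in range(trials)])
print("1 + 2 * mean number of returns:", 1 + 2 * mean_returns)
```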
Let \(\Psi(\underline{t})=\mathbb{E}[e^{2\pi i\langle\zeta_{0},\underline{t}\rangle }]\), \(\underline{t}\in\mathbb{T}^{d}\). Observe that \(\Psi(\underline{t})\neq 1\) for \(\underline{t}\neq\underline{0}\) in \(\mathbb{T}^{d}\), when the r.w. is aperiodic and \(|\Psi(\underline{t})|<1\), for \(\underline{t}\not\in\Gamma_{1}\), where \(\Gamma_{1}\) is the closed subgroup \(\{\underline{t}\in\mathbb{T}^{d}:e^{2\pi i\langle\underline{r},\underline{t} \rangle}=1,\forall\underline{r}\ \in D\}\). We put, for \(\underline{t}\in\mathbb{T}^{d}\backslash\{\underline{0}\}\) and \(0\leq\lambda<1\), \[\Phi(\underline{t}):=\frac{1-|\Psi(\underline{t})|^{2}}{|1-\Psi( \underline{t})|^{2}}=\Re e[\frac{1+\Psi(\underline{t})}{1-\Psi(\underline{t})}],\] \[\Phi_{\lambda}(\underline{t}):=\frac{1-\lambda^{2}|\Psi( \underline{t})|^{2}}{|1-\lambda\Psi(\underline{t})|^{2}}=-1+2\sum_{k=0}^{ \infty}\lambda^{k}\Re e(\Psi(\underline{t})^{k})=-1+2\sum_{k=0}^{\infty} \lambda^{k}\mu(Z_{k}=\underline{\ell})\,\cos(2\pi\langle\underline{\ell}, \underline{t}\rangle),\] where the last relation follows from \(\Re e(\Psi(\underline{t})^{k})=\Re e(\mathbb{E}[e^{2\pi i\langle Z_{k}, \underline{t}\rangle}])=\sum_{\underline{\ell}}\mu(Z_{k}=\underline{\ell})\, \cos(2\pi\langle\underline{\ell},\underline{t}\rangle)\). We put \(\Phi(\underline{0})=0\).The function \(\Phi\) is even, non-negative and \(\Phi(\underline{t})=0\) only on \(\Gamma_{1}\), which is \(\neq\mathbb{T}^{d}\) when the r.w. is non deterministic (if the r.w. is deterministic, \(\mu(\zeta_{0}=\underline{\ell})=1\) for some \(\underline{\ell}\in\mathbb{Z}^{d}\) and this implies \(|\Psi(\underline{t})|\equiv 1\), but this case is excluded). Therefore \(\Phi\) is \(\neq 0\) a.e. for the Lebesgue measure on \(\mathbb{T}^{d}\). **Proposition 3.2**.: _(cf. [28]) Let \(Z=(Z_{n})\) be a transient aperiodic random walk in \(\mathbb{Z}^{d}\). There is a non-negative constant \(M\) such that the Fourier coefficients of \(\frac{1}{n}K_{n}\) converges to those of \(\Phi+M\delta_{\underline{0}}\) and \(\lim_{n}\int\frac{1}{n}K_{n}\,\rho\,d\underline{t}>0\)._ _Proof_. We use that, if \((Z_{n})\) is a transient, for all \(\underline{\ell}\in\mathbb{Z}^{d}\), we have \(\sum_{k=1}^{\infty}\,\mu(Z_{k}=\underline{\ell})<+\infty\). Therefore, the series \(I(\underline{\ell}):=-1_{\underline{\ell}=\underline{0}}+\sum_{k=0}^{\infty} \,[\mu(Z_{k}=\underline{\ell})+\mu(Z_{k}=-\underline{\ell})]\) converges and by (43) and Lemma 3.1, the even functions \(\frac{1}{n}K_{n}(x,.)\) satisfy: \[\int_{\mathbb{T}^{d}}\frac{1}{n}K_{n}(x,.)\,\cos 2\pi\langle\underline{\ell},. \rangle\,d\underline{t}\ \ \ \ =-1_{\underline{\ell}=\underline{0}}+\sum_{k=0}^{n-1}\,\frac{1}{n}\sum_{j=0}^{n-k- 1}[1_{Z_{k}(T^{j}x)=\underline{\ell}}+1_{Z_{k}(T^{j}x)=-\underline{\ell}}] \underset{n\to\infty}{\rightarrow}I(\underline{\ell}).\] Note that above the sum over \(k\) is written starting from \(0\). By letting \(n\) tend to infinity in the relation \[-1_{\underline{\ell}=\underline{0}}+\sum_{k=0}^{\infty}\,\lambda^{k }[\mu(Z_{k}=\underline{\ell})+\mu(Z_{k}=-\underline{\ell})]\] \[= \int_{\mathbb{T}^{d}}\cos 2\pi\langle\underline{\ell},.\rangle \,[-1+2\Re e(\frac{1}{1-\lambda\Psi(.)})]\,d\underline{t}\,=\int_{\mathbb{T} ^{d}}\cos 2\pi\langle\underline{\ell},\underline{t}\rangle\,\Phi_{\lambda}(.) 
\,d\underline{t},\] we get since the left sum tends to \(I(\underline{\ell})\): \[I(\underline{\ell})=\lim_{\lambda\uparrow 1}\int_{\mathbb{T}^{d}}\cos 2\pi \langle\underline{\ell},\underline{t}\rangle\,\Phi_{\lambda}(\underline{t}) \,d\underline{t}.\] Taking \(\underline{\ell}=\underline{0}\) in the previous formula, it follows from Fatou's lemma: \[I(\underline{0})=1+2\sum_{k=1}^{\infty}\,\mu(Z_{k}=\underline{0})=\lim_{ \lambda\uparrow 1}\int_{\mathbb{T}^{d}}\,\Phi_{\lambda}(\underline{t})\,d \underline{t}\geq\int_{\mathbb{T}^{d}}\lim_{\lambda\uparrow 1}\Phi_{ \lambda}(\underline{t})\,d\underline{t}=\int_{\mathbb{T}^{d}}\Phi(\underline{ t})\,d\underline{t}.\] This shows the integrability of \(\Phi\) on \(\mathbb{T}^{d}\) and we can write with a constant \(M\geq 0\) \[I(\underline{0})=\lim_{\lambda\uparrow 1}\int_{\mathbb{T}^{d}}\,\Phi_{ \lambda}(\underline{t})\,d\underline{t}=\int_{\mathbb{T}^{d}}\lim_{\lambda \uparrow 1}\Phi_{\lambda}(\underline{t})\,d\underline{t}+M=\int_{\mathbb{T}^{d }}\Phi(\underline{t})\,d\underline{t}+M.\] Let \(U_{\eta}\) be the ball of radius \(\eta>0\) centered at \(\underline{0}\). By aperiodicity of the r.w., \(\Psi(\underline{t})\neq 1\) for \(\underline{t}\) in \(U_{\eta}^{c}\), the complementary in \(\mathbb{T}^{d}\) of \(U_{\eta}\), This implies \(\sup_{\underline{t}\in U_{\eta}^{c}}\sup_{\lambda<1}\Phi_{\lambda}(\underline {t})<+\infty\). Therefore, we get: \(\lim_{\lambda\uparrow 1}\int_{U_{\eta}^{c}}\cos 2\pi\langle \underline{\ell},\underline{t}\rangle\,\Phi_{\lambda}(\underline{t})\,d \underline{t}=\int_{U_{\eta}^{c}}\cos 2\pi\langle\underline{\ell},\underline{t} \rangle\,\Phi(\underline{t})\,d\underline{t}\), hence: \[I(\underline{\ell})=\int_{U_{\eta}^{c}}\cos 2\pi\langle\underline{\ell}, \underline{t}\rangle\,\Phi(\underline{t})\,d\underline{t}+\lim_{\lambda \uparrow 1}\int_{U_{\eta}}\cos 2\pi\langle\underline{\ell},\underline{t} \rangle\,\Phi_{\lambda}(\underline{t})\,d\underline{t},\,\forall\eta>0,\] which can be be written: \[-\int_{U_{\eta}}\cos 2\pi\langle\underline{\ell},.\rangle\,\Phi\,d \underline{t}=I(\underline{\ell})-\int_{\mathbb{T}^{d}}\cos 2\pi\langle \underline{\ell},.\rangle\,\Phi\,d\underline{t}-\lim_{\lambda\uparrow 1} \int_{U_{\eta}}\cos 2\pi\langle\underline{\ell},.\rangle\,\Phi_{\lambda}\,d \underline{t}. \tag{45}\] Let \(\varepsilon>0\). By positivity of \(\Phi_{\lambda}\), we have, for \(\eta(\varepsilon)\) small enough: \[(1-\varepsilon)\int_{U_{\eta(\varepsilon)}}\,\Phi_{\lambda}\,d \underline{t}\leq\int_{U_{\eta(\varepsilon)}}\cos 2\pi\langle\underline{\ell},. \rangle\,\Phi_{\lambda}\,d\underline{t}\leq(1+\varepsilon)\int_{U_{\eta( \varepsilon)}}\,\Phi_{\lambda}\,d\underline{t};\] By subtracting \(\int_{U_{\eta}(\varepsilon)}\cos 2\pi\langle\underline{\ell},\underline{t} \rangle\,\Phi(\underline{t})\,d\underline{t}\) in the previous inequalities and (45), we get: \[(1-\varepsilon)\int_{U_{\eta(\varepsilon)}}\Phi_{\lambda}\,d \underline{t}-\int_{U_{\eta(\varepsilon)}}\cos 2\pi\langle\underline{\ell},. \rangle\,\Phi\,d\underline{t}\] \[\leq I(\underline{\ell})-\int_{\mathbb{T}^{d}}\cos 2\pi \langle\underline{\ell},.\rangle\,\Phi\,d\underline{t}-\lim_{\lambda\uparrow 1} \int_{U_{\eta}(\varepsilon)}\cos 2\pi\langle\underline{\ell},.\rangle\,\Phi_{ \lambda}\,d\underline{t}+\int_{U_{\eta(\varepsilon)}}\cos 2\pi\langle\underline{\ell},. \rangle\,\Phi_{\lambda}\,d\underline{t}\] \[\leq(1+\varepsilon)\int_{U_{\eta(\varepsilon)}}\,\Phi_{\lambda}\,d \underline{t}-\int_{U_{\eta(\varepsilon)}}\cos 2\pi\langle\underline{\ell},. 
\rangle\,\Phi\,d\underline{t};\] As we can chose \(\lambda\) such that \[|-\lim_{\lambda\uparrow 1}\int_{U_{\eta}(\varepsilon)}\cos 2\pi\langle\underline{ \ell},.\rangle\,\Phi_{\lambda}\,d\underline{t}+\int_{U_{\eta(\varepsilon)}}\cos 2\pi \langle\underline{\ell},.\rangle\,\Phi_{\lambda}\,d\underline{t}|\leq\varepsilon,\] we obtain: \[-\varepsilon+(1-\varepsilon)\int_{U_{\eta(\varepsilon)}}\Phi_{ \lambda}\,d\underline{t}-\int_{U_{\eta(\varepsilon)}}\cos 2\pi\langle \underline{\ell},.\rangle\,\Phi\,d\underline{t}\] \[\leq I(\underline{\ell})-\int_{\mathbb{T}^{d}}\cos 2\pi\langle \underline{\ell},.\rangle\,\Phi\,d\underline{t}\leq\varepsilon+(1+\varepsilon) \int_{U_{\eta(\varepsilon)}}\Phi_{\lambda}\,d\underline{t}-\int_{U_{\eta( \varepsilon)}}\cos 2\pi\langle\underline{\ell},.\rangle\,\Phi\,d\underline{t}\] For \(\varepsilon\) small enough, \(\int_{U_{\eta(\varepsilon)}}\cos 2\pi\langle\underline{\ell},.\rangle\,\Phi\,d \underline{t}\) can be made arbitrary small, as well as \(\varepsilon\sup_{\lambda<1}\int_{U_{\eta}}\Phi_{\lambda}\,d\underline{t}\), since \(\Phi\) is integrable and \(\sup_{\lambda<1}\int_{\mathbb{T}^{d}}\Phi_{\lambda}\,d\underline{t}<\infty\). This shows that \(I(\underline{\ell})-\int_{\mathbb{T}^{d}}\cos 2\pi\langle\underline{\ell},. \rangle\,\Phi\,d\underline{t}-\int_{U_{\eta(\varepsilon)}}\Phi_{\lambda}\,d \underline{t}\) can be made arbitrarily small for \(\varepsilon>0\) small and \(\lambda\) close to \(1\). The same is true for \(\underline{\ell}=0\) and also for the difference \[[I(\underline{\ell})-\int_{\mathbb{T}^{d}}\cos 2\pi\langle\underline{\ell},. \rangle\,\Phi\,d\underline{t}-\int_{U_{\eta(\varepsilon)}}\Phi_{\lambda}\,d \underline{t}]-[I(\underline{0})-\int_{\mathbb{T}^{d}}\Phi\,d\underline{t}- \int_{U_{\eta(\varepsilon)}}\Phi_{\lambda}\,d\underline{t}]\] \[=[I(\underline{\ell})-\int_{\mathbb{T}^{d}}\cos 2\pi\langle \underline{\ell},.\rangle\,\Phi\,d\underline{t}]-[I(\underline{0})-\int_{ \mathbb{T}^{d}}\Phi\,d\underline{t}]=[I(\underline{\ell})-\int_{\mathbb{T}^{d} }\cos 2\pi\langle\underline{\ell},.\rangle\,\Phi\,d\underline{t}]-M].\] Therefore \(I(\underline{\ell})=\int_{\mathbb{T}^{d}}\cos 2\pi\langle\underline{\ell}, \underline{t}\rangle\,\Phi(\underline{t})\,d\underline{t}+M\) for all \(\underline{\ell}\) and the Fourier coefficients of \(\frac{1}{n}K_{n}\) converges to those of \(\Phi+M\delta_{\underline{0}}\). As the non-negative sequence \((\frac{1}{n}K_{n})\) is bounded in \(L^{1}\)-norm and the density \(\rho\) is continuous, this proves \(\int\frac{1}{n}K_{n}\rho\,d\underline{t}\to\int\Phi\rho\,d \underline{t}+M\rho(\underline{0})\). Moreover, the limit is \(>0\) since both \(\Phi\) and \(\rho\) are not \(0\) a.e. It is shown in [28] that \(M=0\) for \(d>1\). _Behaviour of \(M_{n}(x)\)._ In the transient case (\(d\geq 3\)) (at least for a simple r.w.), Erdos and Taylor (1960) proved that for a constant \(\gamma>0\) depending on the dimension, \[\lim_{n}\frac{M_{n}(x)}{\log n}=\gamma.\] **Recurrent case** In dimension \(1\), H. Kesten has shown that \(\limsup_{n}\frac{M_{n}}{\sqrt{n\,\ln\ln n}}=\sqrt{2}/\sigma\). Therefore in dimension \(1\), we have the following lower and upper bounds for \(V_{n}\): \[C_{1}(x)\,n^{\frac{3}{2}}\,(\ln\ln\,n)^{-\frac{1}{2}}\leq V_{n}(x)\leq C_{2}( x)\,n^{\frac{3}{2}}(\ln\ln n)^{\frac{1}{2}}.\] _Dimension \(d=2\)._ There is a deterministic rate (law of large numbers): for a constant \(C_{0}\). \[\frac{\int V_{n}\,d\mu}{n\log n}\to C_{0}\text{ and }\frac{V_{n}(x)}{n\log n }\to C_{0},\text{ for a.e. 
}x.\] For a planar simple random walk, Erdos and Taylor [16] have shown: \[\limsup_{n}\,\frac{M_{n}(x)}{(\log n)^{2}}\leq\frac{1}{\pi}. \tag{46}\] The result has been extended by Dembo, Peres, Rosen and Zeitouni [15], who proved for an aperiodic centered random walk on \(\mathbb{Z}^{2}\) with moments of all orders: \[\lim_{n}\frac{M_{n}(x)}{(\log n)^{2}}=\frac{1}{2\pi\det(\Gamma)^{\frac{1}{2}}},\] where \(\Gamma\) is the covariance matrix associated to the random walk. As shown in the proof in [15], it suffices to suppose that the 2-dimensional r.w. is aperiodic. Moreover, the proof for the upper bound is based on the local limit theorem which uses only the existence of the moment of order 2. Therefore, assuming the existence of the moment of order 2, the upper bound (46) holds. It follows in this case: there exist \(C(x)\) a.e finite such that: \[\frac{M_{n}^{2}(x)}{V_{n}(x)}\leq C(x)\frac{(\log n)^{3}}{n}.\] ### Extensions of the r.w. case _1) Consequence of the Local Limit Theorem (LLT)._ The Local Limit Theorem, when it is satisfied by the cocycle \((T,f)\), gives some pointwise information on \(V_{n}(x)\). For example, if \(d=2\), the following lemma holds: **Lemma 3.3**.: _Suppose that the LLT holds and \(d=2\). Then, for every \(\varepsilon>0\), there is an integrable function \(C\) (depending on \(\varepsilon\)), such that:_ \[N_{n}(x,0)\leq C(x)(\ln n)^{2+\varepsilon},\ V_{n}(x)\leq C(x)\,n\,(\log n)^{2 +\varepsilon}. \tag{47}\] Proof.: By the LLT, it holds, for \(n\geq 1\), \[\int N_{n}(.,0)d\mu=\sum_{k=1}^{n}\int 1_{f_{k}=0}\,d\mu\leq C\sum_{k=1}^{n} \frac{1}{k}\leq C\ln n.\] Let \(\varepsilon\) be a positive constant. Putting \(\Gamma(x)=\sum_{n=1}^{\infty}n^{-(2+\varepsilon)}N_{2^{n}}(x,0)\), we have: \[\int\Gamma(x)\,d\mu(x)\leq C\sum_{n=1}^{\infty}n^{-(2+\varepsilon)}n=C\sum_{n =1}^{\infty}n^{-(1+\varepsilon)}<+\infty,\] so that \(N_{2^{n}}(x,0)\leq\Gamma(x)\,n^{2+\varepsilon}\), where \(\Gamma\) is integrable. If \(2^{k_{n}}\leq n<2^{(k_{n}+1)}\), then with \(p=2+\varepsilon\), we have for \(n\) big enough, \[\frac{N_{n}(x,0)}{(\log_{2}n)^{p}}\leq\frac{N_{2^{(k_{n}+1)}}(x,0)}{k_{n}^{p} }\leq\frac{(k_{n}+1)^{p}}{k_{n}^{p}}\frac{N_{2^{(k_{n}+1)}}(x,0)}{(k_{n}+1)^{p }}=(1+1/k_{n})^{p}\,\Gamma(x)\leq 2\Gamma(x).\] For \(V_{n}(x)\), by (27) we have: \[\int V_{n}(x)\,d\mu(x)=2\sum_{k=1}^{n-1}\int N_{n-k}(x,0)\,d\mu(x)+n=O(n\log n).\] As above for \(N_{n}(x,0)\), the pointwise bound (47) follows. Among example of cocycles satisfying a LLT, there are the r.w.'s (but with more precise results as recalled above), but also cocycles generated by functions with values in \(\mathbb{Z}^{d}\) depending on a finite number of coordinates over a sub-shift of finite type endowed with a Gibbs measure ([19]), ([18]). _2) Functions depending on a finite number of coordinates on a Bernoulli scheme._ Now we try to bound \(M_{n}(x)\) in situation which extends slightly that of random walks. Suppose that \((X,\mu,T)\) is a Bernoulli scheme with \(X=I^{\mathbb{N}}\), where \(I\) is a finite set. Let \(f:x\to f(x_{1},...,x_{r})\) be a centered function from \(X\) to \(\mathbb{Z}^{d}\), \(d\geq 1\), depending on a finite number of coordinates. Let us consider the generalized random walk \((Z_{n})\) defined by the sequence of ergodic sums \(Z_{n}(x)=f_{n}(x)=X_{0}(x)+...+X_{n-1}(x)\), where \(X_{k}(x)=f(T^{k}x)\). 
**Lemma 3.4**.: _For all \(m\geq 1\) and for constants \(C_{m},C^{\prime}_{m}\) independent of \(\underline{\ell}\), we have:_ \[\int N_{n}^{m}(.,\underline{\ell})\,d\mu=\int[\sum_{k=1}^{n}1_{f_{k}=\underline{\ell}}]^{m}\,d\mu \leq C_{m}n^{m/2},\ \text{\rm for}\ d=1,\] \[\leq C^{\prime}_{m}(\operatorname{Log}n)^{m},\ \text{\rm for}\ d=2.\] _Proof_. We bound the sum \(\sum_{1\leq k_{1}<k_{2}<...<k_{m}\leq n}\mu(f_{k_{1}}=\underline{\ell},f_{k_{2}}=\underline{\ell},...,f_{k_{m}}=\underline{\ell})\). For \(r\leq k_{1}<k_{2}<...<k_{m}\leq n\), writing \(X_{k}\) instead of \(T^{k}f\), we have: \[\mu(X_{0}+...+X_{k_{1}-1}=\underline{\ell},X_{k_{1}}+...+X_{k_{2}-1}=\underline{0},...,X_{k_{m-1}}+...+X_{k_{m}-1}=\underline{0})\] \[=\sum_{a_{1,1},...,a_{r,1},...,a_{1,m},...,a_{r,m}\in S}\mu[\] \[X_{0}+...+X_{k_{1}-r}=\underline{\ell}-(a_{1,1}+...+a_{r,1}),X_{k_{1}-r+1}=a_{1,1},...,X_{k_{1}-1}=a_{r,1},\] \[X_{k_{1}}+...+X_{k_{2}-r}=-(a_{1,2}+...+a_{r,2}),X_{k_{2}-r+1}=a_{1,2},...,X_{k_{2}-1}=a_{r,2},...\] \[X_{k_{m-1}}+...+X_{k_{m}-r}=-(a_{1,m}+...+a_{r,m}),X_{k_{m}-r+1}=a_{1,m},...,X_{k_{m}-1}=a_{r,m}];\] which is less than: \[\sum_{a_{1,1},...,a_{r,1},...,a_{1,m},...,a_{r,m}\in S}\ \mu[X_{0}+...+X_{k_{1}-r}=\underline{\ell}-(a_{1,1}+...+a_{r,1}),\] \[X_{k_{1}}+...+X_{k_{2}-r}=-(a_{1,2}+...+a_{r,2}),...,\ X_{k_{m-1}}+...+X_{k_{m}-r}=-(a_{1,m}+...+a_{r,m})].\] Now inside \([.]\) the events are independent. By independence and stationarity, the preceding sum is \[\sum_{a_{1,1},...,a_{r,1},...,a_{1,m},...,a_{r,m}\in S} \mu[X_{0}+...+X_{k_{1}-r}=\underline{\ell}-(a_{1,1}+...+a_{r,1})]\] \[\mu[X_{0}+...+X_{k_{2}-k_{1}-r}=-(a_{1,2}+...+a_{r,2})]\;...\] \[\mu[\;X_{0}+...+X_{k_{m}-k_{m-1}-r}=-(a_{1,m}+...+a_{r,m})].\] With \(\tau_{n}(k)=\sup_{\underline{\ell}}\mu(X_{0}+...+X_{k-1}=\underline{\ell})\,1_{[0,n]}(k)\), we get the following bound \[\sum_{1\leq k_{1}<k_{2}<...<k_{m}\leq n}\mu(f_{k_{1}}=\underline{\ell},f_{k_{2}}=\underline{\ell},...,f_{k_{m}}=\underline{\ell})\] \[=\sum_{r\leq k_{1}<k_{2}<...<k_{m}\leq n}\mu(X_{0}+...+X_{k_{1}-1}=\underline{\ell},X_{k_{1}}+...+X_{k_{2}-1}=\underline{0},...,X_{k_{m-1}}+...+X_{k_{m}-1}=\underline{0})\] \[\leq s^{rm}\sum_{r\leq k_{1}<k_{2}<...<k_{m}\leq n}\tau_{n}(k_{1}-r)\,\tau_{n}(k_{2}-k_{1}-r)\,...\,\tau_{n}(k_{m}-k_{m-1}-r)\] \[\leq Cs^{rm}\sum_{k}(\tau_{n}*\tau_{n}*...*\tau_{n})(k)\leq Cs^{rm}(\sum_{k}\tau_{n}(k))^{m}.\] We have \(s^{rm}\) terms in the first sum, where \(s\) denotes the cardinality \(|S|\). Now we can use, as for the usual r.w., convolution and the local limit theorem. From the lemma, it follows easily in the recurrent case that for a.e. \(x\), for all \(\varepsilon>0\), \[\text{if }d=1,M_{n}(x)=o(n^{\frac{1}{2}+\varepsilon})\text{ and, if }d=2,M_{n}(x)=o(n^{\varepsilon}).\] In the transient case, if there is a moment of order \(\eta\) for some \(\eta>0\), then \(M_{n}(x)=o(n^{\varepsilon})\) for all \(\varepsilon>0\). For these estimates, in both cases, see [11]. A natural question is whether the previous results extend to a larger class of functions depending weakly on the far coordinates. A difficulty for such an extension is that explicit bounds in the LLT are not always available. ### Step functions over rotations Now we take \(X=\mathbb{T}^{r}\), \(r\geq 1\), endowed with \(\mu\), the uniform measure, and we consider cocycles over rotations. When they are centered, such cocycles are strongly recurrent and therefore the associated quantities \(V_{n}\) and \(M_{n}\) are big. The difficult part is to bound them from above.
We will give an example where an upper bound can be obtained. Let \(T_{\alpha}\) be the rotation by an irrational \(\alpha\). For \(f:X\to\mathbb{Z}^{d}\), recall that the cylinder map (cf. Subsection 2.1) is \(\tilde{T}_{f,\alpha}=\tilde{T}_{\alpha}:X\times\mathbb{Z}^{d}\to X\times\mathbb{Z}^{d}\) defined by \(\tilde{T}_{\alpha}(x,\underline{\ell})=(x+\alpha,\underline{\ell}+f(x))\). _Non-centered step cocycles over a rotation_. Let \(f\) be a non-centered function with a finite number of values in \(\mathbb{Z}^{d}\). Suppose that \(f\) is Riemann integrable, which amounts to assuming that, for the uniform measure on the torus, the set of discontinuity points of \(f\) has measure zero. Then by a remark in Subsection 2.2, \(M_{n}(x)\) is bounded uniformly in \(x\) and \(n\). Therefore, for \(V_{n}(x)\), the bounds \(n\leq V_{n}(x)\leq Cn\) are satisfied. #### Centered step cocycles over a 1-dimensional rotation. The interesting situation is that of centered functions. We will consider the case \(r=1\) with an irrational number \(\alpha\) having _bounded partial quotients_. Recall that an irrational \(\alpha\) with continued fraction expansion \([0;a_{1},a_{2},...,a_{n},...]\) is said to have bounded partial quotients (bpq) if \(\sup_{n}a_{n}<+\infty\). The set of bpq numbers has Lebesgue measure zero and Hausdorff dimension 1. In the sequel of this subsection, \(\alpha\) will be an irrational bpq number (for instance a quadratic irrational) and \(f\) a centered function with values in \(\mathbb{Z}\) and bounded variation. By the Denjoy-Koksma inequality, there is a logarithmic bound for the cocycle \((T_{\alpha},f)\): \(|f_{n}(x)|\leq C\ln n\), for a constant \(C\). The cocycle is strongly recurrent to 0 (and this is true for \(d\geq 1\) if \(f\) is centered with values in \(\mathbb{Z}^{d}\) and its components have bounded variation). This makes the corresponding maximum \(M_{n}(x)\) big. Nevertheless, we will see that condition (18) is satisfied, at least for a special example. #### Lower bound for \(V_{n}\) and variance, case \(d=1\). For a general sequence \((z_{k})\), we can obtain a lower bound for \(V_{n}\) by an elementary method when there is an upper bound for the variance defined below. **Lemma 3.5**.: _Defining the mean \(m_{n}\) and the variance \(\sigma_{n}^{2}\) by_ \[m_{n}=\frac{1}{n}\sum_{k=1}^{n}z_{k},\ \sigma_{n}^{2}=\frac{1}{n}\sum_{k=1}^{n}(z_{k}-m_{n})^{2},\] _we have_ \[V_{n}\geq\frac{1}{9}\,\frac{n^{2}}{\sigma_{n}},\ \text{if}\ \sigma_{n}>1. \tag{48}\] Proof.: Suppose that \(\sigma_{n}>0\). For \(\lambda>1\), let \(\Delta_{\lambda}:=[-\lambda\sigma_{n}+m_{n},\,\lambda\sigma_{n}+m_{n}]\bigcap\mathbb{Z}\). We have: \[\sigma_{n}^{2}\geq\frac{1}{n}\sum_{k=1}^{n}(z_{k}-m_{n})^{2}1_{z_{k}\in\Delta_{\lambda}^{c}}\geq\frac{1}{n}\sum_{k=1}^{n}(1_{z_{k}\in\Delta_{\lambda}^{c}})\,\lambda^{2}\sigma_{n}^{2}.\] Therefore \(\sum_{k=1}^{n}1_{z_{k}\in\Delta_{\lambda}}\geq n(1-\lambda^{-2})\). As \(\text{Card}(\Delta_{\lambda})\leq 2\lambda\sigma_{n}+1\), it follows by (24): \[V_{n}\geq\frac{(1-\lambda^{-2})^{2}}{2\lambda\sigma_{n}+1}\,n^{2}.\] For \(\lambda=2\) we get: \(V_{n}\geq\frac{9/16}{4\sigma_{n}+1}\,n^{2}\geq\frac{9}{80}\,\frac{n^{2}}{\sigma_{n}}\), if \(\sigma_{n}>1\); hence (48).
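The bound of Lemma 3.5 is elementary enough to be checked numerically. The following minimal sketch (in Python) evaluates \(m_{n}\), \(\sigma_{n}\), \(M_{n}\) and \(V_{n}\) for ergodic sums of a centered step function over a rotation, of the kind studied in the rest of this subsection, and verifies (48) together with the trivial inequality \(V_{n}\leq nM_{n}\); the specific choices of \(\alpha\), \(f\) and of the starting point are ours and serve only as an illustration.

```python
# A minimal numerical sketch of Lemma 3.5 and of the relation V_n <= n*M_n.
# The choice of the cocycle (alpha = golden mean, f = 1_[0,1/2) - 1_[1/2,1),
# x = 0.1) is only an illustration and is not prescribed by the text.
import numpy as np
from collections import Counter

alpha = (np.sqrt(5.0) - 1.0) / 2.0   # bpq irrational (golden mean)
x0, n = 0.1, 200_000

# ergodic sums z_k = f_k(x) = f(x) + f(x+alpha) + ... + f(x+(k-1)alpha mod 1)
orbit = (x0 + alpha * np.arange(n)) % 1.0
f_vals = np.where(orbit < 0.5, 1, -1)
z = np.cumsum(f_vals)                 # z[k-1] = f_k(x), k = 1..n

local_times = Counter(z)              # N_n(x, l) for the visited sites l
V_n = sum(v * v for v in local_times.values())
M_n = max(local_times.values())
m_n = z.mean()
sigma_n = np.sqrt(((z - m_n) ** 2).mean())

print(f"sigma_n = {sigma_n:.2f},  M_n = {M_n},  V_n = {V_n}")
if sigma_n > 1:
    print("lower bound (48) holds:", V_n >= n**2 / (9.0 * sigma_n))
print("V_n <= n*M_n           :", V_n <= n * M_n)
```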
If \(z_{k}\) is given by ergodic sums, i.e., \(z_{k}=f_{k}(x)\), let \[m_{n}(x):=\frac{1}{n}\sum_{k=1}^{n}f_{k}(x),\ \sigma_{n}^{2}(x)=\frac{1}{n}\sum_{k =1}^{n}(f_{k}(x)-m_{n}(x))^{2}.\] By [5, Proposition 13], for \(\alpha\) bpq and \(f\) with bounded vartion, it holds \(\sigma_{n}^{2}(x)\leq C\ln n\). Using (48) and \(V_{n}(x)\leq nM_{n}(x)\), this gives a lower bound for \(V_{n}(x)\) and \(M_{n}(x)\): \[V_{n}(x)\geq c\,\frac{n^{2}}{\sqrt{\ln n}},\ M_{n}(x)\geq c\,\frac{n}{\sqrt{ \ln n}}. \tag{49}\] Below we will get an estimate from above in the following example. **Example 3.6**.: \(f=\mathbf{1}_{[0,\frac{1}{2})}-\mathbf{1}_{[\frac{1}{2},1)}\) and \(\alpha\) bpq. _Upper bound for the example (3.6)._ For \(f\) as above and \(\alpha\) bpq, we have by [1], for some constant \(C_{1}>0\), \[\|N_{n}(\cdot,0)\|_{\infty}=\|\tilde{S}_{n}(\mathbf{1}_{\mathbb{T}^{1}\times \{0\}})(\cdot,0)\|_{\infty}\leq\frac{C_{1}n}{\sqrt{\log n}}. \tag{50}\] Remark that the bound (50) is obtained in [1] as the limit of \(\|N_{n}(\cdot,0)\|_{p}\), the \(L^{p}\)-norm of \(N_{n}(\cdot,0)\), as \(p\) goes to \(\infty\). Therefore the bound holds for the norm \(\|.\|_{esssup}\), but it can be easily replaced by the uniform norm as written above. Indeed, for any \(x\), there is a neighborhood \(V(x)\) of \(x\), such that for \(y\in V(x)\), \(|N_{n}(x,0)-N_{n}(y,0)|\leq 1\) (at most one jump in \(V(x)\)). As one can find \(y\in V(x)\) satisfying \(N_{n}(y,0)\leq\frac{C_{1}n}{\sqrt{\log n}}\), the same inequality holds for \(x\), with \(C_{1}\) replaced by \(2C_{1}\). Using Remark 2.4, it follows: \[M_{n}(x)\leq C_{1}\frac{n}{\sqrt{\log n}}. \tag{51}\] By (51) and since \(V_{n}(x)\leq n\,M_{n}(x)\), we obtain \[V_{n}(x)\leq C_{1}\frac{n^{2}}{\sqrt{\log n}}. \tag{52}\] From (49), (51) and (52), it follows: \(V_{n}(x)\asymp n^{2}/\sqrt{\log n}\) and \(M_{n}(x)\asymp n/\sqrt{\log n}\), where \(a_{n}\asymp b_{n}\) for two sequences \((a_{n})\) and \((b_{n})\) means \(c\,a_{n}\leq b_{n}\leq C\,a_{n},\forall n\geq 1\), with two positive constants \(c,C\). Therefore we get in this special example 3.6: \[\frac{M_{n}^{2}(x)}{V_{n}(x)}\leq(\frac{C_{1}n}{\sqrt{\log n}})^{2}/\frac{cn^ {2}}{\sqrt{\log n}}=\frac{C_{1}^{2}}{c}\,\frac{1}{\sqrt{\log n}}\to 0. \tag{53}\] Condition (18) of Theorem 1.9 is satisfied in this example, as well as the condition of Theorem 1.6 a), hence a Glivenko-Cantelli theorem along \((S_{n}f(x))\) for i.i.d. r.v.'s. But the sufficient conditions for the Glivenko-Cantelli theorems 1.5, 1.6 b), 1.8 are not satisfied by this cocycle and more generally, in view of the lower bound (49), by a cocycle defined by step functions over a bpq irrational rotation. ## 4. **About limit theorems along ergodic sums** ### Glivenko-Cantelli theorem along ergodic sums The Glivenko-Cantelli theorem recalled in the introduction is a (pointwise) law of large numbers uniform over a set of functions (here the indicators of intervals). When the r.v.'s \(X_{k}\) are i.i.d., the proof is an easy consequence of the strong law of large numbers applied to the sequence of i.i.d. bounded r.v.'s \((1_{X_{k}\leq s})\). Using Birkhoff's ergodic theorem, the Glivenko-Cantelli theorem has been extended to the setting of a strictly stationary sequence \((X_{k})\) of random variables. More precisely, formulated in terms of dynamical systems, the following holds: Let \((Y,\mathcal{A},\nu)\) be a probability space and \(S\) an ergodic measure preserving transformation on \(Y\). 
For any measurable function \(\varphi:Y\to\mathbb{R}\), let us consider the strictly stationary sequence \((X_{k})\) defined by \(X_{k}=\varphi\circ S^{k},k\geq 0\). Then the sequence of empirical distribution functions satisfies: for \(\nu\) a.e. \(y\in Y\), \(\sup_{s}|\frac{1}{n}\sum_{k=0}^{n-1}1_{X_{k}(y)\leq s}-F(s)|\to 0\), where \(F(s)=\nu(\varphi\leq s)\). Observe that the result is an application of Birkhoff's theorem and Lemma 1.1 recalled in Section 1. Its extension to the non-ergodic case has been formulated by Tucker [29], the distribution function \(F(s)\) being replaced by the conditional distribution function \(\mathbb{E}(1_{\varphi\leq s}|\mathcal{J})\), where \(\mathcal{J}\) is the \(\sigma\)-algebra of \(S\)-invariant sets. In other words, we have: \[\text{for $\nu$ a.e. }y\in Y,\ \lim_{n\to\infty}\ \sup_{s}|\frac{1}{n}\sum_{k=0}^{n-1}1_{\varphi(S^{k}y)\leq s}-\mathbb{E}(1_{\varphi\leq s}|\mathcal{J})(y)|=0.\] The above formula relies on the ergodic decomposition, which can be used in the proof. In this framework, a Glivenko-Cantelli type theorem for a process sampled along a sequence generated by a dynamical system can be obtained as follows: As in Subsection 2, let \(T\) be an ergodic measure preserving transformation on a probability space \((X,\mathcal{B},\mu)\) and \(f\) a measurable function on \(X\) with values in \(\mathbb{Z}^{d}\), \(d\geq 1\). Let us take a second system \((\Omega,\mathbb{P},\theta)\), where \(\theta=(\theta^{\underline{\ell}})_{\underline{\ell}\in\mathbb{Z}^{d}}\) is a \(\mathbb{Z}^{d}\)-action preserving \(\mathbb{P}\). The skew product associated to the cocycle \((T,f)\) and \(\theta\) is the map: \(T_{\theta,f}:(x,\omega)\to(Tx,\theta^{f(x)}\omega)\) from \(X\times\Omega\) to itself. By iteration we get: \[T_{\theta,f}^{k}(x,\omega)=(T^{k}x,\theta^{f_{k}(x)}\omega).\] For example, as \(\mathbb{Z}^{d}\)-action, we can take a \(\mathbb{Z}^{d}\)-Bernoulli shift \((\Omega,\mathbb{P},(\theta^{\underline{\ell}})_{\underline{\ell}\in\mathbb{Z}^{d}})\), with \(\mathbb{P}\) a product measure and \(\theta\) the shift on the coordinates. If \(X_{0}\) is the first coordinate map, then \((X_{\underline{\ell}})=(X_{0}\circ\theta^{\underline{\ell}})\) is a family of i.i.d. r.v.'s indexed by \(\mathbb{Z}^{d}\). In general, let \({\mathcal{I}}_{\theta,f}\) denote the conditional expectation with respect to the \(\sigma\)-algebra of \(T_{\theta,f}\)-invariant sets. The ergodic theorem for \(T_{\theta,f}\) shows that, for \(\psi\in L^{1}(\mu\times{\mathbb{P}})\), \[\lim_{n}\frac{1}{n}\,\sum_{k=0}^{n-1}\psi(T^{k}x,\theta^{f_{k}(x)}\omega)={\mathcal{I}}_{\theta,f}(\psi)(x,\omega),\mbox{ for }\mu\times{\mathbb{P}}\mbox{-a.e.}(x,
\tag{54}\] If \(\varphi\) is a measurable function on \(\Omega\), putting \(\psi_{s}(x,\omega)={\bf 1}_{I_{s}}(\varphi(\omega))\), where \(I_{s}\) is the half-line \(]-\infty,s]\), we have \[\psi_{s}(T_{\theta,f}^{k}(x,\omega))={\bf 1}_{I_{s}}(\varphi(\theta^{f_{k}(x)} \omega)).\] By the quoted Tucker's result, the convergence in (54) for each \(\psi_{s}\), \(s\in{\mathbb{R}}\), can be strengthened into a uniform convergence with respect to \(s\): \[\mbox{ for }\mu\times{\mathbb{P}}\mbox{-a.e }(x,\omega),\,\tfrac{1}{n}\,\sup_{s} \,|\sum_{k=0}^{n-1}\,{\bf 1}_{I_{s}}(\varphi(\theta^{f_{k}(x)}\omega))-{ \mathcal{I}}(\psi_{s})(x,\omega)|\to 0.\] Therefore, by the Fubini theorem, there is a "sampled" version of the Glivenko-Cantelli theorem for the empirical process of a stationary sequence: **Proposition 4.1**.: _For \(\mu\)-a.e \(x\), we have_ \[|\sup_{s}\tfrac{1}{n}\,\sum_{k=0}^{n-1}\,{\bf 1}_{I_{s}}(\varphi(\theta^{f_{k} (x)}\omega))-{\mathcal{I}}(\psi_{s})(x,\omega)|\to 0,\mbox{ for }{\mathbb{P}}\mbox{-a.e }\omega.\] When \(T_{\theta,f}\) is ergodic, if \(\psi\in L^{1}(\mu\times{\mathbb{P}})\), we have \({\mathcal{I}}_{\theta,f}(\psi)(x,\omega)=\int\psi\,d\mu\,d{\mathbb{P}}\), for \(\mu\times{\mathbb{P}}\mbox{-a.e.}\,(x,\omega)\), and the centering \({\mathcal{I}}(\psi_{s})(x,\omega)\) is given by the distribution function \(F(s)=\mu(\varphi\leq s)\). In this case, for a.e. \(x\), a Glivenko-Cantelli theorem with the usual centering holds for the empirical process sampled along the sequence \((z_{n})\) given by \(z_{n}=S_{n}f(x)\) (with a set of \(\omega\)'s of \({\mathbb{P}}\)-measure \(1\) depending on \(x\)). The lemma below shows, as it is known, that ergodicity of the cylinder map \(\tilde{T}_{f}\) implies ergodicity of the skew map \(T_{\theta,f}\). Let us sketch a proof. **Lemma 4.2**.: _Suppose that the cocycle \((T,f)\) is recurrent and the map \(\tilde{T}_{f}\) ergodic. If the action of \({\mathbb{Z}}^{d}\) by \(\theta\) on \((\Omega,{\mathbb{P}})\) is ergodic, then \(T_{\theta,f}\) is ergodic on \((X\times\Omega,\mu\times{\mathbb{P}})\)._ Proof.: : Let \(\Phi\) be a \(T_{\theta,f}\) invariant measurable function on \(X\times\Omega\): \[\Phi(Tx,\theta^{f(x)}\omega)=\Phi(x,\omega),\mbox{ for a.e. }(x,\omega).\] For a.e. \(x\), there is a set \(\Omega_{x}^{0}\) of full \({\mathbb{P}}\)-measure in \(\Omega\) such that \(\Phi(Tx,\theta^{f(x)}\omega)=\Phi(x,\omega)\), for all \(\omega\in\Omega_{x}^{0}\). As \({\mathbb{Z}}^{d}\) is countable, for a.e. \(x\), there is a set \(\Omega_{x}\) of full measure such that \[\Phi(Tx,\theta^{f(x)}\theta^{\underline{\ell}}\omega)=\Phi(x,\theta^{ \underline{\ell}}\omega),\mbox{ for all }\omega\in\Omega_{x}.\] Let \(\omega\in\Omega_{x}\). The function \(\varphi_{\omega}(x,\underline{\ell}):=\Phi(x,\theta^{\underline{\ell}}\omega)\) on \(X\times{\mathbb{Z}}^{d}\) is measurable, \(\tilde{T}_{f}\)-invariant: \[\varphi_{\omega}(\tilde{T}_{f}(x,\underline{\ell})) = \varphi_{\omega}(Tx,\underline{\ell}+f(x))=\Phi(Tx,\theta^{ \underline{\ell}+f(x)}\omega)\] \[= \Phi(Tx,\theta^{f(x)}\theta^{\underline{\ell}}\omega)=\Phi(x, \theta^{\underline{\ell}}\omega)=\varphi_{\omega}(x,\underline{\ell}).\] It follows from the ergodicity of \(\tilde{T}_{f}\) that there is a constant \(c_{\omega}\) such that \(\varphi_{\omega}(x,\underline{\ell})=c_{\omega}\) for a.e. \(x\). Therefore \(\Phi\) coincides a.e. 
with a function \(\psi\) on \(\Omega\) which is \(\theta\)-invariant, hence a constant by the assumption of ergodicity of the action of \({\mathbb{Z}}^{d}\) on \(\Omega\) With Fubini's argument, we get a Glivenko-Cantelli theorem for a.e. \(x\), if we can show that the skew map \(T_{\theta,f}\) is ergodic. There are many examples cylinder flows \(\tilde{T}_{f}\) which are shown to be ergodic in the literature and so providing examples via Lemma 4.2. For instance, we can take for \(T\) an irrational rotation and \(f=\mathbf{1}_{[0,\frac{1}{2})}-\mathbf{1}_{[\frac{1}{2},1)}\). The cocycle \((T,f)\) is ergodic and the above version of Glivenko-Cantelli theorem applies for any stationary sequence \((X_{k})\) (with a conditional distribution if the stationary sequence is not ergodic). See also examples for which the skew map is ergodic in [25]. ### Discussion: universal sequences The weakness in the approach of the previous subsection for a sampled Glivenko-Cantelli theorem along ergodic sums \((S_{k}f(x),k\geq 0)\) is that it yields a set of \(x\)'s of \(\mu\)-measure \(1\) depending on the dynamical system \((\Omega,\mathbb{P},\theta)\) and on \(\varphi\). One can try to reinforce the statement by introducing a notion of "universal property". In this direction, the LLN for sums sampled along ergodic sums is closely related in the following way to the random ergodic theorems which have been studied in several papers. First, let us call "universally good" a sequence \((z_{k})\) such that, for every dynamical system \((\Omega,\mathbb{P},\theta)\), for every \(\varphi\in L^{1}(\mathbb{P})\), the sequence \(\frac{1}{n}\sum_{k=0}^{n-1}\varphi\circ\theta^{z_{k}}\) converges \(\mathbb{P}\)-a.e. We say that \((T,f)\) a "(pointwise) good averaging cocycle" (or a universally representative sampling scheme) if, for \(\mu\)-a.e. \(x\), the sequence \((S_{k}f(x))\) is universally good, i.e., for every dynamical system \((\Omega,\mathbb{P},\theta)\), for every \(\varphi\in L^{1}(\mathbb{P})\), \(\frac{1}{n}\sum_{k=0}^{n-1}\varphi\circ\theta^{S_{k}f(x)}\) converges \(\mathbb{P}\)-a.e. The definition of a "mean good averaging cocycle" is similar, changing the above convergence into convergence in \(L^{2}(\mathbb{P})\)-norm, for every \(\varphi\) in \(L^{2}(\mathbb{P})\). A question which has been studied is to find mean or pointwise good averaging cocycles. In the first direction, examples and counterexamples of mean good averaging \(1\)-dimensional cocycles are studied in [25], For pointwise convergence, there are \(1\)-dimensional examples given by cocycles with a drift. In [23], the following result is shown: the cocycle defined by a random walk with a moment of order \(2\) is a pointwise good averaging cocycle if and only if it is not centered. Moreover it is shown that any ergodic integrable integer-valued stochastic process with nonzero mean is universally representative for bounded stationary processes. The proofs are based on the recurrence time theorem ([6]). Notice that a related, but different, notion can be introduced by restricting the dynamical system \((\Omega,\mathbb{P},\theta)\) to belong to a given class \(\mathcal{C}\) of dynamical systems. 
Let us call "pointwise good for a class \(\mathcal{C}\) of dynamical systems", a sequence \((z_{k})\) such that, for every dynamical system \((\Omega,\mathbb{P},\theta)\) in the class \(\mathcal{C}\), for every \(\varphi\in L^{1}(\mathbb{P})\), \(\lim_{n}\frac{1}{n}\sum_{k=0}^{n-1}\varphi\circ\theta^{z_{k}}=\int\varphi\,d \mathbb{P}\), \(\mathbb{P}\)-a.e. There is a similar property for the mean convergence. This can be also expressed for a class of random fields satisfying a condition on the decay of correlations. For example, by Remark 2.8, every cocycle with values in \(\mathbb{Z}^{d}\) which is not a coboundary is a mean good averaging cocycle for the stationary r.f.s on \(\mathbb{Z}^{d}\) such that \(\sum_{\underline{\ell}}|\langle U_{\underline{\ell}},U_{\underline{0}}\rangle| <+\infty\). If \((z_{k})\) is pointwise universally good for a class \(\mathcal{C}\), clearly we get the Glivenko-Cantelli property for any dynamical system \((\Omega,\mathbb{P},\theta)\) in \(\mathcal{C}\) and every measurable function \(\varphi\), i.e.: \[\sup_{s}|\tfrac{1}{n}\,\sum_{k=0}^{n-1}\,\mathbf{1}_{I_{s}}(\varphi(\theta^{z_ {k}}\omega))-\mathbb{P}(\varphi\leq s)|\to 0,\text{ for }\mathbb{P}\text{-a.e }\omega. \tag{55}\] As we see, there are two different approaches of the notion of universal sequences for a law of large numbers: either we ask for a LLN along such a sequence for every dynamical system \((\Omega,\mathbb{P},\theta)\) and all functions in \(L^{1}(\mathbb{P})\) or we fix a class of dynamical systems, or a class of functions in \(L^{1}(\mathbb{P})\). In the latter case, the condition on the sequence \((z_{k})\) may be expressed in a quantitative way. Let us give a known example and recall the proof. **Proposition 4.3**.: _Let \((z_{k})\) be a strictly increasing sequence of positive integers. If the sequence satisfies: for a finite constant \(C\), \(z_{k}\leq Ck,\forall k\geq 1\), then \((z_{k})\) is a pointwise good averaging sequence for the class \(\mathcal{C}\) of dynamical systems \((\Omega,\mathbb{P},\theta)\) with Lebesgue spectrum._ Proof.: There is a dense set of functions \(\varphi\in L^{1}(\mathbb{P})\) such that \[\frac{1}{n}\sum_{k=0}^{n-1}\varphi(\theta^{z_{k}}\omega)\text{ converges }\mathbb{P}\text{-a.e.} \tag{56}\] Indeed, by the SLLN for orthogonal random variables, (56) is satisfied by \(\varphi\in L^{2}(\mathbb{P})\) such that \(\langle\varphi,\varphi\circ\theta^{k}\rangle=0,\forall k\). The Lebesgue spectrum property implies that such functions span a dense linear space in \(L^{2}(\mathbb{P})\), hence in \(L^{1}(\mathbb{P})\). Moreover, the space of functions \(\varphi\) such that (56) holds is closed by the ergodic maximal lemma in view of the assumption on \((z_{k})\). Therefore (56) is satisfied by every \(\varphi\in L^{1}(\mathbb{P})\). To finish, we recall the following example which shows that the behaviour may depend on the properties of the dynamical system \((\Omega,\mathbb{P},\theta)\) (cf. [13]): Let \((\Omega,\mathcal{F},\mathbb{P})\) be the interval \([0,1]\) endowed with the Borel \(\sigma\)-algebra and the Lebesgue measure and take \(f=1_{[0,\frac{1}{2}]}\). Denote by \(\mathcal{T}\) the class of invertible measure preserving transformations on this space. 
It can be shown that there are increasing sequences \((z_{k})\) of positive integers satisfying the conditions of the previous proposition such that, for a dense \(G_{\delta}\) of elements in \(\mathcal{T}\) with continuous spectrum, the ergodic means of \(f\) along \((z_{k})\) do not converge \(\mathbb{P}\)-a.e.
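On the positive side of this discussion, the sampled Glivenko-Cantelli property (55) can be illustrated numerically. The sketch below (our own illustration, not taken from the text) samples a simple 2-dependent stationary field along the ergodic sums of a non-centered step cocycle over an irrational rotation, so that the sampling sequence has a drift as in the good averaging examples recalled above, and reports the uniform distance between the sampled empirical distribution function and the marginal distribution function; all concrete choices (the rotation, the step function, the field) are placeholder assumptions.

```python
# Minimal illustration of the sampled Glivenko-Cantelli property (55):
# z_k are ergodic sums of a non-centered step cocycle over an irrational
# rotation (a sequence with a drift, cf. the discussion of good averaging
# cocycles above), and the sampled field is the 2-dependent stationary
# sequence X_l = U_l + U_{l+1} with U_l i.i.d. uniform on [0,1].
import numpy as np

rng = np.random.default_rng(1)
alpha = np.sqrt(2.0) - 1.0
x0, n = 0.3, 50_000

# non-centered step function: f = 2 on [0,1/2), f = 1 on [1/2,1)  (mean 3/2)
orbit = (x0 + alpha * np.arange(n)) % 1.0
f_vals = np.where(orbit < 0.5, 2, 1)
z = np.cumsum(f_vals)                       # sampling times z_1, ..., z_n

# lazily generate the i.i.d. uniforms U_l only at the integers actually needed
needed = np.unique(np.concatenate([z, z + 1]))
U = dict(zip(needed.tolist(), rng.uniform(size=needed.size).tolist()))
samples = np.array([U[l] + U[l + 1] for l in z.tolist()])

# marginal CDF of U_0 + U_1 (triangular law on [0,2])
def F(s):
    s = np.clip(s, 0.0, 2.0)
    return np.where(s <= 1.0, 0.5 * s**2, 1.0 - 0.5 * (2.0 - s) ** 2)

# Kolmogorov-Smirnov distance between the sampled empirical CDF and F
xs = np.sort(samples)
emp_hi = np.arange(1, n + 1) / n
emp_lo = np.arange(0, n) / n
D_n = max(np.max(emp_hi - F(xs)), np.max(F(xs) - emp_lo))
print(f"sup_s |empirical - F| = {D_n:.4f}  (n = {n})")
```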
2308.05008
Study of Jupiter's Interior with Quadratic Monte Carlo Simulations
We construct models for Jupiter's interior that match the gravity data obtained by the Juno and Galileo spacecrafts. To generate ensembles of models, we introduce a novel quadratic Monte Carlo technique that is more efficient in confining fitness landscapes than affine invariant method that relies on linear stretch moves. We compare how long it takes the ensembles of walkers in both methods to travel to the most relevant parameter region. Once there, we compare the autocorrelation time and error bars of the two methods. For a ring potential and the 2d Rosenbrock function, we find that our quadratic Monte Carlo technique is significantly more efficient. Furthermore we modified the walk moves by adding a scaling factor. We provide the source code and examples so that this method can be applied elsewhere. Here we employ our method to generate five-layer models for Jupiter's interior that include winds and a prominent dilute core, which allows us to match the planet's even and odd gravity harmonics. We compare predictions from the different model ensembles and analyze how much an increase of the temperature at 1 bar and ad hoc change to the equation of state affects the inferred amount of heavy elements in atmosphere and in the planet overall.
Burkhard Militzer
2023-08-09T15:08:34Z
http://arxiv.org/abs/2308.05008v1
# Study of Jupiter's Interior with Quadratic Monte Carlo Simulations ###### Abstract We construct models for Jupiter's interior that match the gravity data obtained by the _Juno_ and _Galileo_ spacecrafts. To generate ensembles of models, we introduce a novel _quadratic_ Monte Carlo technique that is more efficient in confining fitness landscapes than affine invariant method that relies on linear stretch moves. We compare how long it takes the ensembles of walkers in both methods to travel to the most relevant parameter region. Once there, we compare the autocorrelation time and error bars of the two methods. For a ring potential and the 2d Rosenbrock function, we find that our quadratic Monte Carlo technique is significantly more efficient. Furthermore we modified the _walk_ moves by adding a scaling factor. We provide the source code and examples so that this method can be applied elsewhere. Here we employ our method to generate five-layer models for Jupiter's interior that include winds and a prominent dilute core, which allows us to match the planet's even and odd gravity harmonics. We compare predictions from the different model ensembles and analyze how much an increase of the temperature at 1 bar and _ad hoc_ change to the equation of state affects the inferred amount of heavy elements in atmosphere and in the planet overall. ## 1 Introduction Since the _Juno_ spacecraft inserted into orbit around Jupiter in 2016, it has provided us with unprecedented data for the planet's magnetic field, gravity, and atmospheric abundances (Bolton et al., 2017). For this article, the improvement in the precision of the gravity measurements are particularly important. While, for example, the gravity harmonic \(J_{4}\) had been determined to be \(J_{4}\times 10^{6}=-587\pm 5\) with data from _Pioneer_ and _Voyager_ mission, it is now known with much higher precision, \(J_{4}\times 10^{6}=-586.6085\pm 0.0024\)(Durante et al., 2020). This has also led to a revision among the methods and assumptions that go into modelling the planet's interior structure (Stevenson, 1982; Hubbard et al., 2002; Hubbard and Militzer, 2016; Wahl et al., 2017; Ni, 2018; Nettelmann et al., 2021) but the small error bars have made sampling the available space with interior models much more challenging. So here we generate ensembles of models for Jupiter's interior with a novel Monte Carlo (MC) method. We employ a number of different model assumption starting from our reference ensemble of five layer models (Militzer et al., 2022) that invoke a prominent dilute core that reach out to \(\sim\)60% of the planet's radius as well as contributions from winds that we derived by solving thermal wind equation (Kaspi, 2013) in an oblate geometry. Interior and wind parameters are optimized simultaneously, which enabled us to improve upon solutions by Wahl et al. (2017) and match the _Juno_ gravity measurements exactly. In our second ensemble we raise the 1 bar temperature to 170 K from our reference value of 166.1 K that was determined _in situ_ by the _Galileo_ entry probe by matching the temperature-pressure data points to a dry adiabat (Seiff et al., 1998). While this fit has a very small temperature uncertainty, it is not certain to what degree this measurement represents the planet's global average because the entry probe fell into a 5 \(\mu\)m hot spot and thus local weather effects may have played a role. 
However, one should not expect deviations to be too large because radio occultation measurements by the _Voyager_ spacecraft determined the 1 bar temperature to be 165 \(\pm\) 5 K (Lindal et al., 1981). These remote observations were very recently re-analyzed by Gupta et al. (2022), who determined higher temperatures of 167\(\pm\)4 and 170\(\pm\)4 K for latitudes of 6\({}^{\circ}\)S and 12\({}^{\circ}\)N, respectively. The temperature increase was primarily caused by including the chemical species CH\({}_{4}\), Ne, Ar, and PH\({}_{3}\) when the molecular weight of the atmosphere was calculated, while the original value of 165 \(\pm\) 5 K was derived by assuming a hydrogen-helium atmosphere that is free of heavier species. Given these uncertainties, we constructed an ensemble with \(T_{\rm 1bar}=170\) K here while other authors have considered similar or even higher values. Kerley (2004) constructed models with \(T_{\rm 1bar}\)= 169 K. Recently, Nettelmann et al. (2021) constructed models with \(T_{\rm 1bar}\)= 175 and 180 K. Miguel et al. (2022) made the 1 bar temperature a free Monte Carlo parameter and obtained the best match to the _Juno_ data while using 1 bar temperatures between 177 and 188 K. Such a temperature increase may be very appealing because it increases the entropy of the isentrope and thereby lowers the density everywhere in the planet. This makes it easier to match the _Juno_ measurements of the gravity coefficients \(J_{4}\) and \(J_{6}\) and, more importantly, introduces additional flexibility into the model to move heavy elements from one layer to another. Eventually, however, the temperature will be so high that the isentrope no longer intersects the immiscibility region of hydrogen-helium mixtures (Morales et al., 2013), which provides the basis for the helium rain argument that explains why the _Galileo_ entry probe measured Jupiter's atmospheric helium abundance [\(Y/(X+Y)=0.238\pm 0.005\), von Zahn et al. (1998)] to be depleted compared to the protosolar value of \(Y_{0}/(X_{0}+Y_{0})=0.2777\) (Lodders, 2010). Based on the semi-analytical equation of state (EOS) by Saumon et al. (1995) and the _ab initio_ EOS by Militzer and Hubbard (2013), we estimate a limiting value of 180 K for the 1 bar temperature for helium rain to have started. However, we derived this value exclusively with theoretical methods, while the first experimental work, which indirectly inferred the conditions of H-He phase separation at megabar pressures, placed the onset of this process at much higher temperatures (Brygoo et al., 2021). In our third ensemble, we modify the EOS that we derived with _ab initio_ computer simulations and lower the density by 3% (Militzer and Hubbard, 2023) in the pressure interval from 10 to 100 GPa, where Militzer et al. (2022) found the models to be particularly sensitive. Such _ad hoc_ EOS corrections have been introduced many times in the past when the modeling assumptions by themselves did not yield a good match to the observations. Nettelmann et al. (2021) lowered the density from 30 to 200 GPa because, without a dilute core or winds, the _Juno_ gravity data could not be reproduced. There is no reason to assume that the _ab initio_ EOS calculations are accurate to the 1% level that is typically assumed to be required to model giant planet interiors accurately. One reason for this level of accuracy is that one aims to estimate the abundance of heavy elements relative to the protosolar value of 1.53% (Lodders, 2010).
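To make the nature of such a windowed correction concrete, the sketch below shows one simple way to apply a 3% density reduction to a tabulated \(\rho(P)\) relation between \(P^{*}\) and \(10\times P^{*}\). The toy table, the window edges and the smoothing of those edges are placeholder choices of ours, not values taken from the EOS or the models discussed here.

```python
# Minimal sketch: apply a 3% density reduction to a tabulated rho(P) relation
# inside a pressure window [P*, 10 P*]. The toy table and the smooth window
# edges are illustrative assumptions, not the EOS used in the paper.
import numpy as np

def perturb_density(P, rho, P_star, amplitude=0.03, edge_decades=0.05):
    """Return rho multiplied by (1 - amplitude) for P in [P_star, 10*P_star],
    with the window edges smoothed over `edge_decades` in log10(P)."""
    x = np.log10(P)
    lo, hi = np.log10(P_star), np.log10(10.0 * P_star)
    # smooth box: product of two logistic ramps in log-pressure
    window = 1.0 / (1.0 + np.exp(-(x - lo) / edge_decades))
    window *= 1.0 / (1.0 + np.exp((x - hi) / edge_decades))
    return rho * (1.0 - amplitude * window)

# toy rho(P) table (pressures in GPa), only to exercise the function
P_grid = np.logspace(-4, 4, 500)          # 10^-4 ... 10^4 GPa
rho_grid = 0.1 * P_grid**0.55             # placeholder power law, not a real EOS
rho_new = perturb_density(P_grid, rho_grid, P_star=10.0)   # window 10-100 GPa
print("max fractional density change:", np.max(1.0 - rho_new / rho_grid))
```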
To match the gravity coefficients \(J_{4}\) and \(J_{6}\), we assume in all three ensembles that Jupiter's core has been substantially diluted with hydrogen and helium. The heavy elements, which were essential to trigger Jupiter's formation, make up only \(\sim\)18% by mass. Core dilution is plausible because _ab initio_ computer simulations have shown that all typical core materials such as water, silicates and iron are soluble in metallic hydrogen at megabar pressures (Wilson and Militzer, 2012, 2012; Wahl et al., 2013; Gonzalez-Cataldo et al., 2014). It is less clear whether the convection in Jupiter's interior is sufficiently strong to bring up the heavy elements against the forces of gravity (Guillot et al., 2004). Moll et al. (2017), Muller et al. (2020), and Helled et al. (2022) studied the interior convection and the evolution of a primordial, compact core that was originally composed entirely of heavy elements. Liu et al. (2019) studied whether Jupiter's core could be diluted by a giant impact. It is conceivable that a small compact core exists inside the dilute core, but it could not be very massive because that would take away from the dilute-core effect that enabled us to match \(J_{4}\) and \(J_{6}\). Militzer et al. (2022) placed an upper limit of 3 Earth masses (1% of Jupiter's mass) on the compact core. Various papers have investigated the effects that different EOSs have on the inferred properties of Jupiter (Saumon and Guillot, 2004; Miguel et al., 2016). Because there are uncertainties in the EOS, we constructed ensembles of models for which we lowered the density in a pressure window from \(P^{*}\) to \(10\times P^{*}\) that we then moved across the entire pressure range of Jupiter's interior in order to determine on which interval the model predictions depend most sensitively. It was our goal to provide some guidance to future experimental and theoretical work on where to expect the biggest impact for giant planet physics. We analyze how such an EOS change affects the heavy element abundance that is inferred for the planet's outer envelope. Constructing models with subsolar or even with a "negative" abundance of heavy elements has enabled previous works to match or nearly match the _Juno_ measurements for \(J_{4}\) and \(J_{6}\) without invoking a dilute core or winds (Hubbard and Militzer, 2016). On the other hand, if one makes the assumption that Jupiter formed via core accretion from a well-mixed protosolar nebula, the heavy elements in its atmosphere should occur in at least solar abundances. The small number of measurements and remote observations that exist for the atmospheric composition of giant planets have been reviewed in Atreya et al. (2019). With the exception of neon, the _Galileo_ entry probe measured the noble gases to be three-fold enriched compared to solar. Carbon has been found to be \(4\times\) solar in Jupiter and \(9\times\) solar in Saturn. If the same enrichment applied to oxygen and if these measurements were representative of Jupiter's entire envelope, it would pose a major challenge to all modeling activities because most models that match _Juno_'s \(J_{4}\) and \(J_{6}\) only yield heavy element abundances in approximately solar proportions. (The same challenge exists for Saturn, for which typical models (Militzer et al., 2019) predict up to \(4\times\) solar abundance for heavy elements, which is well below the nine-fold solar measurements for carbon.)
The biggest unknown, however, is the concentration of oxygen, the most abundant element besides hydrogen and helium. Its abundance informs us about water, which is crucial for understanding where and how Jupiter formed (Helled and Lunine, 2014). The _Galileo_ entry probe measured oxygen to be half solar, bringing the total heavy element mass fraction to 1.7%, before the probe stopped functioning at a pressure of 22 bar. More recently, Li et al. (2020) used _Juno's_ microwave measurements to infer an oxygen abundance between one and five times solar. A more precise determination was not possible because the water signal is small compared to that of ammonia and its radiative properties at relevant conditions are not sufficiently well understood, which provides us with ample motivation to analyze the amount of heavy elements that emerges from our model assumptions. In this article, we construct three ensembles of models of Jupiter's interior by introducing a novel Markov chain Monte Carlo method that relies on _quadratic_ rather than the affine (or linear) moves that are employed by Goodman and Weare (2010). We show that our method is more efficient in confining geometries that are difficult to sample with linear moves. Since its inception, the affine invariance sampling method has gained a remarkable level of acceptance in various fields of science, including astronomy and astrophysics, where one often needs to determine posterior distributions of model parameters that are compatible with observational data that carry uncertainties. For example, the affine sampling method has been employed to detect stellar companions in radial velocity catalogues (Price-Whelan et al., 2018), to study the relationship between dust disks and their host stars (Andrews et al., 2013), to examine the first observations of the Gemini Planet Imager (Macintosh et al., 2014), to analyze photometry data of Kepler's K2 phase (Vanderburg and Johnson, 2014), to study the mass distribution in our Milky Way galaxy (McMillan, 2017), to identify satellites of the Magellanic Clouds (Koposov et al., 2015), to analyze gravitational-wave observations of a binary neutron star merger (De et al., 2018), to constrain the Hubble constant with data of the cosmic microwave background (Bernal et al., 2016), or to characterize the properties of M-dwarf stars (Mann et al., 2015), to name a few applications. On the other hand, Huijser et al. (2022) demonstrated that the affine invariant method exhibits undesirable properties when the multivariate Rosenbrock density is sampled for more than 50 dimensions. Goodman and Weare (2010) chose to perform their Markov chain Monte Carlo simulations with an entire ensemble of walkers (or states) rather than propagating just a single walker. The distribution of walkers in the ensemble helps one to propose favorable moves that have an increased chance of being accepted without the need for a detailed investigation of the local fitness landscape, as the traditional Metropolis-Hastings Monte Carlo method requires. Many extensions of the Metropolis-Hastings approach have been advanced (Andrieu and Thoms, 2008). For example, Haario et al. (2001) use the entire accumulated history along the Monte Carlo chain of states to adjust the shape of the Gaussian proposal function. Ensembles of walkers have been employed long before Goodman and Weare (2010) in various types of Monte Carlo methods that were designed for specific applications.
In the fields of condensed matter physics and quantum chemistry, ensembles of walkers are employed in _variational_ Monte Carlo (VMC) calculations (Martin et al., 2016) that optimize certain wavefunction parameters with the goal of minimizing the average energy or its variance (Foulkes et al., 2001). Ensembles are used to vectorize or parallelize the VMC calculations. They are also employed to generate the initial set of configurations for the walkers in _diffusion_ Monte Carlo (DMC) simulations. In DMC calculations, one samples the ground-state wave function by combining diffusive moves with birth and death processes. An ensemble of walkers is needed to estimate the average local energy so that the birth and death rates lead to a stable population size. Walkers with a low energy are favored and thus more likely to be selected to spawn additional walkers. Walkers in areas of high energy are likely to die out. The birth and death concepts in DMC have a number of features in common with genetic algorithms that employ a population of individuals (similar to an ensemble of walkers). The best individuals are selected and modified with a variety of approaches to generate the next generation of individuals (Schwefel, 1981; Militzer et al., 1998). The population is needed to establish a fitness scale that enables one to make informed decisions about which individuals should be selected for procreation. This scale will change over time as the population migrates towards favorable regions in the parameter space. This also occurs in DMC calculations: as the walker population migrates towards regions of low energy, the average energy in the population stabilizes and the local energy approaches the ground-state energy of the system. Ensembles of individuals/walkers are not only employed in genetic algorithms but are used in many different stochastic optimization techniques. These methods have primarily been designed for the goal of finding the best state in a complex fitness landscape, or a state that is very close to it, rather than sampling a well-defined statistical distribution function as Monte Carlo methods do. Therefore these optimization techniques are much more flexible than Monte Carlo algorithms, which typically need to satisfy the detailed balance relation for every move (Kalos & Whitlock, 1986). The particle swarm optimization method (J. Kennedy & Eberhart, 1997, 2001) employs an ensemble (or swarm) of walkers and successively updates their locations according to a set of velocities. The velocities are updated stochastically using an inertial term and drift terms that favor migration towards the best individual in the population and/or towards the global best ever generated. Furthermore, the downhill simplex method (Press et al., 2001) employs an ensemble of \(N+1\) walkers in \(N\) dimensions. The optimization algorithm successively moves the walker with the highest or second highest energy in the ensemble in the direction of the center of mass of the other walkers. The ensemble of walkers thereby migrates step by step to more favorable locations in the fitness landscape without the need to ever compute a derivative of the fitness function, which makes this algorithm very appealing in situations where the fitness function is complex and its derivatives cannot be derived with reasonable effort. In general, efficient Monte Carlo methods are required to have two properties. They need to migrate efficiently in parameter space towards the most favorable region.
The migration (or convergence) rate is typically measured in Monte Carlo time (or steps). Once the favorable region has been reached and average properties among walkers have stabilized, the Monte Carlo method needs to efficient sample the relevant parameter space. The efficiency of the algorithm is typically measured in terms of the autocorrelation time or the size of the error bars. While in typical applications, algorithms that have fast migration rates also have a short autocorrelation time, there is no guarantee that both are linked because the properties of fitness landscape may differ substantially between the initial and the most favorable regions of the parameter space. For this reason, we measure the migration rate and autocorrelation time separately when we evaluate the performance of the quadratic Monte Carlo method that we introduce in this article. This article is organized as follows. In section 2, we introduce our quadratic Monte Carlo technique and compare it with the affine invariance method. We also describe how we construct models for Jupiter's interior. In section 3 we present four sets of results. First we compare how the two methods perform for a ring potential problem and for the Rosenbrock density, then construct different ensembles of Jupiter's interior, and finally study the consequences of various corrections to the assumed EOS for the inferred heavy element abundances in Jupiter's outer molecular layer. In section 4, we conclude. In the appendix, we show that our quadratic Monte Carlo satisfy the condition of detailed balance. ## 2 Methods ### Quadratic Moves We divide our Markov chain MC calculations into \(N_{b}\) blocks, each consisting of \(N_{S}\) steps. During every step, we attempt to move each of \(N_{W}\) walkers in the ensemble once. A quadratic MC move proceeds as follows. In addition to the moving walker \(i\), we select two other walkers \(j\) and \(k\) from the ensemble at random. Then we perform a quadratic Lagrange interpolation/extrapolation to sample new parameters, \(\vec{r}_{i}^{\prime}\), for walker \(i\), \[\vec{r}_{i}^{\prime}=w_{i}\vec{r}_{i}+w_{j}\vec{r}_{j}+w_{k}\vec{r}_{k} \tag{1}\] The interpolation weights \(w\) are chosen from, \[w_{i} = L(t_{i}^{\prime}\,;\,t_{i},t_{j},t_{k}), \tag{2}\] \[w_{j} = L(t_{i}^{\prime}\,;\,t_{j},t_{k},t_{i}),\] (3) \[w_{k} = L(t_{i}^{\prime}\,;\,t_{k},t_{i},t_{j}),\] (4) \[L(x\,;\,x_{0},x_{1},x_{2}) \equiv \frac{x-x_{1}}{x_{0}-x_{1}}\frac{x-x_{2}}{x_{0}-x_{2}} \tag{5}\] The function \(L\) is the typical Lagrange weighting function that guarantees a proper quadratic interpolation so that \(\vec{r}_{i}^{\prime}=\vec{r}_{i}\) if \(t_{i}^{\prime}=t_{i}\); \(\vec{r}_{i}^{\prime}=\vec{r}_{j}\) if \(t_{i}^{\prime}=t_{j}\); and \(\vec{r}_{i}^{\prime}=\vec{r}_{k}\) if \(t_{i}^{\prime}=t_{k}\). We always set \(t_{j}=-1\) and \(t_{k}=+1\) to introduce a scale into the parameter space, \(t\). To satisfy the detailed balance condition, \(T(\vec{r}_{i}\rightarrow\vec{r}_{i}^{\prime})=T(\vec{r}_{i}^{\prime}\to \vec{r}_{i})\), it is key that we sample the parameters \(t_{i}\) and \(t_{i}^{\prime}\) from the same distribution \(\mathcal{P}(t)\). (We do not set \(t_{i}=0\) but sample it in the same way as \(t_{i}^{\prime}\).) The acceptance probability then becomes, \[A(\vec{r}_{i}\rightarrow\vec{r}_{i}^{\prime})=\min\left[1,\frac{\pi(\vec{r}_{ i}^{\prime})}{\pi(\vec{r}_{i})}\,\left|w_{i}\right|^{N}\right]\,. 
\tag{6}\] The factor \(|w_{i}|^{N}\) is needed because we sample the one-dimensional \(t\) space but then switch to the \(N\)-dimensional parameter space, \(\vec{r}\). It plays the same role as the \(\lambda^{\alpha}\) factor of the affine transformation that we discuss below. In appendix A, we derive this factor rigorously from the generalized detailed balance equation by Green & Mira (2001). For the sampling distribution, \({\cal P}(t)\), one has a bit of a choice. Our applications have shown that the precise shape is not important but the width the distribution affects the MC efficiency in the usual way. If one tries to make too large steps in parameter space, too many moves are rejected. If the steps are chosen too small, most moves will be accepted but the resulting states are highly correlated and the parameter space is not explored efficiently either. So we introduce a constant scaling parameter, \(a\), that controls the width of our sampling functions \({\cal P}(t)\). Besides the number of walkers, \(N_{W}\), this is the _only_ parameter a user of our quadratic MC method needs to adjust. \(a=1.5\) is a perfectly fine choice. Only if a lot of computer time is to be invested, one may want to compare the MC efficiency for various \(a\) values as we do in the next section. For the sampling functions, \({\cal P}(t)\), we propose two options: a) We sample \(t_{i}\) and \(t^{\prime}_{i}\), uniformly from the interval \([-a,+a]\) or b) we draw them independently from a Gaussian distribution that we center around zero and set the standard deviation equal to \(a\). In Fig. 1, we given an illustration for why quadratic moves tend to perform well in confined geometries. The move of walker \(i\) is guided by the positions of walkers \(j\) and \(k\), which both reside in the narrow channel. Large moves become possible as long as the channel curvature does not change too rapidly. If it does, one may reduce the parameter \(a\). For \(t_{i}\) values that are sampled from the interval \([-a,+a]\), the parameter \(a\) controls the probability that we choose the new walker location, \(\vec{r}_{i}^{\prime}\), by interpolating between \(\vec{r}_{j}\) and \(\vec{r}_{k}\) (\(|t^{\prime}_{i}|\leq 1\)) or by extrapolating from these two points (\(|t^{\prime}_{i}|>1\)). Figure 1: Illustration of quadratic and affine stretch moves in a confining channel that is represented by the dashed lines. The circles indicate the locations of walkers in the ensemble. For the quadratic move, two helper points \(\vec{r}_{j}\) and \(\vec{r}_{k}\) are employed to sample a new location for walker \(i\). Conversely, for the stretch move, only one additional point \(\vec{r}_{j}\) is used and walker \(i\) may thus not travel as far in a single step in a curved channel. In Fig. 1, we also illustrate the affine stretch moves (Goodman & Weare, 2010) for comparison. To sample the new location for walker \(i\), the position of only one other walker, \(j\), is employed to construct this linear transformation, \[\vec{r}_{i}^{\prime}=\vec{r}_{j}+\lambda(\vec{r}_{i}-\vec{r}_{j}) \tag{7}\] To make such moves reversible, the stretch factor, \(\lambda\), must be sampled from the interval \(\left[\frac{1}{a},a\right]\). For the sampling function, \(T(\lambda)\), one has a bit of choice. 
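Before turning to the specific choices of the stretch-move sampling function \(T(\lambda)\) below, the following minimal sketch illustrates a single quadratic update, Eqs. (1)-(6), assuming the uniform choice for \(\mathcal{P}(t)\) on \([-a,+a]\). This is an illustrative re-implementation, not the source code released with this work, and the target density used in the small test at the bottom is a placeholder of our own choosing.

```python
# Minimal sketch of one quadratic Monte Carlo update (Eqs. 1-6), assuming the
# uniform sampling distribution P(t) on [-a, +a]. The target density `log_pi`
# used at the bottom is only a placeholder for exercising the move.
import numpy as np

rng = np.random.default_rng(0)

def lagrange_weight(x, x0, x1, x2):
    """Quadratic Lagrange weight L(x; x0, x1, x2) of Eq. (5)."""
    return (x - x1) / (x0 - x1) * (x - x2) / (x0 - x2)

def quadratic_move(walkers, i, log_pi, a=1.5):
    """Attempt to move walker i using two other walkers j, k (Eqs. 1-6)."""
    n_walkers, ndim = walkers.shape
    j, k = rng.choice([m for m in range(n_walkers) if m != i], size=2, replace=False)
    t_j, t_k = -1.0, +1.0                      # fixed nodes that set the scale
    t_i, t_new = rng.uniform(-a, a, size=2)    # t_i and t_i' from the same P(t)
    w_i = lagrange_weight(t_new, t_i, t_j, t_k)
    w_j = lagrange_weight(t_new, t_j, t_k, t_i)
    w_k = lagrange_weight(t_new, t_k, t_i, t_j)
    r_new = w_i * walkers[i] + w_j * walkers[j] + w_k * walkers[k]   # Eq. (1)
    # acceptance probability of Eq. (6), including the |w_i|^N volume factor
    log_accept = log_pi(r_new) - log_pi(walkers[i]) + ndim * np.log(abs(w_i))
    if np.log(rng.uniform()) < log_accept:
        walkers[i] = r_new
        return True
    return False

# toy check: sample a 3d standard Gaussian with 16 walkers
log_pi = lambda r: -0.5 * np.dot(r, r)
walkers = rng.normal(size=(16, 3))
accepted = sum(quadratic_move(walkers, i % 16, log_pi) for i in range(20000))
print("acceptance fraction:", accepted / 20000)
```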
Goodman & Weare (2010) followed Christen (2007) when they chose a function that satisfies, \[T_{1}(\lambda) = \frac{1}{\lambda}\;T_{1}(\frac{1}{\lambda}) \tag{8}\] \[T_{1}(\lambda) \propto \frac{1}{\sqrt{\lambda}}\;\text{if}\;\lambda\in\left[\frac{1}{ \text{a}},\text{a}\right]\;\;. \tag{9}\] This function can be sampled by choosing a random number, \(\eta\), uniformly in [0,1] and transforming it according to, \[\lambda=\frac{(\eta-d)^{2}}{d^{2}a}\;\;\text{with}\;\;\text{d}=\frac{1}{1- \text{a}} \tag{10}\] Alternatively, we can sample \(\lambda\) in Eq. 7 uniformly from the interval \(\left[\frac{1}{a},a\right]\), \[T_{2}(\lambda)=\frac{a}{a^{2}-1}=\text{constant}\;\text{if}\;\lambda\in\left[ \frac{1}{\text{a}},\text{a}\right]\text{and}\;\text{T}_{2}(\lambda)=0\text{ elsewhere}. \tag{11}\] For both sampling functions, a factor, \(\lambda^{\alpha}=\frac{\left|\vec{r}_{i}^{\prime}-\vec{r}_{j}^{\prime}\right| ^{\alpha}}{\left|\vec{r}_{i}-\vec{r}_{j}\right|^{\alpha}}\), must be introduced to the acceptance probability, \[A(\vec{r}_{i}\rightarrow\vec{r}_{i}^{\prime})=\min\left[1,\frac{\pi(\vec{r}_ {i}^{\prime})}{\pi(\vec{r}_{i})}\lambda^{\alpha}\right]\;. \tag{12}\] For the uniform distribution, \(T_{2}(\lambda)\), one sets \(\alpha=N-2\) while one sets \(\alpha=N-1\) for \(T_{1}(\lambda)\). Both factors are caused by the fact that in \(N\) dimensions, the area of a sphere around the anchor point \(\vec{r}_{j}\) scales with \(|\vec{r}_{i}-\vec{r}_{j}|^{N-1}\). The uniform distribution, \(T_{2}(\lambda)\) already stretches the interval of \(a\) values automatically and therefore \(\alpha\) is set to \(N-2\) rather than \(N-1\). A derivation for these factors is provided in appendix A. As a first, very basic test whether any of these methods works correctly, we applied them to sample the Boltzmann distribution, \[\pi(\vec{r})\propto\exp\left\{-\frac{V(\vec{r})}{k_{B}T}\right\} \tag{13}\] for a harmonic potential in \(N\) dimensions, \(V(\vec{r})=\sum_{d=1}^{N}r_{d}^{2}\), in order to verify that the resulting average potential energy, \(\langle V\rangle\), agrees with the exact value of \(NT/2\) within error bars. (We set the Boltzmann constant, \(k_{B}\), to 1 throughout this paper.) This is also a reasonable first test whether the factors in the acceptance ratios in Eqs. 6 and 12 are set correctly. As a second test in section 3.1, we compared the average potential energy that we obtained with the affine and our quadratic MC method for a computationally more challenging ring potential. ### Modified Walk Moves Goodman & Weare (2010) also introduced an alternate sampling method: _walk_ moves. To move walker \(k\) from \(\vec{r}_{k}\) to \(\vec{r}_{k}=\vec{r}_{k}+W\), one chooses at random a subset, \(S\), of \(N_{S}\) guiding walkers. \(k\) is excluded from \(S\) so that the positions in the subset are independent of \(\vec{r}_{k}\). The subset size, \(N_{S}\), is a free parameter that one needs to choose within \(2\leq N_{S}<N_{W}\). We typically keep \(N_{S}\) constant for an entire MC chain but we have also performed calculations with a flexible subset size, for which we selected walkers for the subset according to a specified probability, \(p_{S}\), but we found no advantages in using a flexible \(N_{S}\) number over a fixed value. We follow Goodman & Weare (2010) in computing the average location all walkers in the subset, \[\langle\vec{r}\rangle=\frac{1}{N_{S}}\sum_{j\in S}\vec{r}_{j}\quad. 
\tag{14}\] but we then modify their formula for computing the step size, \(W\), by introducing a scaling factor \(a\): \[W=a\sum_{j\in S}Z_{j}\left(\vec{r}_{j}-\langle\vec{r}\rangle\right)\quad. \tag{15}\] \(Z_{j}\) are univariate standard normal random numbers. By setting \(a=1\), one obtains the original walk moves, for which the covariance of the step size, \(W\), is the same as the covariance of the walker positions in subset \(S\). However, the new scaling parameter, \(a\), enables us to make smaller (or larger) steps in situations where the covariance of the instantaneous walker distribution is not an optimal representation of the local structure of the sampling function. We will show later that the scaling factor \(a\) enables us to significantly improve the sampling efficiency for the Rosenbrock function and for the ring potential in high dimensions.

### Equation of State

The EOS of hydrogen-helium mixtures plays a crucial role in modeling Jupiter's interior structure because both gases make up the bulk of the planet. We derive the EOS by combining the Saumon et al. (1995) predictions at low pressure with results from _ab initio_ computer simulations at high pressure (P \(\geq\) 5 GPa) (Militzer & Hubbard, 2013). For a given composition and entropy, both EOSs provide a \(\rho(P)\) relationship. One can gradually switch from one to the other as a function of pressure. Still there are two primary sources of uncertainty to consider:

(1) First, current _ab initio_ calculations are based on density functional theory and employ the PBE functional (Perdew et al., 1996) while other choices are possible. Currently we lack experimental data to determine how accurately any of the existing functionals (Clay et al., 2016) characterize liquid hydrogen at megabar pressures. X-ray diffraction experiments on solid materials at room temperature have shown that the PBE functional underestimates the density of materials by a few % while the earlier local density approximation, which was constructed from results by Ceperley & Alder (1980), overestimates the density of solids. However, simulations based on the PBE functional are in very good agreement with the shock wave measurements (see Knudson & Desjarlais (2017) and Militzer et al. (2016)) that measured the density of deuterium at megabar pressures more accurately than previously possible. Still, accurate density measurements of liquids remain a challenge because X-ray diffraction measurements cannot be applied. On the other hand, quantum Monte Carlo calculations (Mazzola et al., 2018) have predicted hydrogen to be denser than the PBE predictions. However, a higher-than-PBE density relationship would make the modeling of Jupiter's interior more difficult and likely lead to subsolar heavy element abundances in the outer envelope, as we will discuss in the results section of this manuscript.

(2) The second EOS uncertainty arises from the temperature profile of the isentropes. Locally this is characterized by the Gruneisen parameter, \(\gamma=-\left.\frac{\partial\ln T}{\partial\ln V}\right|_{S}=V\left.\frac{\partial P}{\partial T}\right|_{V}/\left.\frac{\partial E}{\partial T}\right|_{V}\) (Militzer & Hubbard, 2007). Globally, one can state that the temperature at 1 bar defines an entropy value that determines the \(P\)-\(T\) relationship for the entire thickness of a layer in the planet as long as it is homogeneous and convective.
Local and global approaches have led to very different predictions of how hot Jupiter's interior is (Militzer et al., 2008; Nettelmann et al., 2008). With the global approach, one determines the absolute entropy for a grid of \(\rho\)-\(T\) points with the thermodynamic integration method (Morales et al., 2009; Militzer, 2013) and then finds the isentrope through interpolation. We favor this approach (Militzer & Hubbard, 2009) because every \(S(\rho,T)\) point is independent. So if one particular calculation were inaccurate, it would not affect the results elsewhere. With the local approach, one needs a very dense grid of EOS points to numerically compute the derivatives that are needed to trace an isentrope by computing \(\gamma\) at every step. A second and important reason for the disagreement on Jupiter's interior temperature profile was that the local approach requires a reliable starting point for the isentrope, and _ab initio_ simulations do not work at 1 bar because the density is too low.

### Modeling Jupiter's Interior

We model Jupiter's interior with five distinct layers that we illustrate in Fig. 2. The outer layer contains a mixture of molecular hydrogen, helium, and heavier elements. We derive its EOS by following Hubbard & Militzer (2016). We keep the entropy of this layer fixed by specifying the 1 bar temperature, 166.1 or 170 K. The helium mass fraction is held constant at the _Galileo_ value of \(\tilde{Y}_{1}=Y/(X+Y)=0.238\) (von Zahn et al., 1998). \(Z_{1}\) represents the mass fraction of the heavy elements. The parameters \(P_{\rm rain,1}\) and \(P_{\rm rain,2}\) mark the beginning and ending pressures of the helium rain layer where the helium fraction, \(Y/(X+Y)\), gradually rises from \(\tilde{Y}_{1}\) to a higher value \(\tilde{Y}_{2}\).

Figure 2: Upper panel: Five layer models of Jupiter’s interior. Lower panel: Effect of an EOS perturbation on the heavy element abundance, \(Z_{1}\). For 18 separate MC calculations, we lowered the density of our H-He EOS by 3% over a pressure interval from \(P^{*}\) to \(10\times P^{*}\) and studied how \(Z_{1}\) increased. The small circles show individual \(Z_{1}\) points while the large circles represent the ensemble average. The horizontal bars indicate the interval from \(P^{*}\) to \(10\times P^{*}\). The horizontal lines mark protosolar and twice protosolar abundances. The vertical lines A through D mark pressures of 68, 1005, \(5\times 10^{4}\), and \(10^{6}\) bar (1 Mbar = 100 GPa) that are also pointed out in the upper panel. (All points were calculated in the same way. Different colors were just introduced for clarity.)

Following Militzer et al. (2022), we adopt the following functional form, \[\tilde{Y}(P)\!=\!\tilde{Y}_{1}+x^{\alpha}\left[\tilde{Y}_{2}-\tilde{Y}_{1}\right]\ \ \text{with}\ \ x=\frac{\log(P/P_{\text{rain},1})}{\log(P_{\text{rain},2}/P_{\text{rain},1})} \tag{16}\] (a minimal sketch of this profile is given below). The \(\tilde{Y}_{2}\) value is adjusted so that the planet overall (excluding heavy elements) has a helium fraction equal to the protosolar value of \(Y_{0}/(X_{0}+Y_{0})=0.2777\) (Lodders, 2010). Inside of this layer is a thick, homogeneous, and convective layer of mostly metallic hydrogen that extends down to the core transition layer. The parameters \(P_{\text{core},1}\) and \(P_{\text{core},2}\) determine the beginning and ending pressures of this transition layer. We assume it to be stably stratified because the heavy element fraction increases gradually from \(Z_{1}\) to \(Z_{2}\).
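For illustration, the helium rain profile of Eq. 16 can be sketched in a few lines of Python. The function name and the parameter values are purely illustrative (they do not correspond to a particular ensemble); outside the rain layer the profile is held constant at \(\tilde{Y}_{1}\) and \(\tilde{Y}_{2}\), consistent with the description above.

```python
import numpy as np

def helium_fraction(P, Y1, Y2, P_rain1, P_rain2, alpha):
    """Helium mass fraction Y/(X+Y) following Eq. 16.

    The profile equals Y1 for P <= P_rain1, rises as x**alpha across the
    helium rain layer, and equals Y2 for P >= P_rain2 (the clip extends
    Eq. 16, which is defined inside the rain layer, as a constant outside)."""
    P = np.asarray(P, dtype=float)
    x = np.log(P / P_rain1) / np.log(P_rain2 / P_rain1)
    x = np.clip(x, 0.0, 1.0)
    return Y1 + x**alpha * (Y2 - Y1)

# Illustrative values only: Y1 is the Galileo value quoted in the text,
# the remaining numbers are hypothetical placeholders.
P = np.logspace(1, 3, 5)   # pressures in GPa
print(helium_fraction(P, Y1=0.238, Y2=0.30, P_rain1=100.0, P_rain2=400.0, alpha=3.0))
```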
\(Z_{2}\) is the heavy element abundance in the dilute core, which we assume to be homogeneous and convective. Together with the metallic hydrogen layer, it contributes to generating Jupiter's magnetic field (see the analysis by Moore et al. (2022)). To compare the different models, we define the core mass, \(M_{\text{core}}\), to be the mass inside of the pressure level, \(P_{\text{core},2}\). The mass of the envelope, \(M_{\text{env}}\), is the mass outside the pressure level, \(P_{\text{core},1}\). The remaining mass in between both pressures is the mass of the core transition layer, \(M_{\text{trans}}\).

We employ the concentric Maclaurin spheroid (CMS) method (Hubbard, 2013) to construct a hydrostatic solution of a uniformly rotating oblate planet and then use the thermal wind equation to compute the contributions from the zonal winds. The CMS technique treats the effects of rotation nonperturbatively and is thus significantly more accurate than the traditional theory of figures (Zharkov and Trubitsyn, 1978) that starts from a nonrotating planet and then adds rotational effects using an expansion of different orders (Saumon and Guillot, 2004; Nettelmann et al., 2021). We employ our quadratic Monte Carlo method to construct ensembles of Jupiter models by accepting and rejecting moves according to the \(\exp(-\chi^{2}/2)\) function that includes four different terms, \(\chi^{2}=\chi^{2}_{J}+\chi^{2}_{\text{H-He}}+\chi^{2}_{\text{wind}}+\chi^{2}_{\text{guide}}\). The most important one measures the deviations of even and odd gravity harmonics between model predictions and the _Juno_ measurements (Durante et al., 2020), \[\chi^{2}_{J}=\sum_{i=1}^{10}\left[\frac{J_{i}^{\text{model}}-J_{i}^{\text{Juno}}}{\delta J_{i}^{\text{Juno}}}\right]^{2}\quad, \tag{17}\] where \(\delta J_{i}^{\text{Juno}}\) are the 1-\(\sigma\) uncertainties of the measurements.

While Eq. 17 is certainly the most important model generation criterion, there are a number of other well motivated constraints to consider (Militzer et al., 2019). For example, one would want to favor models with \(P_{\text{rain},1}\) and \(P_{\text{rain},2}\) values that are broadly compatible with the phase diagram of H-He mixtures as derived by Morales et al. (2013). From the assumed molecular and metallic adiabats, we can infer the temperatures \(T_{1}\) and \(T_{2}\) that correspond to both pressures. For both pairs \(P_{\text{rain},1}\)-\(T_{1}\) and \(P_{\text{rain},2}\)-\(T_{2}\), we find the closest points on the immiscibility curve, \(P_{1}^{*}\)-\(T_{1}^{*}\) and \(P_{2}^{*}\)-\(T_{2}^{*}\), that minimize the following immiscibility penalty function, \[\chi^{2}_{\text{H-He}}=\sum_{i=1}^{2}\left[C_{P}\left|\frac{P_{i}^{*}-P_{i}}{P_{i}}\right|+C_{T}\left|\frac{T_{i}^{*}-T_{i}}{T_{i}}\right|\right]\quad, \tag{18}\] before we add the resulting minimum value to the total \(\chi^{2}\). \(C_{P}\) and \(C_{T}\) are weights that must be balanced with those in other \(\chi^{2}\) terms. We set \(C_{T}/C_{P}=2\). Implicitly the \(\chi^{2}_{\text{H-He}}\) term also introduces a penalty for metallic adiabats that are too hot to be compatible with the assumed immiscibility curve. We chose not to square the individual terms in Eq. 18 because there is currently no agreement between theoretical and experimental results on where in pressure-temperature space hydrogen and helium become immiscible. Vorberger et al. (2007) had shown with _ab initio_ simulations that hydrogen and helium are miscible at 8000 K.
With more careful _ab initio_ Gibbs free energy calculations, Morales et al. (2013) predicted hydrogen and helium to phase separate at approximately 6500 K for a pressure of 1.5 Mbar. Recent shock wave experiments by Brygoo et al. (2021) that combined Doppler interferometry and reflectivity measurements placed the onset of immiscibility at a much higher temperature of 10 200 K at 1.5 Mbar. Based on Militzer and Hubbard (2013), this corresponds to an entropy of 8.3 k\({}_{B}\)/electron and implies that helium rain would set in as soon as a giant planet's 1 bar temperature cools to 360 K. (_Ab initio_ methods predict 180 K.) Helium rain would begin much earlier and cover a longer fraction of a giant planet's lifetime. Fortney and Hubbard (2004), for example, estimated that Jupiter's 1 bar temperature only cooled by 10 K during the last 1.5 billion years. Also, according to Wahl et al. (2021), helium rain would have already started on hot exoplanets in 9 day orbits like Kepler-85b but not on exoplanets in 1 and 3 day orbits such as WASP-12b and CoRoT-3b. Because the deviations from the _ab initio_ predictions are unexpectedly large and these findings have not yet been reproduced with other laboratory measurements, we will employ the Morales et al. (2013) results when we evaluate the \(\chi^{2}_{\rm H-He}\) term in Eq. 18 for this manuscript. Conversely, Miguel et al. (2022) do not invoke a term like Eq. 18 or a gradual change as in Eq. 16. Instead they employ a sharp transition from the molecular to the metallic hydrogen layer without incorporating predictions from _ab initio_ simulations. This transition occurs between 2 and 5 Mbar in most models.

Third, we add a penalty term (Militzer et al., 2022), \[\chi^{2}_{\rm wind}=\frac{1}{m}\sum_{i=1}^{m}\begin{cases}\left[H(\mu_{i})-H_{\rm max}\right]^{2}&\qquad\text{if }H(\mu_{i})>H_{\rm max}\\ 0&\qquad\text{if }H_{\rm min}\leq H(\mu_{i})\leq H_{\rm max}\\ \left[H_{\rm min}-H(\mu_{i})\right]^{2}&\qquad\text{if }H(\mu_{i})<H_{\rm min}\end{cases}\quad, \tag{19}\] that keeps the depth of our winds, \(H\), within prescribed limits of \(H_{\rm min}=1500\,\text{km}\) and \(H_{\rm max}=4500\,\text{km}\) to keep them broadly compatible with earlier predictions (Guillot et al., 2018). We evaluate them at \(m=61\) equally spaced \(\mu\) points between -1 and +1 with \(\mu=\cos(\theta)\) and \(\theta\) being the colatitude. We directly use the observed cloud-level winds from Tollefson et al. (2017) but then assume the wind depth to be latitude dependent. Alternatively, one can allow the winds on the visible surface to deviate from the observations and keep the wind depth the same for all latitudes. Both types of wind solutions are compared in Militzer et al. (2022). We solve the thermal wind equation (Kaspi et al., 2016) to derive the density perturbation, \(\rho^{\prime}\), \[\frac{\partial\rho^{\prime}}{\partial s}=\frac{2\omega}{g}\frac{\partial}{\partial z}\left[\rho u\right]\quad, \tag{20}\] for a rotating, oblate planet (Cao and Stevenson, 2017) in geostrophic balance. \(z\) is the vertical coordinate that is parallel to the axis of rotation. \(s\) is the distance from the equatorial plane along a path on an equipotential. \(\rho\) is the static background density and \(g\) is the local acceleration. We obtain both from our CMS calculations of a particular model, which means our wind model and the interior structure are self-consistent. \(u\) is the differential flow velocity with respect to the uniform rotation rate, \(\omega\).
We represent \(u\) as a product of the surface winds, \(u_{s}\), from Tollefson et al. (2017) and a decay function of \(\sin^{2}(x)\) form from Militzer et al. (2019) that keeps the wind speeds initially constant before they decay over a small depth interval. This is consistent with assumptions made by Dietrich et al. (2021) and Galanti and Kaspi (2021) while in Kaspi et al. (2018) and Miguel et al. (2022) a gradual decay of the wind speed with depth is assumed. We integrate the density perturbation, \(\rho^{\prime}\), to determine the dynamic contributions to the gravity harmonics before combining them with the static gravity harmonics that we have obtained from the CMS calculation, \(J^{\rm model}_{n}=J^{\rm static}_{n}+J^{\rm dynamic}_{n}\). The resulting harmonics are then compared with the _Juno_ measurements in Eq. 17. We work directly with the error bars of the _Juno_ measurements, \(\delta J^{\rm Juno}_{i}\), since we construct self-consistent models in which wind terms can compensate for variations in the interior structure. This is one of the main differences from the recent work by Miguel et al. (2022) who performed interior and wind calculations separately and increased the _Juno_ error bars by a factor of 30 to represent an unknown contribution to the even harmonics that comes from the winds. The other main difference is that we used the nonperturbative CMS approach while Miguel et al. (2022) relied on the 4th order theory of figures method but then computed a correction for a subset of models.

Finally we add the penalty term, \[\chi^{2}_{\rm guide}=C\begin{cases}\left[p_{\rm min}-p\right]^{2}&\text{if }p<p_{\rm min}\\ 0&\text{otherwise}\end{cases}\quad, \tag{21}\] that helps us guide the Monte Carlo ensemble to reach and remain in the parameter region with \(p\geq p_{\rm min}\) that we consider physical. (Similar terms can assure \(p\leq p_{\rm max}\).) We find such a soft approach to work better than a hard constraint that would reject any model that violates the condition, \(p\geq p_{\rm min}\). Still we set \(C\) to a high value like 1000 to assure compliance. We verify that \(\chi^{2}_{\rm guide}=0\) for models that we publish. \(Z_{1}\geq Z_{\rm protosolar}\) is an obvious condition to satisfy but we also require \(Z_{2}\geq Z_{1}\) and \(S_{2}\geq S_{1}\).

The _Juno_ gravity measurements (Folkner et al., 2017; Iess et al., 2018; Durante et al., 2020) have reached a very high degree of accuracy and the fact that we employed the error bars directly, rather than inflating them, underlines the need for an efficient sampling method that we provide with our quadratic Monte Carlo approach. Some interior parameters are allowed to vary freely during the Monte Carlo calculations while others are constrained by observations. For example, we do not vary the helium fractions, \(Y_{1}\) and \(Y_{2}\), because \(Y_{1}\) is constrained by measurements of the _Galileo_ entry probe and \(Y_{2}\) is derived so that the planet overall has a protosolar \(\tilde{Y}\). The heavy element fractions, \(Z_{1}\) and \(Z_{2}\), are adjusted so that the model matches the planet's mass and \(J_{2}\). During the Monte Carlo procedure, we only vary the four pressures, \(P_{\rm rain,1}\), \(P_{\rm rain,2}\), \(P_{\rm core,1}\), \(P_{\rm core,2}\), the helium rain exponent \(\alpha\), the entropy of the deep interior, \(S_{2}\), and the depth of the winds, \(H(\mu_{i})\). We do not introduce a prior distribution or apply any hard constraints to these four pressure values. (A compact sketch of how the individual \(\chi^{2}\) terms combine is given below.)
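For illustration, the way the individual \(\chi^{2}\) terms enter the \(\exp(-\chi^{2}/2)\) acceptance weight can be sketched as follows. This is a minimal Python sketch, not our production code: the \(\chi^{2}_{\rm H-He}\) term of Eq. 18 is omitted for brevity, and all function names, default values, and inputs are illustrative placeholders.

```python
import numpy as np

def chi2_J(J_model, J_juno, dJ_juno):
    """Gravity-harmonic term of Eq. 17."""
    d = (np.asarray(J_model) - np.asarray(J_juno)) / np.asarray(dJ_juno)
    return np.sum(d**2)

def chi2_wind(H, H_min=1500.0, H_max=4500.0):
    """Wind-depth penalty of Eq. 19, averaged over the m sampled latitudes."""
    H = np.asarray(H, dtype=float)
    over = np.where(H > H_max, (H - H_max)**2, 0.0)
    under = np.where(H < H_min, (H_min - H)**2, 0.0)
    return np.mean(over + under)

def chi2_guide(p, p_min, C=1000.0):
    """One-sided guiding penalty of Eq. 21."""
    return C * (p_min - p)**2 if p < p_min else 0.0

def log_weight(J_model, J_juno, dJ_juno, H, Z1, Z_protosolar):
    """log of the exp(-chi^2/2) weight; chi^2_H-He (Eq. 18) is omitted here."""
    chi2 = chi2_J(J_model, J_juno, dJ_juno) + chi2_wind(H) + chi2_guide(Z1, Z_protosolar)
    return -0.5 * chi2
```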
Their posterior distribution is just a result of the different \(\chi^{2}\) terms that we have described in this section.

## 3 Results

### Application to Ring Potential

In order to study how our QMC method performs in confined geometries, we constructed the following ring potential, \[V(\vec{r})=(2m)^{2m}\left[(\rho-R)^{2m}+\sum_{i=3}^{N}r_{i}^{2m}\right]-Cr_{1}\quad, \tag{22}\] where \(\vec{r}=\{r_{1},\ldots,r_{N}\}\) is a vector in the \(N\geq 2\) dimensional parameter space. \(\rho=\sqrt{r_{1}^{2}+r_{2}^{2}}\) is the distance from the origin in the \(r_{1}\)-\(r_{2}\) plane. The first term ensures that the potential is only small along a ring of radius, \(R\), in \(r_{1}\)-\(r_{2}\) space as we illustrate in Fig. 3. The second term keeps the magnitudes of all remaining parameters, \(r_{3\ldots N}\), small. Increasing the positive integer, \(m\), allows us to make the potential more confining by making the potential walls around the ring steeper. Finally we introduce the last term to break the axial symmetry. Typically we set \(C\) to a small value like \(0.01\) so that the potential minimum is approximately located at point \(\vec{A}=(+R,0,\ldots)\) while the potential is raised at the opposing point \(\vec{B}=(-R,0,\ldots)\). The prefactor of the first term in Eq. 22 is introduced so that the location of the potential minimum does not shift much with increasing \(m\). (A minimal implementation sketch of this potential is given below.)

For this test case, we insert the ring potential into the Boltzmann distribution in Eq. 13. If we initialize an ensemble of walkers in the vicinity of point \(\vec{B}\), the algorithm has no choice but to travel along the ring until it reaches the area of point \(\vec{A}\) where the sampling probability is highest, the most relevant states will be sampled, and only then will the block averages start to stabilize.
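A minimal Python sketch of the ring potential in Eq. 22 and the corresponding Boltzmann weight of Eq. 13 is given below. The function names are illustrative, and the default parameter values match the baseline case used in the comparisons that follow.

```python
import numpy as np

def ring_potential(r, m=6, R=1.0, C=0.01):
    """Ring potential of Eq. 22 for a point r in N >= 2 dimensions."""
    r = np.asarray(r, dtype=float)
    rho = np.hypot(r[0], r[1])      # distance from the origin in the r1-r2 plane
    V = (2 * m)**(2 * m) * ((rho - R)**(2 * m) + np.sum(r[2:]**(2 * m)))
    return V - C * r[0]             # the last term breaks the axial symmetry

def boltzmann_logpi(r, T=0.01, **kw):
    """log of the Boltzmann weight of Eq. 13 (k_B = 1)."""
    return -ring_potential(r, **kw) / T

# Point A = (+R, 0, ...) lies near the potential minimum, point B = (-R, 0, ...) is raised:
A = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
B = np.array([-1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
print(ring_potential(A), ring_potential(B))   # -0.01 and +0.01
```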
Figure 3: Illustration of the ring potential, \(V(\vec{r}=(x,y,z))\), that we constructed to study how well different MC algorithms work in confined geometries. By construction, the potential becomes small if the distance \(\rho\) equals a given radius, \(R\). We slightly tilted the ring to illustrate the effect of the last term in Eq. 22 that breaks the axial symmetry by lowering the potential for positive \(x\) values.

In Figs. 4 and 5, we compare the performance of the affine MC and our quadratic MC methods under different conditions. As our baseline case, we set \(N=6,m=6,R=1,C=0.01,T=0.01\), and \(a=2.5\). In every block, we attempt to make \(10^{3}\) individual moves. In most cases, we initialize the ensemble of MC walkers near point \(\vec{B}\), which means the average block energy will decrease as the ensemble travels towards the potential minimum near point \(\vec{A}\) (see Fig. 3). In Fig. 4a, we compare how long that takes for different values of \(m\). Increasing \(m\) makes the potential walls steeper, which causes both methods to converge more slowly. However, in comparison, the QMC method performs significantly better. For \(m=10\), it only takes 54 blocks for it to converge within 2\(\times 10^{-4}\) of the final energy while it takes 308 blocks for the affine MC method to do so. In Fig. 4a, we also show results from simulations that initialized the ensemble of walkers at the low-energy point \(\vec{A}\). The convergence rates are similar to those before but the block averages now converge to the final block energy from below.

Figure 4: Performance comparison between affine MC and our quadratic MC methods. The average potential energy in a MC block is plotted as a function of block number in order to illustrate how long it takes for either method to converge. For all parameters considered here, the QMC method does so more efficiently. In panel (a), two curves are plotted for every method. Those that converge from above represent MC ensembles that were initialized from the high-energy point \(\vec{B}\) (see Fig. 3) while those converging from below were started from the low-energy point \(\vec{A}\). In all following panels, we only show ensembles that were initialized near \(\vec{A}\). In panel (b), the final, converged energy has been subtracted for clarity. In panel (f), the energy has been divided by the constant \(C\). (To reduce the noise, 1000 independent MC simulations have been averaged to generate each curve.)

In Fig. 4b, we compare the performance of both methods for different ring radii, \(R\). For a very small value of 0.2, both methods converge equally fast. With increasing radius, it takes the affine MC method much longer than our QMC method to do so. When we lower the temperature from 0.01 to 0.001, we find a similar behavior in Fig. 4c. A lower temperature makes the potential appear more confining, which delays the convergence of the affine MC method dramatically. In Fig. 4d, we compare the convergence for different spatial dimensions \(N\). For \(N=3\) and 6, the QMC method converges faster but for \(N=10\), the behavior is fairly similar to that of the affine MC method, and it takes both methods longer to converge than for smaller \(N\). In Fig. 4e, we test the dependence on the scaling parameter \(a\). For the QMC method, values between \(a=1.1\) and 1.5 yield optimal results. For the affine method, \(a\approx 1.3\) is optimal but even then it converges only approximately half as fast as our quadratic method. Finally in Fig. 4f, we vary the temperature that enters the MC calculation via the Boltzmann factor. Since we are interested in the effects of the ring term in Eq. 22, we set the constant \(C\) equal to the temperature \(T\) for this particular analysis. A change in \(C=T\) recalibrates the strength of the ring term in Eq. 22 in relation to the linear term. For small \(C=T\) values, the confining effect of the potential increases, which foremost delays the convergence of the affine method. Summarizing, we found that for our ring potential, our QMC method performed significantly better than the affine MC method under most conditions. In a few cases like large spatial dimension \(N\), the performance was found to be similar.

In Fig. 5, we study autocorrelations of the block energy for our base case parameters, \(N_{W}\)=7 walkers, and the two stretch parameters \(a=1.5\) and 2.5. For this analysis, we removed the transient part of the calculation (see Fig. 4) where the block energies have not yet converged. Fig. 5a shows that the autocorrelation functions of the affine MC energies decay much more slowly than those of the QMC energies. Based on the integrals under the plotted curves, we estimate the autocorrelation time to be 79000 and 123000 MC moves for the affine method with \(a\)=1.5 and 2.5 respectively but only 12000 and 19000 MC moves for the QMC method (with linear sampling of \(P(t)\)). We also performed a block analysis for these four calculations (Allen and Tildesley, 1987; Martin et al., 2016); a minimal sketch of this blocking procedure is given below. In Fig. 5b, we plot the error bars that emerged from the blocking analysis when individual block energies are combined into longer and longer blocks. All curves show a plateau that indicates that the blocks were chosen to be sufficiently long for the block averages to be uncorrelated.
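A minimal sketch of the blocking procedure referenced above could look as follows (the function name and interface are illustrative): neighbouring block energies are repeatedly merged and the naive error estimate is recorded at every level, with the plateau value serving as the final error bar.

```python
import numpy as np

def blocking_error(block_energies):
    """Return the naive error estimate at each blocking level.

    At every level, neighbouring blocks are averaged pairwise, which doubles
    the block length; the estimates plateau once the blocks are long enough
    to be uncorrelated."""
    x = np.asarray(block_energies, dtype=float)
    errors = []
    while len(x) >= 4:
        errors.append(np.std(x, ddof=1) / np.sqrt(len(x)))
        if len(x) % 2:                 # drop a trailing element if the length is odd
            x = x[:-1]
        x = 0.5 * (x[0::2] + x[1::2])  # merge neighbouring blocks
    return errors
```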
The affine MC method yielded an energy of \((-3.02\pm 0.12)\times 10^{-4}\) and \((-3.11\pm 0.17)\times 10^{-4}\) for \(a=1.5\) and 2.5 respectively. With the QMC method, we obtained \((-3.007\pm 0.051)\times 10^{-4}\) and \((-3.013\pm 0.063)\times 10^{-4}\) for the two \(a\) values respectively. All averages are compatible with one another. For the same calculation duration, the QMC method yielded an error bar that is 2.5 times smaller. This is in agreement with the observation that its autocorrelation time is approximately six times shorter.

In Fig. 5c, we compare the performance of the affine method with that of our QMC method using linear and Gaussian \(t\) sampling. The goal of this quantitative analysis is to give some guidelines for how the stretch factor, \(a\), and the number of walkers, \(N_{W}\), should be chosen. Goodman and Weare recommended setting \(a=1.5\) and did not make a recommendation for \(N_{W}\) besides choosing it to be large. (E.g. Miguel et al. (2022) employed 512 walkers to sample a 7 dimensional parameter space.) For the ring potential with \(T=0.01\), \(N=6\), \(m=6\), \(R=1\), we consider values of the stretch factor \(a\) from 0.3 to 2.5 to explore the performance of all three methods even though we consider values \(a<0.5\) and \(a>1.5\) poor choices. (The affine method requires \(a>1\) while the others do not.) A large number of walkers introduces diversity, which helps to explore the parameter space. On the other hand, if the number of walkers is chosen to be too large, one would expect the algorithm to have difficulties exploring all relevant areas of the parameter space efficiently. In principle, one would expect the number of walkers to scale with the dimensionality of the space.

In our view, an efficient MC method should have two properties. It should travel effectively from improbable parameter regions to the relevant ones. Once there, it should yield small error bars for the estimated averages. The two axes of Fig. 5c measure both properties. On the Y axis, we plot the error bar that we obtained with the blocking analysis for long MC calculations (one per pair of \(a\) and \(N_{W}\) parameters) with \(10^{10}\) moves. We initialized the ensemble near the low-energy point \(\vec{A}\) because, for the error bar calculations, we are not interested in the time it takes the ensemble to travel around the ring. To determine the travel time, we performed \(10^{3}\) separate but shorter calculations with \(10^{7}\) moves starting from point \(\vec{B}\). Every time, we recorded the ring travel time that we define to be the average number of MC moves that are required for the energy in the ensemble to reach the mid value between the initial potential energy and the final converged value that we quoted above.

The ring travel time and the MC error bar both have statistical uncertainties, which introduces noise into Fig. 5c. Still, a number of trends emerge clearly. If an unreasonably large number of walkers like \(N_{W}=200\) is chosen for the affine method, the ring travel time becomes very large and approaches \(10^{6}\) MC moves. To a lesser degree, this trend is also seen for the QMC method. On the other hand, the computed error bars do not suffer from choosing \(N_{W}\) very large. So employing a very large ensemble of walkers yields comparable but not smaller error bars than employing more modest numbers of walkers. If only 7 walkers are used for the affine method, the ring travel times become very reasonable but the resulting MC error bar becomes very large.
The best performance is seen for \(N_{W}=9\ldots 19\) and \(a\)=1.5, which yields an average ring travel time of \(8\times 10^{4}\) moves and an error bar of \(2.5\times 10^{-6}\). For the same number of MC moves, our QMC method yields error bars half the size of the affine method and ring travel times that are approximately half as long. We see no particular advantage of using the Gaussian \(t\) sampling method. The linear \(t\) sampling method yields very good results over a wide range of \(a\) values from \(a=0.3\ldots 1.5\) and \(N_{W}\)=7...19. The average travel time is only \(4\times 10^{4}\) moves and the error bar is \(1.1\times 10^{-6}\). Based on this analysis, we recommend setting \(N_{W}=2N+1\ldots 3N+1\) in general.

Figure 5: Autocorrelation function (panel a) and error bar from blocking analysis (panel b) of affine and quadratic MC calculations. In panel (c), we compare the Monte Carlo error bar from the block analysis and the time it takes the ensemble to travel around the ring. The number of attempted MC moves was the same in all cases. The symbols correspond to different stretch factors, \(a\), given in the legend. Results for various numbers of walkers are shown, \(N_{W}\)=7, 9, 11, 15, 19, 31, 51, and 200. The blue symbols show results derived with the affine method for \(a\)=1.2, 1.5, 2.0, and 2.5. The thin, medium thick, and thick blue lines represent results with \(N_{W}\)=7, 19, and 200 walkers, respectively. The remaining symbols show results from our quadratic MC method with uniform (red) and Gaussian (green) sampling of the \(t\) space. The medium thick and thick red lines show the best QMC results for \(N_{W}\)=11 and 15, respectively. Compared to the affine method, our QMC method requires a travel time approximately half as long and leads to error bars half as large if \(N_{W}=7\ldots 19\) and \(a=0.3\ldots 1.5\) are used.

### Comparison between Quadratic Sampling and Walk Moves

In Fig. 6, we compare the travel and autocorrelation times from the walk method for the ring potential in \(N=6\), \(10\), \(18\) and \(24\) dimensions with results obtained with the affine and quadratic methods using linear and Gaussian \(t\) sampling. For every dimension \(N\), we performed independent calculations for \(N_{W}=N+2,3N/2\), and \(2N\). For the affine method, we fix \(a=1.5\) but for the quadratic MC method, we considered \(a=\{0.1,0.2,0.3,0.4,0.5,0.7,1.0\}\) for the linear \(t\) sampling and \(a=\{0.3,0.5,0.7\}\) for the Gaussian \(t\) sampling. For the original walk method, we chose \(N_{S}=3,4,5\), and \(6\) for the size of the subset of guiding walkers. We noticed that choosing \(N_{S}\) larger made such calculations very inefficient since it led to drastic increases in the travel and autocorrelation times for \(N\geq 10\) as panels (b)-(d) of Fig. 6 illustrate. For the lower dimension of \(N=6\), however, the results of the original walk method are very good. Panel (a) shows that the travel time can be up to \(25\%\) shorter than that of the quadratic sampling method. Already for \(N=10\) dimensions, results from the original walk method fall behind those of the quadratic sampling method. For \(N=18\) and \(24\), this trend continues and for a subset size of \(N_{S}=6\), the original walk method yields longer travel and autocorrelation times than even the affine method, regardless of what ensemble size, \(N_{W}\), is employed. This increase in travel and autocorrelation times led us to introduce the scaling factor \(a\) into Eq. 15, as sketched below.
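A minimal Python sketch of the modified walk move of Eqs. 14 and 15 is given below; the function name and interface are illustrative. Because the guiding subset excludes walker \(k\), the proposal is symmetric in \(\vec{r}_{k}\), so we assume here that it is combined with a standard Metropolis acceptance, \(\min[1,\pi(\vec{r}^{\prime})/\pi(\vec{r})]\). This is a sketch under that assumption, not the production code used for the calculations in Fig. 6.

```python
import numpy as np

def modified_walk_proposal(walkers, k, N_S, a, rng):
    """Propose a new position for walker k following Eqs. 14 and 15.

    walkers : (N_W, N) array of current walker positions
    k       : index of the walker to move
    N_S     : size of the guiding subset (k is excluded from it)
    a       : scaling factor of Eq. 15 (a = 1 recovers the original walk move)"""
    others = [i for i in range(len(walkers)) if i != k]
    S = rng.choice(others, size=N_S, replace=False)
    r_S = walkers[S]
    mean = r_S.mean(axis=0)                       # Eq. 14
    Z = rng.standard_normal(N_S)                  # univariate standard normal numbers
    W = a * np.sum(Z[:, None] * (r_S - mean), axis=0)   # Eq. 15
    return walkers[k] + W

# Example call with a hypothetical ensemble of 12 walkers in 6 dimensions:
rng = np.random.default_rng(0)
walkers = rng.normal(size=(12, 6))
r_new = modified_walk_proposal(walkers, k=0, N_S=4, a=0.3, rng=rng)
```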
Figure 6: Travel and autocorrelation times computed for the ring potential in dimensions \(N=6\), \(10\), \(18\) and \(24\). An efficient algorithm makes both as short as possible. Panels (a)-(d) compare results from the affine, quadratic and the original walk method (\(a=1\)). For a low dimension of \(N=6\), the walk moves perform best regardless of which subset size, \(N_{S}=3\ldots 6\), is chosen but the original walk moves are not competitive for \(N>10\). Panels (e) and (f) include results from the modified walk method for \(N=18\) and \(24\) because we found that choosing a scale factor \(a\ll 1\) increases the sampling efficiency. Symbols were chosen consistently across all panels. So QMC (L) and (G) label results from the quadratic MC method with linear and Gaussian \(t\) sampling respectively. The symbols distinguish results that were obtained with different \(a\) parameters.

Choosing small values of \(a=0.1\) or \(0.3\) enabled us to obtain travel and autocorrelation times with the walk method that are on par with or shorter than those of the quadratic sampling method as panels (e) and (f) of Fig. 6 illustrate. In the next section, we will analyze how valuable our scaling factor \(a\) can be for the sampling of the Rosenbrock density.

### Sampling the Rosenbrock Density

Following Goodman and Weare (2010), we also applied our methods to sampling the 2d Rosenbrock density, \[\pi(x_{1},x_{2})\propto\exp\left\{-\frac{A\left(x_{2}-x_{1}^{2}\right)^{2}+(1-x_{1})^{2}}{B}\right\}\quad, \tag{23}\] which carves a narrow curved channel into the \((x_{1},x_{2})\) landscape. \(B\) effectively plays the role of temperature. First we set \(A=100\) and \(B=5\) to be consistent with Goodman and Weare (2010) but then we also increase \(A\) to \(10000\), while leaving \(B\) unchanged, which makes the channel even narrower and sampling it yet more challenging. For both \(A\) values, we performed a series of independent MC calculations with \(10^{7}\) blocks, each consisting of \(10^{3}\) individual moves. We compared the performance of ensembles with \(N_{W}=\{3,4,5,6,8,10,20\}\) walkers for the following four methods: For the affine method, we compared the \(a\) values \(\{1.2,1.5,2.0,2.5\}\); for the quadratic MC with linear and Gaussian \(t\) sampling, we considered \(a=\{0.3,0.5,0.7,1.0,1.2,1.5,2.0\}\) respectively. For the modified walk moves, we studied the combined ranges of \(a=\{0.1,0.3,0.5,1.0,1.2,1.5,2.0,3.0\}\) and \(N_{S}=\{4,5,6,10\}\) under the condition \(N_{S}<N_{W}\).

The results are summarized in Fig. 7 where we plot the autocorrelation time, \(\tau\), and the error bar, \(\sigma\), that we computed with the blocking method. Both were derived from the average energy that we computed for every one of the last \(80\%\) of the \(10^{7}\) blocks. We define an energy for the Rosenbrock density, \(E(x_{1},x_{2})=-\ln\pi(x_{1},x_{2})\), using the analogy between Eq. 23 and the Boltzmann factor with \(k_{B}T=1\) (see the short sketch below). Despite considerable noise in Fig. 7, one can identify the expected scaling of \(\tau\sim\sigma^{2}\) between the autocorrelation time, \(\tau\), and the computed error bar, \(\sigma\). An optimal algorithm would make both as small as possible. As expected, both values increase considerably for all algorithms if one raises \(A\) from \(100\) to \(10000\) because it narrows the channel of the Rosenbrock density, which makes sampling it yet more difficult. We find the affine method yields the largest energy error bars among all methods regardless of which \(a\) value is employed.
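For completeness, a minimal sketch of the Rosenbrock log-density of Eq. 23 and the associated energy \(E=-\ln\pi\) is given below (function names are illustrative; the additive normalization constant is dropped).

```python
import numpy as np

def rosenbrock_logpi(x1, x2, A=100.0, B=5.0):
    """log of the Rosenbrock density of Eq. 23, up to an additive constant."""
    return -(A * (x2 - x1**2)**2 + (1.0 - x1)**2) / B

def rosenbrock_energy(x1, x2, A=100.0, B=5.0):
    """Energy E(x1, x2) = -ln pi(x1, x2) used for the autocorrelation analysis."""
    return -rosenbrock_logpi(x1, x2, A, B)

# The channel minimum lies at (1, 1), where the energy vanishes:
print(rosenbrock_energy(1.0, 1.0), rosenbrock_energy(0.0, 1.0, A=10000.0))
```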
The performance of the modified walk method strongly depends on the choice of \(a\), which renders our modification in Eq. 15 important. For the sampling of the Rosenbrock density, we find that \(a\) values larger than 1 perform best, even though they yield a rather low acceptance ratio of only \(2{\times}10^{-3}\) as the lowest panel of Fig. 7 illustrates. However, if \(a\) is chosen too large, the acceptance ratio decreases below \(10^{-3}\) and the autocorrelation time increases because too few of the large steps get accepted. We found that the quadratic MC method samples the Rosenbrock density most efficiently. For \(A=10000\), the shortest autocorrelation times were approximately four times shorter than the best results that we obtained with the modified walk method. In Fig. 7, we highlighted some of the most favorable results that were obtained with Gaussian \(t\) sampling for \(a=1.5\) and linear \(t\) sampling for \(a=2.0\). The acceptance ratios were again rather low and ranged from \(0.01\) to \(0.03\) only. This means that for challenging sampling problems like the Rosenbrock density, one may want to invest in determining an optimal or at least a reasonable choice for the scaling parameter \(a\).

### Predictions for Jupiter's Interior

We applied our QMC algorithm to generate three different ensembles of interior models under the assumptions in Sec. 2.4. The resulting posterior distributions are shown in Figs. 8 and 9 while averages and standard deviations of different parameters are given in Tab. 1. The three ensembles are:

1. This is our reference ensemble of the five-layer models from Militzer et al. (2022).
2. We increased the interior entropy by increasing the temperature at 1 bar from the _Galileo_ measurement of 166.1 K to 170 K, which reduces the density of H-He mixtures in the molecular layer. At the lowest pressures, where the H-He mixture behaves like an ideal gas, this translates into a density reduction of \(2.3\%\). At higher pressure, the reduction is smaller because the system is more electronically degenerate.
3. Finally we made a change in our equation of state of H-He mixtures and reduced the density by \(3\%\) in the region from \(P^{*}=10\) to \(100\) GPa but employed a 1 bar temperature of 166.1 K.

Figure 7: Energy autocorrelation time, energy error bar and acceptance ratios derived for the Rosenbrock density being sampled for \(A=100\) and \(10000\) with the affine method, the quadratic MC method with linear (QMC L) and Gaussian (QMC G) \(t\) sampling as well as with the modified walk method. The symbols distinguish results that were obtained with different \(a\) parameters.

Most notably, these two density changes increase the amount of heavy elements, \(Z_{1}\), but they also introduce additional flexibility into our models and thereby widen the allowed region of other model parameters, as the larger standard deviations in Tab. 1 confirm. One finds that an increase of the 1 bar temperature from 166.1 to 170 K leads to a modest increase in \(Z_{1}\) from \(\sim\)1.6% to \(\sim\)2.0% while the 3% density reduction leads to a much larger increase of \(Z_{1}\) to \(\sim\)3.3%, effectively doubling the amount. We find increases of similar magnitude for the heavy element abundance of the dilute core region, \(Z_{2}\), from 18.3% to 19.5% to 20.6% when the three ensembles are compared. Conversely, the ending pressure for the helium rain layer, \(P_{\rm rain,2}\), decreases from \(\sim\)445 GPa in our reference ensemble to \(\sim\)315 GPa in the other two ensembles.
Fig. 8 shows that \(Z_{1}\) is positively correlated with \(P_{\rm rain,2}\) because an increase in \(P_{\rm rain,2}\) means helium is sequestered to deeper layers and the resulting density reduction over the 100-300 GPa pressure interval is compensated by a modest increase in \(Z_{1}\). In comparison, the correlation between \(Z_{1}\) and the starting pressure of the helium rain layer, \(P_{\rm rain,1}\), is rather weak because typical values for the helium rain exponent are \(\alpha\gtrsim 3\) so that the helium concentration does not vary much near \(P_{\rm rain,1}\). This is also the reason why \(P_{\rm rain,1}\) does not strongly correlate with other model parameters. Fig. 8 further shows that, within a given ensemble, \(Z_{1}\) does not correlate strongly with \(Z_{2}\) nor with the core pressures, \(P_{\rm core,1}\) and \(P_{\rm core,2}\). Still, \(Z_{1}\) is positively correlated with the magnitudes of \(|J_{4}|\) and \(|J_{6}|\) while there is no apparent correlation with \(|J_{8}|\). \(Z_{2}\) and \(P_{\rm core,1}\) correlate in the same way with these three gravity coefficients but the strengths of their correlations are much higher. The extended dilute core is the main feature of our five layer models that enables us to fit \(J_{4}\) and \(J_{6}\) by distributing heavy elements over a wider range of radii than was possible with a compact core assumption. So one expects strong correlations of \(J_{4}\) and \(J_{6}\) with \(Z_{2}\) and \(P_{\rm core,1}\) that control the heavy element distribution in the core region. \(Z_{2}\) positively correlates with \(P_{\rm core,2}\) because an increase of \(P_{\rm core,2}\) effectively shrinks the size of the dilute core, which is then compensated by an increase in \(Z_{2}\). As expected, one finds that \(P_{\rm core,1}\) and \(P_{\rm core,2}\) are negatively correlated so that the combined mass of heavy elements in the core and core transition layer is kept approximately constant.

In the bottom row of Fig. 8, we compare the _Juno_ measurements of the gravity harmonics \(J_{4}\)-\(J_{8}\) with the histograms of the computed ensembles. \(J_{4}\) is well matched by all three ensembles, which is a consequence of adopting a dilute core. Matching \(J_{6}\) is still not straightforward. Models that reduce the density by 3% are symmetrically distributed around the measured \(J_{6}\) value. There is also good overlap with models that adopted a 1 bar temperature of 170 K. Most models with a 1 bar temperature of 166.1 K exhibit a larger \(J_{6}\) value than was measured.
\begin{table}
\begin{tabular}{c c c c}
\hline \hline
Parameter & Reference & \(T_{\rm 1bar}=170\,\)K & 3\% density \\
 & ensemble & ensemble & reduction \\
(1) & (2) & (3) & (4) \\
\hline
\(Z_{1}\) [\%] & 1.56 \(\pm\) 0.05 & 2.03 \(\pm\) 0.06 & 3.27 \(\pm\) 0.04 \\
\(P_{\rm rain,1}\) [GPa] & 98 \(\pm\) 16 & 107 \(\pm\) 15 & 95 \(\pm\) 11 \\
\(P_{\rm rain,2}\) [GPa] & 445 \(\pm\) 19 & 314 \(\pm\) 19 & 315 \(\pm\) 13 \\
\hline
\(Z_{2}\) [\%] & 18.3 \(\pm\) 0.3 & 19.5 \(\pm\) 0.3 & 20.6 \(\pm\) 0.3 \\
\(P_{\rm core,1}\) [GPa] & 786 \(\pm\) 38 & 979 \(\pm\) 44 & 1389 \(\pm\) 48 \\
\(P_{\rm core,2}\) [GPa] & 2054 \(\pm\) 106 & 1946 \(\pm\) 96 & 1811 \(\pm\) 63 \\
\hline
\(M_{\rm Z,total}\) [\(M_{E}\)] & 25.08 \(\pm\) 0.06 & 25.92 \(\pm\) 0.06 & 26.90 \(\pm\) 0.05 \\
\hline
\(M_{\rm core}\) [\(M_{J}\)] & 0.20 \(\pm\) 0.02 & 0.22 \(\pm\) 0.02 & 0.25 \(\pm\) 0.01 \\
\(M_{\rm trans}\) [\(M_{J}\)] & 0.34 \(\pm\) 0.03 & 0.25 \(\pm\) 0.03 & 0.10 \(\pm\) 0.02 \\
\(M_{\rm env}\) [\(M_{J}\)] & 0.49 \(\pm\) 0.02 & 0.53 \(\pm\) 0.02 & 0.65 \(\pm\) 0.01 \\
\hline
\end{tabular}
\end{table}

Table 1: Ensemble averages and standard deviations of different interior model parameters. Machine-readable data files for a representative model of each ensemble are included in the supplemental material.

Still, as we have shown in Militzer et al. (2022), there are models in the 166.1 K ensemble that match \(J_{6}\) exactly but there are also many others that yield higher values. In comparison, matching \(J_{8}\) poses no challenge.

In Fig. 9, we investigate correlations between the heavy element fraction in the core, \(Z_{2}\), the planet's total budget of heavy elements, and the masses of the three layers. (Combined they match the planet's total mass, \(M_{J}\).) When we increase the 1 bar temperature (or lower the density by 3% in the 10-100 GPa region), the total amount of heavy elements increases modestly from 25 to 26 (or 27) Earth masses. This is a modest increase compared to the 8-39 Earth mass range that Saumon & Guillot (2004) had obtained by considering a plethora of tabulated EOS models for hydrogen. Our heavy element abundances are a bit lower than the 28-32 Earth mass range that Nettelmann et al. (2012) obtained because a compact core and a higher interior temperature profile were assumed there.

Figure 8: Posterior distribution of the three different QMC ensembles: 1) The red circles represent our reference ensemble of five layer models with a dilute core. 2) We made Jupiter’s interior slightly hotter by increasing the temperature at 1 bar from the _Galileo_ measurement of 166.1 to 170 K (blue symbols). 3) We reduced the density of the H-He mixture by 3% over the pressure interval from 10 to 100 GPa (orange circles). \(Z_{1}\) and \(Z_{2}\) are mass fractions of heavy elements. The four pressure values are given in units of GPa. The values of the gravity harmonics, \(J_{4}\ldots J_{8}\), have all been multiplied by \(10^{6}\). The vertical dashed lines indicate the _Juno_ gravity measurements.

Figure 9: Posterior distribution for the three different QMC ensembles that we show in Fig. 8. Here we plot correlations between the heavy element fraction in the core, \(Z_{2}\), the total amount of heavy elements in the planet, \(M_{\rm Z,total}\), and the masses of the core, the core transition layer and the envelope. (The values of the three masses were divided by the planet’s total mass so that they add up to 1.) The probability distributions of all variables are shown in the bottom panels on a logarithmic scale.
When we switch between our three ensembles from 1 to 2 (or to 3), the mass of the dilute core increases from 0.20 to 0.22 (or to 0.25) \(M_{J}\). The mass of the core transition layer shrinks drastically from 0.34 to 0.25 (or to 0.10) \(M_{J}\) as \(P_{\rm core,1}\) increases and \(P_{\rm core,2}\) decreases. By definition, a rise in \(P_{\rm core,1}\) also increases the mass of the envelope (which includes the molecular, helium rain, and metallic hydrogen layers) from 0.49 to 0.53 (or to 0.65) \(M_{J}\). Switching from ensemble 1 to 2 or 3 leads to a reduction in the size of the dilute core because one lowers the density in the outer region of the planet. This is consistent with earlier modeling work that predicted small or negative heavy element abundances (Hubbard & Militzer, 2016) because no dilute core was considered. Fig. 9 illustrates that, within each ensemble, the mass of the transition layer negatively correlates with the masses of the core and the envelope. It provides a way to match the total mass of the planet. More surprising is, however, that the masses of the core and the envelope are positively correlated. This is consistent with the trend one sees in the first column of Fig. 9. When \(Z_{2}\) increases within a particular ensemble, the core mass drops, and the envelope mass increases slightly while the mass of the transition layer remains approximately unchanged.

### Equation of State Perturbations

Equations of state of materials at high pressure have been studied with laboratory measurements (Brygoo et al., 2015) and _ab initio_ computer simulations (Militzer, 2009; Hu et al., 2011; McMahon et al., 2012; Militzer et al., 2021). At the same time, it has been a major challenge to match Jupiter's \(J_{4}\) and \(J_{6}\) with interior models that rely on a physical equation of state for H-He mixtures and, for the molecular envelope, yield at least a protosolar abundance of heavy elements of \(Z_{\rm protosolar}=1.53\%\) according to Lodders (2010), who derived the present-day solar abundances in Tab. 2 by combining spectroscopic measurements of the solar photosphere with laboratory measurements of CI chondrite meteorites. Over time, heavy elements diffuse slowly towards a star's interior because of gravitational forces. Lodders (2010) represents this process by applying a uniform factor of \(10^{0.053}\) to obtain the protosolar from the solar abundances. Most of the heavy element mass comes from just 7 elements that are listed in Tab. 2. In Jupiter's atmosphere, the noble gas neon has been measured to be nine-fold depleted (Mahaffy et al., 2000) compared to the protosolar abundance. It is assumed that neon partitions strongly into the helium droplets when hydrogen and helium phase separate at megabar pressures (Roulston & Stevenson, 1995; Wilson & Militzer, 2010).

\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
 & Present-day & Inferred & 3-fold proto- & 4-fold CO & Galileo \\
Element & solar & protosolar & solar model & model for & entry \\
 & abundances & abundances & for Jupiter & Jupiter & probe \\
\hline
O & 0.63 & 0.71 & 2.13 & 1.50 & 0.29 \\
C & 0.22 & 0.25 & 0.75 & 1.00 & 1.06 \\
\hline
Ne & 0.17 & 0.19 & 0.02 & 0.02 & 0 \\
\hline
Fe & 0.12 & 0.14 & 0.41 & 0.14 & 0 \\
N & 0.07 & 0.08 & 0.24 & 0.08 & 0.35 \\
Si & 0.07 & 0.08 & 0.24 & 0.08 & 0 \\
Mg & 0.06 & 0.07 & 0.20 & 0.07 & 0 \\
Others & 0.07 & 0.08 & 0.24 & 0.08 & 0 \\
\hline
Total & 1.41 & 1.53 & 4.2 & 2.9 & 1.7 \\
\hline
\end{tabular}
Note. – Some rounding errors are to be expected.
\end{table}

Table 2: Mass fractions in % of different heavy elements according to measurements and various models. The second and third columns list the solar and protosolar abundances from Lodders (2010). Columns four and five show two compositional models for Jupiter. The last column lists one possible interpretation (Hubbard & Militzer, 2016) of the measurements of the _Galileo_ entry probe (Wong et al., 2004).

While the helium depletion is important for interior models, neon only contributes 11% to the solar heavy element budget. While there is significant uncertainty in the data that have been obtained for heavy element abundances in Jupiter's atmosphere, one can make a number of plausible assumptions and then compare them with the predictions from interior models (Nettelmann et al., 2012). Here we compare the predictions from our interior models with three abundance models in Tab. 2:

(1) First, one can assume all heavy elements are uniformly enriched to their 3-fold protosolar abundance (Owen et al., 1999) while neon has been 9-fold depleted. This yields \(Z^{\rm(3fold)}\approx 4.2\%\). This assumes the measured enrichment of carbon, nitrogen, and sulfur applies to all heavy elements even though their respective condensation temperatures are very different, which may pose a challenge if one assumes they were delivered along with solid planetesimals. On the other hand, the near uniform enrichment of the noble gases suggests that direct capture of nebula gas may have played a role (Lodders, 2004). Laboratory condensation experiments (Notesco et al., 2003) showed that the preferred way to condense noble gases is to trap them in amorphous ice (Bar-Nun et al., 2007) but these measurements also demonstrated that the corresponding trapping rates are nonuniform.

(2) It has also been proposed that oxygen and carbon atoms were delivered in equal numbers in the form of carbon monoxide (Helled and Lunine, 2014). If one assumes the measured 4 times protosolar abundance of carbon reflects this delivery process, we obtain \(Z^{\rm(CO)}\approx 2.9\%\) while we have included all other elements, except neon, in protosolar proportions.

(3) Finally, we can take the measurements of the _Galileo_ entry probe with its subsolar water abundance at face value, \(Z^{\rm(Gal)}\approx 1.7\%\). While one expects Jupiter's oxygen abundance to be at least solar, subsolar abundances cannot be ruled out if Jupiter formed inside the ice line in a region that was starved of icy planetesimals (Lodders, 2004). Recently, Cavalie et al. (2023) predicted a subsolar oxygen abundance for Jupiter's interior based on thermochemical models for the atmosphere.

In Fig. 2, we studied how the heavy element abundance in the atmosphere is affected by an EOS change. We lowered the H-He density from Militzer and Hubbard (2013) by 3% over a pressure interval from \(P^{*}\) to \(10\times P^{*}\). The strongest response is found for a \(P^{*}\) range from 0.1 to 3 Mbar, which represents density reductions over a broad range of pressures (0.1 to 30 Mbar) and includes the transition from molecular to metallic hydrogen. The resulting models can accommodate more than double the protosolar abundances in the upper layer. Such an EOS correction can accommodate the \(Z\) abundances of our CO model and get fairly close to matching \(Z^{\rm(3fold)}\). Figure 2 also shows that the inferred \(Z_{1}\) value is rather insensitive to the density change above 3 Mbar where the helium rain layer has ended in most models.
This pressure range is also relatively close to the onset of the dilute core, so any change in the H-He EOS may be compensated by a change in the heavy element abundance \(Z\) in the core region. This flexibility, together with the fact that we need the density to _increase_ in this pressure interval to match \(J_{4}\) and \(J_{6}\) with a dilute core, explains why \(Z_{1}\) is rather insensitive to a density correction at such high pressures. In Fig. 2, the vertical lines A and B mark the pressures where the density of the SC EOS deviates from that of an ideal gas by respectively 1% and 10% because of interaction effects. A density reduction in this region leads to only a modest increase in \(Z_{1}\) because this region does not contain a large fraction of the planet's mass. This leaves the B-to-C region (1-50 kbar). A density reduction by 3% there increases \(Z_{1}\) to up to 1.75 times the protosolar value. This is surprising because this region has not yet been studied in sufficient detail. We are still relying on the SC EOS because the existing density functional molecular dynamics simulations are not applicable in this region for two reasons. First, the simulation cells become very large, which makes the expansion of the electronic orbitals in plane waves very expensive. Second, hydrogen molecules and helium atoms do not collide very often, which makes it very difficult to establish a thermodynamic equilibrium within the picosecond time scale of a typical simulation. Still, Fig. 2 underlines that this region should be carefully investigated with theoretical and experimental methods because the predicted \(Z_{1}\) is surprisingly sensitive to the EOS in this pressure region.

## 4 Conclusions

We introduced a novel quadratic Monte Carlo method that performs significantly better in confined geometries than the earlier affine (linear) Monte Carlo method by Goodman and Weare (2010). Both methods rely on an ensemble of walkers and can adapt to different geometries of the fitness landscape without manual intervention to guide or improve the Monte Carlo sampling. There are a number of reasons why one might want to switch to our quadratic Monte Carlo method. For a ring potential, we show that our quadratic Monte Carlo algorithm yields error bars that are half as large as those of the affine method, which implies that only one quarter of the computer time is needed to achieve comparable results. Also, our QMC method takes half as long to travel to the most relevant region of parameter space. The discrepancy in efficiency remains present even after the two adjustable sampling parameters, the number of walkers in the ensemble, \(N_{W}\), and the stretch factor, \(a\), have been optimized for both methods. We recommend setting \(N_{W}\) between \(2N+1\) and \(3N+1\) with \(N\) being the dimensionality of the search space. We found that choosing \(N_{W}\) much larger increases the time it takes the ensemble to travel from unfavorable to favorable regions of the parameter space. Our QMC method is general and very simple to implement into any existing MC code. It requires only a few lines of code that we have made available online along with examples (Militzer, 2023). At the same time, all applications are different and it remains to be seen whether the improvements that we report here for the ring potential and Rosenbrock density carry over to other applications.

We also modified the _walk_ moves that Goodman and Weare (2010) had presented as an alternative to the affine invariant moves.
We introduce a new scaling factor, \(a\), that enables us to make smaller (or larger) steps in situations where the covariance of the instantaneous walker distribution is not an optimal representation of the local structure of the sampling function. We showed that this factor improves the sampling efficiency for the Rosenbrock density. Given the curvature of its fitness landscape, sampling this density is particularly challenging for the affine method. The autocorrelation time of our quadratic Monte Carlo method is two orders of magnitude shorter.

We applied our quadratic Monte Carlo method to construct five layer models of Jupiter's interior that match data from the _Juno_ and _Galileo_ space missions under one set of physical assumptions. Assuming a dilute core that extends to \(\sim\)60% of the planet's radius enables us to match the gravity field as measured by the _Juno_ spacecraft while assuming the helium abundance and 1 bar temperature from the _Galileo_ entry probe. Constructing models with a 3-fold enrichment of heavy elements in the planet's atmosphere remains a challenge unless one invokes an _ad hoc_ decrease in the density of the hydrogen-helium mixture in the pressure range from 0.1 to 3 megabar where the model predictions are found to be fairly sensitive. This provides a motivation to revisit the accuracy of the equations of state of hydrogen and helium with novel experimental and theoretical methods in this pressure range. On the other hand, an increase of the 1 bar temperature from 166.1 to 170 K as recently suggested by Gupta et al. (2022) yields only a modest increase in the inferred heavy element abundances.

This work was supported by NASA mission _Juno_ and by the National Science Foundation's Center for Matter at Atomic Pressures.

## Appendix A Proof of Detailed Balance

Assuming ergodicity, Monte Carlo simulations are guaranteed to sample the function, \(\pi(\vec{r})\), in the limit of large step numbers if the condition of detailed balance is satisfied (see for example Ceperley (1995)). This condition is often formulated for transitions between two individual states \(\vec{r}\) and \(\vec{r}^{\prime}\), \[\pi(\vec{r})P(\vec{r}\rightarrow\vec{r}^{\prime})=\pi(\vec{r}^{\prime})P(\vec{r}^{\prime}\rightarrow\vec{r})\] (A1) but here we follow the work by Green and Mira (2001) who formulated a generalized condition for detailed balance, \[\int\pi(d\vec{r})P(\vec{r}\to d\vec{r}^{\prime})=\int\pi(d\vec{r}^{\prime})P(\vec{r}^{\prime}\to d\vec{r})\] (A2) where one integrates over states \((\vec{r},\vec{r}^{\prime})\in A\times B\) that have been drawn from Borel sets \(A\) and \(B\), which will be \(\mathcal{R}^{N}\) for our purposes. The notation \(\int\pi(d\vec{r})\) refers to the integral, \[\int\ldots\pi(d\vec{r})\equiv\int\ldots p(\vec{r})d\vec{r}\quad,\] (A3) where \(p(\vec{r})\) is the normalized probability density for the unnormalized distribution function \(\pi(\vec{r})\) (see for example C. J. Geyer (1995)). Green & Mira (2001) showed that the acceptance probability for a move from \(\vec{r}\) to \(\vec{r}^{\prime}\) is given by, \[A(\vec{r}\to\vec{r}^{\prime})=\min\left\{1,\frac{\pi(\vec{r}^{\prime})T^{\prime}(\vec{\lambda}^{\prime})}{\pi(\vec{r})T(\vec{\lambda})}\left|\frac{\partial(\vec{r}^{\prime},\vec{\lambda}^{\prime})}{\partial(\vec{r},\vec{\lambda})}\right|\right\}\quad,\] (A4) where a vector, \(\vec{\lambda}\), of \(m\) random numbers was drawn from a density, \(T\), to generate the new state \(\vec{r}^{\prime}\) from \(\vec{r}\).
Similarly, \(\vec{\lambda}^{\prime}\) refers to the \(m\) random numbers that are required to generate the reverse move from \(\vec{r}^{\prime}\) back to \(\vec{r}\). In this article, we always use the same functions for both directions, \(T=T^{\prime}\). The last factor in Eq. A4 refers to the absolute value of the Jacobian determinant for the transformation from \((\vec{r},\vec{\lambda})\) to \((\vec{r}^{\prime},\vec{\lambda}^{\prime})\) in the product space of states and random numbers. This term leads to the factors \(\lambda^{\alpha}\) in Eq. 12 and \(\left|w_{i}\right|^{N}\) in Eq. 6, as we will now show. For the affine invariant moves, Eq. 7 employs a single random number, \(\lambda\), to move from \(\vec{r}_{i}\) to \(\vec{r}_{i}^{\prime}\). For the reverse move, one needs to set \[\lambda^{\prime}=\frac{1}{\lambda}\quad.\] (A5) For the uniform \(\lambda\) sampling, one finds \(T_{2}(\lambda^{\prime})/T_{2}(\lambda)=1\), but for the sampling function \(T_{1}\) in Eq. 9, one derives the factor \[T_{1}(\lambda^{\prime})/T_{1}(\lambda)=\lambda\quad.\] (A6) To derive the Jacobian determinant, we introduce \(r_{ia}\) and \(r_{ib}^{\prime}\) to label the \(N\) individual elements of the state vectors \(\vec{r}_{i}\) and \(\vec{r}_{i}^{\prime}\). From Eqs. 7 and A5, one finds, \[\frac{\partial r_{ib}^{\prime}}{\partial r_{ia}}=\lambda\delta_{ab}\quad,\quad \frac{\partial\lambda^{\prime}}{\partial\lambda}=\frac{-1}{\lambda^{2}}\quad, \quad\frac{\partial\lambda^{\prime}}{\partial r_{ia}}=0\quad\text{and}\quad \frac{\partial r_{ib}^{\prime}}{\partial\lambda}=r_{ib}-r_{jb}\quad.\] (A7) So the absolute value of the Jacobian determinant becomes \(\lambda^{N-2}\), which explains why one needs to set \(\alpha=N-2\) for the sampling function \(T_{2}\). Because of Eq. A6, one needs to set \(\alpha=N-1\) for the sampling function \(T_{1}\). We now use the same approach to derive the factor \(\left|w_{i}\right|^{N}\) in Eq. 6 that specifies the acceptance ratio for a move from \(\vec{r}_{i}\) to \(\vec{r}_{i}^{\prime}\) according to Eq. 1.
The forward move requires two independent random numbers, \(\vec{\lambda}=(t_{i},t_{i}^{\prime})\), while their roles are interchanged for the reverse move, \(\vec{\lambda}^{\prime}=(t_{i}^{\prime},t_{i})\), which implies \[\frac{\partial\lambda_{1}^{\prime}}{\partial\lambda_{1}}=0\quad,\quad\frac{\partial\lambda_{2}^{\prime}}{\partial\lambda_{2}}=0\quad,\quad\frac{\partial\lambda_{1}^{\prime}}{\partial\lambda_{2}}=1\quad\text{and}\quad\frac{\partial\lambda_{2}^{\prime}}{\partial\lambda_{1}}=1\quad.\] (A8) The Jacobian becomes a \((N+2,N+2)\) matrix: \[J=\frac{\partial(\vec{r}_{i}^{\prime},\lambda_{1}^{\prime},\lambda_{2}^{\prime})}{\partial(\vec{r}_{i},\lambda_{1},\lambda_{2})}=\left(\begin{array}{ccc}\frac{\partial r_{ib}^{\prime}}{\partial r_{ia}}=w_{i}\delta_{ab}&\frac{\partial r_{ib}^{\prime}}{\partial\lambda_{1}}&\frac{\partial r_{ib}^{\prime}}{\partial\lambda_{2}}\\ \frac{\partial\lambda_{1}^{\prime}}{\partial r_{ia}}=\frac{\partial\lambda_{2}}{\partial r_{ia}}&\frac{\partial\lambda_{1}^{\prime}}{\partial\lambda_{1}}=0&\frac{\partial\lambda_{1}^{\prime}}{\partial\lambda_{2}}=1\\ \frac{\partial\lambda_{2}^{\prime}}{\partial r_{ia}}=\frac{\partial\lambda_{1}}{\partial r_{ia}}&\frac{\partial\lambda_{2}^{\prime}}{\partial\lambda_{1}}=1&\frac{\partial\lambda_{2}^{\prime}}{\partial\lambda_{2}}=0\end{array}\right)\quad,\] (A9) and its determinant is given by a sum over permutations, \(\sigma_{k}\), \[\left|J\right|=\sum_{\sigma_{1}\cdots\sigma_{N}}\prod_{k=1}^{N}\frac{\partial r_{i,\sigma_{k}}^{\prime}}{\partial\lambda_{2}}\frac{\partial\lambda_{2}}{\partial r_{i,k}}+\prod_{k=1}^{N}\frac{\partial r_{i,\sigma_{k}}^{\prime}}{\partial\lambda_{1}}\frac{\partial\lambda_{1}}{\partial r_{i,k}}-\prod_{k=1}^{N}w_{i}\delta_{k,\sigma_{k}}=\sum_{\sigma_{1}\cdots\sigma_{N}}\prod_{k=1}^{N}w_{i}\delta_{k,\sigma_{k}}=w_{i}^{N}\quad,\] (A10) which explains the factor in Eq. 6.
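As a quick numerical cross-check of the affine-move result above, the short Python sketch below assembles the \((N+1)\times(N+1)\) Jacobian from the partial derivatives in Eq. A7 and confirms that its determinant has absolute value \(\lambda^{N-2}\); the dimension, walker positions, and value of \(\lambda\) are arbitrary test inputs.

```python
import numpy as np

# Check |det J| = lambda**(N-2) for the affine stretch move (Eqs. A5, A7).
rng = np.random.default_rng(0)
N = 5                       # dimensionality of the search space (test value)
lam = 1.7                   # stretch parameter lambda (test value)
r_i = rng.normal(size=N)    # walker being moved
r_j = rng.normal(size=N)    # helper walker

# Jacobian of (r_i, lambda) -> (r_i', lambda') with r_i' = r_j + lam*(r_i - r_j)
# and lambda' = 1/lambda:
J = np.zeros((N + 1, N + 1))
J[:N, :N] = lam * np.eye(N)       # d r'_ib / d r_ia = lambda * delta_ab
J[:N, N] = r_i - r_j              # d r'_ib / d lambda
J[N, :N] = 0.0                    # d lambda' / d r_ia = 0
J[N, N] = -1.0 / lam**2           # d lambda' / d lambda

print(abs(np.linalg.det(J)), lam ** (N - 2))   # both print ~4.913
```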
2301.09339
Computer Vision for a Camel-Vehicle Collision Mitigation System
As the population grows and more land is being used for urbanization, ecosystems are disrupted by our roads and cars. This expansion of infrastructure cuts through wildlife territories, leading to many instances of Wildlife-Vehicle Collision (WVC). These instances of WVC are a global issue that is having a global socio-economic impact, resulting in billions of dollars in property damage and, at times, fatalities for vehicle occupants. In Saudi Arabia, this issue is similar, with instances of Camel-Vehicle Collision (CVC) being particularly deadly due to the large size of camels, which results in a 25% fatality rate [4]. The focus of this work is to test different object detection models on the task of detecting camels on the road. The Deep Learning (DL) object detection models used in the experiments are: CenterNet, EfficientDet, Faster R-CNN, and SSD. Results of the experiments show that CenterNet performed the best in terms of accuracy and was the most efficient in training. In the future, the plan is to expand on this work by developing a system to make countryside roads safer.
Khalid Alnujaidi, Ghadah Alhabib
2023-01-23T09:45:31Z
http://arxiv.org/abs/2301.09339v1
# Computer Vision for a Camel-Vehicle Collision Mitigation System ###### Abstract As the population grows and more land is being used for urbanization, ecosystems are disrupted by our roads and cars. This expansion of infrastructure cuts through wildlife territories, leading to many instances of Wildlife-Vehicle Collision (WVC). These instances of WVC are a global issue that is having a global socio-economic impact, resulting in billions of dollars in property damage and, at times, fatalities for vehicle occupants. In Saudi Arabia, this issue is similar, with instances of Camel-Vehicle Collision (CVC) being particularly deadly due to the large size of camels, which results in a 25% fatality rate [4]. The focus of this work is to test different object detection models on the task of detecting camels on the road. The Deep Learning (DL) object detection models used in the experiments are: CenterNet, EfficientDet, Faster R-CNN, and SSD. Results of the experiments show that CenterNet performed the best in terms of accuracy and was the most efficient in training. In the future, the plan is to expand on this work by developing a system to make countryside roads safer. Wildlife-Vehicle Collision, Camel-Vehicle Collision, Deep Learning, Object Detection, Computer Vision. ## 1 Introduction Wildlife-vehicle collision is a global issue. This issue presents itself in a similar manner through the involvement of different species throughout the continents around the world. In North America, as well as some parts of Europe, it is deer that are the main cause of wildlife-related traffic accidents; kangaroos in Australia; and camels in the Middle East and North African regions [1]. This global issue of Wildlife-Vehicle Collision (WVC) has been continuously on the rise throughout the past century, this being a consequence of the human population increase, urbanization of the countryside, and the pavement of new roads and highways. This issue is expected to continue to grow as fast as the human population continues to grow. Occurrences of WVCs result in a wide range of losses such as property damage, disturbance of the ecosystem, and morbidity and mortality for those involved. It is recorded that in the United States, within a year, a total of 247,000 WVCs occurred involving deer, resulting in 200 human fatalities and $1.1bn in property damage [2]. Deer are on the smaller side of wild animals; it is with larger animals that harsher consequences arise. Moose, being larger animals, lead to more fatal outcomes for the occupants of the vehicles. In Sweden, 4092 WVCs involving moose occurred in a year, resulting in around a 5% fatality rate, accompanied by other serious permanent injuries and a large amount of property damage [3]. A larger and one of the most fatal animals to be involved in WVCs is the camel. Camel-Vehicle Collision (CVC) is considered extremely fatal due to the physical nature of the animal, as in almost all cases the animal tends to fall through the windshield of the colliding vehicle. CVCs result in as high as a 25% fatality rate [4]. 22,897 camel-related accidents have been recorded over the years 2015-2018 [5]. It is unfortunate, however, that detailed data on CVC regarding the frequency of occurrences, location, and extent of property damage is not readily available.
However, a simple web search on the topic always results in relatively recent news headlines with a new occurrence of CVC, along with the graphic details and imagery associated with these accidents. There has been, and continues to be, effort put into deploying countermeasures to reduce WVC. The most commonly deployed tactics currently in place are conventional ones such as fencing and reflective warning signs. As effective as they may have been, signs can go unnoticed by drivers, and animals have found ways through placed fences [6]. These methods require significant funds and labour to set up and maintain. Therefore, it is necessary to consider developing smarter, technologically advanced, and autonomous methods to act as countermeasures to WVC. Just as it has become very common around the world to use sensors and computer vision technologies to assist in the enforcement of traffic violations, the same can be done to mitigate WVCs and, as the focus of this work, CVCs. The focus of this work will be to evaluate different state-of-the-art object detection algorithms to serve as a base for a CVC avoidance system. The vision of the work is to further develop an autonomous mechanism that makes countryside roads safer. As has been covered so far in the introduction, the issue of animals colliding with vehicles is a global issue. There are solutions, though they can be very costly and can be improved upon to utilize newer technologies to achieve better results. In the next section, there will be a review of literature related to solutions for both global WVCs and local CVCs. Furthermore, a section will be dedicated to the discussion of our proposed system and the methodologies used. Lastly, there will be a section to review the results, summarize the work, and express the vision for future development. ## 2 Related Work In [7], the researchers propose an IoT solution for a Camel-Vehicle Collision avoidance system. Their solution consists of two parts: a detection system and an alarm system. The detection system is based on an Omni-directional radar that is responsible for detecting movement and uploading it to the cloud to be analysed. The alarm system reads the data from the cloud and proceeds to turn on/off the alarm signs and horns. In addition, they propose installing a wireless chip that controls nearby vehicles to slow down their speed. This wireless chip is controlled through the alarm system. In [8], the authors propose a camel crossing alert and tracking system. The solution they provide is meant to allow camel owners to track their cattle, as well as provide a warning to drivers when the camels approach the road. The system is supposed to be able to detect within a range of 18 km. This is made possible through the tracking of the geolocation of the camels using LoRaWAN, with collars on the camels' necks that enable a connection to control units spread out on the roads. The alarm system consists of flashing road signs. In [9], a similar LoRaWAN and GPS system is proposed to combat the issue of camel-vehicle collisions. The method they present is to implant LoRa sensors in the skin of the animals. These sensors are connected to nodes and sub-nodes that create different caution zones along the roads. Once a camel is detected by a node, a signal will be sent to the base station. Each of the nodes is equipped with a GPS system.
The alarm system is composed of a mobile phone application that is connected to the base station and sends a message with caution sounds warning drivers about the location where the camels may cross. The authors in [10] propose a warning system designed for camel-vehicle collision mitigation. The solution they provide consists of night vision cameras installed on several different vehicles. Upon detection with the cameras, a system sends a message containing the geolocation through the use of a cellular network. From the messages, a heatmap is derived showing the probability of camel distribution in the area. In addition, the movement of the camels and where they will be is predicted through the use of a hidden Markov model. The alarm system proposed is a message and alarm based on an app installed and linked to both the camera and central base system. In [11], a review of several methods used for the purpose of mitigating animal-vehicle collisions is conducted. The researcher goes over statistics of the camels roaming in the Gulf region and discusses what methods have been proposed to mitigate camel-vehicle collisions and their efficiencies. It presents the fact that the most practical and effective method currently used is fencing the roads. The authors discuss the harm of closing off animals from crossing the road. Therefore, they propose a method that improves upon fencing by adding regions to the road that work as automated gates for camels to pass. This is done by attaching a radio collar to the camels. The alarm system is based on a signal being sent to a nearby cellular tower that broadcasts an SMS message to nearby drivers, in addition to warning lights flashing near the gates. A review of implemented animal-vehicle collision mitigation systems is conducted in [12]. It reviews the rate of accidents that occur in different parts of the world, as well as the types of animals involved. The review also discusses implemented and proposed methods for detecting camels on roads in the Middle East and the systems that address this issue. The authors propose a road-based system that consists of wireless IR sensors on the sides of the road, arranged in clusters and connected to a sink node. Each node has a thermal camera and an ultrasonic sensor. When movement is detected by the sensor, an image is taken with the thermal camera, and the sink node analyses the image. If the analysis indicates the presence of an animal, a flashing red alarm light is triggered on both sides of the road. Some computer vision and artificial intelligence solutions have also been proposed in other regions of the world. In Australia, the authors in [13] propose a region-based convolutional neural network solution for detecting kangaroos in traffic. They work on creating a vehicle-based framework that warns drivers of oncoming kangaroos. The system is composed of a camera or 3D LIDAR. Because of the lack of annotated data on kangaroo activity in traffic, images were generated using photo-realistic simulation and game engine frameworks. "Where's The Bear?" is an end-to-end framework that was created in [14] for automating image processing and animal detection through an IoT system. The project consists of three parts: the cloud, the edge, and the sensing. The automatic image processing is done by training a model using Google's TensorFlow and OpenCV technologies.
The model was trained using generated images of several different animals based on the existing backgrounds found in the field where the system is deployed. IoT sensing devices (cameras) were deployed over 6000 acres at the UCSB Sedgwick Reserve. The distributed sensors are triggered by motion and capture an image, label it, and send it to the database for further analysis of the reserve. A deep convolutional neural network (DCNN) wildlife monitoring method was proposed in [15]. The focus of the authors in this research is to improve upon existing methods for analysing and tracking 20 different species. The training model consisted of 1100 images of different animals in their natural habitats, with bounding boxes labelling the region where the animal is present. There were also images of plain nature backgrounds with no animal to increase the diversity of the training set. The model was assessed using trap cameras to gather the images, and an accuracy of 91.4% was achieved. ## 3 Methods This section will cover the dataset used for the experiments and the annotation process for labelling the data. Deep learning object detection is used for the experiments; a brief description of how the algorithms work will also be covered. The workflow is shown in (Figure 1). ### Dataset Details The dataset consists of 250 images of camels (Figure 1) in different contexts. Some of the images contain camels in the desert, in captivity, and roaming on the highways. Collection of the images was through several different online resources. A handy Google Chrome extension called 'Download All Images' was used to assist in gathering the images; it automatically compressed and downloaded a zip file of all photos present in a webpage. For the use of the object detection algorithms in these experiments, the images required annotations. These annotations are BBs of the locations of the object of interest (i.e., camels) within the image, to assist in training a model based on the used algorithms. A handy tool named 'Open Labelling' developed for [16] was used for annotating the images. ### Object Detection With the advancements of Convolutional Neural Networks (CNN), computers are not only able to classify what is within an image, but are also able to localize the object and draw a Bounding Box (BB) around it. Within the last decade alone, there have been remarkable advances in the development of different object detection algorithms. Figure 1: Example images from the dataset The functionality of object detection algorithms is based on that of a CNN. Convolutional layers extract the features within an image, creating feature maps. Pooling layers help reduce the dimensionality of the extracted feature maps, which lowers the computational cost as the CNN becomes deeper. These are followed by a fully connected neural network for the purpose of classification. With object detection, the images are broken down into regions. The classification and detection are generally done in two stages after the extraction of features through the convolutional process (Figure 2). The first stage is a probabilistic calculation for classifying the content of the image within a region. The second stage is the application of regression calculations in order to find the most optimal BBs. There are a number of different techniques for performing object detection, but most approaches can be grouped into one of two categories: region-based object detection and image classification.
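Since the bounding-box annotations are provided in both Pascal VOC and YOLO styles (see Section 6), the snippet below is a minimal sketch of the conversion between the two conventions; the numbers in the example are made up and not taken from the dataset.

```python
def voc_to_yolo(xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert a Pascal VOC box (absolute pixel corners) to the YOLO
    convention (center x/y, width, height, all normalized to [0, 1])."""
    x_center = (xmin + xmax) / 2.0 / img_w
    y_center = (ymin + ymax) / 2.0 / img_h
    width = (xmax - xmin) / img_w
    height = (ymax - ymin) / img_h
    return x_center, y_center, width, height

# Example: a camel occupying the right half of a 1280x720 frame.
print(voc_to_yolo(640, 100, 1280, 700, img_w=1280, img_h=720))
# -> (0.75, 0.555..., 0.5, 0.833...)
```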
## 4 Results and Discussion Four pre-trained models were used for the experiments: CenterNet, EfficientDet, Faster R-CNN, and SSD. We first discuss how the performance of ML models is generally evaluated, and then cover specifically how object detection models are evaluated. After that, the results of the experiments are shared. ### Evaluation Metrics Object detection models are uniformly evaluated using the accuracy of the detection boxes through mean Average Precision (mAP) and mean Average Recall (AR). Intersection over Union (IoU) (Figure 3), Recall, and Precision are helper metrics used to obtain the desired mAP measurement. First, some common evaluation terms and abbreviations widely used in ML model evaluation are the following: True Positive (TP), a correct detection made by the model; True Negative (TN), no detection where/when none is needed by the model; False Positive (FP), an incorrect detection made by the model; False Negative (FN), a detection missed by the model. Figure 3: IoU calculation formula. IoU of 0.50:0.95 is considered TP detection. Figure 2: General working of object detection models. To reach the desired performance metric, the four equations 1-4 are used to find the mAP: \[\begin{array}{ll}1)&\mbox{\it Precision}=\frac{TP}{TP+FP}\\ 2)&\mbox{\it Recall}=\frac{TP}{TP+FN}\\ 3)&\mbox{\it Average Precision}=\int_{0}^{1}p(r)\,dr\\ 4)&\mbox{\it mAP}=\frac{1}{N}\sum_{i=1}^{N}AP_{i}\end{array}\] Note that \(p(r)\) is the precision-recall curve, and \(N\) is the number of classes. ### Results All the object detection models used in the experiments are obtained from the TensorFlow 2 Detection Model Zoo repository [17]. The models have been pre-trained on the famous COCO 2017 dataset. The models can be configured to custom datasets through the process of few-shot training. Few-shot learning is a type of machine learning where a model is trained on a small number of examples and is then able to generalize to unseen examples. It is particularly useful in situations where it is difficult or expensive to obtain large amounts of labelled training data, as the model can learn to classify new examples using only a few examples as support. The models used for the experiments were: CenterNet, EfficientDet, Faster R-CNN, and SSD. All the models were trained on the same computer and using an NVIDIA GeForce GTX 1080 GPU. The accuracy of the models can be seen in the following table (Table 1), as well as visualized in the figure (Figure 2) below. Overall, the CenterNet architecture proved to be the best model out of the four. It was able to achieve the highest accuracy, while also being the model that trains the fastest. EfficientDet comes in second place for performance; it has a good general detection rate at IoU=0.50, although it is less precise when bound to IoU between 0.50:0.95.
As for Faster R-CNN and SSD, either the training time or accuracy hindered their performance, making them less viable. \begin{table} \begin{tabular}{l l l l l l} & \multicolumn{3}{c}{**mAP**} & \multicolumn{1}{c}{**AR**} & \multicolumn{1}{c}{**Training Time**} \\ \cline{2-6} **Model** & _IoU=0.50_ & _IoU=0.75_ & _IoU=0.50:0.95_ & _IoU=0.50:0.95_ & **(m)** \\ \hline _CenterNet_ & _83.4_ & _62.7_ & _58_ & _33.1_ & _35.5_ \\ _EfficientDet_ & _81.7_ & _55.4_ & _52.6_ & _31.8_ & _47.4_ \\ _Faster R-CNN_ & _80.4_ & _62.2_ & _52.5_ & _31.1_ & _85.8_ \\ _SSD_ & _74.8_ & _60.2_ & _47.9_ & _29_ & _52.2_ \\ \hline \multicolumn{6}{l}{*All models were trained for 10,000 steps.} \\ \end{tabular} \end{table} Table 1: Model performance comparison ## 5 Conclusion In conclusion, Wildlife-Vehicle Collisions (WVC) are a global issue that is having a significant socio-economic impact, resulting in billions of dollars in property damage and fatalities. In Saudi Arabia, Camel-Vehicle Collisions (CVC) are particularly deadly due to the large size of camels, which results in a higher fatality rate than other animals. This is a problem that is only expected to grow as the human population grows and more land is used for urbanization. Despite the efforts that have been put into combating this issue, they have not been very effective and are very costly. As computers become smaller, faster, and cheaper, it only makes sense to create autonomous systems to combat this issue with warning systems or something of a similar nature. The application of AI and computer vision has proven to be effective in increasing the safety of the roads, with systems such as cameras that enforce speeding and texting-while-driving violations, and so on. As seen in the experiments, with a modest size dataset, satisfactory results have been achieved in the detection of camels in different environments. The CenterNet model has proven to be the best, with an mAP of 58% and the shortest training time of only 35.5 minutes. The vision of this work is to build on these findings to further develop a deployable autonomous system that is able to effectively both help preserve natural wildlife ecosystems and prevent property damage. ## 6 The Dataset Part of the contribution of this research is to provide a novel type of data that did not previously exist: a dataset of clean images annotated in two styles, Pascal and YOLO format. [https://www.kaggle.com/datasets/khalidalnjuaidi/images-of-camels-annotated-for-object-detection](https://www.kaggle.com/datasets/khalidalnjuaidi/images-of-camels-annotated-for-object-detection)
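To make the evaluation procedure of Section 4.1 concrete, the sketch below computes the IoU of two boxes and an average precision from a ranked list of detections for a single class at one IoU threshold; COCO-style mAP as reported in Table 1 repeats this over thresholds 0.50:0.95 and averages. The toy inputs are illustrative only.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over Union of two [xmin, ymin, xmax, ymax] boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def average_precision(is_tp, n_ground_truth):
    """AP as the area under the precision-recall curve (Eqs. 1-3).

    is_tp: detections sorted by descending confidence; True where the box
           matched an unmatched ground-truth camel at the IoU threshold.
    """
    is_tp = np.asarray(is_tp, dtype=bool)
    tp, fp = np.cumsum(is_tp), np.cumsum(~is_tp)
    precision = tp / (tp + fp)
    recall = tp / n_ground_truth
    # integrate precision over the recall increments
    return float(np.sum(np.diff(np.concatenate(([0.0], recall))) * precision))

print(iou([0, 0, 10, 10], [5, 5, 15, 15]))                              # ~0.143
print(average_precision([True, True, False, True], n_ground_truth=3))   # ~0.917
```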
2305.13696
Abstractive Text Summarization Using the BRIO Training Paradigm
Summary sentences produced by abstractive summarization models may be coherent and comprehensive, but they lack control and rely heavily on reference summaries. The BRIO training paradigm assumes a non-deterministic distribution to reduce the model's dependence on reference summaries, and improve model performance during inference. This paper presents a straightforward but effective technique to improve abstractive summaries by fine-tuning pre-trained language models, and training them with the BRIO paradigm. We build a text summarization dataset for Vietnamese, called VieSum. We perform experiments with abstractive summarization models trained with the BRIO paradigm on the CNNDM and the VieSum datasets. The results show that the models, trained on basic hardware, outperform all existing abstractive summarization models, especially for Vietnamese.
Khang Nhut Lam, Thieu Gia Doan, Khang Thua Pham, Jugal Kalita
2023-05-23T05:09:53Z
http://arxiv.org/abs/2305.13696v1
# Abstractive Text Summarization Using the BRIO Training Paradigm Khang Nhut Lam Can Tho University, Vietnam [email protected] &Thieu Gia Doan Can Tho University, Vietnam [email protected] Khang Thua Pham Duy Tan University, Vietnam [email protected] &Jugal Kalita University of Colorado, USA [email protected] ###### Abstract Summary sentences produced by abstractive summarization models may be coherent and comprehensive, but they lack control and rely heavily on reference summaries. The BRIO training paradigm assumes a non-deterministic distribution to reduce the model's dependence on reference summaries, and improve model performance during inference. This paper presents a straightforward but effective technique to improve abstractive summaries by fine-tuning pre-trained language models, and training them with the BRIO paradigm. We build a text summarization dataset for Vietnamese, called VieSum. We perform experiments with abstractive summarization models trained with the BRIO paradigm on the CNNDM and the VieSum datasets. The results show that the models, trained on basic hardware, outperform all existing abstractive summarization models, especially for Vietnamese. ## 1 Introduction Text summarization reduces the size of the original text while preserving its main content. The two main approaches for constructing summaries are extractive and abstractive. Extractive summarization directly lifts sentences or words which convey key topics of the original documents, and concatenates them. Abstractive summarization discovers the primary content of the documents and generates summaries. Abstractive summaries are usually more natural and coherent than extractive summaries. Most abstractive summarization models follow the encoder-decoder framework. Existing abstractive summarization models are trained using maximum likelihood estimation and rely on the reference summaries. Liu et al. (2022) propose a BRIO training paradigm to address reliance on reference summaries by assuming a non-deterministic distribution of system-generated candidate summaries. In this paper, we use the BRIO training paradigm for abstractive summarization models to construct summaries for documents in English and Vietnamese. We make the following contributions: * We adapt the BRIO training paradigm for abstractive summarization using BART-based and T5-based models as backbones. * We present issues with the BRIO paradigm. * We investigate abstractive summarization models using BARTpho-BRIO and ViT5-BRIO to obtain improved results. * We publicly release the VieSum summarization dataset for research purposes. The remainder of this paper is organized as follows. Related work is presented in Section 2. Section 3 introduces a large dataset for summarization in Vietnamese, named VieSum. Experiments and discussion are presented in Section 4. Section 5 concludes the paper. ## 2 Related Work Sheng et al. (2022)'s Siamese Semantic Preserving Generative Adversarial Net (SSPGAN) uses a Transformer-based generator to generate summaries. A Siamese Transformer-based discriminator captures the semantic consistency between the source document and the corresponding summary. During adversarial training, the discriminator calculates a reward for each word generated. On the Gigaword dataset, the SSPGAN model achieves better results than many existing abstractive text summarization models such as deep recurrent generative decoder Li et al.
(2017), actor-critic approaches from reinforcement learning Li et al. (2018), and Transformer Vaswani et al. (2017). Liu et al. (2022) develop the PageSum model for abstractive summarization by incorporating locality bias in both encoder and decoder. Each document is partitioned into non-overlapping pages. The encoder, which is an abstractive summarizer, encodes each page and makes local predictions. The decoder predicts output based on a weighted combination of local predictions. The authors fine-tune the BART model Lewis et al. (2020) for abstractive summarization and investigate several approaches to locality, such as spatial locality, discourse locality, and document locality. PageSum outperforms abstractive summarization models such as longformer encoder-decoder Beltagy et al. (2020), encoder-decoder attention with headwise positional strides Huang et al. (2021), and BART with Hierarchical Attention Transformer Rohde et al. (2021). However, PageSum takes a long time to train, requires large memory size, and fails to capture long distance dependencies. Several studies use pre-trained models for abstractive text summarization. Farahani et al. (2021) use mT5 Xue et al. (2021) and sequence to sequence ParsBERT Rothe et al. (2020) to construct abstractive summaries for Persian texts. T5 Raffel et al. (2020) and BERT Devlin et al. (2018) have also been used to construct abstractive summaries Garg et al. (2021). Kieuvongnam et al. (2020) summarize COVID-19 biomedical research articles using BERT and GPT-2 Radford et al. (2019). Features of documents are extracted and integrated into an abstractive model to improve summary generation. Nambiar et al. (2022) develop an encoder-decoder model using attention, in which POS features are incorporated into the word embedding layers to enhance the word vectors. Experiments on a dataset in Malayalam show that the integration of the attention model and POS features is better than the seq2seq and attention models. Barna and Heickal (2021) adapt the pointer generator network for abstractive summarization by combining a pre-trained word embedding layer for transferring semantic similarity and topic features for better topic coverage. A drawback of usual abstractive summarization is the omission of named entities. To ameliorate this, Berezin and Batura (2022) train a named entity recognition model based on ROBERTa to discover named entities. Then, the BART masked named entity language model is trained to pay attention to the named entities. Finally, BART is fine-tuned for text summarization. Most studies to construct abstractive summaries in Vietnamese use an encoder-decoder framework or a pre-trained model. Quoc et al. (2019) integrate sentence positions and term frequencies into a pointer generator network with a coverage mechanism to perform abstractive summarization for Vietnamese documents. Lam et al. (2022) construct abstractive summaries for online newspapers using RNN with attention, BiLSTM with copy generator, standard Transformer, BERT, and sequence-to-sequence abstractive models using a bottom-up approach. Phan et al. (2022) perform experiments to summarize Vietnamese documents using Transformer-based encoder-decoder architectures such as Transformer, PhoBERT Tran et al. (2022), and ViT5 Phan et al. (2022).
## 3 VieSum Dataset We construct a VieSum dataset for Vietnamese consisting of 1,627,415 documents and their corresponding summaries, grouped into 23 categories. In particular, BeautifulSoup1 and Newspaper3k2 are used to collect and extract articles from popular online newspapers in Vietnamese such as vn-express.net, dantri.com.vn, danviet.vn, vietnamnet.vn, laodong.vn, and vov.vn. The summaries and content documents are considered reference summaries and documents, respectively. Footnote 1: [https://www.crummy.com/software/BeautifulSoup/](https://www.crummy.com/software/BeautifulSoup/) Footnote 2: [https://newspaper.readthedocs.io/en/latest/](https://newspaper.readthedocs.io/en/latest/) ## 4 Experimental Results We perform experiments in the Google Colaboratory environment, NVIDIA Tesla T4 16GB. We use the CNNDM3 dataset in English, and our VieSum dataset in Vietnamese. Due to limitations of the hardware, we perform experiments with 70,000 documents picked randomly and their corresponding reference summaries from VieSum. Each dataset is split into 3 parts including 75% for training, 8% for validation, and 17% for testing. Footnote 3: [https://cs.nyu.edu/](https://cs.nyu.edu/) kcho/DMQA/ In this paper, the pre-trained BART\({}_{\text{512-length}}\)-based and T5\({}_{\text{512-length}}\)-based models are used as backbones for generating abstractive summaries. The BART Lewis et al. (2020) and T5 Raffel et al. (2020) models are trained on the CNNDM dataset, while the BARTpho Tran et al. (2022) and ViT5 Phan et al. (2022) are trained on the VieSum dataset. All models are base models. To make it easy for comparison, we use the same parameters as suggested by the original authors. ### Standard Abstractive Models First, we experiment and evaluate abstractive summarization approaches using standard BART-base and T5-base models. We train the models using a batch size of 4, epoch count of 5, learning rate of \(10^{-5}\), warmup_step of 20,000, and the Adam optimizer. The results of abstractive summarization systems using the standard backbone models are presented in Table 1. ### Fine-tuning Abstractive Models To improve the quality of the summaries created, we fine-tune the backbone models using the Trainer provided by Hugging Face4. We do not fine-tune the BART model because it is already fine-tuned on the CNN dataset. Table 2 shows the ROUGE scores of the fine-tuned abstractive models. Footnote 4: [https://github.com/huggingface/transformers](https://github.com/huggingface/transformers) ### Fine-tuning Abstractive Models and BRIO The BRIO [11] training paradigm helps abstractive summarization models to predict tokens more accurately. Liu et al. (2022) use BART as the backbone model. BRIO assigns probability mass to output summary candidates based on their quality using contrastive learning. The abstractive model acts as a generation model to generate abstractive candidates in an auto-regressive way, and as an evaluation model to evaluate the candidates by calculating their probability distribution. The generator is trained using the standard MLE loss, while the evaluator is trained using a contrastive loss [1]. In BRIO, a backbone model is used to produce \(N\) abstractive summaries, the so-called _candsum_s, for each document. Each _candsum_ is assigned a quality score by obtaining the average score of its ROUGE-1, ROUGE-2, and ROUGE-L values. In particular, Liu et al. (2022) use the BART\({}_{\text{1024-length}}\) model to create 16 _candsum_s for each document.
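The candidate-generation and scoring step described above can be sketched with the Hugging Face transformers and rouge_score libraries as follows. The checkpoint name, maximum length, and beam settings are illustrative placeholders rather than the exact configuration of our experiments (our settings are given in Section 4.3); note that in this API the number of beams must be a multiple of the number of beam groups.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from rouge_score import rouge_scorer

name = "facebook/bart-base"   # illustrative; we use fine-tuned BART/T5/BARTpho/ViT5
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

def candsums_with_scores(document, reference, n=6):
    """Generate n candsums with diverse beam search and score each one by
    the mean of its ROUGE-1/2/L F1 against the reference summary."""
    inputs = tokenizer(document, truncation=True, return_tensors="pt")
    outputs = model.generate(**inputs, max_length=128,
                             num_beams=n, num_beam_groups=n,
                             diversity_penalty=1.0, num_return_sequences=n)
    candsums = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    scored = []
    for cand in candsums:
        r = scorer.score(reference, cand)
        quality = (r["rouge1"].fmeasure + r["rouge2"].fmeasure
                   + r["rougeL"].fmeasure) / 3.0
        scored.append((quality, cand))
    # sorted by descending quality, as required for BRIO training
    return sorted(scored, reverse=True)
```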
Next, documents, reference summaries, and corresponding _candsum_s sorted by descending quality scores are used to train the abstractive summarization model using the BRIO paradigm. We note that Liu et al. (2022) use the standard models as backbones and train them with the BRIO paradigm. In our work, the fine-tuned backbone abstractive summarization models, presented in the previous section, are used to produce _N=6 candsums_ for each document using diverse beam search [10] with num_beam_groups=6, diversity_penalty=1.0, and num_beams=4. The abstractive summarization models are trained using a learning rate of \(10^{-3}\), and the Adafactor optimizer. Liu et al. (2022) claim that BRIO training helps the models reach the best performance within one epoch on the CNNDM dataset5. Therefore, we use one epoch for training the fine-tuned summarization models with the BRIO paradigm. The results of the abstractive summarization systems trained with BRIO are presented in Table 3. Footnote 5: [https://github.com/yixinL7/BRIO/issues/13](https://github.com/yixinL7/BRIO/issues/13) \begin{table} \begin{tabular}{l l c c c} \hline **Dataset** & **System** & **R-1** & **R-2** & **R-L** \\ \hline CNNDM & BART & 42.53 & 20.21 & 39.47 \\ CNNDM & T5 & 36.24 & 15.34 & 33.34 \\ VieSum & BARTPho & 44.59 & 22.57 & 34.60 \\ VieSum & ViT5 & 53.39 & 20.63 & 35.88 \\ \hline \end{tabular} \end{table} Table 1: ROUGE scores of abstractive summarization systems using standard backbone models. \begin{table} \begin{tabular}{l c c c} \hline **System** & **R-1** & **R-2** & **R-L** \\ \hline T5 fine-tuned & 41.02 & 19.44 & 38.30 \\ BARTPho fine-tuned & 57.94 & 26.56 & 40.83 \\ ViT5 fine-tuned & 57.75 & 26.37 & 40.57 \\ \hline \end{tabular} \end{table} Table 2: ROUGE scores of abstractive summarization systems using the fine-tuned backbone models. The T5 fine-tuned model is trained on CNNDM, while the other models are trained on VieSum. \begin{table} \begin{tabular}{l c c c} \hline **System** & **R-1** & **R-2** & **R-L** \\ \hline BART-BRIO & 46.40 & 22.47 & 43.00 \\ T5-BRIO & 44.03 & 20.72 & 40.63 \\ BARTPho-BRIO & 59.12 & 27.01 & 42.05 \\ ViT5-BRIO & 59.50 & 27.33 & 42.76 \\ \hline \end{tabular} \end{table} Table 3: ROUGE scores of abstractive summarization systems, which use the fine-tuned backbone models, trained with the BRIO paradigm. BART-BRIO and T5-BRIO are trained on CNNDM, and BARTPho-BRIO and ViT5-BRIO are trained on VieSum. ### Fine-tuning Abstractive Models and BRIO-Loop As suggested by Liu et al. (2022), we perform loop processing, using the _candsum_s created by the abstractive summarization models trained with BRIO to train the models. However, after several iterations of looping, the ROUGE scores seem to change very little. In particular, BARTpho and ViT5 almost reach the highest ROUGE scores with 2 iterations. Table 4 presents the ROUGE scores obtained after looping twice. Experimental results show that the BRIO training paradigm significantly helps improve the abstractive summaries by reducing the dependence of the system on the reference summaries. However, assigning weights to both _candsum_s and reference summaries is necessary in order to decrease reliance on reference summaries. The diverse beam search helps obtain diverse _candsum_s, but could cause interference in the beam search space because the model might not follow the reference summaries.
In addition, using the ROUGE metric for evaluating the abstractive summarization models trained with the BRIO paradigm seems unfair because these models could produce summaries which are independent of the reference summaries. ### Discussion It is not easy to make comparisons between models trained on different hardware and on different datasets. We make an attempt to compare our work with published papers on similar datasets. Currently, BRIO using a standard BART\({}_{\text{1024-length}}\) model as backbone, which generates 16 _candsum_s, achieves SOTA results on the CNNDM dataset with a ROUGE-1 of 47.78 and a ROUGE-L of 32.58 (Liu et al., 2022). In addition, BART\({}_{\text{1024-length}}\)-BRIO with 2 iterations reaches ROUGE-1 and ROUGE-L of 48.01 and 44.67, respectively; these are both better than our BART\({}_{\text{512-length}}\)-BRIO, which creates 6 _candsum_s for each document, after 2 iterations: 46.55 for ROUGE-1 and 43.00 for ROUGE-L. Tawmo et al. (2022) fine-tune the T5 abstractive summarization model and evaluate it on the CNNDM dataset. Their T5 model achieves ROUGE-1 and ROUGE-L scores of 40.79 and 34.80, respectively, which are lower than the scores of our fine-tuned T5 model, and significantly lower than the scores of our best model, the T5-BRIO-Loop model: 45.24 for ROUGE-1 and 41.80 for ROUGE-L. For Vietnamese abstractive summarization, Quoc et al. (2019) use LSTMs with the features of sentence positions and term frequencies (LSTM+SP+TF) on a Vietnamese dataset collected from Baomoi6. The best ROUGE-1 and ROUGE-L scores of their model are 31.89 and 29.97, respectively, which are significantly lower than the scores of our BRIO-BART model. Footnote 6: [https://baomoi.com/](https://baomoi.com/) Both the BARTpho and ViT5 models trained with the BRIO paradigm outperform all models proposed by Lam et al. (2022) on the CTUNLPSum dataset, which is very similar to the VieSum dataset, including the sequence-to-sequence models, copy generator network, sequence-to-sequence with rewriter approach, and bottom-up approach. Tran et al. (2022) apply several models for abstractive summarization on the VNDS (Nguyen et al., 2019) dataset. They perform experiments on 8 A100 GPUs with 40GB each. Their model is trained for 15 epochs in about 6 days. Their best model, BARTpho, achieves a ROUGE-1 of 61.14, which is slightly higher than that of the BARTpho-BRIO-Loop, and a ROUGE-L of 40.15, which is lower than that of the BARTpho-BRIO-Loop. In addition, the BARTpho-BRIO-Loop is trained in one epoch in about 32 hours using basic hardware. Phan et al. (2022) introduce a pre-trained text-to-text Transformer for Vietnamese abstractive summarization, called ViT5. The authors claim the ViT5 model is the SOTA for Vietnamese abstractive summarization. Their ViT5 abstractive summarization model achieves ROUGE-1 and ROUGE-L of 61.85 and 41.70, respectively, on the VNDS dataset (Nguyen et al., 2019). We conducted experiments on VNDS and found interesting results related to the ViT5 model. The ROUGE scores of the ViT5 model trained using the common paradigm are essentially identical to the ROUGE scores provided by Phan et al. (2022). However, the scores of the ViT5 model trained using the BRIO paradigm are reduced to 59.37 and 41.6, respectively.
On the VieSum dataset, the standard ViT5-base achieves a ROUGE-1 of 53.39 and ROUGE-L of 35.88, while the ViT5-BRIO-Loop has better scores: ROUGE-1 of 60.90 and ROUGE-L of 44.36. We leave further exploration and evaluation of these unstable results for future work. \begin{table} \begin{tabular}{l c c c} \hline \hline **System** & **R-1** & **R-2** & **R-L** \\ \hline BART-BRIO-Loop & 46.55 & 22.56 & 43.00 \\ T5-BRIO-Loop & 45.24 & 21.50 & 41.80 \\ BARTpho-BRIO-Loop & 60.53 & 28.20 & 44.20 \\ ViT5-BRIO-Loop & 60.90 & 28.39 & 44.36 \\ \hline \hline \end{tabular} \end{table} Table 4: ROUGE scores of abstractive summarization systems trained with the BRIO paradigm after looping twice. BART-BRIO and T5-BRIO are trained on CNNDM, and BARTpho-BRIO and ViT5-BRIO are trained on VieSum. ## 5 Conclusion We investigated abstractive summarization models trained with the BRIO paradigm. Experiments show that we can improve abstractive summarization models by fine-tuning the backbones before training them with BRIO. In particular, the summarization models trained with BRIO outperform other summarization models in Vietnamese. We also discuss issues with the BRIO paradigm for further exploration. In addition, we built the VieSum dataset for summarization in Vietnamese. For future work, we will ask volunteers to evaluate and provide feedback on a small subset of the VieSum dataset. ## Limitations While many studies show that the architectures of the deep learning models significantly influence the results, we perform experiments with several base architectures because of the constrained hardware. Furthermore, there has not been a Vietnamese benchmark summarization dataset which is both sizable and of high quality. The existing summarization datasets are derived from online magazines, which usually contain misspelled words and grammatical errors. In addition, the reference summaries might not convey the main content of the corresponding articles. Therefore, selecting and developing efficient summarization models for Vietnamese still presents numerous challenges. ## Ethics Statement We use several different software tools in our experiments. These tools as well as the English dataset are publicly available and we do not see any ethical issues in using them. In addition, we clearly reference the papers and other sources for the tools used. We create the VieSum dataset ourselves. Our paper's work depends on using previously published approaches to abstractive summarization. We clearly give credit to the authors of these approaches by citing original sources. This paper focuses on abstractive summarization of longer documents. There is potential for high quality abstractive summarizers to be misused. For example, students if/when given an assignment to summarize/review papers/articles may use such summarizers to automatically write reviews and claim them as their own. However, we believe abstractive summarizers for long documents have not achieved this level of sophistication at this time.
2307.14958
Rationality for arbitrary closure operations and the test ideal of full extended plus closure
We extend the notion of F-rationality to other closure operations, inspired by the work of Smith, Epstein and Schwede, and Ma and Schwede, which describe F-rationality in terms of the canonical module and top local cohomology module. We give conditions for a closure operation cl on a Cohen-Macaulay complete local ring under which cl-rationality is equivalent to parameter ideals being cl-closed. We also demonstrate that full extended plus closure as defined by Heitmann and weak full extended plus closure as defined by the first named author have no big test elements.
Zhan Jiang, Rebecca R. G.
2023-07-27T15:50:24Z
http://arxiv.org/abs/2307.14958v1
# Rationality for arbitrary closure operations and the test ideal of full extended plus closure ###### Abstract. We extend the notion of F-rationality to other closure operations, inspired by the work of Smith, Epstein and Schwede, and Ma and Schwede, which describe F-rationality in terms of the canonical module and top local cohomology module. We give conditions for a closure operation cl on a Cohen-Macaulay complete local ring under which cl-rationality is equivalent to parameter ideals being cl-closed. We also demonstrate that full extended plus closure as defined by Heitmann and weak full extended plus closure as defined by the first named author have no big test elements. ## 1. Introduction Rational or F-rational singularities have long been a useful descriptor for rings of equal characteristic [11, 12]. Given the proliferation of possible closure operations in the mixed characteristic case [1, 13, 14, 15], it is useful to know for which closures cl we can define cl-rational singularities, and what that definition looks like. In this paper we build on two definitions originally used for F-rationality over Cohen-Macaulay local rings: first, that a ring is F-rational if one or all parameter ideals are tightly closed [16], and second that a ring is F-rational if the annihilator of the tight closure of 0 in the injective hull of the residue field is the whole ring [17]. These ideas were used more recently in [18] to define BCM-rationality in terms of the annihilator in the canonical module of their BCM-closure of (0) in \(\mathrm{H}^{d}_{m}(R)\). Using these ideas, in Section 4 we define cl-rationality for closure operations on Cohen-Macaulay local rings in terms of the canonical module and \(\mathrm{H}^{d}_{m}(R)\), and show that under conditions comparable to those used for tight closure, this is equivalent to parameter ideals being cl-closed. We expect these results to be useful for researchers defining new closure operations in any characteristic, who want to check if cl-rationality coincides with rationality in equal characteristic 0, F-rationality in equal characteristic \(p\), and/or BCM-rationality in any characteristic. To demonstrate the importance of choosing the right closure, in Section 5 we show that even F-rational rings may not be \(\mathrm{cl}_{B}\)-rational for the module closures coming from some finitely-generated Cohen-Macaulay modules \(B\). The other key result of this paper is that full extended plus closure (epf) as defined by Heitmann [14, 15] does not have big test elements in general (see Section 3). This gives a more explicit way to understand how epf closure is "too big" to be the right mixed characteristic closure to define singularity types. We further prove that the epf test ideals (big and finitistic) agree with those of wepf as defined by the first named author [14], implying that the latter also lacks big test elements. ## 2. Background Throughout, \(R\) will be a commutative Noetherian ring of Krull dimension \(d\). When \((R,m,k)\) is local, \(E=E_{R}(k)\) will denote the injective hull of \(k\) and \({}^{\vee}\) the Matlis duality operator \(\mathrm{Hom}_{R}(\neg,E)\). **Definition 2.1**.: A _closure operation_\(\mathrm{cl}\) on a category \(\mathcal{M}\) of \(R\)-modules is a map sending each pair \(N\subseteq M\) contained in \(\mathcal{M}\) to an \(R\)-module \(N^{\mathrm{cl}}_{M}\subseteq M\) such that 1. \(N\subseteq N^{\mathrm{cl}}_{M}\); 2. \((N^{\mathrm{cl}}_{M})^{\mathrm{cl}}_{M}=N^{\mathrm{cl}}_{M}\); 3. 
and if \(N\subseteq N^{\prime}\subseteq M\) are in \(\mathcal{M}\), then \(N^{\mathrm{cl}}_{M}\subseteq(N^{\prime})^{\mathrm{cl}}_{M}\). An _interior operation_\(\mathrm{i}\) on a category \(\mathcal{M}\) of \(R\)-modules is a map sending each \(R\)-module \(M\) to a submodule \(\mathrm{i}(M)\) such that 1. \(\mathrm{i}(\mathrm{i}(M))=\mathrm{i}(M)\); 2. and if \(L\subseteq M\), then \(\mathrm{i}(L)\subseteq\mathrm{i}(M)\). _Notation 2.2_.: Let \(R\) be a ring and \(\mathrm{cl},\mathrm{cl}^{\prime}\) closure operations on a category \(\mathcal{M}\) of \(R\)-modules. We say that \(\mathrm{cl}\leqslant\mathrm{cl}^{\prime}\) if for all \(N\subseteq M\) in \(\mathcal{M}\), \(N^{\mathrm{cl}}_{M}\subseteq N^{\mathrm{cl}^{\prime}}_{M}\). **Definition 2.3** ([20]).: Let \(B\) be an \(R\)-module. The _module closure_ coming from \(B\) is given by \[L^{\mathrm{cl}}_{M}\coloneqq\{x\in M\mid b\otimes x\in\mathrm{im}(B\otimes_{ R}L\to B\otimes_{R}M)\text{ for all }b\in B\}.\] **Definition 2.4** ([23, Definition 2.2]).: Let \(R\) be a commutative ring and let \(\mathrm{cl}\) be a closure operation on a category of \(R\)-modules \(\mathcal{M}\). We say that \(\mathrm{cl}\) is functorial if whenever \(L\subseteq M,N\) are in \(\mathcal{M}\) and \(f:M\to N\) is a map in \(\mathcal{M}\), \(f(L^{\mathrm{cl}}_{M})\subseteq f(L)^{\mathrm{cl}}_{N}\). We say that \(\mathrm{cl}\) is _residual_ if whenever \(L\subseteq N\subseteq M\) in \(\mathcal{M}\) such that \(M/L,N/L\) are also in \(\mathcal{M}\), \(N^{\mathrm{cl}}_{M}=\pi^{-1}((N/L)^{\mathrm{cl}}_{M/L})\) where \(\pi:M\to M/L\) is the natural surjection. **Definition 2.5** ([23, Definition 3.1]).: Let \(\mathrm{cl}\) be a residual closure operation on \(R\)-modules. The _finitistic version_\(\mathrm{cl}_{fg}\) of \(\mathrm{cl}\) is given by \[L^{\mathrm{cl}_{fg}}_{M}=\bigcup\left\{L^{\mathrm{cl}}_{N}\mid L\subseteq N \subseteq M,N/L\text{ finitely-generated}\right\}.\] We say that a residual closure operation \(\mathrm{cl}\) is _finitistic_ if it is equal to its finitistic version. See [23] for a discussion of the finitistic version in the non-residual case. **Definition 2.6** ([20, Definition 3.9]).: Let \(R\) be local and \(\mathrm{cl}\) a closure operation on at least ideals of \(R\). We say that \(\mathrm{cl}\) satisfies _colon-capturing_ if for every partial system of parameters \(x_{1},\ldots,x_{k+1}\) on \(R\), \[(x_{1},\ldots,x_{k}):x_{k+1}\subseteq(x_{1},\ldots,x_{k})^{\mathrm{cl}}.\] We say that \(\mathrm{cl}\) satisfies _strong colon-capturing, version A_ if for every partial system of parameters \(x_{1},\ldots,x_{k}\) on \(R\) and \(0\leqslant a<t\), \[(x_{1}^{t},x_{2},\ldots,x_{k}):_{R}x_{1}^{a}\subseteq(x_{1}^{t-a},x_{2},\ldots,x_{k})^{\mathrm{cl}}.\] We say that \(\mathrm{cl}\) satisfies _strong colon-capturing, version B_ if for every partial system of parameters \(x_{1},\ldots,x_{k+1}\) on \(R\), \[(x_{1},x_{2},\ldots,x_{k})^{\mathrm{cl}}:_{R}x_{k+1}\subseteq(x_{1},x_{2}, \ldots,x_{k})^{\mathrm{cl}}.\] _Remark 2.7_.: Note that strong colon-capturing, version A does not necessarily imply colon-capturing, but strong colon-capturing, version B does imply colon-capturing. **Definition 2.8** ([23]).: Let \((R,m,k)\) be a complete local ring and \(\mathrm{cl}\) a residual closure operation on \(R\)-modules. 
We define a dual interior operation \(\mathrm{cl}^{\urcorner}\) on finitely-generated and Artinian \(R\)-modules, by \[\mathrm{cl}^{\urcorner}(M)=\left(\frac{M^{\vee}}{0^{\mathrm{cl}}_{M^{\vee}}}\right)^{\vee}.\] Note that for these modules \(M\cong M^{\vee\vee}\), so \(\mathrm{cl}^{\urcorner}(M)\) can be viewed as a submodule of \(M\) via this isomorphism. **Definition 2.9**.: Let \(\mathrm{cl}\) be a closure operation on \(R\)-modules. The _\(\mathrm{cl}\)-test ideal_ is \[\tau_{\mathrm{cl}}(R)=\bigcap_{N\leqslant M}N:_{R}N^{\mathrm{cl}}_{M},\] where the intersection is taken over all \(R\)-modules \(N\subseteq M\). The _finitistic_ \(\mathrm{cl}\)-_test ideal_ is \[\tau_{\mathrm{cl}}^{fg}(R)=\bigcap_{N\subseteq M\text{ f.g.}}N:_{R}N_{M}^{\mathrm{cl}},\] where the intersection is taken over all finitely-generated \(R\)-modules \(N\subseteq M\). The following result demonstrates how \(\mathrm{cl}^{\urcorner}(R)\) can be perceived as the \(\mathrm{cl}\)-test ideal of \(R\). **Lemma 2.10** ([21, Proposition 3.9],[15, Theorem 5.5]).: _Let \((R,m,k)\) be complete and local. If \(\mathrm{cl}\) is a functorial, residual closure operation on \(R\)-modules, then \(\tau_{\mathrm{cl}}(R)=\operatorname{Ann}_{R}0^{\mathrm{cl}}_{E_{R}(k)}\)._ _As a result, \(\tau_{\mathrm{cl}}(R)=\mathrm{cl}^{\urcorner}(R)\)._ **Definition 2.11** ([21]).: Let \(\mathrm{cl}\) be a residual closure operation on \(R\)-modules. We say that \(R\) is weakly \(\mathrm{cl}\)-regular if for every ideal \(I\) of \(R\), \(I_{R}^{\mathrm{cl}}=I\), or equivalently if for all finitely-generated \(R\)-modules \(N\subseteq M\), \(N_{M}^{\mathrm{cl}}=N\). Note that if \(\mathrm{cl}\) captures colons, then weakly \(\mathrm{cl}\)-regular rings are Cohen-Macaulay. **Definition 2.12** ([14]).: Let \(R\) be a local ring of characteristic \(p>0\) and let \(*\) denote tight closure. We say that \(R\) is _F-rational_ if \(I^{*}=I\) for any ideal \(I\) generated by part of a system of parameters. The result below gave the first way of viewing F-rationality in terms of local cohomology: **Theorem 2.13** ([16, Proposition 4.1.4]).: _Let \((R,m)\) be a complete local Cohen-Macaulay ring. Then \(R\) is F-rational if and only if \(\operatorname{Ann}_{R}\left(0^{*}_{\mathrm{H}^{d}_{m}(R)}\right)=R\)._ This perspective has since been used by many others, for example Epstein and Schwede in [10] and Ma and Schwede in [11] for their BCM test ideal: **Definition 2.14** ([11, Section 5]).: Let \((R,m,k)\) be a complete local ring of dimension \(d\) with a normalized dualizing complex \(\omega^{\bullet}\) and canonical module \(\omega\). Let \(B\) denote a big Cohen-Macaulay \(R\)-algebra. We set \(0^{B}_{\mathrm{H}^{d}_{m}(R)}\) to be the kernel of the natural map \(\mathrm{H}^{d}_{m}(R)\to\mathrm{H}^{d}_{m}(B)\), and \(\tau_{B}(\omega):=\mathrm{Ann}_{\omega}0^{B}_{\mathrm{H}^{d}_{m}(R)}\). **Theorem 3.2**.: _Let \((R,m,k)\) be a complete local domain of mixed characteristic \(p\) with \(F\)-finite residue field \(k\). Then \(\tau_{\mathrm{epf}}(R)=0\)._ Proof.: Since \(R\) is a domain, the map from \(R\) to itself given by multiplication by \(p^{n}\) is injective.
Taking its Matlis dual, we see that the map from \(E=E_{R}(k)\) to itself given by multiplication by \(p^{n}\) is surjective. This is preserved by tensoring over \(R\) with \(R^{+}\). So for any element \(u\in E\), \(d^{\epsilon}\otimes u\in R^{+}\otimes p^{n}E\) for \(\epsilon>0\) rational and \(d\in R^{+}\). Hence \(0^{\mathrm{epf}}_{E}=E\), which implies that \(\mathrm{Ann}_{R}(0^{\mathrm{epf}}_{E})=\mathrm{Ann}_{R}E=0\). **Definition 3.3** ([16, Definition 4.1]).: Let \(R\) be a complete local ring whose residue field has characteristic \(p>0\). We define the _weak full extended plus closure_ of \(N\) in \(M\) to be \[N^{\mathrm{wepf}}_{M}:=\bigcap_{n\geq 0}(N+p^{n}M)^{\mathrm{epf}}_{M}.\] **Corollary 3.4**.: _Let \(R\) be a complete local domain of mixed characteristic \(p\) with \(F\)-finite residue field. Then \(\tau_{\mathrm{wepf}}(R)=0\)._ Proof.: Since \(0^{\mathrm{wepf}}_{E}=\bigcap_{n}(p^{n}E)^{\mathrm{epf}}_{E}\) and \(E=0^{\mathrm{epf}}_{E}\subseteq(p^{n}E)^{\mathrm{epf}}_{E}\subseteq E\), we have \[0^{\mathrm{wepf}}_{E}=\bigcap_{n}(p^{n}E)^{\mathrm{epf}}_{E}=\bigcap_{n}E=E.\qed\] However, as a corollary of [14, Corollary 4.2], we see that full extended plus closure does have finitistic test elements: **Corollary 3.5**.: _Let \((R,\mathfrak{m})\) be a complete normal local domain of residue characteristic \(p>0\) and of dimension \(d\). Let \(J\) be the defining ideal of the singular locus of \(R\). Then there exists an integer \(N\) such that \(J^{N}I^{\mathrm{epf}}\subseteq I\) for all \(I\subseteq R\)._ Proof.: From the proof of [14, Corollary 4.2], we have * \(I^{\mathrm{epf}}\subseteq(I,p^{n})B\cap R\) for some fixed perfectoid big CM \(R^{+}\)-algebra \(B\) and every \(n\); * There exists some \(N\) such that \(J^{N}\subseteq\mathrm{Im}(\mathrm{Hom}_{R}(B,R)\to R)\). Then the last paragraph of the proof of [14, Corollary 4.2] works with \(\overline{I^{h}}\) replaced by \(I^{\mathrm{epf}}\). Explicitly, for every \(r\in J^{N}\), there exists \(\phi\in\mathrm{Hom}_{R}(B,R)\) such that \(\phi(1)=r\). Applying \(\phi\) to \(I^{\mathrm{epf}}\subseteq(I,p^{n})B\cap R\) we get \(rI^{\mathrm{epf}}\subseteq(I,p^{n})R\) for every \(n\). Hence \(J^{N}I^{\mathrm{epf}}\subseteq\bigcap_{n}(I,p^{n})R=I\). **Theorem 3.6**.: _Let \(R\) be a complete local domain of mixed characteristic \(p>0\). Then the finitistic test ideals for \(\mathrm{epf}\) and \(\mathrm{wepf}\) are the same._ Proof.: Since \(\mathrm{epf}\leqslant\mathrm{wepf}\)[16], any element in \(\tau^{fg}_{\mathrm{wepf}}(R)\) will also be in \(\tau^{fg}_{\mathrm{epf}}(R)\). For the reverse containment, if \(c\in\tau^{fg}_{\mathrm{epf}}(R)\), then \(c(I,p^{n})^{\mathrm{epf}}\subseteq(I,p^{n})\) for all \(n>0\). For any \(u\in I^{\mathrm{wepf}}=\bigcap_{n\in\mathbb{N}}(I,p^{n})^{\mathrm{epf}}\), we have \(cu\in(I,p^{n})\) for all \(n\), which in particular implies that \(cu\in I\). Hence, \(c\in\tau^{fg}_{\mathrm{wepf}}(R)\). _Remark 3.7_.: We do not know if or where these two closure operations agree, despite the above result. If \(\mathrm{epf}=\mathrm{wepf}\) in general, then this implies that \(\mathrm{epf}\) is a Dietz closure [14]. If they do not always agree, then they provide an interesting example of distinct closures with the same finitistic and big test ideals. _Remark 3.8_.: The results above indicate that the finitistic and big test ideals do not coincide for either \(\mathrm{epf}\) or \(\mathrm{wepf}\).
This gives a partial answer to [13, Question 3.7], which asks which closure operations have the property that the big and finitistic test ideals coincide. For finitistic closure operations like Frobenius and plus closure, the big and finitistic test ideals are known to coincide, and it is conjectured that they coincide for tight closure. ## 4. Canonical modules and cl-rationality In this section we define cl-rational singularities for residual, functorial closure operations cl over Cohen-Macaulay local rings. Under hypotheses comparable to those used for F-rationality, we show that cl-rational rings are those rings whose parameter ideals are cl-closed. We begin by setting up the necessary results from closure-interior duality. **Definition 4.1**.: Let \((R,\mathfrak{m},k)\) be a complete local ring. Write \(\vee\) for the Matlis duality operator \(\operatorname{Hom}_{R}(-,E)\), where \(E\) is the injective hull of \(k\). If \(M\) is a finitely-generated or Artinian \(R\)-module and \(N\subseteq M^{\vee}\), we define \[\operatorname{Ann}_{M}N:=\{f\in\operatorname{Hom}_{R}(M^{\vee},E)\mid f(N)=0\}.\] Note that by the isomorphism \(M^{\vee\vee}\cong M\), we may view this as a submodule of \(M\). **Proposition 4.2**.: _Let \((R,\mathfrak{m},k)\) be a complete local ring. Then for any \(R\)-module \(M\) that is finitely-generated or Artinian and any submodule \(N\subseteq M^{\vee}\) we have_ \[\left(\frac{M^{\vee}}{N}\right)^{\vee}=\operatorname{Ann}_{M}N.\] Proof.: \[f\in\left(\frac{M^{\vee}}{N}\right)^{\vee} \iff f\in\operatorname{Hom}_{R}\left(\frac{M^{\vee}}{N},E\right)\] \[\iff f\in\operatorname{Hom}_{R}\left(M^{\vee},E\right)\text{ and }f\text{ kills }N\] \[\iff f\in(M^{\vee})^{\vee}=M\text{ and }f\text{ kills }N\] \[\iff f\in\operatorname{Ann}_{M}(N).\qed\] **Corollary 4.3**.: _Let \(R\) be a complete local ring and \(\operatorname{cl}\) a closure operation on \(R\)-modules. Then for \(M\) a finitely-generated or Artinian \(R\)-module,_ \[\left(\frac{M^{\vee}}{0^{\operatorname{cl}}_{M^{\vee}}}\right)^{\vee}=\operatorname{Ann}_{M}(0^{\operatorname{cl}}_{M^{\vee}}).\] _In particular,_ \[\left(\frac{\operatorname{H}_{m}^{d}(R)}{0^{\operatorname{cl}}_{\operatorname{H}_{m}^{d}(R)}}\right)^{\vee}=\operatorname{Ann}_{\omega}(0^{\operatorname{cl}}_{\operatorname{H}_{m}^{d}(R)}),\] _where \(\omega\) is the canonical module of \(R\) and \(R\) has Krull dimension \(d\)._ Proof.: The first part follows by setting \(N=0^{\operatorname{cl}}_{M^{\vee}}\). **Corollary 4.4**.: _Let \(R\) be a complete local ring and \(M\) a finitely-generated or Artinian \(R\)-module. Let \(N\subseteq M^{\vee}\). Then \(N=0\) if and only if \(\operatorname{Ann}_{M}N=M\)._ Proof.: The forward direction is immediate. For the reverse direction, note that since \(E\) is injective, any map \(N\to E\) extends to a map \(M^{\vee}\to E\), i.e., an element of \(M^{\vee\vee}\cong M\). By hypothesis, this element kills \(N\), so the original map is zero. Hence, \(\operatorname{Hom}_{R}(N,E)=0\), which implies that \(N=0\). **Corollary 4.5**.: _Let \(R\) be a complete local ring of Krull dimension \(d\) with canonical module \(\omega\) and \(\operatorname{cl}\) a residual closure operation on \(R\)-modules. Then_ \[\mathrm{cl}^{\urcorner}(\omega)=\operatorname{Ann}_{\omega}0^{\operatorname{cl}}_{\operatorname{H}_{m}^{d}(R)}.\] Proof.: This follows from Corollary 4.3 and Definition 2.8.
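For a concrete instance of this duality, consider the simplest complete local ring (a standard example included here purely for illustration, not needed for the results above):

```latex
% Illustration of Proposition 4.2 for R = k[[x]], with notation as in Definition 4.1.
\[
R=k[\![x]\!],\qquad E=E_{R}(k)\cong k[\![x]\!][x^{-1}]/k[\![x]\!],\qquad
M=R,\quad M^{\vee}\cong E.
\]
\[
\text{For } N=\operatorname{soc}(E)=k\cdot\overline{x^{-1}}\subseteq M^{\vee}:\qquad
\left(\frac{M^{\vee}}{N}\right)^{\vee}
=\{f\in\operatorname{Hom}_{R}(E,E)\mid f(N)=0\}
=\{r\in R\mid r\cdot\operatorname{soc}(E)=0\}=(x)=\operatorname{Ann}_{M}N.
\]
```

Here \(\operatorname{Hom}_{R}(E,E)\cong R\) because \(R\) is complete, so both sides of the equality of Proposition 4.2 are identified with the maximal ideal \((x)\).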
As a result, we see that \(\mathrm{cl}^{\urcorner}(\omega)\) gives us a test submodule generalizing Definition 2.14 in the same way that \(\mathrm{cl}^{\urcorner}(R)\) gives us the traditional test ideal as in [14, 15]. **Definition 4.6**.: We define \(\tau_{\mathrm{cl}}(\omega):=\mathrm{Ann}_{\omega}\,0^{\mathrm{cl}}_{\mathrm{H}^{d}_{m}(R)}\). We apply this to define \(\mathrm{cl}\)-rationality: **Definition 4.7**.: Let \(R\) be a Cohen-Macaulay complete local ring with canonical module \(\omega\), and \(\mathrm{cl}\) a closure operation on \(R\)-modules. We say that \(R\) is \(\mathrm{cl}\)-rational if \(\tau_{\mathrm{cl}}(\omega)=\omega\). **Lemma 4.8**.: _Let \(R\) be a Cohen-Macaulay complete local ring of Krull dimension \(d\) with canonical module \(\omega\) and \(\mathrm{cl}\) a residual closure operation on \(R\)-modules. Then \(R\) is \(\mathrm{cl}\)-rational if and only if \(0^{\mathrm{cl}}_{\mathrm{H}^{d}_{m}(R)}=0\)._ Proof.: This follows from Definition 4.7 and Corollary 4.4. **Proposition 4.9**.: _Let \(R\) be a Cohen-Macaulay complete local ring of Krull dimension \(d\) with canonical module \(\omega\). Let \(\mathrm{cl}\leqslant\mathrm{cl}^{\prime}\) be closure operations on \(R\)-modules. If \(R\) is \(\mathrm{cl}^{\prime}\)-rational, then \(R\) is \(\mathrm{cl}\)-rational._ Proof.: If \(\mathrm{cl}\leqslant\mathrm{cl}^{\prime}\), then \(\tau_{\mathrm{cl}^{\prime}}(\omega)\subseteq\tau_{\mathrm{cl}}(\omega)\). So if \(\tau_{\mathrm{cl}^{\prime}}(\omega)=\omega\), then \(\tau_{\mathrm{cl}}(\omega)=\omega\). **Axiom 4.10**.: Let \((R,m)\) be a Cohen-Macaulay local ring of Krull dimension \(d\) with canonical module \(\omega\), and \(\mathrm{cl}\) a residual closure operation on \(R\)-modules. We say that \(\mathrm{cl}\) satisfies the _injective finiteness condition_ if \(0^{\mathrm{cl}}_{\mathrm{H}^{d}_{m}(R)}=0^{\mathrm{cl},fg}_{\mathrm{H}^{d}_{m}(R)}\), where \(0^{\mathrm{cl},fg}_{\mathrm{H}^{d}_{m}(R)}\) denotes the sum of \(0^{\mathrm{cl}}_{G}\) over the finitely-generated submodules \(G\) of \(\mathrm{H}^{d}_{m}(R)\). In particular, finitistic closure operations satisfy this condition. The following result indicates that our definition of \(\mathrm{cl}\)-rationality often coincides with the more ideal-theoretic definition, as in the case of F-rationality [14, 15]. **Theorem 4.11**.: _Let \(R\) be a Cohen-Macaulay complete local ring with canonical module \(\omega\), and \(\mathrm{cl}\) a residual, functorial closure operation on \(R\)-modules. If \(R\) is \(\mathrm{cl}\)-rational, then every ideal generated by a system of parameters is \(\mathrm{cl}\)-closed. If in addition \(\mathrm{cl}\) satisfies Axiom 4.10, then the converse holds._ Proof.: By Lemma 4.8, \(R\) is \(\mathrm{cl}\)-rational if and only if \(0^{\mathrm{cl}}_{\mathrm{H}^{d}_{m}(R)}=0\). First we prove the converse. Since \(R\) is Cohen-Macaulay, for any system of parameters \((x_{1},\ldots,x_{d})\), \[\mathrm{H}^{d}_{m}(R)=\varinjlim_{t}R/(x_{1}^{t},\ldots,x_{d}^{t})R.\] By our hypotheses, \[0^{\mathrm{cl}}_{\mathrm{H}^{d}_{m}(R)}=0^{\mathrm{cl},fg}_{\mathrm{H}^{d}_{m}(R)}=\sum_{G}0^{\mathrm{cl}}_{G},\] where \(G\) ranges over the finitely-generated submodules of \(\mathrm{H}^{d}_{m}(R)\).
These include the \(R/(x_{1}^{t},\ldots,x_{d}^{t})\), so \[\sum_{t}0^{\mathrm{cl}}_{R/(x_{1}^{t},\ldots,x_{d}^{t})}\leq 0^{\mathrm{cl},fg}_{\mathrm{H}^{d}_{m}(R)}.\] Since each \(G\) must be contained in \(R/(x_{1}^{t},\ldots,x_{d}^{t})\) for some \(t\) and \(\mathrm{cl}\) is functorial, \[0^{\mathrm{cl}}_{G}\leq 0^{\mathrm{cl}}_{R/(x_{1}^{t},\ldots,x_{d}^{t})}\] for some \(t\), so \[0^{\mathrm{cl}}_{\mathrm{H}^{d}_{m}(R)}\subseteq\sum_{t}0^{\mathrm{cl}}_{R/(x_{1}^{t},\ldots,x_{d}^{t})}.\] Hence \[0^{\mathrm{cl}}_{\mathrm{H}^{d}_{m}(R)}=\sum_{t}0^{\mathrm{cl}}_{R/(x_{1}^{t},\ldots,x_{d}^{t})}.\] If every ideal generated by a system of parameters is \(\mathrm{cl}\)-closed, then since \(\mathrm{cl}\) is residual, \(0\) is \(\mathrm{cl}\)-closed in every \(R/(x_{1}^{t},\ldots,x_{d}^{t})\). Hence \(0^{\mathrm{cl}}_{\mathrm{H}^{d}_{m}(R)}=0\). Conversely, assume \(0^{\mathrm{cl}}_{\mathrm{H}^{d}_{m}(R)}=0\), and let \(x_{1},\ldots,x_{d}\) be a system of parameters for \(R\). Suppose there is some \(t\) such that \(0^{\mathrm{cl}}_{R/(x_{1}^{t},\ldots,x_{d}^{t})}\neq 0\). Say \(u\in 0^{\mathrm{cl}}_{R/(x_{1}^{t},\ldots,x_{d}^{t})}\) is nonzero. Since \(\mathrm{cl}\) is functorial, the image of \(u\) in \(\mathrm{H}^{d}_{m}(R)\) must be in \(0^{\mathrm{cl}}_{\mathrm{H}^{d}_{m}(R)}=0.\) Thus there is some \(s\geqslant 0\) such that \((x_{1}\cdots x_{d})^{s}u\in(x_{1}^{t+s},\ldots,x_{d}^{t+s})\). Since \(R\) is Cohen-Macaulay, \(x_{1},\ldots,x_{d}\) is a regular sequence, and so \(u\in(x_{1}^{t},\ldots,x_{d}^{t})\), giving a contradiction. Hence \(0\) is cl-closed in \(R/\big{(}x_{1}^{t},\ldots,x_{d}^{t}\big{)}\) for every system of parameters \(x_{1},\ldots,x_{d}\) and every \(t\). Since \(\mathrm{cl}\) is residual, this implies every ideal generated by a system of parameters is cl-closed. _Remark 4.12_.: Note that while tight closure is not known to be finitistic in general, Smith proved that \(0^{*}_{\mathrm{H}^{d}_{m}(R)}=0^{*,fg}_{\mathrm{H}^{d}_{m}(R)}\) as long as \(R\) is a reduced, equidimensional excellent local ring [14, text after 3.3], so Theorem 4.11 holds for tight closure. Our proof above is modelled on the tight closure proof in [14]. **Example 4.13**.: We give an example to illustrate the need for Axiom 4.10 in the statement of Theorem 4.11. Consider epf closure. Let \(R\) be a complete regular local ring of mixed characteristic. By Theorem 3.2, \(0^{\mathrm{epf}}_{E}=E\). Since \(R\) is regular, \(E=\mathrm{H}_{m}^{d}(R)\). However, by [13, Theorem 3.19], every ideal of \(R\) is epf-closed in \(R\). Since epf-closure is residual [13, Lemma 3.1 and Proposition 7.2], \(0^{\mathrm{epf}}_{R/(x_{1}^{t},\ldots,x_{d}^{t})R}=0\) for all \(t\geqslant 0\). The next few results give weaker conditions that are equivalent to all parameter ideals being cl-closed, paralleling those for tight closure. **Corollary 4.14**.: _If every ideal generated by a system of parameters is \(\mathrm{cl}\)-closed and \(I\) is generated by part of a system of parameters, then \(I_{R}^{\mathrm{cl}}=I\)._ _In particular, this holds if \(R\) is \(\mathrm{cl}\)-rational._ Proof.: We use the method of [13, Page 126]: let \(I=(x_{1},\ldots,x_{k})\), where \(x_{1},\ldots,x_{d}\) is a system of parameters for \(R\). Then for every \(t\), \(J_{t}=(x_{1},\ldots,x_{k},x_{k+1}^{t},\ldots,x_{d}^{t})\) is \(\mathrm{cl}\)-closed and \(I^{\mathrm{cl}}\subseteq J_{t}^{\mathrm{cl}}=J_{t}\). Hence \(I^{\mathrm{cl}}\subseteq\bigcap_{t}J_{t}=I\). The final claim follows from Theorem 4.11.
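To see the direct-limit mechanism of the proof of Theorem 4.11 in the simplest case, consider the one-dimensional regular ring below (an illustration only, not needed for the results above):

```latex
% The direct-limit description of top local cohomology for R = k[[x]].
\[
R=k[\![x]\!],\qquad
\mathrm{H}^{1}_{m}(R)=\varinjlim_{t}\,R/(x^{t}),\qquad
R/(x^{t})\xrightarrow{\ \cdot x\ }R/(x^{t+1}),
\]
\[
u\in R/(x^{t})\ \text{maps to}\ 0\ \text{in}\ \mathrm{H}^{1}_{m}(R)
\iff x^{s}u\in(x^{t+s})\ \text{for some}\ s\geqslant 0
\iff u\in(x^{t}).
\]
```

The last equivalence holds because \(x\) is a nonzerodivisor; this is the same regular-sequence step used above for a general system of parameters in a Cohen-Macaulay ring.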
**Lemma 4.15**.: _Let \(R\) be a local ring and \(\mathrm{cl}\) a closure operation on \(R\)-modules satisfying colon-capturing such that every ideal \(I\) generated by part of a system of parameters is \(\mathrm{cl}\)-closed. Then \(R\) is Cohen-Macaulay._ Proof.: Let \(I=(x_{1},\ldots,x_{d})\) where \(x_{1},\ldots,x_{d}\) form a system of parameters on \(R\). Then for each \(1\leqslant i\leqslant d\), \[(x_{1},\ldots,x_{i-1}):x_{i}\subseteq(x_{1},\ldots,x_{i-1})^{\mathrm{cl}}=(x_{1},\ldots,x_{i-1}).\] Hence, \(x_{1},\ldots,x_{d}\) is a regular sequence, so \(R\) is Cohen-Macaulay. For tight closure, under suitable conditions (reduced, local, existence of a test element), to prove that a ring is \(F\)-rational it suffices to prove that one ideal generated by a system of parameters is tightly closed [13, Page 128]. In the next couple of results, we prove analogous statements for other closure operations. **Proposition 4.16**.: _Let \(R\) be a complete local Cohen-Macaulay ring. Suppose \(I=(x_{1},\ldots,x_{d})\) is generated by a system of parameters. If \(\mathrm{cl}\) is a functorial, residual closure operation and \(I_{t}=(x_{1}^{t},\ldots,x_{d}^{t})\) is \(\mathrm{cl}\)-closed for all \(t>0\), then every ideal generated by a system of parameters is \(\mathrm{cl}\)-closed._ _If in addition \(\mathrm{cl}\) satisfies Axiom 4.10, then \(R\) is \(\mathrm{cl}\)-rational._ Proof.: Let \(y_{1},\ldots,y_{d}\) be a system of parameters on \(R\). Then for \(t\gg 0\), \((x_{1}^{t},\ldots,x_{d}^{t})\subseteq(y_{1},\ldots,y_{d})\). By [13, Page 122], there is an injective \(R\)-module map \[f:A_{1}=R/(y_{1},\ldots,y_{d})R\hookrightarrow R/(x_{1}^{t},\ldots,x_{d}^{t})R=A_{2}.\] Since \(\mathrm{cl}\) is functorial, \(f\left(0^{\mathrm{cl}}_{A_{1}}\right)\subseteq 0^{\mathrm{cl}}_{A_{2}}=0\). Since \(f\) is injective, \(0^{\mathrm{cl}}_{A_{1}}=0\). Since \(\mathrm{cl}\) is residual, \((y_{1},\ldots,y_{d})^{\mathrm{cl}}_{R}=(y_{1},\ldots,y_{d})\). The final statement follows from Theorem 4.11. **Theorem 4.17**.: _Let \(R\) be a complete local Cohen-Macaulay ring, and suppose that \(\mathrm{cl}\) is a functorial and residual closure operation satisfying colon-capturing and strong colon-capturing, version A (Definition 2.6). If some ideal \(I=(x_{1},\ldots,x_{d})\) generated by a system of parameters is \(\mathrm{cl}\)-closed, then every ideal generated by a system of parameters is cl-closed._ _If \(\operatorname{cl}\) also satisfies Axiom 4.10, \(R\) is \(\operatorname{cl}\)-rational._ Proof.: Write \(I_{t}=(x_{1}^{t},\ldots,x_{d}^{t})R\). We aim to show that \(I_{t}\) is \(\operatorname{cl}\)-closed for all \(t\geqslant 1\). If not, by [10, Page 126], we can assume that for some \(t\), there exists \(u=x_{1}^{t-1}\ldots x_{d}^{t-1}z\in I_{t}^{\operatorname{cl}}-I_{t}\) where \(z\) represents an element of the socle in \(R/I\). Since \(\operatorname{cl}\) satisfies strong colon-capturing, version A, \((x_{1}^{t},\ldots,x_{d}^{t})^{\operatorname{cl}}:(x_{1}\ldots x_{d})^{t-1}\subseteq(x_{1},\ldots,x_{d})^{\operatorname{cl}}\). Hence \(z\) is in \((x_{1},\ldots,x_{d})^{\operatorname{cl}}=(x_{1},\ldots,x_{d})\), a contradiction. Now by Proposition 4.16, every ideal generated by a system of parameters is \(\operatorname{cl}\)-closed. The final statement follows by Theorem 4.11. **Theorem 4.18**.: _Let \(R\) be a local ring, \(x_{1},\ldots,x_{d}\) a system of parameters on \(R\), and \(\operatorname{cl}\) a closure operation defined at least on ideals of \(R\) satisfying strong colon-capturing, version B. Let \(I_{k}\) denote the ideal \((x_{1},\ldots,x_{k})\).
If \(I_{d}^{\operatorname{cl}}=I_{d}\), then \(I_{k}^{\operatorname{cl}}=I_{k}\) for each \(0\leqslant k\leqslant d\) and \(x_{1},\ldots,x_{d}\) is a regular sequence on \(R\)._ _Consequently, \(R\) is Cohen-Macaulay, so if \(R\) is complete and \(\operatorname{cl}\) is functorial and residual, then every ideal generated by a system of parameters is \(\operatorname{cl}\)-closed. If in addition \(\operatorname{cl}\) satisfies Axiom 4.10, then \(R\) is \(\operatorname{cl}\)-rational._ Proof.: The first paragraph follows as in the proof of the first Theorem on page 128 of [10], substituting strong colon-capturing, version B for the use of the Theorem cited in the proof. The second paragraph follows from Proposition 4.16. **Corollary 4.19**.: _Let \(B\) be a big Cohen-Macaulay \(R\)-module. Then \(R\) is \(\operatorname{cl}_{B}\)-rational if and only if one ideal generated by a system of parameters is \(\operatorname{cl}_{B}\)-closed._ Proof.: By [10], \(\operatorname{cl}_{B}\) is a functorial, residual closure operation satisfying Axiom 4.10 and strong colon-capturing, versions A and B. By Theorem 4.18, if the ideal generated by one system of parameters is \(\operatorname{cl}_{B}\)-closed, then \(R\) is \(\operatorname{cl}_{B}\)-rational. The other direction follows from Corollary 4.14. **Corollary 4.20**.: _Let \(R\) be a local ring and \(\operatorname{cl}\) a functorial, residual closure operation on \(R\)-modules. If \(R\) is weakly \(\operatorname{cl}\)-regular, then every ideal generated by a system of parameters is \(\operatorname{cl}\)-closed. Consequently, if \(R\) is complete and \(\operatorname{cl}\) also satisfies Axiom 4.10, then \(R\) is \(\operatorname{cl}\)-rational._ Proof.: This follows from Theorem 4.11 and the definition of weakly \(\operatorname{cl}\)-regular (Definition 2.11). However, when \(R\) is Gorenstein, the two conditions are equivalent, just as they are for tight closure [10, Page 129]. The result below is a variant of [10, Corollary 3.6], with a different style of proof. **Theorem 4.21**.: _Let \((R,\mathfrak{m},K)\) be a complete local Gorenstein ring, and \(\operatorname{cl}\) a residual, functorial closure operation on \(R\)-modules satisfying Axiom 4.10. Then TFAE:_ 1. \(R\) _is weakly_ \(\operatorname{cl}\)_-regular, i.e., every submodule of a finitely generated module is_ \(\operatorname{cl}\)_-closed._ 2. \(R\) _is_ \(\operatorname{cl}\)_-rational._ Proof.: \((1)\Rightarrow(2)\) follows from Corollary 4.20. We only need to show the converse. Assume that \(R\) is \(\operatorname{cl}\)-rational, and let \(N\subseteq M\) be finitely generated \(R\)-modules. If \(N\) is not \(\operatorname{cl}\)-closed, choose \(u\in N^{\operatorname{cl}}-N\). We may replace \(N\) by \(N^{\prime}\) where \(N\subseteq N^{\prime}\subseteq M\) such that \(N^{\prime}\) is maximal with respect to the property of not containing \(u\). We still have \(u\in(N^{\prime})^{\operatorname{cl}}-N^{\prime}\). The maximality of \(N^{\prime}\) implies that \(u\) is in every nonzero submodule of \(M/N^{\prime}\). We replace \(M,N^{\prime}\) by \(M/N^{\prime}\) and \(0\) respectively, and \(u\) by its image in \(M/N^{\prime}\). Since \(\operatorname{cl}\) is residual, \(0\neq u\in 0_{M}^{\operatorname{cl}}\). Then \(M\) has finite length and \(u\) spans the socle of \(M\) by [10, Lemma on page 56]. Let \(x_{1},\ldots,x_{d}\) be a system of parameters for \(R\) and let \(t\) be large enough that \((x_{1}^{t},\ldots,x_{d}^{t})\) kills \(M\), so that \(A=R/(x_{1}^{t},\ldots,x_{d}^{t})R\) is an Artinian Gorenstein local ring.
Then \(Ru\cong K\hookrightarrow A\), where the second map sends \(y\mapsto y(x_{1}\cdots x_{d})^{t-1}\). Since \(\operatorname{cl}\) is functorial, \(u\in 0^{\operatorname{cl}}_{A}\). But since \(\operatorname{cl}\) is residual and \(R\) is \(\operatorname{cl}\)-rational, \(0^{\operatorname{cl}}_{A}=0\), giving us a contradiction. ## 5. Examples In this section we give examples of rings that are and are not \(\operatorname{cl}\)-rational for certain closure operations \(\operatorname{cl}\). In particular, we find rings of finite Cohen-Macaulay type that are not \(\operatorname{cl}_{M}\)-rational for some of their MCM modules \(M\), despite having rational singularities [10] and hence F-rational singularities for large enough characteristic \(p>0\)[1]. This makes a case that MCM module closures do not quite capture the right notion of singularity to align with existing classes of singularities. **Proposition 5.1**.: _Let \(R=k[\![x^{d},x^{d-1}y,\ldots,xy^{d-1},y^{d}]\!]\), where \(k\) is a field. Let \(S=k[\![x,y]\!]\) so that \(R\) is a (complete) Veronese subring of \(S\). For any integer \(0\leqslant i\leqslant d-1\), let \(M_{i}\) be the MCM \(R\)-module generated by the degree \(i\) monomials in \(S\), i.e., \(M_{i}=x^{i}R+x^{i-1}yR+\ldots+y^{i}R\). Then_ 1. \(R\) _is_ \(\operatorname{cl}_{M_{i}}\)_-rational for_ \(0\leqslant i\leqslant d-2\)_._ 2. \(R\) _is not_ \(\operatorname{cl}_{M_{d-1}}\)_-rational._ Proof.: By [10, Corollary 6.4], the \(M_{i}\) are indecomposable MCM modules over \(R\) since they are isomorphic to the direct summands of \(S\) as an \(R\)-module. Then by Corollary 4.19, for each \(0\leqslant i\leqslant d-1\), \(R\) is \(\operatorname{cl}_{M_{i}}\)-rational if and only if some parameter ideal \(I\) is \(\operatorname{cl}_{M_{i}}\)-closed. Let \(I=(x^{d},y^{d})R\). Then \(I\) is a parameter ideal of \(R\). We use the following isomorphism \[M_{i}=x^{i}R+x^{i-1}yR+\ldots+y^{i}R\cong(x^{d},x^{d-1}y,\ldots,x^{d-i}y^{i})R\] to view \(M_{i}\) as an ideal \(I_{i}\) of \(R\) for \(0\leqslant i\leqslant d-1\). Then \(I\) is \(\operatorname{cl}_{M_{i}}\)-closed if and only if \((II_{i}):_{R}I_{i}=I\). First we show that \(I\) is not \(\operatorname{cl}_{M_{d-1}}\)-closed. Note that \[(x^{d},y^{d})\cdot(x^{d},x^{d-1}y,\ldots,xy^{d-1})=(x^{2d},x^{2d-1}y,\ldots,x^ {d+1}y^{d-1},x^{d}y^{d},\ldots,xy^{2d-1}),\] which is equal to \[(x^{d},x^{d-1}y,\ldots,xy^{d-1},y^{d})\cdot(x^{d},x^{d-1}y,\ldots,xy^{d-1}).\] So \((x^{d-1}y,\ldots,xy^{d-1})\) is contained in \(I^{\operatorname{cl}_{M_{d-1}}}\), which implies that \(I\) is not \(\operatorname{cl}_{M_{d-1}}\)-closed. The fact that \(I\) is \(\operatorname{cl}_{M_{i}}\)-closed for \(0\leqslant i\leqslant d-2\) follows from the next lemma. **Lemma 5.2**.: _Using the previous notation, if \(0\leqslant i\leqslant d-2\) and \(J\cdot I_{i}\subseteq I\cdot I_{i}\), then \(J\subseteq I\)._ Proof.: Since \(I,I_{i}\), and \(I\cdot I_{i}\) are monomial ideals, we can assume that \(J\) is a monomial ideal. Let \(x^{a}y^{b}\) be a generator of \(J\). Then \(a+b\) is divisible by \(d\). If \(a+b>d\), then \(a+b\geqslant 2d\). Hence, either \(a\geqslant d\) or \(b\geqslant d\), which implies that \(x^{a}y^{b}\in I\), so we may assume that \(a+b=d\). Let \(x^{a}y^{b}\in J\) be a monomial not in \(I\). Then \(a\neq 0,b\neq 0\). Let \(j=\min(d-b-1,i)\). Then \(x^{a}y^{b}\cdot x^{d-j}y^{j}=x^{a+d-j}y^{b+j}\in I\cdot I_{i}\). 
Since \(j,b+j\leqslant d-1\), we conclude that \[x^{a+d-j}y^{b+j}\in x^{d}I_{i}\iff a+d-j\geqslant 2d-i\iff 2d-b-j\geqslant 2d-i\iff i\geqslant b+j.\] Looking at the choice of \(j\), there are two possibilities: 1. \(j=i\), which contradicts \(i\geqslant b+j>j\). 2. \(j=d-b-1\leqslant i\), so that \(b+j=d-1\), which contradicts \(d-2\geqslant i\geqslant b+j\). Either way we conclude that \(x^{a}y^{b}\cdot I_{i}\not\subseteq I\cdot I_{i}\), which is a contradiction. Hence \(J\subseteq I\). _Remark 5.3_.: The canonical module of the ring \(R\) above is \(\omega=M_{d-2}\) ([11, Corollary 3.1.3]). The above proof shows that \(R\) is \(\operatorname{cl}_{\omega}\)-rational, as expected given that \(R\) is F-rational when \(k\) has characteristic \(p>0\) [15, 5.1]. **Proposition 5.4**.: _Let \(R=k\llbracket x,y\rrbracket/(x^{2}y)\). \(R\) is neither \(\operatorname{cl}_{M_{1}}\)-rational, nor \(\operatorname{cl}_{M_{2}}\)-rational, where \(M_{1}=R/(y)\) and \(M_{2}=R/(x^{2})\) represent the two isomorphism classes of indecomposable MCM \(R\)-modules._ Proof.: The classification of the indecomposable MCM \(R\)-modules comes from [14, Example 14.23]. Since \(I=(x+y)\) is a parameter ideal, we only need to show that \(I^{\operatorname{cl}}\) is strictly larger than \(I\) for \(\operatorname{cl}=\operatorname{cl}_{M_{1}}\) and \(\operatorname{cl}_{M_{2}}\). By definition of \(\operatorname{cl}_{M_{1}}\), it is clear that \(y\in I^{\operatorname{cl}_{M_{1}}}\). We only need to show that \(y\notin I\), but this is immediate since \(I\neq\mathfrak{m}=(x,y)R\). The proof is similar for \(M_{2}\): \(x^{2}\in I^{\operatorname{cl}_{M_{2}}}\) but \(x^{2}\notin I\). **Proposition 5.5**.: _Let \(R=k\llbracket x,y\rrbracket/(y^{2})\) where \(k\) is a perfect field. Let_ \[\mathcal{M}=\{I_{n}:n\in\mathbb{N}\cup\{\infty\}\},\] _where \(I_{n}=(x^{n},y)\). Then \(R\) is not \(\operatorname{cl}_{I_{n}}\)-rational for any \(n\)._ Proof.: The set \(\mathcal{M}\) gives a representative for each isomorphism class of indecomposable MCM \(R\)-modules [11, Example 6.5]. As a result, if \(R\) is \(\operatorname{cl}_{I_{n}}\)-rational then \((I\,I_{n}):_{R}I_{n}=I\) for the parameter ideal \(I=(x)\). But \(y\cdot I_{n}=y(x^{n},y)=(x^{n}y)\subseteq(x^{n+1},xy)R=I\,I_{n}\), so \(y\in(I\,I_{n}):_{R}I_{n}\) even though \(y\notin I\). So \(R\) is not \(\operatorname{cl}_{I_{n}}\)-rational. **Theorem 5.6**.: _Let \(R=k\llbracket x,y\rrbracket/(x^{n}+y^{2})\) where \(k\) is an algebraically closed field and \(n\) is an odd positive integer. Consider the set \(\{R,(x,y),(x^{2},y),\ldots,(x^{\frac{n-1}{2}},y)\}\) of ideals of \(R\). If \(n>1\), then \(R\) is not \(\operatorname{cl}_{(x^{i},y)R}\)-rational for \(i\geqslant 1\)._ Proof.: By [11, Proposition 5.11], this is a set of representatives of the isomorphism classes of indecomposable MCM modules of \(R\). Note that \(I=(x)R\) is a parameter ideal since \(y^{2}=-x^{n}\in I\Rightarrow\mathfrak{m}=\sqrt{I}\). Assume \(n>1\). We only need to show that \(I_{i}I:_{R}I_{i}\neq I\) where \(I_{i}=(x^{i},y)R\) for \(i=1,\ldots,\frac{n-1}{2}\). Note that \(y\cdot I_{i}=(x^{i}y,y^{2})=(x^{i}y,x^{n})\subseteq(x^{i+1},xy)=I_{i}\cdot I\), but \(y\notin I\). **Proposition 5.7**.: _Let \(R\) be a 2-dimensional ADE singularity, so that \(R\) is a ring of the form \(k\llbracket x,y,z\rrbracket/(z^{2}+g(x,y))\) where \(g(x,y)\in k\llbracket x,y\rrbracket\).
Let \(M\) be an MCM \(R\)-module, that is, a cokernel of a matrix of the form \(zI_{n}-\varphi\) where \(\varphi\) is an \(n\times n\) square matrix with entries in \((x,y)k\llbracket x,y\rrbracket\) and \(I_{n}\) is the \(n\times n\) identity matrix. Then \(R\) is not \(\operatorname{cl}_{M}\)-rational._ See [14] for a discussion of MCM modules over these 2-dimensional ADE singularities. Proof.: Let \(J=(x,y)\); then \(z^{2}\in J\) implies that \(\sqrt{J}=\mathfrak{m}\), so \(J\) is a parameter ideal. Note that \(z\notin J\) as \(R/J\simeq k\llbracket z\rrbracket/(z^{2})\neq k\). We will show that \(z\in J^{\operatorname{cl}}\) where \(\operatorname{cl}=\operatorname{cl}_{M}\). By definition, we need to show that for any \(u\in M\), \(zu\in JM\). Write \(M\) as the cokernel of \(R^{n}\xrightarrow{zI_{n}-\varphi}R^{n}\), let \(\{e_{i}\}_{1\leqslant i\leqslant n}\) be a basis for the second copy of \(R^{n}\), and let \(\varphi_{i}\) denote the \(i\)th column of \(\varphi\). Then \(ze_{i}-\varphi_{i}=0\) in \(M\), which implies that \(ze_{i}\in JM\). Hence \(zu\in JM\) for any \(u\in M\). _Remark 5.8_.: All of the examples in [1, Section 6] satisfy the above theorem. ## Acknowledgments We thank Neil Epstein for some very helpful comments on this paper.
2310.17646
Learning the dynamics of a one-dimensional plasma model with graph neural networks
We explore the possibility of fully replacing a plasma physics kinetic simulator with a graph neural network-based simulator. We focus on this class of surrogate models given the similarity between their message-passing update mechanism and the traditional physics solver update, and the possibility of enforcing known physical priors into the graph construction and update. We show that our model learns the kinetic plasma dynamics of the one-dimensional plasma model, a predecessor of contemporary kinetic plasma simulation codes, and recovers a wide range of well-known kinetic plasma processes, including plasma thermalization, electrostatic fluctuations about thermal equilibrium, and the drag on a fast sheet and Landau damping. We compare the performance against the original plasma model in terms of run-time, conservation laws, and temporal evolution of key physical quantities. The limitations of the model are presented and possible directions for higher-dimensional surrogate models for kinetic plasmas are discussed.
Diogo D Carvalho, Diogo R Ferreira, Luis O Silva
2023-10-26T17:58:12Z
http://arxiv.org/abs/2310.17646v3
Do Graph Neural Networks Dream of Landau Damping? Insights from Kinetic Simulations of a Plasma Sheet Model ###### Abstract We explore the possibility of fully replacing a plasma physics kinetic simulator with a graph neural network-based simulator. We focus on this class of surrogate models given the similarity between their message-passing update mechanism and the traditional physics solver update, and the possibility of enforcing known physical priors into the graph construction and update. We show that our model learns the kinetic plasma dynamics of the one-dimensional plasma model, a predecessor of contemporary kinetic plasma simulation codes, and recovers a wide range of well-known kinetic plasma processes, including plasma thermalization, electrostatic fluctuations about thermal equilibrium, and the drag on a fast sheet and Landau damping. We compare the performance against the original plasma model in terms of run-time, conservation laws, and temporal evolution of key physical quantities. The limitations of the model are presented and possible directions for higher-dimensional surrogate models for kinetic plasmas are discussed. _Keywords: Plasma Physics, Machine Learning, Graph Neural Networks_ ## 1 Introduction Simulating the kinetic behavior of a plasma [1] is a complex and computationally demanding task. Fully relativistic massively-parallelized Particle-in-Cell (PIC) codes are commonly used to model these phenomena and have been shown to correctly recover and predict plasma collective behavior [2, 3, 4]. To obtain computational speed-ups, there have been recent attempts to combine existing PIC codes with machine learning surrogate models. These efforts include approaches to accelerate [5] or fully replace [6] the field solver block, or the integration of surrogate models into advanced physics extensions [7]. However, how to simultaneously obtain a significant computational gain, while enforcing known physics constraints is still an open research question. Developments in machine learning introduced several physics-inspired surrogate models as an alternative to standard differential equation solvers [8, 9, 10, 11] and \(n\)-body or mesh-based simulators [12, 13, 14, 15, 16, 17]. From the broad set of available surrogate models, one class of algorithms that can be of particular interest for kinetic plasma simulations are graph neural network-based approaches [18, 19], because of their capability to model both particle-particle [12] and particle-mesh interactions [13], as well as the possibility of enforcing known invariances or symmetries into the network architecture [15, 16, 19]. These approaches have been successfully applied to fluid [12, 13], rigid body [13, 20], and charged particle dynamics [15, 16, 21]. However, to the best of our knowledge, their applicability to model kinetic plasma simulations is still to be demonstrated. In this work we aim to model the predecessor of the PIC loop, the one-dimensional electrostatic sheet plasma model introduced by Dawson [22, 23]. This is an ideal initial testbench since it provides a simpler scenario, in terms of the problem structure and possible computational gains, while capturing a wide range of kinetic plasma physics phenomena, that go beyond "collisionless" physics, including Coulomb collisions, and collisional thermalization [22, 23, 24, 25, 26]. Moreover, recent studies in the fundamental statistical physics processes in plasmas have been using the sheet model and/or direct extensions [27]. 
We show how to leverage previous work on graph neural network-based simulators by Sanchez-Gonzalez et al. [12] to kinetic plasma simulations and to the one-dimensional sheet model by introducing domain knowledge into the graph construction and simulator update mechanisms which enforce the desired symmetries and invariances. We discuss the advantages and disadvantages of using our surrogate model when compared to the standard physics simulator in terms of accuracy, run-time, energy conservation, and generalization capabilities. Finally, based on our findings, we comment on the expected impact of graph neural network-based simulators for the multi-dimensional PIC scenario. ## 2 Electrostatic Sheet Model The single-species one-dimensional electrostatic sheet model introduced by Dawson [22, 23] represents a plasma as a group of equally negatively charged sheets, moving freely over a uniformly neutralizing ion background (see figure 1). In a one-dimensional system, this model is exact and describes, within classical physics, the dynamics of an arbitrary non-relativistic plasma. The system is in equilibrium if the sheets are at rest and equally spaced. In such a scenario, the electric field at a given sheet is zero, and the field profile is represented by a sawtooth function; it varies linearly along the x-axis by a factor of \(4\pi en_{0}\), where \(e\) represents the electron charge and \(n_{0}\) the background ion number density, and jumps by a factor of \(-4\pi en_{0}\delta\) at each sheet, where \(\delta=L/N_{sheets}\) represents the inter-sheet spacing at equilibrium, \(L\) the box length and \(N_{sheets}\) the number of sheets. If one sheet moves a certain distance \(\xi\) from its equilibrium position, it will experience an electric field \(E=4\pi en_{0}\xi\), resulting in an equation of motion given by: \[\ddot{\xi}=-\omega_{p}^{2}\xi \tag{1}\] where \(m_{e}\) represents the electron mass, and \(\omega_{p}=\sqrt{4\pi n_{0}e^{2}/m_{e}}\) is the plasma frequency. This result implies that, for small displacements, each sheet behaves as an independent harmonic oscillator. For larger displacements, it is possible that consecutive sheets cross each other, corresponding to a one-dimensional Coulomb collision, meaning their equilibrium positions switch. Alternatively, this interaction can be modeled as an elastic collision, i.e. one can simply switch the velocities of the sheets at the instant of crossing (instead of their equilibrium positions) as it results from the conservation of energy and momentum. An illustration of the difference in the resulting individual trajectories is provided in figure 2. To simulate this system, two computational directions can be used [23]: a synchronous method, and an asynchronous method. The synchronous method first updates the sheet dynamics according to (1) considering a fixed \(\Delta t\). It then detects crossings by testing the condition \(x_{i}^{t+1}>x_{j}^{t+1}\) for \(j>i\), and proceeds to use an iterative method to estimate the crossing times and correct the motion of the corresponding sheets. This method does not correctly resolve crossings involving more than two sheets in a single time step, which leads to an overall energy loss in the system if the time step is too large compared with the inverse of the typical collision frequency. For this reason, for higher sheet velocities it is necessary to use smaller simulation time steps (the collision frequency increases with increasing thermal velocity). 
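For illustration, the synchronous update just described can be sketched as follows (a minimal NumPy sketch with illustrative names, assuming a fixed time step and omitting the iterative crossing-time correction that follows the detection step):

```python
import numpy as np

def synchronous_step(x, v, x_eq, dt, omega_p=1.0):
    """One synchronous sheet-model step: exact harmonic advance of each sheet
    about its equilibrium position (equation 1), followed by a check for
    sheets that overtook their right neighbour during the step."""
    xi = x - x_eq                                  # displacement from equilibrium
    c, s = np.cos(omega_p * dt), np.sin(omega_p * dt)
    xi_new = xi * c + (v / omega_p) * s            # analytic harmonic-oscillator solution
    v_new = -omega_p * xi * s + v * c
    x_new = x_eq + xi_new
    crossed = np.flatnonzero(np.diff(x_new) < 0)   # indices i with x_i > x_{i+1}
    return x_new, v_new, crossed
```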
On the other hand, the asynchronous method advances the simulation until the next crossing. The next crossing time can be predicted analytically from (1) by solving for \(x_{i}(t)=x_{j}(t)\) with respect to \(t\) for all pairs of adjacent sheets. This algorithm guarantees energy conservation but implies additional computational effort since a sorted table containing all expected crossing times between neighboring sheets must be pre-computed and updated after each crossing is resolved.

Figure 1: Schematic of the 1D single species electrostatic sheet model. In equilibrium, the negatively charged sheets (red) are equally spaced inside the box. If one sheet is displaced from its equilibrium position, the average electric field the sheet feels is not zero, due to the charge imbalance. Adapted from Dawson [22].

Since the Graph Network Simulator (GNS) is a synchronous model, we use the synchronous sheet model algorithm (illustrated in figure 3) for both data generation and testing purposes to allow for model accuracy comparisons at equivalent simulation time steps.

Figure 2: Comparison of charged sheet trajectories when considering sheet interactions as crossings (top) _versus_ binary collisions (bottom). The represented system consists of 10 sheets (represented by different colors) moving on a periodic box of length \(L\). Initial velocities were randomly chosen. We will learn to model the dynamics of the first case (crossings) as this is considerably easier, mainly due to the smoothness of the sheet trajectories. More details on the difficulties that arise when attempting to learn collisional dynamics are provided in Appendix A.

## 3 Graph Network Simulator

The GNS architecture used here is inspired by the work of Sanchez-Gonzalez et al. [12] while taking into consideration the specifics of the electrostatic sheet model. The main building blocks are presented in figure 3. Based on the sheet positions \(\mathbf{x}^{t}\), velocities \(\mathbf{v}^{t}\), equilibrium positions \(\mathbf{x}^{t}_{eq}\), and boundary conditions, we generate a graph representation \(\mathcal{G}\) of the plasma. A Graph Neural Network (GNN) predicts each individual sheet acceleration \(\mathbf{a}^{t}\), which will be used to update the positions \(\mathbf{x}^{t}\) and velocities \(\mathbf{v}^{t}\). We enforce the boundary conditions by re-injecting particles that crossed the boundary, sorting the particles by their position, and updating their equilibrium positions. This process can be repeated to generate longer simulation rollouts. Our GNS does not treat particle interaction as binary collisions. Instead, we learn to predict the changes in velocity as sheets move through one another. This choice makes it significantly easier for the network to learn the dynamics of the system and also reduces the graph and model complexity. We provide this comparison, as well as the structural changes required to the simulator when considering crossings as collisions, in A.

### Graph Representation

The plasma is represented as a graph by a set of nodes \(\{\mathbf{n}_{i}\}\) representing the negatively charged sheets, and a set of directed edges \(\{\mathbf{r}_{ij}\}\) which connect neighboring sheets. In our case, for simplicity and performance reasons we opted to connect only the first neighbors (additional comments on the impact of higher connectivity are provided throughout the paper).
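A minimal sketch of this first-neighbor connectivity is given below (an illustration with assumed function and variable names; the mirrored boundary nodes used for reflecting boundaries, discussed next, are omitted):

```python
import numpy as np

def first_neighbor_edges(n_sheets, periodic=True):
    """Directed edges linking each sheet to its immediate neighbours, in both
    directions. For periodic boundaries the chain is closed into a cycle."""
    idx = np.arange(n_sheets)
    left = idx if periodic else idx[:-1]
    right = (idx + 1) % n_sheets if periodic else idx[1:]
    senders = np.concatenate([left, right])
    receivers = np.concatenate([right, left])
    return senders, receivers
```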
Each node vector \(\mathbf{n}_{i}\) contains the information relative to the corresponding negatively charged sheet, while each edge vector \(\mathbf{r}_{ij}\) contains information about the relative displacement of node \(i\) in relation to node \(j\). They are defined as: \[\begin{split}\mathbf{n}_{i}^{t}&=\left[\xi_{i}^{t},v_{i}^{t}\right]\\ \mathbf{r}_{ij}^{t}&=\left[x_{j}^{t}-x_{i}^{t}\right]\end{split} \tag{2}\] where \(\xi_{i}^{t}\) corresponds to the displacement of the \(i\)th sheet from its equilibrium position \(\left(x_{i}^{t}-x_{eq_{i}}^{t}\right)\), and \(v_{i}^{t}\) the finite difference velocity \(\left(x_{i}^{t}-x_{i}^{t-1}\right)/\Delta t\). To allow the model to generalize to different box sizes and number of sheets, we normalize all distances and velocities by the intersheet distance at equilibrium \(\delta\). This transformation makes the inputs of the network invariant to the system size (box length and number of particles). This is also the reason why we include in the node vector the displacement from equilibrium instead of the sheet position inside the box, which additionally enforces an invariance to the sheet rank.

Figure 3: Schematic representations of the synchronous Electrostatic Sheet Model algorithm [23] and the Graph Network Simulator used.

When considering reflecting boundary conditions, extra nodes representing mirrored versions of the boundary sheets are added to the graph (figure 4). Ideally, the number of mirror sheets should be large enough so that no boundary sheet crosses all mirror sheets in a single time step (in our case we set it to the number of message-passing steps, which we cover in Section 3.2). On the other hand, for periodic boundaries the graph becomes cyclic and we consider the distance between boundary sheets to be equal to the distance through the simulation box walls. In subsequent sections, we will show that representing boundaries in this way allows the GNN model to be applied to different boundary conditions than the ones it was trained on since it learns interactions between pairs of sheets (not interactions with the wall). However, this comes at the cost of an extra hard-coded step, that re-injects particles that crossed the boundary (procedure to be explained in more detail in later sections). We experimented with several other possible representations of the system (e.g. different boundary representations, node and edge parameters, connecting the \(n\)-nearest neighbors) but these variants either produced worse results (in terms of accuracy or generalization capabilities) or introduced extra complexity and memory requirements that did not result in meaningful accuracy and/or speed-up improvements for the tested scenarios.

### Graph Neural Network Architecture

The GNN module follows an encoder-processor-decoder architecture as introduced by Battaglia et al. [18]. A slight modification is used, which includes the sent edges in the node update mechanism of the processor block (equivalent to the Graph Network implementation available in Jraph [28]). The main building blocks are as follows: **Encoder**: The encoder transforms each node \(\mathbf{n}_{i}\) and edge \(\mathbf{r}_{ij}\) vectors into a latent space node \(\mathbf{v}_{i}\) and edge \(\mathbf{e}_{ij}\) representation according to: \[\mathbf{v}_{i}=\varepsilon^{v}\left(\mathbf{n}_{i}\right) \tag{3}\] \[\mathbf{e}_{ij}=\varepsilon^{e}\left(\mathbf{r}_{ij}\right)\]
where \(\varepsilon^{v}\) and \(\varepsilon^{e}\) are learnable functions.

Figure 4: Graph representation of a four-sheet system for different boundary conditions. The number of mirror sheets used for reflecting boundaries is not necessarily one, and, if necessary, can be larger than the number of sheets inside the box.

**Processor**: The processor consists of a series of \(M\) slightly modified Graph Network (GN) blocks [18, 28] which sequentially update the node and edge values according to: \[\begin{split}\mathbf{e}_{ij}^{m+1}&=\phi^{e}\left(\mathbf{e}_{ij}^{m},\mathbf{v}_{i}^{m},\mathbf{v}_{j}^{m}\right)\\ \overline{\mathbf{e}}_{r_{i}}^{m+1}&=\sum_{j\in\mathcal{N}(i)}\mathbf{e}_{ij}^{m+1}\\ \overline{\mathbf{e}}_{s_{i}}^{m+1}&=\sum_{j\in\mathcal{N}(i)}\mathbf{e}_{ji}^{m+1}\\ \mathbf{v}_{i}^{m+1}&=\phi^{v}\left(\overline{\mathbf{e}}_{r_{i}}^{m+1},\overline{\mathbf{e}}_{s_{i}}^{m+1},\mathbf{v}_{i}^{m}\right)\end{split} \tag{4}\] where the superscript denotes the block number, \(\mathcal{N}(i)\) the set of nodes connected to \(i\), and \(\phi^{v}\), \(\phi^{e}\) are learnable functions. The value of \(M\) is set depending on the training time step and the maximum velocity of the particles present in the training simulations. Ideally, \(M\) should be larger than the maximum number of neighboring sheets that a given sheet crosses in any particular time step, since a graph node will at most receive/send information from/to the \(M^{\text{th}}\) neighbor. This condition can be relaxed if the \(N\)-nearest neighboring nodes are directly connected (with the cost of additional memory requirements), and we have observed similar performance in our tested scenarios for models with an equivalent \(M\times N\) factor. However, for higher crossing frequencies than the ones presented in this work, it might be preferable to increase \(N\) to avoid possible information bottlenecks [29]. **Decoder**: The decoder block transforms the latent node representations of the last layer into the output vector: \[\mathbf{y}_{i}=\delta^{v}\left(\mathbf{v}_{i}^{M}\right) \tag{5}\] where \(\delta^{v}\) is a learnable function. In our case, the output vector \(\mathbf{y}_{i}\) is a single real value that corresponds to an estimate of the individual finite difference particle acceleration \(a_{i}^{t}=(v_{i}^{t+1}-v_{i}^{t})/\Delta t=(x_{i}^{t+1}-2x_{i}^{t}+x_{i}^{t-1})/\Delta t^{2}\) normalized to the intersheet spacing \(\delta\). We parameterize the encoder and decoder functions \((\varepsilon^{e},\varepsilon^{v},\delta^{v})\) as linear transformations. As for the processor functions \((\phi^{e},\phi^{v})\), they are given by a two-layer dense neural network following: Input \(\rightarrow\{\text{LinearLayer}()\rightarrow\text{ReLU}\rightarrow\text{LinearLayer}()\}\rightarrow\) Output. In every block (encoder, processor, and decoder), we use a latent space of size 128. A summary of the hyper-parameter tuning experiments that led to these final values is provided in Appendix B. Although this GNN architecture does not enforce equivariance with respect to reflections (a symmetry present in the sheet model, i.e. if the simulation box is flipped the predicted accelerations should simply switch sign), we observed that the network was nonetheless capable of correctly approximating this symmetry within the training data range.
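For concreteness, a single processor block of equation (4) can be sketched in JAX as follows (an illustration only: `phi_e` and `phi_v` stand for the two-layer MLPs described above, the ordering of the endpoint features is a convention of this sketch, and the Jraph-based implementation used in practice may differ in details):

```python
import jax.numpy as jnp
from jax.ops import segment_sum

def gn_block(nodes, edges, senders, receivers, phi_e, phi_v):
    """One message-passing step: update each directed edge from its feature and
    the two endpoint nodes, then update each node from the summed received
    edges, the summed sent edges, and its own feature (equation 4)."""
    n = nodes.shape[0]
    edge_in = jnp.concatenate([edges, nodes[receivers], nodes[senders]], axis=-1)
    new_edges = phi_e(edge_in)
    received = segment_sum(new_edges, receivers, num_segments=n)
    sent = segment_sum(new_edges, senders, num_segments=n)
    new_nodes = phi_v(jnp.concatenate([received, sent, nodes], axis=-1))
    return new_nodes, new_edges
```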
In fact, we developed an alternative architecture that enforced this equivariance but did not observe relevant gains concerning the required number of training simulations nor improved rollout accuracy or energy conservation capabilities (in fact we observed a deterioration for considerable out-of-training data distribution values). Similarly, not using the sent messages for the node update mechanism led to poorer energy conservation for (considerably) out-of-training data distribution values. More details about these comparisons and the equivariant architecture are provided in C. ### Position and Velocity Update To update the particle positions and velocities we use a semi-implicit first-order Euler integration scheme as follows: \[\begin{split}\tilde{v}^{t+1}&=v^{t}+a^{t}\Delta t \\ \tilde{x}^{t+1}&=x^{t}+\tilde{v}^{t+1}\Delta t. \end{split} \tag{6}\] After this update, we resolve the boundary crossings. When considering reflecting boundary conditions, we flip the position and velocity of the particles that left the simulation box. No change is applied to their equilibrium positions. When considering periodic boundaries, we instead re-insert the particles through the opposite boundary without changing their velocities. Additionally, the equilibrium positions are updated to take into consideration particles that crossed the boundaries (additional information is provided in D). Finally, we sort particles by their position inside the box. This step is required to correctly attribute equilibrium positions and ensure the necessary relative ordering for graph construction. ## 4 Implementation For reference, we implemented the synchronous version of the original electrostatic sheet model [22, 23] in Python, using NumPy [30]. This code is used to generate all ground truth training and test data at a high temporal resolution. The GNS was also implemented in Python using JAX [31], Jraph [28], and Haiku [32]. Additionally, from here onwards we will adopt a system of units similar to Dawson [22]. Time will be shown in units of the plasma period \(\omega_{p}^{-1}\) (with \(\omega_{p}\) as defined in (1)), distances will be presented in units of the intersheet spacing in equilibrium \(\delta\), resulting in velocities in units of \(\delta\!\cdot\!\omega_{p}\) and accelerations in units of \(\delta\!\cdot\!\omega_{p}^{2}\). Note that in the adopted units the Debye length \(\lambda_{D}\) is equivalent to the thermal velocity since, by definition, \(v_{th}=\lambda_{D}\omega_{p}\)[1], and the length of the simulation box \(L\) is equivalent to the number of sheets since \(L=N_{sheets}\delta\). ### Generating the ground truth data Using the electrostatic sheet model, we generate 10,000 simulations of systems consisting of 10 sheets moving inside a periodic box. All simulations are run for a duration of \(t_{max}=10\ \omega_{p}^{-1}\) using a time step of \(\Delta t_{sim}=10^{-4}\ \omega_{p}^{-1}\). The initial displacements from the equilibrium positions and velocities of the sheets are randomly sampled from uniform distributions. The maximum initial displacement equals \(\xi_{max}^{0}=0.2\ \delta\) and the maximum initial velocity is \(v_{max}^{0}=10\ \delta\!\cdot\!\omega_{p}\). In addition, we ensured that the total energy of the system did not vary more than a predefined threshold (\(\Delta\varepsilon/\varepsilon_{0}\!=\!10^{-6}\)) during the full simulation by discarding simulations that did not fulfill this criterion. This guarantees that all crossings are well resolved by the sheet model. 
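A minimal sketch of this initial-condition sampling is shown below (illustrative only: the uniform distributions are assumed symmetric about zero, the placement of the equilibrium grid is an assumption of the sketch, and the energy-conservation filter applied to discard poorly resolved simulations is omitted):

```python
import numpy as np

def sample_initial_conditions(n_sheets=10, xi_max=0.2, v_max=10.0, seed=0):
    """Random initial conditions in normalized units: displacements from
    equilibrium in units of delta, velocities in units of delta*omega_p."""
    rng = np.random.default_rng(seed)
    x_eq = np.arange(n_sheets, dtype=float)        # assumed equilibrium grid, one sheet per delta
    xi0 = rng.uniform(-xi_max, xi_max, n_sheets)
    v0 = rng.uniform(-v_max, v_max, n_sheets)
    return x_eq + xi0, v0, x_eq
```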
### Data preprocessing and augmentation Before training the GNN models, we apply the following preprocessing steps. First, we downsample the data to the desired training time step (e.g. \(\Delta t_{train}=10^{-2}\ \omega_{p}^{-1}\)). To take advantage of the system symmetries, we proceed to augment the training dataset by mirroring the simulations along the x and time axes (the latter is not equivalent to simply changing the sign of the velocities since we are using the finite difference velocity). We proceed to generate pairs of input graphs and output target accelerations, where each input graph corresponds to a full simulation rollout (and corresponding augmented versions). More details on the impact of the training dataset size and data augmentation are provided in Appendix E. ### Training To train the models, we hold out 100 simulations for validation purposes. We proceed to minimize the mean squared error between the predicted and target accelerations using the Adam optimizer. We use an exponential learning rate scheduler, similarly to Sanchez-Gonzalez et al. [12], for which \(\alpha(j)=\alpha_{final}+(\alpha_{start}-\alpha_{final})\cdot 0.1^{j\cdot 10^{-6}}\), where \(j\) represents the gradient update step, and the initial and final learning rates are given by \(\alpha_{start}=10^{-4}\) and \(\alpha_{final}=10^{-6}\). We set the batch size to 1 (one graph corresponds to a full simulation) and compute the validation loss on the full validation set once a full cycle over the training dataset is completed. The training procedure is then run to a maximum of \(1\times 10^{6}\) gradient updates for \(\Delta t_{train}=10^{-1}\ \omega_{p}^{-1}\), and \(1.5\times 10^{6}\) gradient updates for \(\Delta t_{train}=10^{-2}\ \omega_{p}^{-1}\). The final weights of the model are those obtained for the smallest recorded validation loss. The full training procedure lasts approximately 4 hours for \(\Delta t_{train}=10^{-1}\ \omega_{p}^{-1}\) and \(M=5\), and 1 day for \(\Delta t_{train}=10^{-2}\ \omega_{p}^{-1}\) and \(M=3\), using a single Nvidia Titan X GPU. For each value of \(\Delta t_{train}\) we train 5 equivalent models using different random seeds in order to assess the dependence of performance on weight initialization. ## 5 Model Benchmark In this section, we assess the capability of the GNS to predict individual sheet trajectories. We showcase the generalization capabilities already hinted at in Section 3.1 by evaluating the model accuracy for systems of different sizes and boundary conditions. Additionally, we compare its energy conservation capabilities and run-time against the Sheet Model (SM) and discuss the identified GNS limitations. ### Trajectory Prediction Error In order to benchmark the rollout accuracy and generalization capabilities of the GNS, we evaluate its accuracy on multiple test sets consisting of systems with different numbers of sheets and boundary conditions. Each test set contains 100 simulations with a similar duration, temporal resolution, maximum initial displacement and velocity as the ones present in the training set.
Our evaluation metrics are the rollout mean absolute error (MAE) and the earth mover's distance (EMD) [33] between the predicted and the ground truth sheet trajectories, calculated, for each time step, as \[\begin{split}\text{MAE}=&\ \frac{1}{N}\sum_{i} \left|x_{i}^{GNS}-x_{i}^{True}\right|\\ \text{EMD}=&\ \min\frac{1}{N}\sum_{i}^{N}\sum_{j}^{N} \left|x_{i}^{GNS}-x_{j}^{True}\right|\cdot c_{ij},\\ &\ \text{subject to}\ \sum_{i}c_{ij}=1\,\ \sum_{j}c_{ij}=1\,\ \text{and}\ c_{ij}\in\{0,1\}\end{split} \tag{7}\] and then average over the full simulation rollout. In the case of periodic boundaries, we modify both metrics to consider the absolute distance to be the minimum of the distances through the box or through the walls. The results presented in figure 5 allow us to draw some conclusions. We observe the rollout errors obtained are considerably small (note that they are presented in units of the intersheet spacing \(\delta\)) demonstrating that, despite training solely on single-step acceleration prediction, we achieve a stable rollout accuracy. To provide additional insight into the small scale of the errors, we showcase in figure 6 the worst test simulation rollouts (highest rollout EMD across all models) for different time steps and boundary conditions. We also observe the simulator accuracy is independent of the number of sheets and boundary conditions, without additional re-training/fine-tuning. These invariances, already hinted at in Section 3.1, are a direct consequence of the chosen graph representation of the system. Finally, figure 5 illustrates the importance of using both the MAE and the EMD as complementary evaluation metrics. The larger intervals associated with the MAE are produced by a small set of simulations where, due to the accumulation of small prediction errors, two particles switch their expected relative order during a tangential crossing (i.e. when they are moving in the same direction with very similar velocities). This results in a permutation of their predicted trajectories with respect to their ground truth, leading to larger MAE values (see for example figure 6 for reflecting boundary, around \(t=2\)\(\omega_{p}^{-1}\) the orange and dark blue trajectories permute after reflection). The reason why the error intervals decrease significantly for the EMD case is because this metric is invariant to permutations of sheet trajectories. This invariance (which the MAE does not provide) is an important property for our case study since a simple permutation of sheet trajectories does not change the distribution function of the system (i.e. the systems are equivalent). Therefore, the EMD provides a better assessment of the accuracy of the simulator to model the collective plasma dynamics. We observed that, overall, equivalent models trained with different random seeds converge to very similar rollout errors (detailed comparisons provided in F). The only exception was one of the models trained for \(\Delta t=10^{-2}\)\(\omega_{p}^{-1}\) which revealed a slightly worse rollout performance. We attribute this larger error to its worse single-step prediction capabilities given the validation loss at train time was approximately \(2\times\) that of equivalent models. Figure 5: Rollout error metrics for the GNS in the test set simulations. For each value of \(\Delta t\) we compute the metrics for 5 equivalent GNNs trained using different random seeds. 
The presented mean values are computed by averaging over sheets, time steps, simulations, and GNN models (for a detailed comparison between different models see F). The error bars represent the minimum and maximum rollout error achieved for the corresponding set of test simulations across all models. The results demonstrate that even though the training data contains solely systems consisting of 10 sheets moving over a periodic box, the GNS is capable of generalizing to smaller/larger system sizes and different boundary conditions. Furthermore, the reported errors are considerably small. Figure 6: Example of simulation rollouts observed for test simulations of 10 sheet systems. These examples correspond to the worst-performing rollouts (largest EMD across all models) for the indicated simulator time step and boundary conditions. The predicted and ground truth trajectories and the MAE/EMD evolution are shown (per time step average over sheets). In both cases, the ground truth trajectories (obtained with the Sheet Model using \(\Delta t=10^{-4}\)\(\omega_{p}^{-1}\)) are downsampled to the same simulation time step as the GNS. We plot the ground truth trajectories with a larger marker size in order to be possible to distinguish them with respect to the prediction. It is clear that the GNS is capable of correctly modeling the sheet trajectories for longer rollouts, even though it was solely trained on single-step prediction. ### Energy Conservation In order to check for energy conservation, we run simulations using two types of initial velocity distributions: thermal - velocities sampled from a normal distribution with standard deviation equal to \(v_{th}\); oscillation - sheets share the same initial velocity \(v_{0}\) (no crossings should occur). For both initial conditions, we perform a scan over the initial thermal/oscillating velocity (one simulation per value). All simulations consider a system of \(10^{3}\) sheets moving over a periodic box for a total of \(t_{max}=5\times 2\pi\)\(\omega_{p}^{-1}\). While for the sheet model the energy decreases monotonically, we observed this was not the case for the GNS (energy might increase, decrease, or oscillate, examples are provided in F). Furthermore, since the GNS uses the finite difference velocities instead of the instantaneous velocities, there is an oscillation associated with the plasma period (the period is equal to half a plasma period) which is clearly dominant for lower thermal velocities. This oscillation is merely an artifact of the finite different velocities used to compute the energy of the system. To allow for a fair comparison between the sheet model and GNS we then compute the total energy variation as follows: we skip the first plasma oscillation, apply a moving average with a window size of \(\Delta t=2\pi\)\(\omega_{p}^{-1}\) to the remaining datapoints and retrieve the maximum deviation from the initial energy of the system (all steps are further justified in F). For each time step the energy of the system \(\epsilon\) is computed according to: \[\epsilon=\frac{1}{2}m_{e}\sum_{i}^{N}\left(v_{i}^{2}+\omega_{p}^{2}\xi_{i}^{2 }\right). \tag{8}\] The final results containing scans performed for different initial velocities distributions and simulation time steps are presented in figure 7. It is observed that as the initial thermal velocity increases, the GNS showcases a gain of one or two orders of magnitude in terms of energy conservation when compared to the Sheet Model (SM) algorithm running at a similar time step. 
This gain in energy conservation arises because the GNS captures crossings involving \(n>2\) sheets more accurately. The accuracy of the GNS models remains approximately constant for the thermal velocities for which it was trained (\(v_{th}\leq v_{th}^{train}\)). However, for out-of-distribution scenarios the energy variation starts increasing, most noticeably for \(\Delta t=10^{-2}\)\(\omega_{p}^{-1}\). This behavior is not visible for the oscillatory initial conditions, where we observe a very stable energy loss rate across all models for \(v_{0}\leq v_{max}^{train}\). Therefore, the lower performance at higher \(v_{th}\) should be attributed to a failure to correctly model all crossing events in such conditions. A performance degradation is expected since: a) the number of message-passing steps associated with each model (\(M=3\) for \(\Delta t=10^{-2}\)\(\omega_{p}^{-1}\), \(M=5\) for \(\Delta t=10^{-1}\)\(\omega_{p}^{-1}\)) limits the GNS capability to correctly resolve crossings involving a larger number of sheets (which are more likely to occur at larger values of \(v_{th}\)); b) the GNN is only trained on crossings involving \(n\leq n_{max}\), where \(n_{max}\) is the maximum number of sheets involved in any crossing within the training data. However, this does not explain why, outside the training data, the performance of the GNS at different time resolutions is equivalent. We attribute this behavior to three main effects. Firstly, there is one particular GNN model that consistently performed worse across all metrics (validation loss, rollout accuracy, and energy conservation), which significantly biased the average value of the energy variation rate. Removing this model from the average calculation considerably changes the behavior for \(v_{th}^{train}<v_{th}<v_{max}^{train}\) (more details in Appendix F). Secondly, the percentage of training data points containing crossings at the higher temporal resolution is at least \(10\times\) smaller than that of the lower resolution scenario. This can bias the training procedure to reduce the prediction error for events that do not involve crossings to the detriment of the (smaller) subset that contains crossings (which becomes problematic at test time for scenarios where crossings dominate the overall dynamics). Additional support for this claim is the fact that the higher temporal resolution models seem to be "overfitting" the purely oscillatory dynamics within the training data range, since they fail to generalize to larger oscillation amplitudes (note the steep increase in energy variation for the oscillating initial conditions for \(v_{0}>v_{max}^{train}\)). Finally, since we are performing an increased number of updates for the same time interval, the prediction errors for the out-of-distribution data might be accumulating faster. The impact of the aforementioned effects could be investigated in future work (and mitigated if necessary) by using alternative data sampling strategies and/or optimizing for multi-step accuracy at train time [17]. However, the main focus should be the improvement of the performance at lower temporal resolutions, since this is where larger computational gains are expected. It would also be important to study, in future work, why some models are capable of achieving better energy conservation at larger \(v_{th}\) (which might indicate that they learn a more robust crossing resolution algorithm) and how to consistently achieve this level of performance (e.g. by using a regularizing penalty [21]). Additional evaluation metrics/tests should also be devised, since neither the validation loss nor the current rollout accuracy tests seem to be good predictors of improved energy conservation capabilities. These new tests could include, for instance, measurements of rollout accuracy for significantly longer simulations and higher thermal velocities.

Figure 7: Energy variation rate for simulations of \(10^{3}\) sheets moving over a periodic box for \(t_{max}=5\times 2\pi\)\(\omega_{p}^{-1}\). Initial sheet velocities are sampled from a normal distribution (thermal), or all equal to \(v_{0}\) (oscillation). For the Sheet Model (SM) we run a single simulation per set of initial conditions and simulation time step \(\Delta t\). For the GNS we run, per trained \(\Delta t\), the exact same simulations using 5 equivalent GNN models (trained with different random seeds). The mean values across GNN models are presented as full lines, and the min/max values are represented by the shaded region (for a detailed comparison between different seeds see Appendix F). No lines are shown for the SM in the oscillation regime since it conserves the energy perfectly when no crossings occur. For dynamics that involve sheet crossings, the GNS either performs significantly better or is on par with the SM running at an equivalent simulation step.

### Run-time

Using the same setup for thermal initial conditions, we analyzed the run-time of the sheet model _versus_ the GNS. The results obtained are presented in figure 8. It is clear that the sheet model run-time increases with the crossing frequency, while the GNS run-time does not. This difference in behavior, coupled with the gain in energy conservation at higher thermal velocities, makes a stronger case for the suitability of the GNS when modeling these scenarios. For example, when considering \(v_{th}=10\ \delta\omega_{p}\ (\lambda_{D}=10\ \delta)\), the GNS using \(\Delta t=10^{-1}\ \omega_{p}^{-1}\) obtains an order of magnitude improvement in both energy conservation and run-time when compared to the sheet model using \(\Delta t=10^{-2}\ \omega_{p}^{-1}\). However, it is important to highlight that the different models are implemented with different packages (NumPy _vs_ JAX) and run on different hardware (CPU _vs_ GPU), which influences their respective run-times. Furthermore, we make no claims that our implementations are optimal, meaning they could both benefit from further speed-ups. Nonetheless, the qualitatively different behaviour of both algorithms, coupled with differences of orders of magnitude in both energy conservation and run-time, leads us to believe that there is a computational gain to be expected in using the GNS for larger simulation steps with higher crossing rates.

### Limitations

The main limitations that we identified for the GNS are the requirement to use a fixed simulation step (equal to the training simulation step) and the performance degradation on out-of-training data distribution scenarios (as demonstrated in figure 7). The first constraint is solely due to the fact that the network has to learn to predict sheet crossings, which implicitly forces it to know what the simulation step is. If one wishes to train a single model for different simulation steps, it would be necessary to provide \(\Delta t\) as an input to the network (in the GNS the time step only appears explicitly in the ODE integrator). Alternatively, as we have shown, different models can be trained, one per \(\Delta t\).
As long as enough training data is provided and the model architecture is scaled accordingly (e.g. by increasing the number of message passing steps for larger \(\Delta t\) or connecting the \(n^{\text{th}}\)-closest neighbors) there is no limitation on the time step which can be used to train the model. Regarding the second constraint, note that the GNS still performs better than the sheet model at equivalent simulation steps for out-of-training distribution \(v_{th}\) values. Additionally, high-fidelity simulations for larger values of \(v_{th}\) can be generated in order to fine-tune or retrain models for a broader dynamic range. ## 6 Recovering Known Kinetic Plasma Processes In order to provide stronger evidence of the GNS generalization capabilities, we now showcase a broad range of known kinetic plasma processes that the simulator is able to recover. These examples, present in both the original sheet model benchmarks [22, 23, 24, 25, 26] and other kinetic codes benchmarks [2, 3, 27, 34], aim to demonstrate the capability of the GNS to simulate collective behavior in accordance with known kinetic theory. An important point to stress is that the surrogate simulator was not explicitly trained to Figure 8: Run-time of the Sheet Model (SM) _vs._ the GNS for systems of \(10^{3}\) sheets moving on a periodic box over the period of a single plasma oscillation (\(t_{max}=2\pi\ \omega_{p}^{-1}\)). Values shown are averages over 5 simulations. For a fairer comparison among different time resolutions, the rollout data is only saved to a buffer every \(\Delta t=10^{-1}\ \omega_{p}^{-1}\). The just-in-time compilation time for the GNS is not included since it is a fixed cost that does not change for longer simulations (where it is considerably diluted). It amounts to \(t_{JIT}=1.2\) s for \(\Delta t_{GNS}=10^{-2}\ \omega_{p}^{-1}\) and \(t_{JIT}=2.3\) s for \(\Delta t_{GNS}=10^{-1}\ \omega_{p}^{-1}\). The results demonstrate that the SM run-time increases with \(v_{th}\) (higher number of crossings) while the GNS run-time is constant. By comparing these results with those of figure 7 it is observed that the GNS is faster than the SM at equivalent energy variation rates. reproduce these effects. The GNN only learned to (correctly) model single-step updates over a reduced system size (10 sheets). However, when we apply it to larger system sizes and longer time durations, we observe the emergence of the expected kinetic collective plasma dynamics. The results presented hereafter are produced using the GNN trained for \(\Delta t=10^{-1}\)\(\omega_{p}^{-1}\) which showcased the best energy conservation capabilities (Model #4 in Appendix F). The same collective plasma dynamics are recovered for the equivalent GNN models trained using different random seeds, and those trained using the larger time step \(\Delta t=10^{-2}\)\(\omega_{p}^{-1}\). ### Plasma Thermalization In [22, 25], Dawson demonstrated that, independently of the initial velocity distribution of the sheets, it is expected that over time the system will move towards thermal equilibrium, and that this happens due to crossings/collisions involving more than 2 sheets [25] (cf. Section 2 for a discussion on the physics of \(n=2\) crossing/elastic collisions and why they do not modify the distribution function). The distribution function of the sheet velocities is expected to converge to a normal distribution whose standard deviation corresponds to the thermal velocity of the plasma. 
We demonstrate this behavior by performing 50 simulations of systems consisting of \(10^{3}\) sheets with initial velocities randomly sampled from a uniform distribution (\(v\ \in\ [-5,5]\ \delta\cdot\omega_{p}\)). We provide snapshots of the evolution of the distribution function (averaged over simulations) in figure 9. It is clear that the system does indeed thermalize, and that the measured thermal velocity \(v_{th}=2.671\ \delta\cdot\omega_{p}\) is in accordance with the expected value \(v_{th}=2.679\ \delta\cdot\omega_{p}\). The latter is computed according to \(v_{th}^{2}=1/3\ v_{max}^{2}r_{kin}\)[23], where \(v_{max}\) corresponds to the initial uniform distribution maximum value, and \(r_{kin}\) represents the ratio of the available kinetic energy with respect to the total energy of the system (estimated by averaging over time steps and simulations) since a percentage of the initial kinetic energy is deposited in the fields. Additionally, using a diagnostic similar to the one introduced by Liang et al. [35], figure 9 demonstrates that there is a steep increase in the entropy (\(S\)) of the system until \(t\approx 1.25\ \omega_{p}^{-1}\). This increase is associated with the establishment of correlations between sheets as crossings start to occur, and the length of this time interval is actually independent of the initial velocity range [25].

Figure 9: Evolution of the velocity and displacement from the equilibrium position density functions, and the entropy (\(S\)) of the system. Histograms represent the density function ensemble averaged over 50 simulations at the corresponding time. The entropy variation of the different phase-space components (\(\xi\), \(v\)) is obtained using a diagnostic similar to the one implemented by Liang et al. [35]. For the calculation of the distribution functions, we discretized the (\(\xi\), \(v\)) phase-space for the range \(\xi\in[-6.5,6.5]\,\,\delta\) and \(v\in[-12.6,12.6]\,\,\delta\cdot\omega_{p}\) using 51 bins along each axis. These results demonstrate that the GNS is capable of correctly modeling the process of plasma thermalization from a non-equilibrium state, with the thermal velocity of the system in equilibrium \(v_{th}=2.671\,\,\delta\cdot\omega_{p}\) (measured by fitting a Gaussian to the final distribution) in excellent agreement with the theoretical prediction \(v_{th}=2.679\,\,\delta\cdot\omega_{p}\).

### Debye Shielding

Another fundamental property of plasmas is their quasi-neutrality [1], i.e. on a macroscopic scale the overall charge density of positive and negative particles will cancel out. However, within local regions of characteristic length \(\lambda_{D}=v_{th}/\omega_{p}\) (referred to as the Debye length) the local electric fields generated by a charged particle will not be fully screened by the oppositely charged particles. We expect to observe the same behavior for the sheet model. More precisely, the density of sheets at a certain distance from a test position is expected to follow [22]: \[n(x)=n_{0}\left(1-\frac{\delta}{2\lambda_{D}}e^{-|x|/\lambda_{D}}\right). \tag{9}\] To test the GNS, we initialize systems of \(10^{4}\) sheets following different initial thermal distributions (\(v_{th}=[1.5,\ 2.5,\ 5.0]\ \delta\cdot\omega_{p}\)). The simulations are run for \(t_{max}=80\tau_{relax}\), where \(\tau_{relax}=\sqrt{2\pi}\lambda_{D}/\delta\ \omega_{p}^{-1}\) is an estimate of the relaxation time of the system [22], i.e. the time it takes for the system to forget its current state.
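For reference, the expected profile of Eq. (9) and the corresponding relaxation-time estimate can be tabulated in a few lines; the sketch below assumes the normalized units used throughout (\(\omega_{p}=1\), \(\delta=1\)) and is only meant to illustrate the theoretical profile against which the measured densities are compared.

```python
import numpy as np

def expected_density(x, v_th, n0=1.0, delta=1.0, omega_p=1.0):
    """Eq. (9): expected sheet density at a distance x from a test position."""
    lambda_d = v_th / omega_p                       # Debye length
    return n0 * (1.0 - delta / (2.0 * lambda_d) * np.exp(-np.abs(x) / lambda_d))

for v_th in [1.5, 2.5, 5.0]:                        # the three tested thermal velocities
    lambda_d = v_th                                  # in units of delta, with omega_p = 1
    tau_relax = np.sqrt(2.0 * np.pi) * lambda_d     # relaxation time, in units of 1/omega_p
    x = np.linspace(0.0, 3.0 * lambda_d, 16)        # the 0.2*lambda_D spacing used for the measured profiles below
    print(v_th, tau_relax, expected_density(x, v_th)[:3])
```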
To compute the sheet density profiles shown in figure 10, we follow a similar procedure as Dawson [22]. We choose a set of equally spaced test sheets and, for each of them, measure the number of neighboring sheets within a pre-defined range of increasing distances. In our case, we compute the number of sheets within a distance \(d\in\,]0.2i,0.2(i+1)]\lambda_{D}\) up to \(3\lambda_{D}\) (\(i=15\)). We repeat this procedure for every \((3\lambda_{D}/\delta)\)-th sheet, over multiple independent time steps (\(t_{j}=j\cdot\tau_{relax}\) for \(j>0\)). The counts are then averaged over the number of test sheets, and time steps. It is clear from the results presented in figure 10, that the expected behavior is recovered. ### Electrostatic fluctuations Although the plasma is in thermal equilibrium, there are constant exchanges of energy between the sheets and the (electrostatic) waves propagating inside the plasma. This leads to the appearance of electrostatic fluctuations, with an average power spectrum that follows [2]: \[\frac{\langle E^{2}(k)\rangle}{8\pi}\ =\frac{k_{B}T}{2L\left(1+k^{2}\lambda_{D}^ {2}\right)} \tag{10}\] Figure 10: Example of Debye shielding for systems with different Debye lengths. The GNS can correctly recover the expected density profiles for all the tested scenarios. where \(k\) represents the wave vector, \(k_{B}\) the Boltzmann constant, \(T\) the plasma temperature (\(k_{B}T=mv_{th}^{2}\)), and \(\langle\cdot\rangle\) the time average. In figure 11 we recover this spectrum for a system of \(10^{3}\) sheets with \(\lambda_{D}=5\ \delta\). We make use of the ergodic theorem [36] to compute the statistical average of the power spectrum by averaging over independent time steps (separated by \(\Delta t=\tau_{relax}\)). ### Drag on a Fast Sheet A fast sheet (\(v_{sheet}\gg v_{th}\)) moving through the plasma is expected to feel a constant drag given by [22]: \[\frac{dv}{dt}=-\frac{\omega_{p}^{2}\delta}{2}. \tag{11}\] This drag is independent of the velocity of the sheet and is caused by the excitation of a electrostatic wake on the rear of the fast sheet, i.e. the sheet transfers energy to the electrostatic wake. In figure 12 we demonstrate this behavior. The results were obtained by performing simulations of periodic systems of 100 sheets with \(\lambda_{D}=5\ \delta\) (\(v_{th}=5\ \delta\cdot\omega_{p}\)) over a period of \(t_{max}=5\ \omega_{p}^{-1}\). For each simulation, we set the initial velocity of the first sheet to \(v_{0}=\pm\alpha v_{th}\) and track its evolution over time. We then average over simulations, 1000 for each initial sheet velocity (accounting for the sign change). Figure 11: Electric field power spectrum for a system in thermal equilibrium. The power spectrum for the last time step and the temporal average (computed over relaxation periods \(\Delta t=\tau_{relax}\approx 13\ \omega_{p}^{-1}\)) are shown. The time-averaged power spectrum retrieved from the GNS simulation matches the theoretical curve, thus demonstrating that it correctly models the electrostatic fluctuations around thermal equilibrium. ### Landau Damping While fast sheets are able to excite an electrostatic wake in their rear, the resulting electrostatic wake can also accelerate sheets moving slightly slower than its phase velocity [1, 22]. Electrostatic modes are therefore self-consistently generated by particles moving close to its phase velocity, while being damped by particles moving slightly slower. 
However, since a plasma in thermal equilibrium follows a Maxwellian distribution in velocity space, there exist on average more particles moving faster than the wave, than those moving slower. Therefore, on average, the modes will be damped. This mechanism is known as Landau damping [1] and is an inherently collisionless kinetic process that the sheet model has been shown to recover [22]. For a given mode \(m\), with a wavelength \(\lambda_{m}=2L/m\) and wave vector \(k_{m}=2\pi/\lambda_{m}\), we can compute its wave frequency and damping time by finding numerically the roots of the dispersion relation for \(k=k_{m}\)[37]: \[1=\frac{\omega_{p}^{2}}{k^{2}}\int_{-\infty}^{\infty}\frac{\partial\hat{f}_{0} /\partial v}{v-(\omega/k)}dv \tag{12}\] where \(\hat{f}_{0}\) corresponds to the distribution function in velocity space. The solution will have both a real and imaginary part. The real part corresponds to the wave angular frequency, while the imaginary part corresponds to the inverse of the damping time. To reproduce the expected damping behavior we follow a similar procedure to that of Dawson [22]. We produce 50 simulations, each with a duration of \(t_{max}=500\)\(\omega_{p}^{-1}\), of thermal plasmas consisting of \(10^{3}\) sheets for \(\lambda_{D}=5\)\(\delta\) and reflecting boundaries. For each simulation time step we compute the mode "amplitude" \(A_{m}\) and its rate of change Figure 12: Average drag on fast sheets of different initial velocities (\(v_{0}\gg v_{th}=5\)\(\delta\) \(\cdot\)\(\omega_{p}\)). The GNS recovers the expected drag felt by the fast sheets independently of their initial velocity and propagation direction. \(\dot{A}_{m}\) using the cross-correlation: \[\begin{split} A_{m}^{t}&=\frac{2}{N}\sum_{i=0}^{N-1} \left(x_{i}^{t}-x_{eq_{i}}^{t}\right)\ \sin\left(\frac{m\pi}{N}\left(i+\frac{1}{2}\right)\right)\\ \dot{A}_{m}^{t}&=\frac{A_{m}^{t}-A_{m}^{t-1}}{\Delta t }\end{split} \tag{13}\] where the index \(i\) indicates the relative ordering of the sheets in the box. We then collect trajectories of equal time length every time the mode crosses the region of phase-space \((A_{m},\dot{A}_{m})\) defined by a ring of radius \(R\) and thickness \(dR\) (\(dR\ll R\)). Finally, we rotate the trajectories so that they all start on the same position in phase-space, and compute their average. In figure 13 we showcase the results obtained for several modes. It is possible to see that, although for some trajectories the mode is still growing, on average it decreases according to the expected damping time. This demonstrates that the GNS is capable of correctly modeling Landau damping, an inherently kinetic mechanism associated with the collective collisionless dynamics of a plasma. ### Two-stream Instability As a final example, we show the two-stream instability in the cold beam regime [1, 24]. For this scenario, two counter-propagating beams with velocities \(\pm v_{0}\) and no energy spread excite a wave that grows exponentially until all particles are trapped inside the electric field, at which point the instability saturates. Figure 13: Damping of different modes. The initial mode amplitudes are normalized to \(R=0.3\ \delta\). Individual trajectories length is \(\Delta t_{traj}=50\ \omega_{p}^{-1}\). Damping times are respectively \(\tau_{28}=10.25\ \omega_{p}^{-1}\), \(\tau_{30}=7.97\ \omega_{p}^{-1}\), and \(\tau_{32}=6.41\ \omega_{p}^{-1}\). 
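For completeness, the mode-amplitude diagnostic of Eq. (13) amounts to a few lines of code; the sketch below uses illustrative variable names and omits the trajectory-collection, ring-crossing, and rotation steps described above.

```python
import numpy as np

def mode_amplitude(x, x_eq, m):
    """Eq. (13): cross-correlation of the sheet displacements with mode m.

    x and x_eq hold the positions and equilibrium positions of the N sheets,
    ordered by their relative position in the box."""
    n = len(x)
    i = np.arange(n)
    phase = np.sin(m * np.pi / n * (i + 0.5))
    return 2.0 / n * np.sum((x - x_eq) * phase)

def mode_rate(a_now, a_prev, dt):
    """Finite-difference rate of change of the mode amplitude."""
    return (a_now - a_prev) / dt
```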
The agreement between the theoretical curves and the average mode trajectories demonstrates that the GNS is capable of correctly modeling Landau damping, an inherently kinetic mechanism. From linear theory [1, 24], we expect that for two-cold beams with density \(n_{beams}=n_{0}/2\), the fastest growing mode will correspond to \(k=\sqrt{3/8}\cdot\omega_{p}/v_{0}\) with a corresponding growth rate of \(\gamma=\omega_{p}/\sqrt{8}\). Therefore, to excite mode \(m\) we need to set \(v_{0_{m}}=\sqrt{3/8}\cdot\omega_{p}\delta/m\pi\cdot N_{sheets}\). Furthermore, the number of sheets per wavelength of this mode is given by \(N_{\lambda_{m}}=\lambda_{m}/\delta=2/m\cdot N_{sheets}\). Note that both \(v_{0_{m}}\) and \(N_{\lambda_{m}}\) are proportional to the number of sheets used. Therefore, to excite a mode whose wavelength can be resolved by a significant amount of sheets, we need to increase \(v_{0}\) proportionally. For a system of \(10^{4}\) sheets and \(m=4\), we obtain \(v_{0_{m}}=486\)\(\delta\cdot\omega_{p}\) and \(N_{\lambda_{m}}=5\times 10^{3}\). The chosen velocity is considerably out of the training data range, therefore we expect the energy loss of the GNS to be slightly higher than the one observed for the training scenarios (as shown in figure 7). Nonetheless, we will see that the GNS is still able to capture the macrophysics. Additionally, these (significantly) out-of-training distribution values allow us to once again highlight the potential of the GNS to achieve better energy conservation than the sheet model algorithm at comparable simulation time steps. In figure 14 we provide a comparison between the evolution of the phase-space and the potential energy for the sheet model at high (\(\Delta t=10^{-3}\)\(\omega_{p}^{-1}\)) and low (\(\Delta t=10^{-1}\)\(\omega_{p}^{-1}\)) temporal resolution and the GNS at the same low temporal resolution. The energy variation during the full simulation was approximately 0.1% for the sheet model at \(\Delta t=10^{-3}\)\(\omega_{p}^{-1}\) and 73% at \(\Delta t=10^{-1}\)\(\omega_{p}^{-1}\), while for the GNS it was approximately 2%. These values are in accordance with what was measured in figure 7. It is observed that the GNS recovers similar macrophysics when compared to the higher temporal resolution simulation (which we consider as a good approximation of the ground truth) during the linear phase and up to the saturation time. An extra diffusion in phase-space is observed for the GNS, which is associated with the aforementioned higher energy variation. This is expected since the GNN is not capable of correctly resolving crossings involving more than \(2M+1=11\) sheets, and, on average, a sheet moving with \(v_{0_{m}}=486\)\(\delta\cdot\omega_{p}\) should cross \(\approx 49\) neighbors just in the first timestep (since when the simulation starts all sheets are equally spaced by \(\delta\) and, on average, half of the nearest neighbors are counter-propagating with the same absolute velocity). Nonetheless, the overall phase-space structure and growth rates are similar, which provides further support for the generalization capabilities of the model. Additionally, when using the sheet model at a comparable time resolution the results are strikingly different, which reinforces the conclusions derived in Section 5.2 that the GNS is capable of significantly improving simulation results at larger time steps. 
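The mode-selection quantities defined above can be evaluated directly; a small sketch in the normalized units of the paper (\(\omega_{p}=1\), \(\delta=1\)) follows, with the printed values intended only to reproduce, to within rounding, the numbers quoted in the text for \(10^{4}\) sheets and \(m=4\).

```python
import numpy as np

omega_p, delta = 1.0, 1.0
n_sheets, m = 10_000, 4                        # system size and excited mode used in the text

gamma = omega_p / np.sqrt(8.0)                 # growth rate of the fastest-growing mode
v0_m = np.sqrt(3.0 / 8.0) * omega_p * delta * n_sheets / (m * np.pi)   # beam velocity exciting mode m
n_lambda_m = 2.0 / m * n_sheets                # sheets per wavelength of mode m
k_fast = np.sqrt(3.0 / 8.0) * omega_p / v0_m   # fastest-growing wave vector for this beam velocity

print(f"gamma = {gamma:.3f} w_p, v0 = {v0_m:.0f} d*w_p, N_lambda = {n_lambda_m:.0f}")
# -> gamma ~ 0.354 w_p, v0 ~ 4.9e2 d*w_p, N_lambda = 5000, matching the ~486 d*w_p and 5e3 quoted above
```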
Finally, we provide in Appendix G further comparisons for higher order modes (lower \(v_{0_{m}}\)), larger simulation steps (\(\Delta t=10^{-2}\)\(\omega_{p}^{-1}\)), as well as results for the remaining GNNs trained with different seeds (with worse energy conservation capabilities at high \(v_{th}\)). These results further support the claim that the GNS is consistently able to correctly model the overall dynamics of the instability. ## 7 Conclusions In this work, we demonstrated that graph neural network-based simulators are capable of fully replacing a one dimensional kinetic plasma physics simulator. By introducing domain knowledge into the graph representation and the overall structure of the simulator, we showed that the GNS is capable of generalizing to a broad range of scenarios and is limited only by its training data distribution and a fixed simulation step. Furthermore, we showed that the GNS, when trained on high-fidelity data, conserves energy better than the synchronous algorithm of the sheet model, for larger simulation steps. In future work, the accuracy of the simulator for higher thermal velocity values Figure 14: Comparison of the phase-space and potential energy evolution for the Sheet Model (SM) and the GNS in the two counter-propagating cold beams scenario. The GNS is able to recover the same macrophysics as the sheet model at a significantly higher temporal resolution and conserves the system energy significantly better than the SM at an equivalent time resolution. can be improved by generating additional ground truth data, ideally using the asynchronous version of the sheet model, which is guaranteed to perfectly resolve crossings. Additionally, although not explored, the developed simulator is fully differentiable, which opens the way to explore gradient-based optimization strategies for the discovery of new physics of interest [38]. This work also indicates that it would be possible to accurately model the standard PIC loop. However, we do not believe that this would result in meaningful computational gains since the standard PIC loop implemented in modern architectures is extremely optimized and does not suffer from the issues that are present in the sheet model scenario (e.g. time-consuming serial routines). The usage of graph neural networks can be relevant to model multi-scale plasma dynamics or extra physics modules. This will be explored in future publications. ## Data availability statement The code used for this project is made available through a git repository: (to be disclosed). The training and test data, as well as a set of the final model weights are available in a zenodo repository: (to be disclosed) ## Conflict of interest The authors declare no conflict of interest. ## Acknowledgments The authors would like to thank P. J. Bilbao, T. Grismayer, A. Joglekar, and E. P. Alves for helpful discussions. This work was supported by the FCT (Portugal) under the Project No 2022.02230.PTDC (X-MASER) and PhD Fellowship Grant 2022.13261.BD and has received funding from the European Union's H2020 programme through the project IMPULSE (grant agreement No 871161). The graphics processing units (GPUs) used in this work were donated by NVIDIA Corporation.
2310.01024
Joint Source-Channel Coding System for 6G Communication: Design, Prototype and Future Directions
The goal of semantic communication is to surpass optimal Shannon's criterion regarding a notable problem for future communication which lies in the integration of collaborative efforts between the intelligence of the transmission source and the joint design of source coding and channel coding. The convergence of scholarly investigation and applicable products in the field of semantic communication is facilitated by the utilization of flexible structural hardware design, which is constrained by the computational capabilities of edge devices. This characteristic represents a significant benefit of joint source-channel coding (JSCC), as it enables the generation of source alphabets with diverse lengths and achieves a code rate of unity. Moreover, JSCC exhibits near-capacity performance while maintaining low complexity. Therefore, we leverage not only quasi-cyclic (QC) characteristics to propose a QC-LDPC code-based JSCC scheme but also Unequal Error Protection (UEP) to ensure the recovery of semantic importance. In this study, the feasibility for using a semantic encoder/decoder that is aware of UEP can be explored based on the existing JSCC system. This approach is aimed at protecting the significance of semantic task-oriented information. Additionally, the deployment of a JSCC system can be facilitated by employing Low-Density Parity-Check (LDPC) codes on a reconfigurable device. This is achieved by reconstructing the LDPC codes as QC-LDPC codes. The QC-LDPC layered decoding technique, which has been specifically optimized for hardware parallelism and tailored for channel decoding applications, can be suitably adapted to accommodate the JSCC system. The performance of the proposed system is evaluated by conducting BER measurements using both floating-point and 6-bit quantization.
Xinchao Zhong, Sean Longyu Ma, Hong-fu Chou, Arsham Mostaani, Thang X. Vu, Symeon Chatzinotas
2023-10-02T09:17:55Z
http://arxiv.org/abs/2310.01024v1
# Joint Source-Channel Coding System for 6G Communication: Design, Prototype and Future Directions ###### Abstract The emergence of the AI era signifies a shift in the future landscape of global communication networks, wherein robots are expected to play a more prominent role compared to humans. The establishment of a novel paradigm for the development of next-generation 6G communication is of utmost importance for semantics task-oriented empowered communications. This paper begins by examining the historical development of advanced communications, focusing specifically on the incorporation of semantics and task-oriented features. The goal of semantic communication is to surpass optimal Shannon's criterion regarding a notable problem for future communication which lies in the integration of collaborative efforts between the intelligence of the transmission source and the joint design of source coding and channel coding. The convergence of scholarly investigation and applicable products in the field of semantic communication is facilitated by the utilization of flexible structural hardware design, which is constrained by the computational capabilities of edge devices. This characteristic represents a significant benefit of joint source-channel coding (JSCC), as it enables the generation of source alphabets with diverse lengths and achieves a code rate of unity. Moreover, JSCC exhibits near-capacity performance while maintaining low complexity. Therefore, we leverage not only quasi-cyclic (QC) characteristics to propose a QC-LDPC code-based JSCC scheme but also Unequal Error Protection (UEP) to ensure the recovery of semantic importance. In this study, the feasibility for using a semantic encoder/decoder that is aware of UEP can be explored based on the existing JSCC system. This approach is aimed at protecting the significance of semantic task-oriented information. Additionally, the deployment of a JSCC system can be facilitated by employing Low-Density Parity-Check (LDPC) codes on a reconfigurable device. This is achieved by reconstructing the LDPC codes as QC-LDPC codes. The QC-LDPC layered decoding technique, which has been specifically optimized for hardware parallelism and tailored for channel decoding applications, can be suitably adapted to accommodate the JSCC system. The performance of the proposed system is evaluated by conducting BER measurements using both floating-point and 6-bit quantization. This is done to assess the extent of performance deterioration in a fair manner. The fixed-point system is synthesized and subsequently used to a semantic feature transmission and reception system across a noisy channel, with the aim of presenting a prototype for semantic communications. This study concludes with some insights and potential research avenues for the JSCC prototype in the context of future communication. JSCC, Joint Source-Channel Code, LDPC, QC-LDPC, FPGA, Image transmission, Semantic communications, Task-oriented communications, 6G, wireless communication, Edge AI, Unequal Error Protection ## I Introduction Traditional communication systems often overlook the significance of the meaning behind information. They operate on the assumption that all symbols or bits are of equal importance and are handled as such. The primary objective of these systems is to ensure the accurate retrieval of transmitted sequences at receiving ends, prioritizing conformity in transmission. The design methods in this field have predominantly relied on principles from digital communication. 
Information theory establishes the maximum limits on the capacity of the system. While channel coding concentrates on developing strategies that can approach these limits with extremely low error probability, source coding(known as data compression) refers to the process of encoding information in a way that reduces the amount of data required with the objective of optimizing the length of the source encoded sequence. However, the latest generation of communication systems is being applied in ways that challenge the conventional design paradigm, particularly in terms of semantic and task-oriented aspects [1]. The semantic aspect pertains to the level of precision and accuracy with which transmitted symbols are able to convey the intended meaning and comprises the transmission of a notion or informational material from a source to a destination without delving into the intricacies. It entails the comparison of the inferred meaning by the recipient with the intended meaning by the sender while considering the content, requirements, and semantics in order to enhance the communicative system towards a state of intelligence. Furthermore, the task-oriented aspect prioritizes task completion and efficiency and examines the potential ramifications associated with the utility of the provided information. The efficacy of a task or performance metric is determined by how efficiently the acquired information aids in its accomplishment. The achievement of a shared aim within task-oriented limitations and requirements is facilitated by the utilization of available resources, including communication bandwidth, computing expense, and power consumption. The evaluation of system performance can be measured in relation to the extent to which a certain objective is achieved, taking into account the allocated resources, instead of considering all the transmission sequences that can typically be conveyed in the aforementioned information theory framework-based approach. Based on the stated task objectives and existing knowledge, it can be demonstrated that semantic source coding [2] has the potential to achieve greater reductions in redundancy expense and communication overhead compared to the Separate Source-Channel Coding (SSCC). This is mostly due to its ability to refine the most pertinent and concise information and then condense it. An intriguing aspect worth exploring is the extent of compression achieved in semantic source coding in relation to the original information [3]. The authors have obtained the theoretical boundaries for lossless and lossy compression based on this semantic source, together with the lower and upper limitations on the rate-distortion function. Furthermore, semantic-information feature learning [4] is the key to addressing the utilization of cognitive techniques employed as a means to direct computational resources towards activities of higher importance. The process of achieving dynamic adaptation in semantic compression involves the utilization of a feature learning module and an attention feature module. These modules enable the source encoder to generate a variable number of symbols and modify the capability of the source encoder and channel encoder through the use of cognitive techniques. Moreover, task-oriented feature learning [5] addresses that the attainable precision of inference is contingent upon the amount of feature components collected and the extent to which they are distorted by detecting noise and quantization defects. 
The classification gain is only influenced by the distributions of classes in the feature space and represents the highest possible inference accuracy that can be attained theoretically. Therefore, this learning accuracy is dependent on not only the underlying design of the artificial intelligence models being used but also the classification distributions of the feature space. In order to facilitate the implementation of immediate intelligent services in future communication systems, it is advantageous to distill and communicate merely the information that is pertinent to the task at hand and precise semantics. The semantic and task-oriented approach effectively reduces the overall latency of the system. However, it should be noted that this method deviates from the optimal Shannon's SSCC [6], which is typically applied in the communication of long block-length bit sequences. SSCC employs advanced compressing methods to eliminate all redundant sequences from source-encoded symbols, whereas Joint Source Channel Coding (JSCC) exploits the remaining redundant information that emerges following compressing with the error-correcting capability in order to minimize distortion within a certain limit of codelength. Unequal error protection (UEP) prevents errors from occurring by assigning encoded redundancies according to the significance of the information bits. Only some sets of bits are of equal significance when sending source-encoded information because of the varied degree of vulnerability of the source decoder. Therefore, the authors in [7] present a remarkable performance of UEP JSCC code system by providing an innovative adjustable code rate for multiple semantic task classes. We summarize our contribution from the following perspectives: 1. This paper provides a brief exploration of the latest designs and methodologies in semantic and task-oriented communication, with a specific emphasis on the knowledge pertaining to the prototype of the JSCC scheme. The proposed fixed-point JSCC scheme serves as a practical solution not only for transmitting and receiving semantic features over noisy channels but also for UEP semantic importance applications. This study presents a novel approach for enhancing the semantic encoder/decoder by including the UEP capabilities of the quasi-cyclic Low-Density Parity-Check (QC-LDPC) JSCC decoder. The primary objective is to safeguard the semantic significance of 6G communication. 2. The primary aim of this study is to explore the potential of surpassing the traditional Shannon criteria, while also presenting the initial iteration of a prototype for a semantic JSCC communication system. The JSCC prototype significantly reduces the data width from 32-bit floating-point to 6-bit fixed-point, making practical FPGA implementation possible. Additionally, we revisit the design of semantic codec learning and task-oriented signal compression in order to explore how the semantic task-oriented techniques can be adapted to the proposed JSCC platform. 3. The JSCC system under consideration acts as a prototype to make it easier to modify communication protocols in the future. Through the use of deep learning techniques, this system is specifically created to be adaptable to a broad variety of semantic and task-oriented features. The proposed prototype, when compared to other state-of-the-art, can deliver better BER performance despite the reduction in fix-point implementation and is achieved by applying QC-LDPC codes with a reasonable code rate. 
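As a small illustration of the data-width reduction mentioned in the second contribution, channel messages can be mapped from floating point to a saturating 6-bit fixed-point representation along the following lines; the quantization step used here is an arbitrary example value and not the scaling adopted in the synthesized design.

```python
import numpy as np

def quantize_llr(llr, n_bits=6, step=0.25):
    """Uniform, saturating fixed-point quantization of channel LLRs.

    With n_bits = 6 the representable integer range is [-32, 31]; values outside
    the range are clipped (saturated) rather than wrapped."""
    lo, hi = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1
    q = np.clip(np.round(llr / step), lo, hi).astype(np.int8)
    return q * step            # value represented by the fixed-point word

llr = np.array([-12.7, -0.3, 0.0, 2.4, 9.9])
print(quantize_llr(llr))       # e.g. [-8.0, -0.25, 0.0, 2.5, 7.75] with the example step
```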
In this paper, we revisit the design methodologies of next-generation 6G communication systems with regard to semantic and task-oriented aspects and JSCC system following the demonstration of the state-of-the-art prototype designs in Section II. Second, we investigate the applications of a JSCC system based on QC-LDPC codes. By leveraging quasi-cyclic characteristics and optimizing the whole system, QC-LDPC codes are adopted to enable the feasibility of a JSCC system. We deploy the JSCC system in a Field-Programmable Gate Array (FPGA)-based platform so the users can configure this platform after manufacture. We also demonstrate that semantic feature transmission and reception are potential application scenarios. The JSCC system can maintain a high code rate (0.8) at a low level of \(E_{b}/N_{0}\), which tandem coding systems must decrease to no more than 0.5. In general, the flexibility and performance enhancement of the JSCC system is particularly attractive to next-generation 6G communication. In section III, the proposed QC-LDPC code-based JSCC system is detailed. Section IV presents the experiments and the corresponding results followed by not only a demonstration of a semantic feature transmission and reception but also applicable to task-oriented features. The primary goal is to provide a prototype venture for semantic communications to the marketplace for consumers. Lastly, conclusions and future directions are depicted in Section V. ## II Design of Next Generation Communication and Joint Source Channel coding In this section, we examine the application of JSCC system in relation to the integration of AI for semantic and task-oriented considerations to explore the opportunities of future communications. ### _Semantic Codec learning_ The evolution of semantic communication [2] is traced back to the early 20_th_ century with its continuous growth into the realm of modern communications with regard to beyond 5G and 6G technologies. In light of this, there is a significant need to develop more intelligent and efficient communication protocols that can meet the diverse quality of service (QoS) needs. This must be done while addressing the challenge of limited communication bandwidth and computation. The development of an intelligent communication system is considered essential in both industry and academia. Such a system is not limited to memorizing data flows based on rigorous regulations but also aims to comprehend, analyze, and articulate the fundamental semantics. The ambition of "beyond Shannon" [8] surpasses the conventional Shannon paradigm, which focuses on ensuring the accurate receipt of individual sent bits, regardless of the conveyed meaning of these bits. In the context of conveying meaning or achieving a goal through communication, the crucial factor lies in the influence exerted by the received sequences on the interpretation of the message meant by the sender or on the attainment of a shared objective. However, despite its growing popularity, research on semantic communication remains fragmented and encompasses a wide range of research interests. The evolution of human dialogues in relation to the semantics of everyday usage for the purpose of semantic communication is still in its infancy. The challenge of developing theoretical semantic models for actual multidimensional information has led to the adoption of JSCC technique in most extant semantic designing strategies. 
The current implementation of module architectures, similar to the classical Shannon paradigm, presents several issues. As the level of interest in this particular field continues to increase, there is a concerted effort being made to address and surmount the challenges associated with it. Hence, the semantic scheme based on JSCC emerges not only as a highly viable contender for next-generation 6G communication systems but also as a promising transit candidate for the optimal goal of semantic communication. The partnership between JSCC and deep learning in [9] reveals that the deep learning encoder and decoder, as presented, exhibit superior performance in terms of word error rate compared to the conventional technique, particularly when the computational resource allocated for syllable encoding is limited. A limitation of this approach is the utilization of a predetermined bit length for encoding words of varying lengths. In [10], a performance comparison of deep and JSCC-based semantic communication [9] to present the potential advantage and the design strategies has concluded the difference between semantic communication and traditional communication as follows: 1. There exist various domains of processing information. The first phase of SC involves the manipulation of information within the semantic realm, whereas the conventional focuses on compressing information within the realm of entropy. 2. Traditional communication methods prioritize the precise retrieval of information, whereas semantic communication systems are designed to facilitate decision-making or the achievement of specific transmission objectives. 3. Conventional systems primarily focus on designing and optimizing the information transmission modules found in standard transceivers. In contrast, semantic communication systems take a holistic approach by jointly designing the entire information processing framework, spanning from the source information to the ultimate goals of applications. In comparison [10], the complexity analysis of the deep-SC scheme demonstrates superior performance compared to existing SSCC schemes while the JSCC-based semantic communication scheme has lower computing latency than the deep-SC scheme. The successful incorporation [10] and the survey [2] of deep learning and JSCC techniques inspires the innovative construction of semantic communication. The subject matter pertains to the examination of semantic source coding [3] in connection with the primary information, thereby enabling an inclusive research framework that has the capacity to surpass the limitations of Shannon's conventional information theory. In Fig.1, we summarize the design overview of the semantic aspect in [2] from semantic theory encircled semantic channel capacity and future communication channels and datasets, including text, audio, image, video, unmanned aerial vehicles (UAVs), and the Internet of Things (IoT), are based on semantic theory. In order to calculate the source semantic entropy, a model measurement is required. The semantic quantification determines the semantic compression and signal processing. Furthermore, the semantic framework and hierarchy are illustrated in Fig.1. To effectively enable task execution, the effectiveness on the top of the design architecture is based on task or goal-oriented methodologies. Following the basics of theory and dataset, the second level brings the semantic aspect to an evolution of future communication. 
For the bottom level, this layer via physical transmission integrates with the semantic level and presents JSCC encoder/decoder regarding synchronization, precoding, antennas, and power amplify. ### _Task-Oriented signal compression_ The tremendous growth seen in telecommunication technologies is yet to follow the goal of reliable, fast, and low latency communications or to improve the capacity at which a communication network can serve users. While the idea of reliable communication appears to be a very intuitive and obvious requirement for communication systems, it is arguably a human need. Be it communication for a voice call, download of a photo, or streaming a video, we always prefer to receive our desired content at the best perceivable quality. This need, however, is no longer in place when designing machine-to-machine communications. The machine-to-machine communications [1] occur since this can help the receiver to make more informed decisions [11, 12, 13] or more precise estimates or computations [14] or both [15]. Naturally, in this context, there is no need for the reliability of communications to be beyond serving the specific needs of the control, estimation, or computational task at hand. This calls for a fresh examination into the design of communication systems that have been engineered with reliability as one of their ultimate goals [16]. The emerging literature regarding SC [17] as well as goal/task-oriented communications [11] is attempting to take the first steps towards the above-mentioned goal, i.e., incorporating these semantics [18, 19], together with the goal of message exchange, into the design of communication systems. The ever-increasing growth of machine-to-machine communications is the major motivating factor behind the accelerating research interest in the task-oriented design of communications. As IoT networks and cloud-based applications become more commercialized, autonomous vehicles/UAVs become more mature, and industry 4.0 approaches maturity, a boom in machine-to-machine communications is fueled. To emulate a cyber-physical system composed of several inter-dependant devices or machines, this paper considers the mathematical framework of a generalized decentralized partially observable Markov decision process (Dec-POMDP). There is a significant body of literature behind the theoretical advancements for solving generalized forms of Markov Decision Processes [20], and their applications in telecommunication and cyber-physical systems [12, 21]. Their work departs from the literature on the instance design of the observation function for each agent in the Dec-POMDP. The challenge of jointly developing the observation function and control strategy for each agent in Dec-POMDPs was investigated in [22]. It is important to note that the agents' observations come from a fundamental Markov decision process (MDP). While in classical Dec-POMDP problems [20], the observation function is considered to be a single fixed function, the framework in [22] offers more flexibility in designing the control policies for a multi-agent system. This approach specifically permits a restricted joint design of the observation and control policy, which is summed up as follows: 1. The bit-budgets for the inter-agent communication channels are respected; 2. The observation functions filter any non-useful observation information for each agent; 3. 
The removal of non-useful observation information by the observation functions is carried out such as minimization of any loss on the average return from multi-agents system (MAS's) due to bit-budgeted inter-agent communications. The approach in [22] is neither a classic MDP nor a POMDP [23] as the action vector is not jointly selected at a single entity: a task-oriented data compression (TODC) problem [22, 24] can be approximated by identifying the quantization policy in the joint control and quantization problem. A limited bit-budget for the multi-agent communication channels can be achieved with the aforementioned approaches to maximize the expected return by the system. The analytical investigation was presented in [22, 25, 26] into how the TODC can be disentangled from the control problem - given the possibility of a centralized training phase. The author's analytical studies confirmed that despite the separation of the TODC and the Fig. 1: The design overview for semantic communications in [2] control problems, they can ensure very little compromise on the average return by the MAS when compared with jointly optimal control and quantization. It is worth noting that the conventional quantization problems regard minimizing the absolute difference between the original signal and its quantized version. However, the difference between task-oriented communication is achieved by considering the usefulness and value of the goal-oriented approach for the task at hand, while conventional communication does not consider it. The significance of the result obtained from [22] is multi-fold: 1. Reduces the complexity of the clustering algorithm by transforming it from multi-dimensional observations to the one-dimensional output space of the value functions, 2. The observation points are linearly separable when being clustered according to the generalized data quantization problem 3. The effectiveness of the data for the task is considered for goal-oriented quantization. 4. The value of the observations begins to grow as the ultimate target of the task at hand becomes closer. Furthermore, the prevalence of deception and Trojan assaults utilizing adversarial machine learning poses a serious threat to machine-to-machine communications and edge servers/devices in [27, 28, 29]. The authors demonstrate adversarial threats and the potential methodology for encouraging more approaches with security and defense of task-oriented communications on 6G networks. In Fig.2, the aforementioned design methodologies are summarized as the selected reference for the task-oriented aspect of next-generation 6G communication. Therefore, the previously discussed principles on goal-oriented quantization can be effectively employed in the JSCC scheme to achieve further resource optimization. ### _Joint Source Channel coding and its Prototype_ In accordance with Shannon's separation theorem, a typical cascade structure that improves the performance of source coding and channel coding independently may maintain the entire system at its optimum [37], such as the majority of contemporary systems for wireless image transmission, which compresses the picture using a source coding method (e.g., JPEG, WebP, BPG) before encoding the bit stream with a source-independent channel code (e.g., LDPC, Polar, Turbo, BCH etc.). Nevertheless, the theory is based on certain premise conditions, such as an unlimited code length, point-to-point transmission system, memory-less stationary source, etc. 
Since these requirements are seldom satisfied in real-world applications, these tandem coding schemes are often suboptimal, such as autonomous driving and the Internet of Things (IoT) that enforce low latency real-time communication and/or low computation complexity implementation. In addition, if the channel quality goes below a specific level, the channel coding may not offer enough error corrections, and the source coding will inevitably collapse catastrophically. Consequently, jointly optimizing source coding and channel coding for relatively short messages, also known as JSCC, gradually becomes advantageous and garners a great deal of interest. JSCC was initially conceptualized more than four decades ago [38]. This has been explored further since the 1990s [39][40][41]. An iterative joint source-channel decoding algorithm was then proposed in the subsequent works, such as [42], and it was verified that these structures produce a large coding gain over separate coding in finite block-length transmission. The article [43] presented a novel JSCC scheme in which double LDPC codes [44] were applied as the source and channel codes. It was also found that for fixed-size blocks, the source (encoded or unencoded) is redundant, and this redundancy may be exploited on the decoder side. For instance, the channel encoder uses information from the source to lower its frame error rate (FER) while maintaining a very low signal-to-noise ratio (SNR). Image transmission and reception in a JSCC system were soon proposed as a feasible application. The authors of [45] explored the possibility of transmitting images in a JSCC system through a deep-space communication channel. The authors of [46] proposed a joint source-channel coding scheme using BCH codes in a binary symmetric channel (BSC), and it reduced the distortion of satellite images better than a classical tandem source-channel coding scheme based on BCH codes. In [47, 48], the authors optimized the JSCC system or proposed new codes to enhance systemic performance, such as the BER. The research on JSCC systems has become more popular [49, 50, 51, 52, 53, 54], but the feasibility of implementing a JSCC system at the circuit level is rarely mentioned. As a sub-class of LDPC codes [55, 56, 57, 58], QC-LDPC, where the parity Fig. 2: Selected reference of the machine-to-machine communications for task-oriented design check matrix is composed of permutation matrices (CPMs), can contribute to effective partial-parallel processing, due to the regularity of their parity check matrices **H**. For telecommunication, there have been comprehensive studies [59, 60, 61] on improving the complexity and accuracy of the LDPC decoders. They applied appropriate calculations [62, 63, 64] and various structures of LDPC codes [65, 66, 67, 68]. Other than communications, LDPC code decoders have also been popular in other areas such as storage [69, 70, 71] or biometric systems [72, 73, 74]. The prototype of the JSCC decoder can be remarked on and summarized in [31] that trace back to the early 20th century the initiative of JSCC implemented in Variable Length Error Correction (VLEC) code [75] exhibits a considerable level of intricacy due to the utilization of a vast alphabet for the selection of encoded sources. With the advent of the UEC-URC code [76], Unary Error Correction (UEC) code combined with a Unity Rate Convolutional (URC) code. It provides a nearly optimal performance at a reduced computational cost. 
This holds true even when dealing with extensive encoded sources, such as those seen in source coding. Although the Log-BCJR algorithm used in UEC-URC decoding is hindered by the presence of sequential information dependencies, which negatively impact processing latency, a solution known as the Fully Parallel Turbo Decoder (FPTD) [77] overcomes these limitations. By eliminating the aforementioned dependencies inherent in the traditional Log-BCJR approach, the FPTD enables the first high-throughput near-capacity JSCC decoder and its prototype architecture [31]. Furthermore, Decode-or-Compress-and-Forward (DoCF) [30] is proposed as a mechanism in which the JSCC decoding step processes the demodulated sequences from a relay, followed by an additional stage to retrieve the original message from the source, taking into account the fluctuating circumstances of the channel. The decoder at the destination involves a two-phase decoding procedure that utilizes the standard BCJR algorithm. The experimental verification of the proposed scheme's superiority is conducted by implementing it on Software-Defined Radios (SDRs), with a focus on addressing system-level difficulties in provisioning. The analog circuit prototypes of JSCC [32, 33, 34] provide a compression technique for lightweight devices, and the performance of the system was assessed through Spice simulations, in addition to the construction and testing of PCB prototypes. Deep learning-based JSCC (DJSCC) [78, 79, 80, 71] is a variant that aims to attain notable levels of reliability despite constraints such as restricted resources and poor SNRs. In reconstruction tasks, DJSCC may be seen as a variant of an auto-encoder in which channel noise at a given SNR is introduced into the intermediary segment of the encoder, resulting in a distorted version of the original auto-encoder. The DJSCC encoder employs semantic signals or analog waveform sequences instead of digital transmission, unlike traditional wireless transmission; this enables its operation in challenging channel circumstances while still achieving satisfactory restoration. Nevertheless, the efficiency enhancement obtained by current DJSCC techniques is only discernible through simulations in academic studies, which serve as the knowledge sharing by auxiliary transmission for semantic networks [5]. These simulations typically assume ideal synchronization, precoding, antenna, and power amplifier conditions. Therefore, the study [35] focuses on an SDR-based DJSCC platform, whose capability is evaluated by taking into account two important factors: synchronization error and non-linear distortion. Moreover, drawing inspiration from the impressive resilience demonstrated by Vision Transformers (ViTs) [82] in effectively addressing various challenges associated with picture nuisances, the authors in [36] provide a novel approach that utilizes a ViT-based framework for the purpose of SC. Their methodology demonstrates a satisfactory increase in peak signal-to-noise ratio (PSNR) when compared to several variations of convolutional neural networks. Finally, we summarize the state-of-the-art prototypes of JSCC and DJSCC in Table I. Our work demonstrates an inclusive prototype of an edge semantic device to inspire further advanced hardware design for JSCC-based semantic task-oriented communication.
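As a rough, self-contained illustration of the DJSCC structure described above — an auto-encoder whose latent representation passes through a noisy channel — the following minimal numpy sketch uses untrained linear maps in place of the learned encoder/decoder. It is purely illustrative and does not reproduce the architecture of any of the cited works; the function name `awgn_channel` and all parameter choices are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def awgn_channel(z, snr_db):
    # normalize the latent to unit average power, then add white Gaussian noise
    z = z / np.sqrt(np.mean(z ** 2))
    sigma = 10 ** (-snr_db / 20)          # noise std for the requested SNR
    return z + rng.normal(0.0, sigma, z.shape)

x = rng.normal(size=64)                    # stand-in for a semantic feature vector
W_enc = rng.normal(size=(16, 64)) / 8.0    # 4x bandwidth compression (untrained)
W_dec = np.linalg.pinv(W_enc)              # stand-in for the learned decoder

z_noisy = awgn_channel(W_enc @ x, snr_db=10.0)   # "SNR in the intermediary segment"
x_hat = W_dec @ z_noisy
print("MSE:", np.mean((x - x_hat) ** 2))
```

In an actual DJSCC system the linear maps would be replaced by trained CNN or ViT encoders and decoders optimized end-to-end through a differentiable channel model.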
## III Proposed Prototype of JSCC system for semantic feature transmission and reception ### _Overview_ The proposed JSCC system is designed to accommodate various types of semantic communications and task-oriented communications through the application of deep learning techniques. A deep learning model recovers the transmitted data together with its semantic or task-oriented features, in accordance with side information derived from knowledge bases and task-effectiveness metrics. As depicted in Fig. 3, the architecture employs a specialized semantic encoder to transform the raw data into a source sequence, denoted as \(\mathbf{s}\), based on knowledge-based side information. This sequence undergoes either a random interleaving process or the proposed UEP installation, which lets the semantic encoder leverage the UEP capability and be aware of the location of the invulnerable codeword segment, resulting in \(\mathbf{itrl}(\mathbf{s})\). Subsequently, the JSCC QC-LDPC encoder with a code rate of \(0.8\) compresses and encodes \(\mathbf{itrl}(\mathbf{s})\) into a new sequence \(\mathbf{C}\) to ensure reliable data transmission. The encoded sequence \(\mathbf{C}\) is then modulated using BPSK, where bits \(0\) and \(1\) are mapped to \(+1\) and \(-1\), respectively, yielding \(\mathbf{X}\). This modulated sequence is transmitted over an Additive White Gaussian Noise (AWGN) channel, resulting in the received signal \(\mathbf{X}^{\prime}=\mathbf{X}+N\), where \(N\) represents the AWGN. Finally, a QC-LDPC-based JSCC decoder processes \(\mathbf{X}^{\prime}\), followed by a de-interleaving step, to produce the estimated source sequence \(\mathbf{s}^{\prime}\) for the semantic decoder.
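For reference, the modulation and channel model just described can be simulated in a few lines. The sketch below is an illustration only (assuming numpy), not the hardware implementation; the relation between \(E_{b}/N_{0}\) and the noise variance uses the standard BPSK expression with the overall code rate \(R\).

```python
import numpy as np

rng = np.random.default_rng(1)

def bpsk_awgn(code_bits, ebn0_db, rate=0.8):
    """Map bits {0,1} -> {+1,-1}, add AWGN for a given Eb/N0 and code rate."""
    x = 1.0 - 2.0 * code_bits                  # bit 0 -> +1, bit 1 -> -1
    ebn0 = 10 ** (ebn0_db / 10)
    sigma2 = 1.0 / (2.0 * rate * ebn0)         # noise variance per real symbol
    y = x + rng.normal(0.0, np.sqrt(sigma2), x.shape)
    llr = 2.0 * y / sigma2                     # channel LLRs fed to the decoder
    return y, llr

bits = rng.integers(0, 2, 8000)                # stand-in for the encoded sequence C
y, llr = bpsk_awgn(bits, ebn0_db=0.0)
```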
### _QC-LDPC codes construction and encoding_ The JSCC system outlined in this paper is designed for implementation on hardware such as FPGAs, requiring a shift from floating-point to fixed-point arithmetic. This change inevitably leads to a predictable decrease in computational accuracy. To counteract this performance loss, additional parity bits are incorporated into the JSCC QC-LDPC encoder. This adjustment ensures that the system maintains a reasonable performance level when deployed on hardware. Furthermore, in our previous work [83], we provided a UEP LDPC code construction that yields the best irregular node-degree distribution, exploiting the fact that a large-degree variable node has a better error-correction capability than a small-degree one. From a hardware perspective, a variable node unit with a larger degree and stronger error-correcting capability also has a higher computational complexity than a smaller-degree one. In a Tanner graph, the degree of a check/variable node is the number of edges connected to it. A theoretical analysis demonstrates the decoding capability of the UEP LDPC code. This result can be applied to UEP QC-LDPC code design for semantic communication, and the following QC-LDPC code construction achieves UEP for the JSCC system by appropriately determining the variable node degrees. The construction of the QC-LDPC codes is twofold, based on Protograph LDPC (P-LDPC) codes and CPM replacement for the quasi-cyclic characteristic. Our aim is to construct a QC-LDPC matrix based on \((\mathbf{B}_{s1},\mathbf{B}_{c1})\). The optimized P-LDPC code pair \((\mathbf{B}_{s1},\mathbf{B}_{c1})\), derived from [84], is chosen because it can achieve a decoding threshold of \(-2.1\) dB, 1.5 dB lower than the classic \((\mathbf{B}_{R4JA},\mathbf{B}_{R4JA})\) [85]. Besides, in general, P-LDPC codes [86] also offer rapid encoding and decoding structures and achieve a linear minimum Hamming distance, leading to better performance in the waterfall region and the error-floor region of BER curves. In what is called the lifting method for \((\mathbf{B}_{s1},\mathbf{B}_{c1})\), the JSCC matrix can be written as \[\mathbf{H}_{50\times 90}=\begin{pmatrix}\mathbf{H}_{s(20\times 40)}&\mathbf{H}_{L(20\times 50)}\\ \mathbf{0}_{s(30\times 40)}&\mathbf{H}_{c(30\times 50)}\end{pmatrix} \tag{1}\] where \(\mathbf{0}_{s(30\times 40)}\) is an all-zero matrix of size \(30\times 40\) and \(\mathbf{H}_{L(20\times 50)}=[\mathbf{0}_{20\times 30}\ \mathbf{I}_{20\times 20}]\). The next step is to replace the traditional LDPC codes with QC-LDPC codes. The parity check matrix of the QC version of the selected P-LDPC codes can be obtained by replacing the "1"s in \(\mathbf{H}_{50\times 90}\) with CPMs of appropriate shift values and the "0"s with all-zero matrices, with the shift values chosen using a Golomb ruler [87], which ensures that the girth of the generated matrices is large enough. By lifting \(\mathbf{H}_{50\times 90}\) with a factor of \(z=160\), the resulting QC-LDPC matrix can be denoted by \[\mathbf{H}_{QC(50\times 90)}=\begin{pmatrix}\mathbf{H}_{sQC(20\times 40)}&\mathbf{H}_{LQC(20\times 50)}\\ \mathbf{0}_{sQC(30\times 40)}&\mathbf{H}_{cQC(30\times 50)}\end{pmatrix} \tag{2}\] Referring to Eq. 2, the parity check matrices for the JSCC encoding are denoted as \(\mathbf{H}_{\mathbf{sQC}}\) and \(\mathbf{H}_{\mathbf{cQC}}\), whose sizes are 3200\(\times\)6400 and 4800\(\times\)8000, respectively. The encoding method can be simply achieved by \[\mathbf{c}=\mathbf{G}^{T}\mathbf{s} \tag{3}\]

Fig. 3: Block diagrams of the proposed semantic-feature image transmission scheme

where the generator matrix \(\mathbf{G}\) can be calculated from the parity check matrix \(\mathbf{H_{sQC}}\). Since the large size of the proposed \(\mathbf{H}\) matrix requires massive computation, an equivalent approach is to leverage the right-hand side of Eq. 2 to divide and conquer the JSCC encoding scheme. To generate the output of the JSCC encoder, denoted by \(\mathbf{c}\), two major steps, source compression and parity-bit generation for channel transmission, should be followed as below. **Source Compression**: The first step can be regarded as compressing the data, which is given by \[\mathbf{b}=\mathbf{H_{sQC}}\,\mathbf{s}. \tag{4}\] Based on the size of the parity check matrices, the source information vector, denoted as \(\mathbf{s}\), has a size of 6400 \(\times\) 1. As the optimized QC-LDPC codes are designed for \(p\) = 0.04, the probability of "1" in the source vector should be 4%; in other words, \(p=Pr(s_{i}=1)=0.04,\ i=1,2,...,6400\). Given that \(\mathbf{H_{sQC}}\) (3200\(\times\)6400) is the H matrix for the source codes, the compressed output can be calculated using Eq. (4). The size of the output \(\mathbf{b}\) is 3200\(\times\)1. The source compression ratio \(R_{s}\) is equal to \(6400/3200=2\). **Channel Parity-bit Generation**: the subsequent step is based on the parity-check property of the LDPC code in Eq. 5: \[\mathbf{H}_{cQC}\,\mathbf{c}=\left[\mathbf{H}_{1(4800\times 4800)}\ \ \mathbf{H}_{2(4800\times 3200)}\right]\mathbf{c}=0. \tag{5}\] As the codeword \(\mathbf{c}\) of the JSCC encoder can be denoted as \[\mathbf{c}=\left[\mathbf{p}\ \ \mathbf{b}\right]^{T} \tag{6}\] where \(\mathbf{p}\) represents the parity bit vector of size \(1\times 4800\).
Accordingly, the parity bit vector can be calculated using \[\mathbf{p}=\mathbf{H}_{1(4800\times 4800)}^{-1}\,\mathbf{H}_{2(4800\times 3200)}\,\mathbf{b}. \tag{7}\] The second half of the proposed JSCC encoder, which behaves like a channel encoder and is fed with the 3200-bit-long \(\mathbf{b}\), outputs 8000 bits. Therefore, the channel code rate \(R_{c}\) is \(3200/8000=0.4\), and the overall code rate is \(R_{overall}=R_{s}\times R_{c}=0.8\). It is noted that all data involved in the JSCC encoding scheme are represented as binaries, so the operations depicted in the aforementioned equations are merely bitwise operators (ANDs and XORs), which are hardware-friendly to implement. ### _JSCC Layered decoding algorithm_ The JSCC decoder using QC-LDPC codes can be visualized through a Tanner graph. This graph is essentially divided into two interconnected subgraphs: one representing the source and the other representing the channel, as depicted in Fig. 4. As the major component of the proposed system, the QC-LDPC code-based JSCC decoder can be realized by applying a QC-LDPC layered sum-product decoding algorithm [64, 88], although some new calculations, such as message exchanges between the source side and the channel side, need to be included. The partial parallelism of the layered decoding algorithm, which enables fully parallel operations amongst all sub-matrices within one check node or "layer", simplifies the hardware implementation of the source and channel decoders. The following parameters and values should be defined and assumed before explaining the JSCC decoding process. * An AWGN channel with zero mean and variance \(\delta^{2}\) is assumed. The channel value from this AWGN channel is set as the initial value of the Variable-Node-to-Check-Node (V2C) messages. * \(M_{s}\) and \(M_{c}\) represent the number of check nodes (CNs) in the source and the channel, respectively. * \(N_{s}\) and \(N_{c}\) represent the number of variable nodes (VNs) in the source and the channel, respectively. The key decoding procedures for the source and channel are as follows. * Variable node processor (VNP) in source decoding: the variable-node-to-check-node (V2C) messages, \(\beta_{jk}^{sc}\), are computed according to Eq. (8). To be more specific, \(L_{k}^{sc}\) denotes the source log-likelihood ratio (LLR) of the \(k\)-th VN in the source decoder. \(M(k)\setminus j\) represents the set of CNs connected to the \(k\)-th VN, excluding the \(j\)-th CN itself. The other notations used in the equations can be found in Fig. 4. It should be noted that circles and squares, respectively, represent variable nodes and check nodes in Fig. 4; the hollow circles represent punctured variable nodes. \[\beta_{jk}^{sc}=L_{k}^{sc}+\sum_{j^{\prime}\in M(k)\setminus j}\alpha_{j^{\prime}k}^{sc}\qquad\forall k\in N(j)\] (8) * Check node processor (CNP) in source decoding: two operations are performed in two steps. 1. Update the check-node-to-variable-node (C2V) messages \(\alpha_{jk}^{sc}\) in Equation 9. 2. Update the message from the source decoder to the channel decoder, \(I_{\hat{k}}^{sc\_cc}\), in Equation 10. \[\tanh(\alpha_{jk}^{sc}/2)=\tanh(I_{\hat{k}}^{cc\_sc}/2)\times\prod_{k^{\prime}\in N(j)\setminus k}\tanh(\beta_{jk^{\prime}}^{sc}/2)\qquad\forall k\in N(j)\] (9) \[\tanh(I_{\hat{k}}^{sc\_cc}/2)=\prod_{k^{\prime}\in N(j)}\tanh(\beta_{jk^{\prime}}^{sc}/2)\] (10) Calculating the _a posteriori_ LLR \(l_{k}^{sc}\) needs to be done after the CNP using Eq. (11). \[l_{k}^{sc}=L_{k}^{sc}+\sum_{j^{\prime}\in M(k)}\alpha_{j^{\prime}k}^{sc}. \tag{11}\]
For channel decoding, the procedure for updating the LLR messages is very similar to source decoding, as depicted on the right side of Fig. 4. * VNP in channel decoding: owing to the crossed message-passing mechanism between the source decoder and the channel decoder, the VNP in channel decoding must be computed separately. \(N^{cc\_sc}\) indicates the set of VNs connected to the corresponding CNs in the source decoder, and \(\tilde{N}^{cc\_sc}\) is its complement. \[\beta_{jk}^{cc}=L_{k}^{cc}+\sum_{j^{\prime}\in M(k)\backslash j}\alpha_{j^{\prime}k}^{cc}\qquad\forall k\in N(j)\cap\tilde{N}^{cc\_sc}\] (12) \[\beta_{jk}^{cc}=I_{k}^{sc\_cc}+L_{k}^{cc}+\sum_{j^{\prime}\in M(k)\backslash j}\alpha_{j^{\prime}k}^{cc}\qquad\forall k\in N(j)\cap N^{cc\_sc}\] (13) * CNP in the channel decoding: \[\tanh(\alpha_{jk}^{cc}/2)=\prod_{k^{\prime}\in N(j)\backslash k}\tanh(\beta_{jk^{\prime}}^{cc}/2)\qquad\forall k\in N(j)\] (14) * Message from the channel decoder to the source decoder: \[I_{k}^{cc\_sc}=L_{k}^{cc}+\sum_{j^{\prime}\in M(k)}\alpha_{j^{\prime}k}^{cc}.\] (15) * A posteriori LLR update for VNs not connected with CNs in the source decoder: \[l_{k}^{cc}=L_{k}^{cc}+\sum_{j^{\prime}\in M(k)}\alpha_{j^{\prime}k}^{cc}\qquad\forall k\in N(j)\cap\tilde{N}^{cc\_sc}\] (16) * A posteriori LLR update for VNs connected with CNs in the source decoder: \[l_{k}^{cc}=L_{k}^{cc}+I_{k}^{sc\_cc}+\sum_{j^{\prime}\in M(k)}\alpha_{j^{\prime}k}^{cc}\qquad\forall k\in N(j)\cap N^{cc\_sc}\] (17) The last step is the hard decision and the stopping criterion for the decoding iteration. Let us consider the decoded source-side sequence and channel-side sequence \(\hat{s}=\{\hat{s}_{1},\hat{s}_{2},...,\hat{s}_{k}\}\) and \(\hat{c}=\{\hat{c}_{1},\hat{c}_{2},...,\hat{c}_{k}\}\), respectively. They are used to obtain an estimated value of the codeword sent on the sender's side, according to the following rule: * \(\hat{c}_{k}\) = 0 if \(l_{k}^{cc}\geq\) 0, otherwise \(\hat{c}_{k}\) = 1, \(\forall k\) * \(\hat{s}_{k}\) = 0 if \(l_{k}^{sc}\geq\) 0, otherwise \(\hat{s}_{k}\) = 1, \(\forall k\) The decoding process stops when the decoder reaches the maximum number of decoding iterations, which is preset before decoding. It should be emphasized that in our case the number of layers is \(m_{s}z_{s1}=20\). \(G_{s}\) and \(G_{c}\) represent the number of decoding groups per layer in the source and channel decoders, respectively, and we choose one group (\(G_{s}\) = 1 and \(G_{c}\) = 1) for both sides. The proposed architecture shown in Fig. 5 is different from a normal layered LDPC decoding architecture, as \(I_{k}^{cc\_sc}\) and \(I_{k}^{sc\_cc}\), used for exchanging messages between the source and channel decoders in Eq. 15 and Eq. 10, are involved exclusively in a JSCC system. Therefore, \(I_{k}^{cc\_sc}\) and \(I_{k}^{sc\_cc}\) must be calculated as well in the CC2SC/SC2CC Processors.

Fig. 4: The Bipartite Graph of JSCC Decoding Scheme using QC-LDPC codes

To balance the complexity of this system, a modified quantized sum-product algorithm based on [62] was adopted to simplify the hyperbolic tangent functions in Eq. 9 and Eq. 14. Specifically, a look-up table (LUT) architecture is implemented for a two-input fixed-point \(\tanh\) calculation.
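Before such fixed-point simplifications, the message-passing updates above can be prototyped in floating point. The following is a minimal, hedged numpy sketch of a generic flooding sum-product decoder of the kind that Eqs. (8)–(11) and (12)–(17) specialize; the joint source–channel messages \(I^{sc\_cc}\)/\(I^{cc\_sc}\), the layered schedule, and the LUT-based \(\tanh\) are deliberately omitted, so it is an illustration rather than the implemented FPGA algorithm.

```python
import numpy as np

def sum_product_decode(H, llr_ch, max_iter=20):
    """Flooding sum-product decoding of a binary LDPC code.
    H: (M, N) parity-check matrix with 0/1 entries; llr_ch: (N,) channel LLRs."""
    mask = H.astype(bool)
    alpha = np.zeros(H.shape)                       # C2V messages alpha_{jk}
    bits, post = np.zeros(H.shape[1], dtype=int), llr_ch
    for _ in range(max_iter):
        # V2C: beta_{jk} = L_k + sum over j' in M(k)\{j} of alpha_{j'k}
        beta = np.where(mask, llr_ch + alpha.sum(axis=0) - alpha, 0.0)
        # C2V: tanh(alpha_{jk}/2) = prod over k' in N(j)\{k} of tanh(beta_{jk'}/2)
        t = np.where(mask, np.tanh(beta / 2.0), 1.0)        # exclusion via division below
        ext = np.where(mask, t.prod(axis=1, keepdims=True) / t, 1.0)
        alpha = np.where(mask, 2.0 * np.arctanh(np.clip(ext, -0.999999, 0.999999)), 0.0)
        # a posteriori LLRs and hard decision
        post = llr_ch + alpha.sum(axis=0)
        bits = (post < 0).astype(int)                       # LLR < 0 -> bit 1
        if not np.any((H @ bits) % 2):                      # all parity checks satisfied
            break
    return bits, post

# toy usage: (7,4) Hamming parity-check matrix, all-zero codeword over BPSK/AWGN
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
rng = np.random.default_rng(2)
sigma = 0.8
rx = 1.0 + rng.normal(0.0, sigma, 7)                        # all-zero codeword -> all +1
bits, post = sum_product_decode(H, 2.0 * rx / sigma**2)
```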
### _Interleaver and De-interleaver for UEP installation_ The interleaving technique is used to spread burst errors and to even out the distribution of "0"s and "1"s in the source vector. This matters especially for semantic feature images, whose important content is likely to be concentrated and contiguous; interleaving the information in these images can improve the error correction capability of the LDPC codes. To recover the order of each bit on the decoding side, the de-interleaver is applied. In [89], a regular interleaver and de-interleaver, shown in Fig. 6, are proposed under the assumption that semantically important bits occur with higher probability at even positions, which can be arranged using a signal processing technique or a semantic model. This technique can be designed to allocate the semantic importance to the invulnerable segment with stronger error-correction capability, illustrated as the red codeword segment, in contrast to the vulnerable codeword bits in the green segment. Hence, the semantic encoder/decoder possesses information on the distribution of UEP in the transmitted coded sequence. Consequently, it may achieve enhanced semantic compression while the semantic importance is protected by an invulnerable codeword segment. ## IV Performance Evaluation ### _Experimental Platform_ The experimental setup utilizes a Virtex UltraScale+ FPGA VCU118 evaluation kit as the receiving platform, shown in Fig. 7. The system's processor is built around an open-source 32-bit RISC-V CPU core, which is connected to a flexible interconnection bus. Essential modules such as DMA controllers, NAND flash controllers, and main memory controllers for DDR RAMs are integrated, along with an Ethernet IP.

Fig. 5: A brief design architecture of the QC-LDPC decoder implementation for either the Source or Channel side.

Fig. 6: The proposed regular interleaver in [89] for UEP installation

The core components of the system include a specialized JSCC decoder based on QC-LDPC codes and a deep learning accelerator for semantic and task-oriented processes. This accelerator is designed to handle matrix multiplication, convolution, and activation functions, which are fundamental to neural network computations based on YOLO [90] and auto-encoders. To simulate the transmission modules (in the first row depicted in Fig. 3) and to model an AWGN channel, a separate computer is employed. Consequently, the combined computer-FPGA setup functions as two distinct JSCC systems. The first system encodes semantic feature images and transmits them as BPSK-modulated data through an AWGN channel. The second system receives these image data from the channel and reconstructs the original image information using demodulation and JSCC decoding techniques. ### _Implementation results_ A 6-bit quantization scheme, denoted as "Proposed-Q6" in Fig. 3, is adopted in the proposed JSCC system. The synthesized results for the main component, the QC-LDPC code-based JSCC decoder, are shown in Table II. It should be noted that the logic (FFs and LUTs) used to implement the decoder occupies around 33% of the entire FPGA and around 1.4% of the block memories. For each decoding iteration, 31 milliseconds are spent by the FPGA driven by a 100 MHz clock. In Table III, the feature comparison reveals that our FPGA platform for JSCC semantic communication encompasses both floating-point analysis and fixed-point (6-bit) quantization. Additionally, the inclusion of supplemental parity bits enhances the BER performance when employing a code rate of 0.8 to compensate for the quantization loss, in comparison to the other platforms utilizing a code rate of 1.
Owing to the distinct architecture of the proposed scheme and the dimensions of the LDPC matrix, a direct performance comparison with the extant literature is not feasible. As a result, a simulated system employing 32-bit floating-point precision, labeled as 'Proposed-FP32' in Fig. 3, serves as a benchmark against the 6-bit hardware implementation and six additional relevant studies. As illustrated in Fig. 8, the Bit Error Rates (BER) of the proposed codes, both under floating-point simulation and under the 6-bit hardware implementation, are compared against other works that investigated JSCC systems solely in a simulated setting, as indicated in the final column of Fig. 3. The BER is plotted on the y-axis, while \(E_{b}/N_{0}\), the energy per bit to noise power spectral density ratio, is plotted on the x-axis. The empirical findings from the 6-bit quantized version align well with the simulated outcomes, denoted as 'Proposed-FP32'. Notably, the trajectory of these curves as \(E_{b}/N_{0}\) increases is congruent with existing scholarly contributions.

Fig. 7: The Experimental Platform on the receiver's side

Despite the simplifications introduced by data quantization in the proposed architecture, only a modest decline in the BER of 'Proposed-Q6' is observed. Moreover, its BER performance remains superior to those reported by Z. Xu et al. [91], Q. Chen et al. [52], double regular LDPC, and R4JA in [92], all of which employ non-QC-LDPC codes with 32-bit floating-point simulations. This superior performance can primarily be attributed to the QC-LDPC code generation, reasonable code rate modifications, and efficacious hardware deployment. As the QC-LDPC codes in this design are optimized under the condition that \(p\leq 0.04\), a semantic feature, such as the original one depicted in the upper image of Fig. 9, can be well contained in a 160\(\times\)40 image, in which each black-and-white pixel can be represented by 1 bit in the source vector. This compact system is tested under \(E_{b}/N_{0}\) ranging from \(-2\) to \(0\) dB in steps of 0.5. Fig. 9 shows one realization under various \(E_{b}/N_{0}\). Considering the low BER results in Fig. 8, this simple semantic feature sender and receiver can communicate correctly with each other nearly all the time. The code rate for this entire system is \(R=R_{\text{s}}\times R_{c}=2\times 0.4=0.8\). It should be noted that at such low \(E_{b}/N_{0}\), many communication standards, such as IEEE 802.11n (WiFi) and IEEE 802.16e (WiMAX), need to set the code rate to 0.5 or even lower when complex or high-volume noise is detected.

Fig. 8: BER performance comparison. \(p=0.04\)

Fig. 9: Original and Received Semantic Feature Images in a case when \(p=0.0398\)

## V Conclusion and Future Directions By facilitating the exchange of highly informational, up-to-date, and efficient data, SC has the potential to enhance the effective use of resources, improve information accuracy and efficacy in task completion, and serve as a model and technical foundation for future generations of communication systems. The utilization of task-oriented communication has been widely regarded as a novel approach in the development of communication methods for multi-agent systems. In this paper, a novel JSCC system based on QC-LDPC codes is proposed as a promising candidate for semantic task-oriented communication systems.
As the proposed irregular QC-LDPC code construction has an inherent UEP capability, the semantic importance can be dynamically assigned by the proposed interleaver to the variable nodes with stronger error-correction capability. After the semantic source coding passes the information to the simple structure of the QC-LDPC codes, the JSCC system is implemented on the hardware device. Significantly, the operations of the JSCC decoder are layered and are then executed in parallel on both the source and channel sides. The fixed-point system also maintains fair BER performance compared to the simulated one. Moreover, the design with its optimized QC-LDPC codes is further investigated by compressing image data and protecting the encoded data over an AWGN channel; this application is the transmission and reception of semantic feature images. In many practical cases, sources are transmitted uncoded, while state-of-the-art block channel encoders are used to protect the transmitted data against channel errors. If the JSCC scheme is used in such cases, the throughput can be improved by first compressing the source data (\(R_{s}=2\)) before the channel coding starts. If the input image and channel codes are fixed, the throughput of our proposed design is doubled compared with the non-compression system. Even when competing with another JSCC system, a better error correction capability can still outperform others. We conclude with some potential future research directions as follows.

1. The design concept entails the utilization of collaborative techniques with an edge AI server throughout the process of semantic transmission in order to enhance adaptability. The objective of this collaboration is to provide guidelines for configuring the JSCC decoder in the context of feature and federated learning.
2. Owing to the nature of the applicable implementation in [93], the JSCC framework can be used to interface with binarized neural networks to optimize resource utilization in edge devices with limited computational capabilities. This approach is specifically applied in the context of task-oriented communication and goal-oriented quantization, while considering the application of the RISC-V low-power revolution.
3. Although there is a tendency for DJSCC to supplant the role of JSCC, the future possibility of collaboration with machine learning remains a focal point not only in theoretical analysis but also in practical implementation. The importance of including semantic and task-oriented design aspects in the development of UEP JSCC prototypes cannot be overstated, particularly in the context of edge AI for future-generation 6G communications.
4. In the cybersecurity aspect, the analysis of the secrecy rate transition between edge servers and devices utilizing JSCC can be explored within the framework of differential privacy [94], specifically in the presence of eavesdropping or other attack scenarios.

The incorporation of semantics and task-oriented communication is expected to assume a significant role in forthcoming intelligent systems. The purpose of this article is to offer an introductory overview and a cohesive perspective on the prototype of next-generation 6G communication systems, to inspire more potential research activities.
2303.08886
vFHE: Verifiable Fully Homomorphic Encryption with Blind Hash
Fully homomorphic encryption (FHE) is a powerful encryption technique that allows for computation to be performed on ciphertext without the need for decryption. FHE will thus enable privacy-preserving computation and a wide range of applications, such as secure cloud computing on sensitive medical and financial data, secure machine learning, etc. Prior research in FHE has largely concentrated on improving its speed, and great stride has been made. However, there has been a scarcity of research on addressing a major challenge of FHE computation: client-side data owners cannot verify the integrity of the calculations performed by the service and computation providers, hence cannot be assured of the correctness of computation results. This is particularly concerning when the service or computation provider may act in an untrustworthy, unreliable, or malicious manner and tampers the computational results. Prior work on ensuring FHE computational integrity has been non-universal or incurring too much overhead. We propose vFHE to add computational integrity to FHE without losing universality and without incurring high performance overheads.
Qian Lou, Muhammad Santriaji, Ardhi Wiratama Baskara Yudha, Jiaqi Xue, Yan Solihin
2023-03-15T19:12:53Z
http://arxiv.org/abs/2303.08886v1
# vFHE: Verifiable Fully Homomorphic Encryption with Blind Hash ###### Abstract Fully homomorphic encryption (FHE) is a powerful encryption technique that allows for computation to be performed on ciphertext without the need for decryption. FHE will thus enable privacy-preserving computation and a wide range of applications, such as secure cloud computing on sensitive medical and financial data, secure machine learning, etc. Prior research in FHE has largely concentrated on improving its speed, and great strides have been made. However, there has been a scarcity of research on addressing a major challenge of FHE computation: client-side data owners cannot verify the integrity of the calculations performed by the service and computation providers, hence cannot be assured of the correctness of the computation results. This is particularly concerning when the service or computation provider may act in an untrustworthy, unreliable, or malicious manner and tamper with the computational results. Prior work on ensuring FHE computational integrity has been non-universal or has incurred too much overhead. We propose vFHE to add computational integrity to FHE without losing universality and without incurring high performance overheads. Fully Homomorphic Encryption; Integrity; Blind Hash; Secure Computation; ## I Introduction Privacy-preserving technology in cloud computing is crucial, as it allows for deploying cloud-based applications where data privacy is paramount. This property is especially important in cases where the data is highly sensitive or when compliance with increasingly stringent privacy regulations is required. Fully Homomorphic Encryption (FHE) [7, 26, 33, 39] is a distinct privacy-preserving technique that enables computation on encrypted data without the need for decryption. FHE enables privacy-preserving computation in a variety of applications. For example, data owners such as Alice can gain new insights from their private data through service providers such as Bob, who can perform computations, manipulations, and even aggregations on the data without having access to it. This technique has numerous use cases, such as secure cloud computing for sensitive medical and financial data [1, 4, 30, 31, 36, 51] and secure machine learning [9, 13, 27, 32, 46]. Prior research efforts in FHE have mainly concentrated on boosting its speed, leading to notable advancements in this domain. For instance, recent studies [10, 19, 28, 37, 42, 43, 44, 45] have improved the efficiency of FHE through innovative schemes and hardware acceleration, such as the use of GPU support for FHE [48] and ciphertext batching [37, 47], which substantially reduce latency and enhance throughput for privacy-preserving computation. Despite the significant progress in FHE research, there has been a scarcity of attention given to one of its major challenges: the inability of client-side data owners to verify the integrity of computations performed by service and computation providers, leading to uncertainties about the accuracy of results. This concern is especially pressing in cases where the provider is untrustworthy, unreliable, or acts maliciously and may tamper with the computational results. Prior work ensuring FHE computational integrity has been _non-universal_ or has _incurred too much overhead_.
The majority of research focuses on two approaches: cryptographic integrity checking protocols [5, 6, 8, 11, 20, 22, 23, 24, 29, 50, 56], including homomorphic message authentication codes (MAC and MAC' [12]) and zero-knowledge proofs (ZKP and ZKP' [54]); and relying on trusted execution environment (TEE) hardware [16, 49, 52, 55]. The former approach suffers from non-universality due to incompatibility with ring-based FHE schemes.

Fig. 1: Illustrating vFHE with blind hash, which enables the verification of Fully Homomorphic Encryption (FHE) against both malicious tampering and computational errors.

Different FHE schemes, such as BGV [7] and CKKS [15], operate over different encoding methods, polynomial structures, and ciphertext spaces. Thus a cryptographic integrity method, e.g., MAC, MAC', ZKP', that alters the inner encoding and algorithms of a specific FHE scheme needs redesign for a different scheme. The generic ZKP without optimization for a specific FHE scheme suffers from a large overhead. Modifying cryptographic integrity to make it efficient and compatible with all modern FHE schemes without impacting FHE functionalities has so far remained elusive [12]. For these reasons, there is a need for a generic and efficient integrity verification solution that is compatible with various FHE schemes. In this context, we propose a novel approach called the _blind hash_ method, which enables verifiable FHE. The proposed method is based on a plug-in algorithm that can be used with different FHE schemes. The method involves generating a blind hash of the raw data, which is then used to verify the integrity of the computation. We present our vFHE module with blind hash in Figure 1. In the context of a client-side data owner and a server-side service/computation provider, let us assume that the data owner wants to compute a function \(f()\) over an input \(x\), resulting in \(f(x)\). In order to do so, the raw data \(x\) is preprocessed by our proposed _blind hash_ function to generate \(x_{c}\). This \(x_{c}\) can be encoded and encrypted into the ciphertext \([x_{c}]\) using any fully homomorphic encryption (FHE) scheme. The server can perform the function \(f([x_{c}])\) without decrypting the data. The encrypted results obtained from the server side can be decrypted by the data owner. Our proposed verifiable FHE (vFHE) method allows for verification of the integrity and correctness of \(f(x)\) against any computational errors or malicious tampering. Specifically, if the server-side computation returns \(f^{\prime}([x_{c}])\neq f([x_{c}])\), where \(f^{\prime}()\) is a tampered computation that yields a different output than \(f()\), the vFHE method would detect this and alert the data owner of the tampering attempt. ## II Background and Related Work In this section, we briefly describe FHE and the current state of the art for guaranteeing integrity in FHE. ### _Fully Homomorphic Encryption_ An encryption scheme transforms plaintext (unencrypted data) into ciphertext (encrypted data) using an encryption algorithm to make the ciphertext unintelligible to unauthorized parties. Encryption schemes are used to protect the confidentiality and privacy of data, as well as to ensure the integrity and authenticity of data. FHE stands for Fully Homomorphic Encryption. This type of encryption scheme allows computations to be performed on ciphertexts without first decrypting them. In other words, it enables the computation of functions directly on encrypted data.
Currently, various FHE schemes, such as BGV [7], BFV [33], and CKKS [15], are based on ring learning with errors and operate on different polynomials in a ring-based structure. Since FHE operations are approximately 10,000 times slower than non-FHE operations [28], a significant portion of FHE and FHE-based privacy-preserving machine learning research [10, 28, 37, 43, 45, 47] has focused on improving the efficiency of FHE through innovative schemes and hardware acceleration, such as incorporating GPU support for FHE [48] and utilizing techniques like ciphertext batching [37, 47], which significantly reduce latency and improve the throughput of privacy-preserving computation. ### _Comparison with Related Works_ In Table I, we compare our blind hash method with the related works. Research on computational integrity primarily focuses on two approaches: cryptographic integrity checking protocols [5, 6, 8, 11, 20, 22, 23, 24, 29, 50, 56], e.g., MAC, MAC', ZKP, and ZKP'; and utilizing trusted execution environment (TEE) hardware [16, 49, 52, 55]. Several studies have presented MAC-based approaches in the scholarly literature [6, 8, 20, 22, 23]. A notable example of a MAC [21] focuses on verifying ciphertext computations by specifically examining quadratic functions within a particular variant of the BV scheme [8]. However, these methods are not universally applicable to all FHE schemes and have limitations in scalability, efficiency and functionality [12]. In comparison, the MAC' method [12] improves efficiency and universality by implementing alternative encoding methods. Nonetheless, the applicability of this method to all FHE schemes universally remains uncertain, in contrast to our vFHE approach, which does not necessitate alterations to encoding techniques. Additionally, the overhead associated with this method is no less than twice that of the original computation. Modulus residue approaches [3, 38] are derivations of the MAC scheme. They offer lower overhead by creating an unencrypted verification encoding with reduced data size. The efficiency of these residue methods, which feature unencrypted verification and an unprotected verification function, is achieved at the expense of compromised security. Common FHE schemes, such as BFV, BGV, and CKKS, enable FHE computation with plaintext. A malicious actor could manipulate the data and bypass the verification by simply altering both the data and the checksum through arithmetic plaintext operations on encrypted data. In comparison, our blind hash scheme maintains security while minimizing overhead. Although these residue methods are well-suited for safeguarding integrity against faulty hardware, they are not as effective against sophisticated adversaries. Despite their significantly lower overhead relative to MAC schemes, their limited scalability arises from the linear increase in verification size as the message size grows. The ZKP method [23] has been explored for maintaining computational integrity; however, employing a Zero-Knowledge Proof technique for universal FHE verification without optimization specific to a particular FHE scheme results in considerable overhead. This overhead intensifies in proportion to the size of the data. In contrast, ZKP' [54], optimized for a distinct FHE scheme, might not attain full universality.
The hardware-based strategy, which executes complete or partial FHE operations within a Trusted Execution Environment (TEE) [49, 54], exhibits universality; however, it imposes considerable performance overheads. The reported overheads range from \(3-30\times\) [12, 16, 49, 55], highlighting the significant performance implications of this approach. This is due to one key limitation: TEEs were designed for non-FHE computation, hence their designs are mismatched with what FHE needs. ## III Threat Model We consider an outsourcing scheme between a client-side data owner \(\mathcal{C}\) and a server \(\mathcal{S}\), where \(\mathcal{S}\) executes a function \(f(x):X\to Y\) on data provided by \(\mathcal{C}\). The function \(f\) can either belong to \(\mathcal{C}\) (e.g., in IaaS/PaaS [2]) or to the server (e.g., in SaaS/API [2]). We adopt a more realistic threat model in which the server \(\mathcal{S}\) is not fully trustworthy and may be malicious or vulnerable to tampering with the ciphertext \(f(x)\). This departs from the traditional semi-honest setting, in which \(\mathcal{S}\) is assumed to be honest but curious (HBC) about inferring \(\mathcal{C}\)'s private data. Our scheme should satisfy the following security properties: **Integrity**: \(\mathcal{C}\) can detect an integrity attack when interacting with \(\mathcal{S}\) for any input \(x\) and ensure the correctness of \(y=f(x)\). **Data Privacy**: \(\mathcal{S}\) cannot learn any information about the input \(x\). While the function \(f()\) can be made public to both the client and server in many applications [17, 40], it is still important to note that in certain applications and scenarios [53], preserving the privacy of the function may be a critical concern. **Function Privacy**: If \(f()\) is provided by \(\mathcal{C}\), \(\mathcal{S}\) cannot learn information about \(f()\) beyond its size. If \(f()\) belongs to \(\mathcal{S}\), \(\mathcal{C}\) cannot learn more about \(f()\) than what is revealed by \(y=f(x)\). ## IV vFHE Design **Design Principle.** Our _blind hash_ scheme was initially inspired by algorithm-based fault tolerance (ABFT) techniques [18, 34, 35, 41] that utilize checksums to detect computational errors due to faults in systems. The checksum provides a redundant mathematical relationship of the data that is preserved in the output, such that a computational error becomes detectable by verifying the relationship in the output. However, while effective in detecting faults, ABFT techniques have a fundamental flaw if used for detecting errors due to attacks in untrusted environments: attackers can bypass the detection by manipulating both the data and the checksum, leveraging knowledge of the computation process. Even though the checksum is encrypted by FHE, it is mathematically related to the data in ways known or guessable by the attacker, creating a security gap between fault detection and attack detection. Our _blind hash_ bridges this gap and improves upon ABFT by following the main principle of _concealing the checksum computing process_ from the untrusted environment: _blind hash_ incorporates an extra layer of security by adding a blind hash function into the checksum computation process. The blind hash function is only visible to the data owners and the generated checksum is encrypted by FHE, making it secure and tamper-resistant. _Blind hash_ not only detects faults, as in ABFT, but also detects attacks. **Workflow.** Our development of _blind hash_ will take the following steps: (i) blind hash calculation.
Given data \(x\) and a predefined hash vector \(h^{x}\), the data owner calculates the blind hash checksum value by \(hash(x,h^{x})=h^{x}x\), attaches it to the original data as \(x^{\prime}\), and encrypts \(x^{\prime}\) using FHE to generate \([x^{\prime}]\); here \([x^{\prime}]\) represents the ciphertext of the plaintext \(x^{\prime}\). (ii) data sharing. The data owner shares the encrypted \([x^{\prime}]\) with the service provider. (iii) private computation for result and proof. The service provider performs the arithmetic function directly on the encrypted data \([x^{\prime}]\), obtaining \([f([x^{\prime}])]\); here \(f()\) can be any arithmetic function, including convolution, matrix operations, etc., and it usually depends on parameters \(w\), e.g., a convolution filter or machine learning weights. (iv) client's integrity checking. The client decrypts \([f([x^{\prime}])]\), i.e., \([f([hash(x,h^{x})])]\) and \([f([x])]\), and verifies the integrity and correctness by checking that the proof \(f(hash(x,h^{x}))\) agrees with \(hash(h^{x},f(x))\). **Function Privacy.** The function \([f([x^{\prime}],w)]\) can be directly performed in many applications [17, 40] where the variable \(w\) is public to both the client and the server. However, for applications where the weight \(w\) is kept confidential by the client, the client can compute the blind hash on \(w\) and encrypt it in the same manner as the input \(x\). The encrypted \(w^{\prime}\) can be shared with the server, allowing the server to perform multiplication with the encrypted \(x^{\prime}\). On the other hand, if \(w\) is private to the server, the server would perform the computations with the encrypted \(x^{\prime}\) and \(w^{\prime}\), and then add noise to the intermediate results \([f([x^{\prime}],w)]\) through the noise flooding technique [25] to enhance the privacy of the function. **Blind Hash Illustration.** We illustrate the process of the blind hash using matrix multiplication, as depicted in Figure 2. Assume the data owner has a matrix \(A\) of size \(m\times n\) and requests the matrix multiplication service \(C=A\times B\) from the server, where \(B\) has a size of \(n\times k\). Directly encrypting \(A\) as \([A]\) using FHE and allowing the server to calculate the matrix multiplication of \([A]\) and \(B\) may pose integrity risks for two reasons. Firstly, the server can manipulate the outcome \(C\) into \(C^{\prime}\), or even fabricate a result without actually performing the computation. Secondly, the FHE computations executed by the server are susceptible to errors, including excessive FHE noise and hardware malfunctions. While ABFT [18, 34, 35, 41] effectively detects errors, it falls short in detecting attacks in untrusted environments. _Blind hash_ not only detects faults, as in ABFT, but also detects attacks, since it incorporates an extra layer of security by adding a blind hash function into the checksum computation process. Our blind hash method ensures Fully Homomorphic Encryption (FHE) integrity by following these steps: The client creates a hash encoding vector \(h^{A}\) with dimensions \(1\times m\), where \(m\) represents the number of rows of the data \(A\), and each element is randomly chosen from the plaintext space. The client then multiplies \(h^{A}\) by \(A\) to produce the hashed value \(hash(A,h^{A})\), as depicted in Figure 2(a). The client encrypts the stacked pair \(\binom{A}{hash(A,h^{A})}\) into the ciphertext \([A]\) using FHE and shares this with the server.
This FHE encryption guarantees that the hashed checksum value \(hash(A,h^{A})\) remains hidden. The server carries out the FHE matrix multiplication between \([A]\) and \(B\), resulting in the encrypted outcome and proof, denoted as \(\binom{[C]}{[C^{A}]}\), where \([C]\) is the result and \([C^{A}]\) is the computational proof, as demonstrated in Figure 2(b). Upon receiving the encrypted outcome and proof, the client decrypts them into the plaintexts \(C\) and \(C^{A}\). The client then multiplies \(C\) by the blind hash vector \(h^{A}\) to compute \(hash(C,h^{A})\). The integrity of the computation is confirmed by comparing \(hash(C,h^{A})\) with \(C^{A}\): if they match, the computation is deemed valid; otherwise, an integrity issue exists. This procedure is illustrated in Figure 2(c).

Fig. 2: Illustrating vFHE with blind hash, which enables the verification of Fully Homomorphic Encryption (FHE) against both malicious tampering and computational errors.
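For intuition, the following is a minimal plaintext numpy sketch of this matrix-multiplication blind hash. It is an illustration only: nothing is encrypted here, whereas in vFHE the stacked operand, the server-side product, and the proof are all FHE ciphertexts (e.g., under BFV, BGV, or CKKS).

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 8, 6, 5
A = rng.integers(0, 100, (m, n))
B = rng.integers(0, 100, (n, k))

# (a) client: append the blind-hash row h^A * A to A; in vFHE the stacked
#     matrix would then be encoded and encrypted before being sent
h = rng.integers(1, 1000, (1, m))          # secret hash vector h^A
A_stacked = np.vstack([A, h @ A])          # (m+1) x n operand shared with the server

# (b) server: a single matrix product on the (encrypted) stacked operand
C_stacked = A_stacked @ B
C, proof = C_stacked[:m], C_stacked[m:]    # result C and computational proof C^A

# (c) client: recompute the checksum on the decrypted result and compare
assert np.array_equal(h @ C, proof)        # integrity check passes

C_tampered = C.copy()
C_tampered[0, 0] += 1                      # any tampering or computational error ...
assert not np.array_equal(h @ C_tampered, proof)   # ... breaks the check
```

The blind hash with error discussed in the next section follows the same pattern, with the checksum row replaced by \(h^{A}A+r^{A}\).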
**Overhead Analysis.** We demonstrate that our implementation of a _blind hash_ results in minimal increases in various areas, including plaintext expansion, server-side computational overhead, ciphertext expansion, and client computation. (i) Plaintext expansion. When applied to a matrix with dimensions of \(m\times n\), the resulting _blind hash_ will have dimensions of \(1\times n\). The plaintext expansion rate of the _blind hash_ can be calculated as \(\frac{1}{m}\), indicating a relatively small increase in the size of the original matrix. (ii) FHE computation. The implementation of a blind hash does not require any modifications to the server. A _blind hash_ has a minimal impact on computational resources, particularly in the case of large matrix multiplications. The computational complexity of multiplying two matrices, \(A\) and \(B\), with dimensions of \(m\times n\) and \(n\times k\) respectively, is expressed as \(\mathcal{O}(mnk)\). A _blind hash_ increases the computational number from \(mnk\) to \((m+1)nk\), yet the overall computational complexity remains unchanged at \(\mathcal{O}(mnk)\). This demonstrates that the FHE computation overhead of a _blind hash_ is negligible for sufficiently-sized matrices. (iii) Ciphertext expansion. The ciphertext expansion rate, which determines the communicational overhead between the client and the server, is equal to or less than \(1+\frac{1}{m}\) due to the ability to pack multiple values into a single ciphertext. (iv) Client computation. The client's computational overhead is comprised of two components: the generation of the hash vector and checksum vector, and the verification of the proof. The generation of the hash vector can be performed in advance, while the generation of the checksum vector involves a vector-matrix multiplication Fig. 3: Blind Hash with Error enhances the security of Blind Hash with a computational complexity of \(\mathcal{O}(mn)\), where the vector and matrix have dimensions of \(1\times m\) and \(m\times n\), respectively. The overhead ratio of this process to the original computation is \(\mathcal{O}(\frac{1}{k})=\frac{\mathcal{O}(mn)}{\mathcal{O}(mnk)}\). The verification process of the proof involves a vector-matrix multiplication with a computational complexity of \(\mathcal{O}(mk)\), i.e., \(\frac{1}{n}\) of matrix multiplication, and a comparison of two vectors with dimensions of \(1\times k\), which has a complexity of \(\mathcal{O}(k)\), i.e., \(\frac{1}{mn}\) of matrix multiplication. We propose the use of power-of-2 hash values and inexpensive shift operations to reduce the costly \(\mathcal{O}(mn)\) multiplication with inexpensive shift operations, leading to a significant decrease in the client's computational overhead. As convolution operations in deep neural networks can be represented mathematically as matrix multiplications [41], we only analyze the complexity of representative matrix multiplications. ## VI Result To examine the potential runtime overhead of vFHE, we have implemented vFHE of code based on SEAL [14] to insert an FHE integrity verification. Previous research has demonstrated that the FHE-in-TEE approach yields exceptional efficiency for variable FHE, as reported in [54]. Accordingly, our baseline is established by incorporating SEAL-based BFV and AMD-SEV TEE. Figure 4(a) shows execution time overheads as a function of the matrix size, over unprotected execution (FHE-only), with various arithmetic FHE schemes, including BFV, BGV, and CKKS. 
As shown in the figure, the overhead of our baseline FHE-in-TEE over the FHE-only method executing matrix multiplication is far more than \(1000\%\). With _blind hash_ support, the overhead decreases to about 4% for diagonal matrix multiplication with size \(n=64\). The runtime overhead of the _blind hash_ is reduced as the matrix size \(n\) increases, with a worst-case overhead of less than 5\(\times\) when \(n=1\); when the matrix size is large enough, its overhead is near zero. We use Table (b) in Figure 4 to show execution time comparisons of our _blind hash_ method in vFHE with the BGV scheme against our baseline and the FHE-only method on a diagonal matrix-matrix multiplication with size \(n=64\). Our baseline FHE-in-TEE offers verifiable FHE without adding any additional runtime overhead on the client side for encryption (Enc.), decryption (Dec.), and verification (Verf.). However, it does substantially increase overhead on the server side, by approximately 21.6 times, compared to the FHE-only method. In contrast, the FHE computation equipped with our blind hash in vFHE increases the runtime overhead by \(2.1\%\) and \(2.2\%\) on the client and server sides, respectively, over the FHE-only method. The preliminary results demonstrate that, with the proposed techniques, vFHE with _blind hash_ can be applied universally to various FHE schemes (BFV, BGV, CKKS) and can achieve efficiency, indicating the promise of vFHE as a new paradigm for verifiable FHE computation. ## VII Conclusion In this research paper, we present the blind hash, an innovative approach designed to ensure data integrity in FHE computations. Our proposed method offers several key advantages, including scalability, low computational overhead, and compatibility with a diverse range of popular FHE schemes currently available. By addressing data integrity concerns, the blind hash technique contributes to the growing body of research on secure FHE computations and has the potential to enhance the practicality of FHE-based solutions across numerous applications.

Fig. 4: vFHE runtime overhead.
2310.16558
A multiplicity formula for the Milnor number of smoothable curves
We derive a multiplicity formula for the Milnor number of a reduced smoothable curve singularity generalizing a well-known formula due to L\^e, Greuel and Teissier for complete intersection curves. We obtain a multiplicity characterization of Whitney equisingularity for families of locally smoothable curves.
Andrei Benguş-Lasnier, Terence Gaffney, Antoni Rangachev
2023-10-25T11:23:32Z
http://arxiv.org/abs/2310.16558v2
# A multiplicity formula for the Milnor number of smoothable curves ###### Abstract. We derive a multiplicity formula for the Milnor number of a reduced smoothable curve singularity generalizing a well-known formula due to Le, Greuel and Teissier for complete intersection curves. We obtain a multiplicity characterization of Whitney equisingularity for families of locally smoothable curves. Key words and phrases: Milnor number, Euler characteristic, delta invariant, conormal space, polar varieties, Hilbert-Samuel and Buchsbaum-Rim multiplicities, Whitney equisingularity 2010 Mathematics Subject Classification: 32S15, 32S30, 32S60, 14C17, 13H15 ###### Contents * 1 Introduction * 2 Conormal spaces, polar varieties, and multiplicities * 2.1 Conormal spaces * 2.2 Polar varieties * 2.3 Multiplicities * 3 A flatness result * 4 Proofs * 5 Examples * 6 Appendix ## 1. Introduction Let \((X_{0},0)\subset(\mathbb{C}^{n},0)\) be a reduced curve singularity. In [1] Buchweitz and Greuel define the Milnor number of \((X_{0},0)\) as follows. Let \(n\colon(\overline{X_{0},0})\to(X_{0},0)\) be the normalization morphism. Denote by \(\omega_{X_{0},0}\) the dualizing module of Grothendieck. Denote by \(d\) the composition of \(\Omega^{1}_{X_{0},0}\to n_{*}\Omega^{1}_{\overline{X_{0},0}}\cong n_{*} \omega_{\overline{X_{0},0}}\to\omega_{X_{0},0}\) and the exterior derivation \(\mathcal{O}_{X_{0},0}\to\Omega^{1}_{X_{0},0}\). Then the Milnor number \(\mu\) of \((X_{0},0)\) is defined as \[\mu=\mu(X_{0},0):=\dim_{\mathbb{C}}(\omega_{X_{0},0}/d\mathcal{O}_{X_{0},0}).\] In [1, Prp. 1.2.1] Buchweitz and Greuel showed that \(\mu=2\delta+r-1\), where \(\delta:=\dim_{\mathbb{C}}(\overline{\mathcal{O}_{X_{0},0}}/\mathcal{O}_{X_{0 },0})\) and \(r\) is the number of branches of \((X_{0},0)\). This is a formula discovered for plane curves by Milnor. When \((X_{0},0)\) is smoothable, Bassein [1] proved that \(\mu\) equals the first Betti number of the smoothing; in particular, for complete intersection curves \(\mu\) coincides with the usual Milnor number. In [1, VIII] it is shown that \(\omega_{X_{0},0}\) is canonically isomorphic to the module of Rosenlicht's regular differential forms \(\omega_{X_{0},0}^{R}\) on \((X_{0},0)\). When the parametrization (normalization) of \((X_{0},0)\) is known, using residues, one can compute \(\mu\) in practice. If the parametrization of \((X_{0},0)\) is not known, one would like to have an efficient way of computing \(\mu\) (or \(\delta\)) just from the defining equations of \((X_{0},0)\) in \((\mathbb{C}^{n},0)\). Denote by \(\operatorname{Jac}(X_{0})\) the Jacobian ideal of \((X_{0},0)\) in \(\mathcal{O}_{X_{0},0}\), which is the first Fitting ideal of \(\Omega^{1}_{X_{0},0}\). Denote by \(e(\operatorname{Jac}(X_{0}))\) the _Hilbert-Samuel (HS) multiplicity_ of \(\operatorname{Jac}(X_{0})\) and denote by \(m\) the multiplicity of \((X_{0},0)\). Because \((X_{0},0)\) is Cohen-Macaulay these multiplicities are easy to compute. Each one of them is the colength of a principal ideal in \(\mathcal{O}_{X_{0},0}\). The HS multiplicity of \(\operatorname{Jac}(X_{0})\) is the colength of a general \((n-1)\times(n-1)\) minor of the Jacobian matrix of \(X_{0}\), and the multiplicity of \((X_{0},0)\) is the colength of the ideal generated by a general linear combination of the generators of the maximal ideal of \(\mathcal{O}_{X_{0},0}\).
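For a standard illustration of these invariants (a textbook example, included here only for concreteness), consider the cusp \((X_{0},0)=\{x^{3}-y^{2}=0\}\subset(\mathbb{C}^{2},0)\), whose normalization is \(t\mapsto(t^{2},t^{3})\). Here \(\overline{\mathcal{O}_{X_{0},0}}/\mathcal{O}_{X_{0},0}\) is spanned by the class of \(t\), so \(\delta=1\), \(r=1\) and \(\mu=2\delta+r-1=2\). A general linear form \(ax+by\) pulls back to an element of order \(2\) in \(t\), so \(m=2\), while a general combination of the \(1\times 1\) minors \(3x^{2}\) and \(2y\) of the Jacobian matrix pulls back to an element of order \(3\), so \(e(\operatorname{Jac}(X_{0}))=3\); these values are consistent with the formula \(\mu=e(\operatorname{Jac}(X_{0}))-m+1\) recalled next.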
Teissier [17, Proposition II.1.2] for plane curves and more generally, Le [14] and Greuel [13] for complete intersection curves, derived the following formula \[\mu=e(\operatorname{Jac}(X_{0}))-m+1.\] We say that \((X_{0},0)\) is _smoothable_ if there exists a flat deformation \(h\colon(X,0)\to(Y,0)\) with \(Y\) smooth of dimension one such that \((h^{-1}(0),0)=(X_{0},0)\) and the fiber \(h^{-1}(y):=X_{y}\) is smooth for \(y\neq 0\) close enough to \(0\). The main result of this paper generalizes the formula above to the case of smoothable curves by introducing a correction term that loosely speaking measures how far \((X_{0},0)\) is from a complete intersection. Let \((Z_{0},0)\subset(\mathbb{C}^{n},0)\) be a generic complete intersection curve which contains \((X_{0},0)\) as a subvariety. Set \(W_{0}:=\overline{Z_{0}\setminus X_{0}}\). Denote by \(I_{0}(X_{0},W_{0})\) the intersection number of \(X_{0}\) and \(W_{0}\) at \(0\) (see [10, pg. 179]). It is equal to \(\dim_{\mathbb{C}}\mathcal{O}_{\mathbb{C}^{n},0}/(I_{X_{0}}+I_{W_{0}})\), where \(I_{X_{0}}\) and \(I_{W_{0}}\) are the ideals of \(X_{0}\) and \(W_{0}\) in \(\mathcal{O}_{\mathbb{C}^{n},0}\), respectively. **Theorem 1.1**.: _Suppose \((X_{0},0)\subset(\mathbb{C}^{n},0)\) is a reduced smoothable curve. Then_ \[\mu=e(\operatorname{Jac}(X_{0}))-I_{0}(X_{0},W_{0})-m+1. \tag{1}\] When \((X_{0},0)\) is a complete intersection, then \(I_{W}=(1)\) and so \(I_{0}(X_{0},W_{0})\) vanishes. Thus (1) recovers the Le-Greuel-Teissier formula. Let \((X,0)\to(Y,0)\) be a one-parameter embedded flat deformation of \((X_{0},0)\subset(\mathbb{C}^{n},0)\) with \(X_{y}\) smooth for small enough \(y\neq 0\). Denote by \(Z\) the induced deformation on \(Z_{0}\) and define \(W\) accordingly. Key to the proof of Theorem 1.1 is interpreting \(e(\operatorname{Jac}(X_{0}))\) as the Buchsbaum-Rim multiplicity of a module whose presentation matrix corresponds to the Jacobian matrix of \(Z_{0}\). Then Gaffney's Multiplicity-Polar Theorem implies that \(e(\operatorname{Jac}(X_{0}))\) equals to the sum of the covering degree over \((Y,0)\) of the relative polar curve of \((X,0)\) and \(I(X_{y},W_{y})\), which by a flatness argument is equal to \(I_{0}(X_{0},W_{0})\). By Morse theory, the covering degree of the polar curve is equal to \(\chi(X_{y}\cap H)-\chi(X_{y})\), where \(H\) is general hyperplane in \(\mathbb{C}^{n}\), which is in turn equal to \(\mu+m-1\) by the result of Bassein. Combining these equalities we obtain (1). **Whitney equisingularity.** Let \((X,0)\subset(\mathbb{C}^{n+k},0)\) be a reduced complex analytic space of pure dimension \(k+1\), and let \((Y,0)\) be a linear subspace of \((X,0)\) of dimension \(k\). Choose an embedding of \((X,0)\) in \(\mathbb{C}^{n+k}=\mathbb{C}^{n}\times\mathbb{C}^{k}\), so that \((Y,0)\) is represented by \(0\times U\), where \(U\) is an open neighborhood of \(0\) in \(\mathbb{C}^{k}\). Let \(\operatorname{pr}:\mathbb{C}^{n}\times\mathbb{C}^{k}\to\mathbb{C}^{k}\) be the projection on the second factor. View \(X\) as the total space of the family \(\operatorname{pr}_{|X}:(X,0)\to(Y,0)\). For each \(y\in Y\) set \(X_{y}:=X\cap\operatorname{pr}^{-1}(y)\). We say \(H\) is a tangent hyperplane at \(x\in X-Y\) if \(H\) is a hyperplane in \(\mathbb{C}^{n+k}\) that contains the tangent space \(T_{x}X\). Let \((x_{i})\) be a sequence of points from \(X-Y\) and \((y_{i})\) be a sequence of points from \(Y\) both converging to \(0\). 
Suppose that the sequence of secants \((\overline{x_{i}y_{i}})\) has limit \(l\) and the sequence of tangent hyperplanes \(\{T_{x_{i}}X\}\) has limit \(T\). We say that \((X,0)\to(Y,0)\) is _Whitney equisingular_, or that the pair of strata \((X-Y,Y)\) satisfies the Whitney conditions at \(0\) if \(l\subset T\). By a result of Le and Teissier [11]\((X,0)\to(Y,0)\) is Whitney equisingular if and only if \(X_{y}\) is equimultiple along \(Y\) and there exists a homeomorphism \(g\colon(\mathbb{C}^{n+k},0)\to(\mathbb{C}^{n+k},0)\) whose restriction to \(X\) induces the homeomorphism \((X,0)\cong(X_{0}\times Y,0)\). For families of reduced curves it is shown in [1, Thm. III.3] that \((X,0)\to(Y,0)\) is Whitney equisingular if and only if the Milnor number \(\mu(X_{y})\) and the multiplicity \(m(X_{y})\) at \(0\) are constant along \(Y\). We say that \(X_{y}\) is _locally smoothable_ if the germ of \(X_{y}\) at each of its singular points is smoothable. Denote by \(e(\operatorname{Jac}(X_{y},0))\) the HS-multiplicity of the Jacobian ideal of the germ \((X_{y},0)\). As a direct consequence of Theorem 1.1 and [1, Thm. III.3] we have the following corrolary. **Corollary 1.2**.: Assume that the fibers \(X_{y}\) are reduced curves with locally smoothable singularities. The following holds: 1. Assume \(e(\operatorname{Jac}(X_{y},0))-I_{0}(X_{y},W_{y})\) is independent of \(y\) near \(0\). Then the union of the singular points of \(X_{y}\) is \(Y\) and the pair \((X-Y,Y)\) satisfies the Whitney conditions at \(0\). 2. Suppose \((X-Y,Y)\) satisfies the Whitney conditions at \(0\). Then \(e(\operatorname{Jac}(X_{y},0))-I_{0}(X_{y},W_{y})\) is independent of \(y\) near \(0\). **Aknowledgements.** The authors would like to thank Steven Kleiman and Bernard Teissier for helpful and stimulating discussions. The first author was supported by the Bulgarian Ministry of Education and Science Scientific Programme "Enhancing the Research Capacity in Mathematical Sciences (PIKOM)", No. DO1-67/05.05.2022. The third author was supported by a "Petar Beron i NIE" fellowship [KP-06-D15-1] from the Bulgarian Science Fund and was partially funded by a Marie Skladowska-Curie fellowship GTSP-101111114 from the European Commission. ## 2. Conormal spaces, polar varieties, and multiplicities ### Conormal spaces Let \(h\colon(X,0)\to(Y,0)\) be a flat embedded deformation of a reduced curve \((X_{0},0)\subset\mathbb{C}^{n}\) with \((Y,0)\) smooth of dimension one. First we describe the _relative conormal variety_\(C_{\operatorname{rel}}(X)\) of \(X\) in \(\mathbb{C}^{n+1}\) using the relative Jacobian module of \(X\). Suppose \(X\) is reduced. Let \(X\) be defined by the vanishing of some analytic functions \(f_{1},\ldots,f_{p}\) on a Euclidean neighborhood of \(0\) in \(\mathbb{C}^{n+1}\). Consider the following conormal exact sequence \[I/I^{2}\begin{CD}@>{\delta}>{}>\Omega^{1}_{\mathbb{C}^{n}\times Y/Y}|X \longrightarrow\Omega^{1}_{X/Y}\longrightarrow 0\end{CD} \tag{2}\] where \(I\) is the ideal of \(X\) in \(\mathcal{O}_{\mathbb{C}^{n}\times Y,0}\) and the map \(\delta\) sends a function \(f\) vanishing on \(X\) to its differential \(df\). Dualizing we obtain the following nested sequence of torsion-free sheaves: \[\operatorname{Image}(\delta^{*})\subset(\operatorname{Image}\,\delta)^{*} \subset(I/I^{2})^{*}.\] Observe that locally the sheaf \(\operatorname{Image}(\delta^{*})\) can be viewed as the column space of the _relative Jacobian matrix_ of \(X\). 
If \((x_{1},\ldots,x_{n},t)\) are coordinates on \((\mathbb{C}^{n+1},0)\), where \(t\) is the \(Y\)-coordinate, then the relative Jacobian matrix is simply the \(p\) by \(n\) matrix \((\partial f_{i}(\mathbf{x},t)/\partial x_{j})\). Denote \(\operatorname{Image}(\delta^{*})\) by \(J_{\operatorname{rel}}(X)\) and call it _the relative Jacobian module_ of \(X\). It's a module contained in the free module \(\mathcal{O}^{p}_{X,0}\). Define the _Rees algebra_\(\mathcal{R}(J_{\operatorname{rel}}(X))\) to be the subalgebra of \(\operatorname{Sym}(\mathcal{O}^{p}_{X,0})\) generated by the generators of \(J_{\operatorname{rel}}(X)\). Define the _relative conormal space_\(C_{\operatorname{rel}}(X)\) of \(X\) to be the closure in \(X\times\mathbb{P}^{n-1}\) of the set of pairs \((x,H)\) where \(x\) is a smooth point of \(X_{h^{-1}(h(x))}\) and \(H\) is a (tangent) hyperplane in \(\mathbb{C}^{n}\) at \(x\) containing \(T_{x}X_{h^{-1}(h(x))}\). We have \[C_{\operatorname{rel}}(X)=\operatorname{Projan}(\mathcal{R}(J_{\operatorname{ rel}}(X))).\] Indeed, observe that both sides are equal over the smooth part of the fibers \((X_{y})_{\rm sm}\) to the set of pairs \((x,H)\) where \(H\) is a tangent hyperplane in \(\mathbb{C}^{n}\) to the simple point \(x\in(X_{y})_{\rm sm}\). The left side is the closure of this set, and so is the right side simply because the Rees algebra is by construction a subalgebebra of the symmetric algebra \({\rm Sym}(\mathcal{O}^{p}_{X,0})\). ### Polar varieties Consider the following diagram The dimension of the generic fiber of \(\pi\) is \(n-2\), so \(\dim C_{\rm rel}(X)\) is \(n-1\). By Kleiman's transversality theorem [10] (take the transitive action \(PGL(n,\mathbb{C})\) on \(\check{\mathbb{P}}^{n-1}\) considered as the set of hyperplanes of \(\mathbb{C}^{n}\)), for a generic plane \(H\subset\check{\mathbb{P}}^{n-1}\) of codimension \(n-2\), the pullback \(\lambda^{-1}(H)\) is going to be of dimension at most \(\dim C_{\rm rel}(X)-\operatorname{codim}(H)=n-1-(n-2)=1\). The projection to \(X\) of this curve \(\Gamma^{1}_{\rm rel}(X):=\pi(\lambda^{-1}(H))\) is called the relative polar variety of dimension one, or simply the _relative polar curve_ of \(h\colon(X,0)\to(Y,0)\). Again by Thm. 2 (ii) and Rmk. 7 in [10], choosing \(H\) generic enough guarantees that \(\Gamma^{1}_{\rm rel}(X)\) is reduced because by assumption \(X\) is reduced. An equivalent way to obtain \(\Gamma^{1}_{\rm rel}(X)\) is to consider a linear form \(F\) on \(\mathbb{C}^{n}\) that defines a hyperplane that is not a limiting tangent hyperplane to \(X_{0}\) at \(0\). Then \(\Gamma^{1}_{\rm rel}(X)\) is the closure of the set of critical points of \(F\) restricted to the smooth part of the fibers of \(h\) ([11, Prp. 3.4]). Suppose \(h\colon(X,0)\to(Y,0)\) is an embedded deformation of \((X_{0},0)\subset(\mathbb{C}^{n},0)\), which is a smoothing of \((X_{0},0)\). Let \(F\colon(\mathbb{C}^{n},0)\to(\mathbb{C},0)\) be a general linear functional. Denote by \(\deg_{Y}\Gamma^{1}_{\rm rel}(X)\) the number of points the fiber \(\Gamma^{1}_{\rm rel}(X)_{y}\) for \(y\neq 0\) generic. Denote by \(m\) the multiplicity of \((X_{0},0)\). **Proposition 2.1**.: _We have the following Euler characteristic-degree relation._ \[\deg_{Y}\Gamma^{1}_{\rm rel}(X)=\chi(X_{y}\cap F^{-1}(0))-\chi(X_{y})=\mu+m-1. \tag{3}\] Proof.: Assume \((X,0)\subset(\mathbb{C}^{n+1},0)\). Let \(X_{0}\) be embedded in a small open ball \(B_{0}:=B(0,\epsilon)\subset(\mathbb{C}^{n},0)\). Replace \(Y\) with a small open disk centered at \(0\). 
We may assume that \(X\) is a closed analytic subset of \(B_{0}\times Y\) and we can assume that \(X\) is smooth outside the origin. Let \((x_{1},\dots,x_{n},t)\) be coordinates for \((\mathbb{C}^{n+1},0)\). Let \(F\colon(\mathbb{C}^{n+1},0)\to(\mathbb{C},0)\) be the linear functional \((x_{1},\dots,x_{n},t)\to(\sum_{i=1}^{n}\alpha_{i}x_{i})\) where \(\alpha_{i}\) are complex numbers subject to finitely many genericity conditions specified below. Denote by \(\Sigma_{X}(F)\) and by \(\Sigma_{X_{y}}(F)\) the critical loci of \(F\) on \(X\) and \(X_{y}\), respectively. Note that by ([11, Prp. 3.4]) \(\Gamma^{1}_{\rm rel}(X)\) is the closure of \(\Sigma_{X_{y}}(F)\) for \(y\neq 0\) close enough to \(0\). Thus \(\deg_{Y}\Gamma^{1}_{\rm rel}(X)=\#(\Sigma_{X_{y}}(F))\). Because \(\Gamma^{1}_{\rm rel}(X)\) meets \(\partial B_{0}\times Y\) at finitely many points, then for \(y\) close enough to \(0\), the functional \(F_{|X_{y}}\) does not have critical points on \(S^{2n-1}_{e}\). Note that in case \(Y=0\times\mathbb{C}\subset X\), then by choosing the \(\alpha_{i}\)s generic enough we can guarantee that \(\Sigma_{X}(F)\cap(Y,0)=0\) and thus \(0\in\mathbb{C}^{n}\) is not \(\Sigma_{X_{y}}(F)\) for small enough \(y\neq 0\). By [12, Thm. A.5] (cf. [1, Thm. 1.4]) \[\#(\Sigma_{X_{y}}(F))=\chi(F^{-1}(0)\cap X_{y})-\chi(X_{y}).\] By [14] (cf. [1, Cor. 4.2.3]) \(\mu(X_{0},0)=1-\chi(X_{y})\). Thus it remains to show that \(\chi(F^{-1}(0)\cap X_{y})=m\). Let \(\mathfrak{m}_{0}\) be the maximal ideal of \(\mathcal{O}_{X_{0},0}\). Set \(f:=\sum_{i=1}^{n}\alpha_{i}x_{i}\). For each \(y\in Y\) denote by \(f_{y}\) the image of \(f\) in \(\mathcal{O}_{X_{y}}\). Because \((X_{0},0)\) is Cohen-Macaulay, for generic \(\alpha_{i}\) we have \(m=\dim_{\mathbb{C}}\mathcal{O}_{X_{0},0}/f_{0}\). Because \(h\) is flat and \(X_{0}\) is reduced, then \((t,f)\) is a regular sequence in \(\mathcal{O}_{X,0}\), and so is \((f,t)\), because \(\mathcal{O}_{X,0}\) is local. Thus \(\dim_{\mathbb{C}}\mathcal{O}_{X_{0},0}/f_{0}=\dim_{\mathbb{C}}\mathcal{O}_{X_{ y}}/f_{y}\). But \(\dim_{\mathbb{C}}\mathcal{O}_{X_{y}}/f_{y}=\#(F^{-1}(0)\cap X_{y})=\chi(F^{-1}( 0)\cap X_{y})\). Thus \(\chi(F^{-1}(0)\cap X_{y})=m\). ### Multiplicities Let \((V,0)\) be a reduced complex analytic variety. Let \(M\) be an \(\mathcal{O}_{V}\)-module such that its rank at the generic point of each irreducible component of \(V\) is \(e\). Assume \(M\) is contained in a free module \(\mathcal{O}_{V}^{p}\) for some \(p\). Denote by \([M]\in\operatorname{Mat}(p\times c,\mathbb{C})\) the presentation matrix of \(M\). Let \(A\in\operatorname{Mat}(e\times p,\mathbb{C})\) be a generic matrix such that \(A[M]\) has rank \(e\) at the generic point of each irreducible component of \((V,0)\). Let \(M_{e}\) be the \(\mathcal{O}_{V}\)-module generated by the columns of \(A[M]\). Then \(M_{e}\) is contained in a free module \(F:=\mathcal{O}_{V}^{e}\). Denote by \(\mathcal{R}(M)\) and \(\mathcal{R}(M)\) the Rees algebras of \(M\) and \(M_{e}\), respectively. **Proposition 2.2**.: _We have_ \[\operatorname{Projan}(\mathcal{R}(M))=\operatorname{Projan}(\mathcal{R}(M_{e})).\] Proof.: By construction the row spaces of the presentation matrices \(M\) and of \([M_{e}]\) agree on the Zariski open dense subset of \(V\) where the rank of these matrices is \(e\). Over this Zariski open set the spaces \(\operatorname{Projan}(\mathcal{R}(M))\) and \(\operatorname{Projan}(\mathcal{R}(M_{e}))\) are just the projectivization of these row spaces, hence they are equal. 
Since the spaces agree on a Zariski open dense set they agree everywhere. Suppose now that \((V,0)\) is a reduced curve. Then \(\operatorname{Supp}_{V}(F/M_{e})=\{0\}\). The inclusion \(M_{e}\subset F\) induces an inclusion of the Rees algebra \(\mathcal{R}(M_{e})\) in the symmetric algebra \(\operatorname{Sym}(F)\). Denote by \(M_{e}^{l}\) and \(F^{l}\) the \(l\)th graded components of \(\mathcal{R}(M_{e})\) and \(\operatorname{Sym}(F)\), respectively. Then by results of [1] for \(l\) large enough \[\dim_{\mathbb{C}}F^{l}/M_{e}^{l}=e(M_{e},F)l^{e}/e!+O(l^{e-1}).\] The coefficient \(e(M_{e},F)\) is called the _Buchsbaum-Rim (BR) multiplicity_ of the pair \((M_{e},F)\). The BR-multiplicity generalizes the HS-multiplicity of ideals to modules. A submodule \(M_{e}^{\prime}\) of \(M_{e}\) is called a _reduction_ of \(M_{e}\) if \(\mathcal{R}(M_{e})\) is an integral extension of \(\mathcal{R}(M_{e}^{\prime})\). By [1, Cor. 16.4.7] there exists a reduction of \(M_{e}\) generated by \(e\) generic linear combinations of the columns of \([M_{e}]\); in other words, for a generic matrix \(B\in\operatorname{Mat}(c\times e,\mathbb{C})\) the module generated by the columns of \([M_{e}]B\) is a reduction of \(M_{e}\). By [1, Cor. 16.5.7], \(e(M_{e},F)=e(M_{e}^{\prime},F)\). Because \((V,0)\) is Cohen-Macaulay, by [1, 4.5, p. 223] we have \(e(M_{e}^{\prime},F)=\dim_{\mathbb{C}}(\mathcal{O}_{V,0}/\mathrm{Fitt}_{0}(F/M_{e}^{\prime}))\), where \(\mathrm{Fitt}_{0}(F/M_{e}^{\prime})\) is generated by the determinant of the \(e\times e\) matrix \([M_{e}^{\prime}]\). The Fitting ideal \(\mathrm{Fitt}_{p-e}(\mathcal{O}_{V}^{p}/M)\) is generated by the \(e\times e\) minors of \([M]\). It's an ideal in \(\mathcal{O}_{V,0}\) primary to the maximal ideal; thus it has a well-defined HS-multiplicity \(e(\mathrm{Fitt}_{p-e}(\mathcal{O}_{V}^{p}/M))\). **Proposition 2.3**. _For a generic choice of \(A\) we have \(e(M_{e},F)=e(\mathrm{Fitt}_{p-e}(\mathcal{O}_{V}^{p}/M))\)._ Proof.: Recall that \([M]\in\operatorname{Mat}(p\times c,\mathbb{C})\). By \(K\) and \(L\) we will denote \(e\)-element subsets of \(\{1,\dots,p\}\) and \(\{1,\dots,c\}\), respectively. Denote by \([M]_{K,L}\) the \(e\times e\) minor of \([M]\) with \(K\) rows and \(L\) columns. Denote by \(A_{K}\) the \(e\times e\) minor of \(A\) with \(K\) columns, and denote by \(B_{L}\) the \(e\times e\) minor of \(B\) with \(L\) rows. By construction \([M_{e}^{\prime}]=A[M]B\). Applying the Cauchy-Binet formula we obtain \[\det[M_{e}^{\prime}]=\sum_{K,L}A_{K}[M]_{K,L}B_{L}.\] Note that the \([M]_{K,L}\) are the \(e\times e\) minors of \([M]\). Set \(I:=\mathrm{Fitt}_{p-e}(\mathcal{O}_{V}^{p}/M)\). The exceptional divisor \(D\) of \(\mathrm{Bl}_{I}V\) is set-theoretically a finite set of points in \(\mathbb{P}^{\ell-1}\), where \(\ell\) is the number of the minors \([M]_{K,L}\). Let \([\alpha_{K,L}]\in\mathbb{P}^{\ell-1}\) be such a point. In order for a general linear combination \(\sum_{K,L}\beta_{K,L}[M]_{K,L}\) to be a reduction, one needs to have \(\alpha\cdot\beta:=\sum_{K,L}\alpha_{K,L}\beta_{K,L}\neq 0\) for all \([\alpha_{K,L}]\in D\). The linear combinations that come from matrices \(A\) and \(B\) are of the form \(\beta_{K,L}=A_{K}\cdot B_{L}\), so in fact we have the following generic condition on \(A\) and \(B\): \[\sum_{K,L}\alpha_{K,L}A_{K}B_{L}\neq 0. \tag{4}\] The sum is a polynomial expression \(P(A,B)\) in the entries of \(A\) and \(B\), so it's enough to show that \(P(A,B)\not\equiv 0\), or that there exists a pair of matrices \((A_{0},B_{0})\) such that \(P(A_{0},B_{0})\neq 0\).
Without loss of generality suppose that \(\alpha_{K_{0},L_{0}}\neq 0\) for \(K_{0}=L_{0}=\{1,\ldots,e\}\). Set \(A_{0}=(\delta_{i,j})_{1\leq i\leq e,1\leq j\leq p}\) and \(B_{0}=(\delta_{s,t})_{1\leq s\leq g,1\leq t\leq e}\), where \(\delta_{p,q}\) is the Kronecker delta symbol. A direct computation of the minors gives \[(A_{0})_{K}=\left\{\begin{array}{ll}1&\mbox{if $K=K_{0}$}\\ 0&\mbox{otherwise}\end{array}\right.\quad\mbox{and}\quad(B_{0})_{L}=\left\{ \begin{array}{ll}1&\mbox{if $L=L_{0}$}\\ 0&\mbox{otherwise}.\end{array}\right.\] Thus \(P(A_{0},B_{0})=\alpha_{K_{0},L_{0}}\neq 0\) and so (4) holds for generic \(A\) and \(B\). Intersecting finitely many open sets in \(\operatorname{Mat}(e\times p,\mathbb{C})\) and \(\operatorname{Mat}(c\times e,\mathbb{C})\) we get that (4) holds for each \([\alpha_{K,L}]\in D\) and for generic \(A\) and \(B\). Therefore, for such \(A\) and \(B\) the ideal generated by \(\det[M^{\prime}_{e}]\) is a reduction of \(\operatorname{Fitt}_{p-e}(\mathcal{O}^{p}_{V}/M)\). By [1, Prop. 11.10]\(e(\operatorname{Fitt}_{p-e}(\mathcal{O}^{p}_{V}/M))=\dim_{\mathbb{C}}\mathcal{ O}_{V,0}/(\det[M^{\prime}_{e}])\). But as observed above \[e(M_{e},F)=\dim_{\mathbb{C}}(\mathcal{O}_{V,0}/\operatorname{Fitt}_{0}(F/M^{ \prime}_{e}))=\dim_{\mathbb{C}}\mathcal{O}_{V,0}/(\det[M^{\prime}_{e}]).\] Therefore, \(e(M_{e},F)=e(\operatorname{Fitt}_{p-e}(\mathcal{O}^{p}_{V}/M))\). To get the equality in Proposition 2.3 selecting \(A\) so that Proposition 2.2 holds is not enough as indicated in the \((3,4,5)\) example in Section 5. Finally, we remark that \(e(M_{e},F)\) was considered in a more general setting in [13, Sect. 7]. To define \(e(M_{e},F)\) from the point of view of Kleiman and Thorup, one intersects \(\operatorname{Projan}(\operatorname{Sym}(\mathcal{O}^{p}_{V}))=X\times\mathbb{ P}^{p-1}\) with \(p-e\) general hyperplanes from \(\mathbb{P}^{p-1}\), blowing up this intersection with the image of the ideal generated by \(M\) in \(\operatorname{Sym}(\mathcal{O}^{p}_{V})\), and then deriving \(e(M_{e},F)\) as a sum of certain intersection numbers of the exceptional divisor. ## 3. A flatness result Let \((X_{0},0)\subset(\mathbb{C}^{n},0)\) be a reduced complex analytic curve which is not a complete intersection. Let \((Z_{0},0)\subset(\mathbb{C}^{n},0)\) be a complete intersection which contains \((X_{0},0)\) as a subvariety. Let \((X,0)\to(Y,0)\) be a flat embedded deformation of \((X_{0},0)\) with \((Y,0)\) smooth of dimension one. Denote by \(Z\) the induced deformation on \((Z_{0},0)\) from \((X,0)\to(Y,0)\). Set \(W:=\overline{Z\setminus X}\) and \(S:=X\times_{\mathbb{C}^{n}\times Y}W\). Define \(I(X_{y},W_{y}):=\sum_{s_{y}\in S_{y}}I_{s_{y}}(X_{y},W_{y})\). In this section we prove that the correcting term \(I(X_{y},W_{y})\) is constant along a flat family \((X,0)\to(Y,0)\). In fact, we will show in Section 4 that for a generic choice of \(Z_{0}\) the irreducible components of \(S\) that meet the singular locus of \(X\) only at \(0\) are reduced. In particular, if \((X,0)\to(Y,0)\) is a smoothing of \((X_{0},0)\) and \(Z_{0}\) is generic, then \(S\) is reduced. We begin with an algebraic result. **Lemma 3.1**.: _Assume \((R,\mathfrak{m})\) is a local Noetherian ring with two ideals \(I\) and \(J\) such that \(\operatorname{depth}(R/I\cap J)\geqslant 2\), \(\operatorname{depth}(R/I)\geqslant 1\) and \(\operatorname{depth}(R/J)\geqslant 1\). 
Then \(\mathfrak{m}\notin\operatorname{Ass}_{R}(R/(I+J))\)._ Proof.: Phrased in terms of local cohomology, the conclusion of our statement is equivalent to \(H^{0}_{\mathfrak{m}}(R/(I+J))=0\). Consider the short exact sequence \[0\to R/(I\cap J)\to R/I\oplus R/J\to R/(I+J)\to 0.\] Extract from the long exact sequence of local cohomology the following exact sequence \[H^{0}_{\mathfrak{m}}(R/I\oplus R/J)\to H^{0}_{\mathfrak{m}}(R/(I+J))\to H^{1}_{\mathfrak{m}}(R/(I\cap J)).\] By the vanishing theorem of local cohomology (see for instance [1, Cor. 6.2.8]), we have \[H^{0}_{\mathfrak{m}}(R/I\oplus R/J)=0\quad\text{and}\quad H^{1}_{\mathfrak{m}}(R/(I\cap J))=0.\] Therefore, \(H^{0}_{\mathfrak{m}}(R/(I+J))=0\). **Proposition 3.2**. _In the setup above, \(S\) is flat over \(Y\). In particular, \(I(X_{y},W_{y})\) is constant for all \(y\) close to \(0\)._ Proof.: We apply Lemma 3.1 with \(R=\mathcal{O}_{\mathbb{C}^{n}\times Y,0}\) and with \(I:=I_{X}\), \(J:=I_{W}\) and \(\mathfrak{m}\) the ideals of \(X\), \(W\) and \(0\) in \(\mathcal{O}_{\mathbb{C}^{n}\times Y,0}\), respectively. Note that \(I_{X}\cap I_{W}=I_{Z}\) and the ideal of \(S\) in \(\mathcal{O}_{\mathbb{C}^{n}\times Y,0}\) is \(I_{X}+I_{W}\). Let \(t\) be the uniformizing parameter of \(\mathcal{O}_{Y,0}\). Observe that the depth hypotheses of Lemma 3.1 are satisfied. Indeed, \(Z\) is a complete intersection of dimension \(2\), so \(\operatorname{depth}(\mathcal{O}_{Z})=2\); \(X\) is flat over \(Y\), and so \(t\) is a nonzerodivisor of \(\mathcal{O}_{X}\) and thus \(\operatorname{depth}(\mathcal{O}_{X})\geq 1\). Also, because \(I_{W}\) is obtained from the primary decomposition of \(I_{Z}\) by removing some of its minimal primes, \(t\) avoids the minimal primes of \(I_{W}\). Thus, \(\operatorname{depth}(\mathcal{O}_{W})\geq 1\). If \(S\) is empty there is nothing to prove. So we can assume that either \(S\) is set-theoretically equal to \(\{0\}\), or \(S\) is of dimension one. The first case is impossible because by Lemma 3.1 \(\mathfrak{m}\notin\operatorname{Ass}(\mathcal{O}_{S})\). Suppose \(S\) has a vertical component, i.e. \(X_{0}\) and \(W_{0}\) share an irreducible component. Identify \(I_{X}\) and \(I_{W}\) with their images in \(\mathcal{O}_{Z}\). Suppose that as sets \(X_{0}\subset W_{0}\). But \(Z_{0}=X_{0}\cup W_{0}\), so \(W_{0}=Z_{0}\). This means that for each \(f\in I_{W}\) we have \(f=tf_{1}\) with \(f_{1}\in\mathcal{O}_{Z}\). Because \(\mathcal{O}_{W}\) is flat over \(\mathcal{O}_{Y}\), then \(f_{1}\in I_{W}\). So \(I_{W}=tI_{W}\) in \(\mathcal{O}_{Z}\). By Nakayama's lemma \(I_{W}=I_{Z}\), which is impossible. Assume that \(X_{0}\) is not contained in \(W_{0}\) set-theoretically. Then there exists an irreducible component of \(X_{0}\) which is not a component of \(W_{0}\). Consider the union of all such components of \(X_{0}\) that are not components of \(W_{0}\) and denote it by \(X^{\prime}_{0}\). Pick an element \(r\in\mathcal{O}_{Z}\) such that its image \(\overline{r}\) in \(\mathcal{O}_{Z_{0}}:=\mathcal{O}_{Z}/t\mathcal{O}_{Z}\) is in the ideal of \(X^{\prime}_{0}\) but not in the ideal of \(X_{0}\). Then for each \(f\in I_{W}\), there exists \(f_{1}\in\mathcal{O}_{Z}\) such that \(rf=tf_{1}\). Again, because \(\mathcal{O}_{W}\) is flat over \(\mathcal{O}_{Y}\), then \(f_{1}\in I_{W}\). Thus \(rI_{W}\subset(t)I_{W}\). By the Cayley-Hamilton theorem \(\det(A)I_{W}=0\), where \(A=(\delta_{i,j}r-a_{i,j})\) is a square matrix of size \(m\times m\) for some \(m\), \(\delta_{i,j}\) is Kronecker's delta function and \(a_{i,j}\in(t)\). But \(I_{X}=((0)\colon_{\mathcal{O}_{Z}}I_{W})\).
Thus \(\det(A)\in I_{X}\). Denote by \(\tilde{r}\) the image of \(r\) in \(\mathcal{O}_{X_{0}}\). Because \(\det(A)\) is a monic polynomial in \(r\) with coefficients in \((t)\), then \(\tilde{r}^{m}=0\) in \(\mathcal{O}_{X_{0}}\). But this is impossible because we chose \(r\) so that it's not in the nilradical of \(\mathcal{O}_{X_{0}}\). We reached a contradiction with the assumption that \(S\) has a vertical component. Again, by Lemma 3.1, the origin cannot be an embedded component of \(S\). Thus, \(t\) is not a zero divisor of \(\mathcal{O}_{S}\). So, \((S,0)\to(Y,0)\) is flat, and thus \(y\to I(X_{y},W_{y})\) is constant for all \(y\) close to \(0\). ## 4. Proofs Let \(h\colon(X,0)\to(Y,0)\) be a one-parameter embedded flat deformation of \((X_{0},0)\subset(\mathbb{C}^{n},0)\) with \(X_{y}\) smooth for small enough \(y\neq 0\). Let \((Z_{0},0)\subset(\mathbb{C}^{n},0)\) be a complete intersection curve which contains \((X_{0},0)\) as a subvariety. Denote by \(Z\) the induced deformation on \(Z_{0}\). If \((X,0)\) is cut out from \((\mathbb{C}^{n}\times Y,0)\) by the vanishing of the analytic functions \(f_{1},\ldots,f_{p}\), then the equations for \(Z\) are given by \(A(f_{1},\ldots,f_{p})^{T}\) for some \(A\in\operatorname{Mat}((n-1)\times p,\mathbb{C})\). Set \(W:=\overline{Z\setminus X}\) and set \(S:=X\times_{\mathbb{C}^{n}\times Y}W\). Denote by \(J_{\operatorname{rel}}(X)\) the relative Jacobian module and denote by \(J_{\operatorname{rel}}(Z):=AJ_{\operatorname{rel}}(X)\) the \(\mathcal{O}_{X}\)-module generated by the relative Jacobian matrix of \(Z\). For each \(y\) denote by \(J(Z_{y})\) the image of \(J_{\mathrm{rel}}(Z)\) in \(\mathcal{O}_{X_{y}}^{n-1}.\) It's the \(\mathcal{O}_{X_{y}}\)-module generated by the columns of the Jacobian matrix of \(Z_{y}\). We will repeatedly use the fact that the formation of Fitting ideals commutes with base change. In particular, \[\mathrm{Fitt}_{0}(\mathcal{O}_{X}^{n-1}/J_{\mathrm{rel}}(Z))\otimes\mathcal{O}_ {X_{y}}=\mathrm{Fitt}_{0}(\mathcal{O}_{X_{y}}^{n-1}/J(Z_{y})).\] **Proposition 4.1**.: _Let \(X_{y}\) be a smooth fiber and let \(s_{y}\in S_{y}\). Let \(u\) be a uniformizing parameter of \(\mathcal{O}_{X_{y},s_{y}}\). Suppose_ \[\mathrm{Fitt}_{0}(\mathcal{O}_{X}^{n-1}/J_{\mathrm{rel}}(Z))\otimes\mathcal{O} _{X_{y},s_{y}}=(u). \tag{5}\] _Then \((Z_{y},s_{y})\) is a normal crossing divisor in \((\mathbb{C}^{2},s_{y})\). In particular, \(I_{s_{y}}(X_{y},W_{y})=1\)._ Proof.: Without loss of generality we can assume that \((X,0)=\mathbb{V}(f_{1},\ldots,f_{p})\) and \((Z,0)=\mathbb{V}(f_{1},\ldots,f_{n-1}).\) Because \(\mathcal{O}_{X_{y},s_{y}}\) is a PID, then locally at \(s_{y}\), the matrix \([J(Z_{y})]\) has a Smith normal form with invariant factors \(u^{a_{1}},\ldots,u^{a_{r}}\). The matrix \([J(Z_{y})]\) is of size \((n-1)\times n\). The ideal of \((n-1)\times(n-1)\) minors of \([J(Z_{y})]\) is the same as that of its Smith normal form. By (5) we get \(r=n-1\) and all the exponents \(a_{i}\) vanish but one, which is equal to \(1\). Thus the rank of \([J(Z_{y})]\) at \(s_{y}\) is \(n-2\). Without loss of generality we can assume that Jacobian matrix of the variety \(\mathbb{V}(f_{2},\ldots,f_{n-1})\) has the maximal possible rank \(n-2\) at \(s_{y}\). So \(\mathbb{V}(f_{2},\ldots,f_{n-1})\) is smooth at \(s_{y}\). After an analytic change of variables in \((\mathbb{C}^{n},s_{y})\) we can assume that \(f_{i}\) for \(i\geq 2\) are part of a coordinate system for \((\mathbb{C}^{n},s_{y})\). 
Thus \((Z_{y},s_{y})\) is contained in \((\mathbb{C}^{2},s_{y})\). Assume \((u,x)\) are local coordinates on \((\mathbb{C}^{2},s_{y})\). Then \((Z_{y},s_{y})=\mathbb{V}(ug(u,x))\) where \(\mathbb{V}(u)=(X_{y},s_{y})\) and \(\mathbb{V}(g(u,x))=(W_{y},s_{y})\). Because of (5) we have \[\min\{\mathrm{ord}_{u}(\partial ug(u,x)/\partial u),\mathrm{ord}_{u}(\partial ug (u,x)/\partial x)\}=1.\] Therefore, by the chain rule \(\mathrm{ord}_{u}(g(u,x))=1\). Thus \((W_{y},s_{y})\) is smooth at \(s_{y}\) and \((X_{y},s_{y})\) intersects \((W_{y},s_{y})\) transversally at \(s_{y}\). Proof of Theorem 1.1 Choose \(y\) close enough to \(0\). Select \(A\in\mathrm{Mat}((n-1)\times p,\mathbb{C})\) generic enough so that by [14, Rmk. 6] the determinantal variety cut out from \(X_{y}\) by \(\mathrm{Fitt}_{0}(\mathcal{O}_{X_{y}}^{n-1}/J(Z_{y}))\) is either zero-dimensional and smooth or empty. Denote by \(S^{\prime}\) the subscheme of \(X\) cut out by \(\mathrm{Fitt}_{0}(\mathcal{O}_{X}^{n-1}/J_{\mathrm{rel}}(Z))\). If \(S^{\prime}_{y}\) is nonempty, then by generic smoothness applied to \(S^{\prime}\to Y\) we can assume that \(S^{\prime}_{y}\) is smooth for all \(y\neq 0\) after possibly replacing \(Y\) by a smaller neighborhood of \(0\in Y\). Since \([J_{\mathrm{rel}}(Z)]\) is the relative Jacobian matrix of \(Z\), then \(S^{\prime}\) is the union of the singular points of \(Z_{y}\) in \(X_{y}\). As \(X_{y}\) is smooth for \(y\neq 0\), then \(S^{\prime}_{y}\) is set-theoretically \(S_{y}\) - the intersection of \(W_{y}\) and \(X_{y}\). If \(S^{\prime}_{y}\) is empty for \(y\neq 0\), then \(S_{y}\) is empty. Thus, by Proposition 3.2\(S\) is empty, each \(X_{y}\) is therefore a complete intersection, and so \(I(X_{y},W_{y})=0\) for all \(y\). If \(S^{\prime}_{y}\) is nonempty, then so is \(S\). By Proposition 3.2 and Proposition 4.1 for \(y\neq 0\) we have \[I_{0}(X_{0},W_{0})=I(X_{y},W_{y})=\sum_{s_{y}\in S_{y}}I_{s_{y}}(X_{y},W_{y})= \#(S_{y}). \tag{6}\] By Proposition 2.2\(C_{\mathrm{rel}}(X)=\mathrm{Projan}(\mathcal{R}(J_{\mathrm{rel}}(Z))\). Let \(B\in\mathrm{Mat}(n\times e,\mathbb{C})\). Consider the \(\mathcal{O}_{X}\)-module \(J_{\mathrm{rel}}(Z)^{\prime}\) generated by the columns of \([J_{\mathrm{rel}}(Z)]B\). Denote the images of \(J_{\mathrm{rel}}(Z)^{\prime}\) and \(J_{\mathrm{rel}}(Z)\) in \(\mathcal{O}_{X_{0}}^{n-1}\) by \(J(Z_{0})^{\prime}\) and \(J(Z_{0})\), respectively. Choose \(B\) generic so that \(J(Z_{0})^{\prime}\) is a reduction \(J(Z_{0})\) in \(\mathcal{O}_{X_{0}}^{n-1}\). Because \(J_{\mathrm{rel}}(Z)^{\prime}\) is generated by \(n-1\) elements, each of which defines a generic hyperplane \(\check{\mathbb{P}}^{n-1}\), then the ideal \(J_{\operatorname{rel}}(Z)^{\prime}\mathcal{R}(J_{\operatorname{rel}}(Z))\) cuts out \(\Gamma^{1}_{\operatorname{rel}}(X)\) before the projection to \(X\). Thus \(\operatorname{Supp}_{X}(J_{\operatorname{rel}}(Z)/J_{\operatorname{rel}}(Z)^{ \prime})=\Gamma^{1}_{\operatorname{rel}}(X)\). The genericity condition on \(B\) and Kleiman's transversality theorem ensure that \(\Gamma^{1}_{\operatorname{rel}}(X)\) meets \(S\) only at \(0\) after possibly replacing the representative for \((X,0)\) with a smaller one. Choosing \(B\) generic enough by [13, Rmk. 6] it follows that the principle ideal \((g):=\operatorname{Fitt}_{0}(\mathcal{O}_{X,0}^{n-1}/J_{\operatorname{rel}}(Z) ^{\prime})\) defines a reduced subvariety of \(X\), which is also Cohen-Macaulay, because it is determinantal, and therefore it is flat over \(Y\). 
For \(y\in Y\) denote the image of \(g\) in \(\mathcal{O}_{X_{y}}\) by \(g_{y}\). By flatness \(\dim_{\mathbb{C}}\mathcal{O}_{X_{0}}/(g_{0})=\dim_{\mathbb{C}}\mathcal{O}_{X _{y}}/(g_{y}).\) But \(\dim_{\mathbb{C}}\mathcal{O}_{X_{0}}/(g_{0})=e(J(Z_{0}),\mathcal{O}_{X_{0}}^{ n-1})\). By Proposition 2.3 we have \(e(J(Z_{0}),\mathcal{O}_{X_{0}}^{n-1})=e(\operatorname{Jac}(X_{0}))\). Thus \[e(\operatorname{Jac}(X_{0}))=\dim_{\mathbb{C}}\mathcal{O}_{X_{y}}/(g_{y}). \tag{7}\] It remains to interpret the right-hand side of (7). We need to count the points cut out from \(X_{y}\) by \(\operatorname{Fitt}_{0}(\mathcal{O}_{X_{y}}^{n-1}/J(Z_{y})^{\prime})\). There are two disjoint set of points: those that constitute the support of \(J_{\operatorname{rel}}(Z_{y})/J_{\operatorname{rel}}(Z_{y})^{\prime}\) and those that are not in it. The first set of points is precisely \(\Gamma^{1}_{\operatorname{rel}}(X)_{y}\). By definition the latter set is \(S_{y}^{\prime}\) which is the same as \(S_{y}\) as determined above. Thus \[\dim_{\mathbb{C}}\mathcal{O}_{X_{y}}/(g_{y})=\#(\Gamma^{1}_{\operatorname{ rel}}(X)_{y})+\#(S_{y}). \tag{8}\] Combining (6), (7), (8) and (3) we obtain \(e(\operatorname{Jac}(X_{0}))-I_{0}(X_{0},W_{0})=\#(\Gamma^{1}_{\operatorname{ rel}}(X)_{y})=\mu+m-1\). **Remark.** A consequence of the formula we just proved is that \(I_{0}(X_{0},W_{0})\) is an intrinsic invariant for a reduced smoothable \((X_{0},0)\). The approach of selecting a generic complete intersection \(Z_{0}\) to define an invariant of \((X_{0},0)\) does not work in dimension greater than \(1\), because the components of \(Z_{0}\) must intersect in codimension \(1\), so \(J(Z_{0})\) will not have finite colength in \(\mathcal{O}_{X_{0}}^{e}\) where \(e=\operatorname{codim}(X_{0},\mathbb{C}^{n})\). _Proof of Corollary 1.2_ By Theorem 1.1 we have \(e(\operatorname{Jac}(X_{y}))-I_{0}(X_{y},W_{y})=\mu(X_{y})+m(X_{y})-1\), where \(\mu(X_{y})\) and \(m(X_{y})\) are the Milnor number and the multiplicity of \(X_{y}\) at \(0\in\mathbb{C}^{n}\). Because \(\mu(X_{y})\) and \(m(X_{y})\) are upper semi-continuous invariants the function \(y\xrightarrow{}e(\operatorname{Jac}(X_{y}))-I_{0}(X_{y},W_{y})\) is constant along \(Y\) if and only if \(\mu(X_{y})\) and \(m(X_{y})\) are constant along \(Y\). Note that if \(m(X_{y})\) is constant along \(Y\), the singular locus does not split, i.e. the union of singularities of the fibers \(X_{y}\) is \(Y\). The rest of the proof follows from [1, Thm. III.3]. ## 5. Examples ### Three lines in \(\mathbb{C}^{3}\) Let \(X_{0}\) be the union of the three axis of \(\mathbb{C}^{3}\) given by the equations \(f_{1}=xy,f_{2}=yz,f_{3}=xz\). The Jacobian matrix of \(X_{0}\) is \[\begin{pmatrix}y&x&0\\ 0&z&y\\ z&0&x\end{pmatrix}.\] The Jacobian ideal \(\operatorname{Jac}(X_{0})\) in \(\mathcal{O}_{X_{0}}\) is generated by \(x^{2},y^{2}\) and \(z^{2}\). A reduction of it is generated by \(x^{2}+y^{2}+z^{2}\). Thus \(e(\operatorname{Jac}(X_{0}))=\dim_{\mathbb{C}}\mathbb{C}[x,y,z]/(x^{2}+y^{2}+ z^{2},xy,yz,xz)=6\). The multiplicity of \((X_{0},0)\) is \(3\). To compute the intersection multiplicity term we chose \(Z_{0}=\mathbb{V}(f_{1}+f_{2},f_{1}+f_{3})\) so that \(W_{0}=\mathbb{V}(x-y,y+z)\) and thus \(I_{0}(X_{0},W_{0})=2\). Thus \[\mu(X_{0})=e(\operatorname{Jac}(X_{0}))-I_{0}(X_{0},W_{0})-m+1=6-2-3+1=2.\] This agrees with [1, Lemma 1.2.4]. 
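The three colengths entering this example are small enough to verify directly. The sketch below is an illustration of ours, using sympy Gröbner bases; the helper `colength` and the particular generic linear form are our choices, and the helper assumes the ideal in question is supported only at the origin with all standard monomials of degree at most `maxdeg`. It reproduces \(e(\operatorname{Jac}(X_{0}))=6\), \(m=3\) and \(I_{0}(X_{0},W_{0})=2\).

```python
# Sketch: machine check of the colengths in the "three lines" example.
from itertools import combinations_with_replacement
from sympy import symbols, groebner, reduced, Poly, Matrix, Mul

x, y, z = symbols('x y z')
gens = (x, y, z)

def colength(ideal_gens, maxdeg=4):
    """dim_C of C[x,y,z]/I for an origin-supported ideal I, computed as the rank
    of the normal forms of all monomials of degree <= maxdeg with respect to a
    grevlex Groebner basis (assumes all standard monomials have degree <= maxdeg)."""
    G = list(groebner(ideal_gens, *gens, order='grevlex'))
    monomials = [Mul(*c) for d in range(maxdeg + 1)
                 for c in combinations_with_replacement(gens, d)]
    normal_forms = [reduced(mono, G, *gens, order='grevlex')[1] for mono in monomials]
    support = sorted({e for nf in normal_forms for e in Poly(nf, *gens).as_dict()})
    rows = [[Poly(nf, *gens).as_dict().get(e, 0) for e in support] for nf in normal_forms]
    return Matrix(rows).rank()

axes = [x*y, y*z, x*z]                                  # ideal of the three coordinate axes
e_jac = colength(axes + [x**2 + y**2 + z**2])           # Hilbert-Samuel multiplicity of Jac(X_0)
mult  = colength(axes + [x + 2*y + 3*z])                # multiplicity m, via a generic linear form
i0    = colength(axes + [x - y, y + z])                 # I_0(X_0, W_0)
print(e_jac, mult, i0, "=> mu =", e_jac - i0 - mult + 1)   # 6 3 2 => mu = 2
```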
A smoothing of \(X_{0}\) is given by the equations \(F_{1}=xy+tx+t^{2},F_{2}=yz+ty+t^{2},F_{3}=zx+tz+t^{2}\), where \(X_{t}\) for \(t\neq 0\) is a smooth irreducible complete intersection curve. For this family \(W\) is the plane \(\mathbb{V}(y+z,x+z+2t)\). The line \(W_{t}\) for \(t\neq 0\) intersects \(X_{t}\) in two distinct points.

### The \((3,4,5)\) curve in \(\mathbb{C}^{3}\)

Next we consider the space curve \(X_{0}\) given by the equations \(f_{1}=x^{2}y-z^{2},f_{2}=x^{3}-yz,f_{3}=y^{2}-xz\), which come from the \(2\times 2\) minors of the matrix \(\begin{pmatrix}x&y&z\\ y&z&x^{2}\end{pmatrix}.\) The parametrization of \(X_{0}\) is given by \(\{(t^{3},t^{4},t^{5})|t\in\mathbb{C}\}\subset\mathbb{C}^{3}\). It is a well-known example of a non-planar curve, which is Cohen-Macaulay of codimension \(2\), but not Gorenstein. Its Jacobian matrix is \[\begin{pmatrix}2xy&x^{2}&-2z\\ 3x^{2}&-z&-y\\ -z&2y&-x\end{pmatrix}.\] Let \(f_{0}=2y^{2}+xz+3yz+z^{2}-4x^{3}-7x^{2}y-2xy^{2}-7x^{2}z-2xyz-3x^{4}\) be a generic combination of the \(2\times 2\) minors of the Jacobian matrix. Then (via a Singular computation [DGPS]) \(e(\operatorname{Jac}(X_{0}))=\dim_{\mathbb{C}}\mathcal{O}_{X_{0},0}/(f_{0})=8.\) To compute the intersection term, use the complete intersection \(Z=\mathbb{V}(f_{1}+f_{2},f_{1}+f_{3})\), which gives a generic complete intersection, such that \(e(J(X_{0})_{2},\mathcal{O}_{X_{0}}^{2})=e(\operatorname{Jac}(X_{0}))\), where \(J(X_{0})_{2}:=\begin{pmatrix}1&1&0\\ 1&0&1\end{pmatrix}[J(X_{0})]\). By a quotient ideal computation we find \[W_{0}=\mathbb{V}(x+y+z,y+z+x^{2})\text{ and }I_{0}(X_{0},W_{0})=\dim_{\mathbb{C}}\mathcal{O}_{\mathbb{C}^{3},0}/(I(X_{0})+I(W_{0}))=2.\] Thus \(\mu(X_{0})=e(\operatorname{Jac}(X_{0}))-I_{0}(X_{0},W_{0})-m+1=8-2-3+1=4.\) Now let's choose \(Z^{\prime}=\mathbb{V}(f_{1},f_{3})\). Denote by \(J(X_{0})_{2}^{\prime}\) the submodule in \(\mathcal{O}_{X_{0}}^{2}\) generated by the columns of the matrix obtained from the Jacobian matrix of \(X_{0}\) by erasing the second row. Then \(e(J(X_{0})_{2}^{\prime},\mathcal{O}_{X_{0}}^{2})=9>e(J(X_{0})_{2},\mathcal{O}_{X_{0}}^{2})\). But \(W_{0}^{\prime}=\mathbb{V}(y,z)\). So \(I_{0}(X_{0},W_{0}^{\prime})=3\). A smoothing of \(X_{0}\) is given by the \(2\times 2\) minors of \(\begin{pmatrix}x&y&z\\ y&z&x^{2}+t\end{pmatrix}.\) Then \(X\times_{\mathbb{C}^{n+1}}W=\mathbb{V}(x,y,z)\cup\mathbb{V}(y,z,x^{2}+t)\) and \(X_{t}\) and \(W_{t}\) meet transversally at \(3\) points for \(t\neq 0\). As predicted by Gaffney's Multiplicity-Polar Theorem (see Theorem 6.1) \[e(J(X_{0})_{2},\mathcal{O}_{X_{0}}^{2})-I_{0}(X_{0},W_{0})=e(J(X_{0})_{2}^{\prime},\mathcal{O}_{X_{0}}^{2})-I_{0}(X_{0},W_{0}^{\prime})=6.\] The BR-multiplicities \(e(J(X_{0})_{2},\mathcal{O}_{X_{0}}^{2})\) and \(e(J(X_{0})_{2}^{\prime},\mathcal{O}_{X_{0}}^{2})\) are computed through a sum of intersection numbers of the exceptional divisors of the blowup of \(\operatorname{Projan}(\operatorname{Sym}(\mathcal{O}_{X_{0}}^{2}))=X_{0}\times\mathbb{P}^{1}\) with respect to the ideals \(J(X_{0})_{2}\text{Sym}(\mathcal{O}_{X_{0}}^{2})\) and \(J(X_{0})_{2}^{\prime}\text{Sym}(\mathcal{O}_{X_{0}}^{2})\) ([14, Sct. 5]). The reason \(e(J(X_{0})_{2},\mathcal{O}_{X_{0}}^{2})\neq e(J(X_{0})_{2}^{\prime},\mathcal{O}_{X_{0}}^{2})\) is that although \(\operatorname{Projan}(\mathcal{R}(J(X_{0})_{2}))=\operatorname{Projan}(\mathcal{R}(J(X_{0})_{2}^{\prime}))\) the two blowups are not isomorphic.
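The ingredients of this example can likewise be checked independently. The snippet below is our illustration, not part of the paper; the delta-invariant computation again uses the semigroup description for monomial curves. It verifies that \((t^{3},t^{4},t^{5})\) satisfies the three equations, that \(\mu=4\) is consistent with \(\mu=2\delta-r+1\) for \(\delta=2\), \(r=1\), and that the two choices of complete intersection give the same value \(8-2=9-3=6=\mu+m-1\).

```python
# Sketch: independent consistency checks for the (3,4,5) curve (illustrative only).
from sympy import symbols, expand

x, y, z, t = symbols('x y z t')
f1, f2, f3 = x**2*y - z**2, x**3 - y*z, y**2 - x*z
param = {x: t**3, y: t**4, z: t**5}
assert all(expand(f.subs(param)) == 0 for f in (f1, f2, f3))   # (t^3,t^4,t^5) lies on X_0

# delta invariant from the gaps of the numerical semigroup <3,4,5>
S = {3*a + 4*b + 5*c for a in range(5) for b in range(5) for c in range(5)}
gaps = [n for n in range(15) if n not in S]
delta, r = len(gaps), 1
mu = 2*delta - r + 1
print("gaps:", gaps, " delta =", delta, " mu =", mu)           # gaps: [1, 2]  delta = 2  mu = 4

# Theorem 1.1 with the numbers quoted above, for both choices of Z_0
e_jac, I0, m, e_jac_alt, I0_alt = 8, 2, 3, 9, 3
assert mu == e_jac - I0 - m + 1
assert e_jac - I0 == e_jac_alt - I0_alt == mu + m - 1 == 6
```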
### A Whitney equisingular family

The following example is taken from [1, Sct. 7]. Consider the one-parameter family of irreducible curves \((X,0)\to(D,0)\) given parametrically by \(\mathbb{C}\{u^{4},u^{7}+tu^{6},u^{9},u^{10}\}\), where \(t\) is the uniformizing parameter of \(\mathcal{O}_{D}\). Using Singular we compute that the base space of miniversal deformations is irreducible and that \(X_{0}\) is smoothable. Thus \(X_{t}\) is smoothable for each \(t\). We compute that for each \(t\) the delta invariant of \(X_{t}\) is \(5\) and so \(\mu(X_{t},0)=10\) (in [1] it is wrongly claimed that \(\mu(X_{t},0)=12\)). Thus, for each \(t\in D\) we have \[e(\operatorname{Jac}(X_{t},0))-I_{0}(X_{t},W_{t})=\mu(X_{t},0)+m(X_{t},0)-1=10+4-1=13.\] However, a computation with Singular reveals that \(e(\operatorname{Jac}(X_{0}))=21\) and \(I_{0}(X_{0},W_{0})=8\), whereas \(e(\operatorname{Jac}(X_{t},0))=19\) and \(I_{0}(X_{t},W_{t})=6\) for \(t\neq 0\). This computation shows that apart from the case of families of complete intersection curves, \(e(\operatorname{Jac}(X_{t},0))\) is not a Whitney equisingularity invariant in general. So the presence of the correcting term \(I_{0}(X_{t},W_{t})\) alongside \(e(\operatorname{Jac}(X_{t},0))\) is necessary for obtaining numerical control of Whitney equisingularity. ## 6. Appendix In the proof of Theorem 1.1 we implicitly used a special case of Gaffney's Multiplicity-Polar Theorem. Here we give a new proof of this particular version of his result inspired by our proof of Theorem 1.1. Let \(h\colon(X,x_{0})\to(Y,y_{0})\) be a morphism with equidimensional fibers of positive dimension \(d\) between equidimensional complex analytic varieties such that \(X\) is generically reduced and \(Y\) is smooth of dimension \(k\). Let \(N\subset F\) be coherent \(\mathcal{O}_{X}\)-modules such that \(F\) is free of rank \(e\). Assume \(N\) is free of rank \(e\) at the generic point of each irreducible component of \(X\). Let \(\mathcal{R}(N)\) be the Rees algebra of \(N\); it is the subalgebra of \(\operatorname{Sym}(F)\) generated in degree one by the generators of \(N\). For each \(y\in Y\) denote by \(F_{y}\) the restriction of \(F\) to \(X_{y}\) and by \(N(y)\) the image of \(N\) in \(F_{y}\). As before, define the Buchsbaum-Rim multiplicity \(e(N(y),F(y))\) as the normalized leading coefficient of \(\dim_{\mathbb{C}}(F^{l}(y)/N^{l}(y))\), which is a polynomial of degree \(r:=d+e-1\) for \(l\) large enough, where \(F^{l}(y)\) and \(N^{l}(y)\) are the \(l\)th graded components of \(\operatorname{Sym}(F(y))\) and the Rees algebra \(\mathcal{R}(N(y))\), respectively. Note that \(\operatorname{Projan}(\mathcal{R}(N))\subset X\times\mathbb{P}^{g(N)-1}\), where \(g(N)\) is the number of elements of a generating set for \(N\) as an \(\mathcal{O}_{X}\)-module. Denote by \(\pi\colon\operatorname{Projan}(\mathcal{R}(N))\to X\) the structure morphism. Set \(T:=\operatorname{Supp}(F/N)\). Consider the composition of maps \[\pi^{-1}(T)\hookrightarrow X\times\mathbb{P}^{g(N)-1}\xrightarrow{pr_{2}} \mathbb{P}^{g(N)-1}.\] As \(N\) is generically free of rank \(e\), by Kleiman's Transversality Theorem [13], the intersection of \(\pi^{-1}(T)\) with a general plane \(H_{r}\) from \(\mathbb{P}^{g(N)-1}\) of codimension \(r\) is of dimension at most \(\dim Y-1\). Denote by \(\Gamma^{k}(N)\) the projection of \(\operatorname{Projan}(\mathcal{R}(N))\cap H_{r}\) to \(X\). This is what Gaffney [1] calls the \(k\)-_dimensional polar variety_ of \(N\).
For \(y\) in a Zariski open subset \(U\) of \(Y\), the fiber of \(\Gamma^{k}(N)\) over \(y\) consists of the same number of points, each of them appearing with multiplicity one because \(X\) is generically reduced, and because locally at each one of them \(N\) is free. Denote this number by \(\deg_{Y}\Gamma^{k}(N)\). The following is a special case of Gaffney's Multiplicity-Polar Theorem (see [1] and [11, Sect. 2] for generalizations). **Theorem 6.1** (Gaffney).: _Suppose \(X\) is Cohen-Macaulay and \(T\) is finite over \(Y\). Then for each \(y\) in Zariski open subset \(U\) in \(Y\) we have_ \[e(N(y_{0}),F(y_{0}))-e(N(y),F(y))=\deg_{Y}\Gamma^{k}(N).\] Proof.: We follow the proof of Theorem 1.1. Let \(N^{\prime}\) be a submodule of \(N\) generated by \(r\) generic linear combinations of generators for \(N\) such that \(N^{\prime}(y_{0})\) is a reduction of \(N(y_{0})\). Set \(\mathcal{I}:=\operatorname{Fitt}_{0}(F/N^{\prime})\). Denote by \(T^{\prime}\) the subspace of \(X\) defined by \(\mathcal{I}\). Because \(\operatorname{Supp}(F(y_{0})/N^{\prime}(y_{0}))\) is finite and because \(T\subset\operatorname{Supp}(F/N^{\prime})\), then \(\dim T^{\prime}=k\). Since the codimension of \(\mathcal{O}_{X}/\mathcal{I}\) is right, then \(T^{\prime}\) is determinantal, and so it is Cohen-Macaulay because \(X\) is. Because \(Y\) is smooth, then \(T^{\prime}\to Y\) is flat. Thus \(\dim_{\mathbb{C}}(\mathcal{O}_{X_{y_{0}}}/\mathcal{I}(y_{0}))=\dim_{\mathbb{C }}(\mathcal{O}_{X_{y}}/\mathcal{I}(y))\) for each \(y\in Y\) Also, \(h\colon(X,x_{0})\to(Y,y_{0})\) is flat, because \(X\) is Cohen-Macaulay, \(X\) and the fibers of \(h\) are equidimensional, and \(Y\) is smooth. Thus \(X_{y}\) is Cohen-Macaulay for each \(y\in Y\). By [1, p. 223]\(e(N(y_{0}),F(y_{0}))=\dim_{\mathbb{C}}(\mathcal{O}_{X_{y_{0}}}/\mathcal{I}(y_{0}))\). It remains to interpret \(\dim_{\mathbb{C}}(\mathcal{O}_{X_{y}}/\mathcal{I}(y))\) for generic \(y\in Y\). Set \(T^{\prime\prime}:=\operatorname{Supp}(N/N^{\prime})\). Note that set-theoretically, \(T^{\prime}=T\cup T^{\prime\prime}\). The generators of \(N^{\prime}\) give the ideal of the plane \(H_{r}\) in \(\mathbb{P}^{g(N)-1}\) used to define \(\Gamma^{k}(N)\). Denote by \(U\) the complement in \(Y\) of the Zariski closure of \(h\circ\pi(\pi^{-1}(T)\cap H_{r})\). Then \(T^{\prime\prime}_{y}=\Gamma^{k}(N)_{y}\) for \(y\in U\). By construction \(T_{y}\) and \(T^{\prime\prime}_{y}\) are disjoint for \(y\in U\). Thus \[\dim_{\mathbb{C}}(\mathcal{O}_{X_{y}}/\mathcal{I}(y))=\sum_{t_{y}\in T_{y}} \dim_{\mathbb{C}}(\mathcal{O}_{X_{y},t_{y}}/\mathcal{I}(y,t_{y}))+\deg_{Y} \Gamma^{k}(N)\] where \(\mathcal{I}(y,t_{y})\) is the image of \(\mathcal{I}\) in \(\mathcal{O}_{X_{y},t_{y}}\). Because the formation of Fitting ideals commutes with base change and because \(X_{y},t_{y}\) is Cohen-Macaulay, [1, p. 223] gives \(e(N(y),F(y))=\sum_{t_{y}\in T_{y}}\dim_{\mathbb{C}}(\mathcal{O}_{X_{y},t_{y}} /\mathcal{I}(y,t_{y}))\). The proof of the theorem is now complete.
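As a final aside, the double Cauchy-Binet expansion \(\det[M_{e}^{\prime}]=\sum_{K,L}A_{K}[M]_{K,L}B_{L}\) used in the proof of Proposition 2.3 is easy to confirm numerically; the following self-contained check (with arbitrarily chosen sizes, our own illustration and not part of the paper) evaluates both sides for random matrices.

```python
# Numerical check of det(A [M] B) = sum_{K,L} A_K [M]_{K,L} B_L, where K and L
# run over e-element subsets of the rows and columns of [M].  Sizes are arbitrary.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
e, p, c = 2, 4, 5
A = rng.standard_normal((e, p))
M = rng.standard_normal((p, c))
B = rng.standard_normal((c, e))

lhs = np.linalg.det(A @ M @ B)
rhs = sum(
    np.linalg.det(A[:, K]) * np.linalg.det(M[np.ix_(K, L)]) * np.linalg.det(B[L, :])
    for K in combinations(range(p), e)
    for L in combinations(range(c), e)
)
print(np.isclose(lhs, rhs))   # True
```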
2305.12588
Two-colour dissipative solitons and breathers in microresonator second-harmonic generation
Frequency conversion of dissipative solitons associated with the generation of broadband optical frequency combs having a tooth spacing of hundreds of giga-hertz is a topical challenge holding the key to practical applications in precision spectroscopy and data processing. The work in this direction is underpinned by fundamental problems in nonlinear and quantum optics. Here, we present the dissipative two-colour bright-bright and dark-dark solitons in a quasi-phase-matched microresonator pumped for the second-harmonic generation in the near-infrared spectral range. We also found the breather states associated with the pulse front motion and collisions. The soliton regime is found to be typical in slightly phase-mismatched resonators, while the phase-matched ones reveal broader but incoherent spectra and higher-order harmonic generation. Soliton and breather effects reported here exist for the negative tilt of the resonance line, which is possible only via the dominant contribution of second-order nonlinearity.
Juanjuan Lu, Danila N. Puzyrev, Vladislav V. Pankratov, Dmitry V. Skryabin, Fengyan Yang, Zheng Gong, Joshua B. Surya, Hong X. Tang
2023-05-21T22:52:41Z
http://arxiv.org/abs/2305.12588v1
# Two-colour dissipative solitons and breathers in microresonator second-harmonic generation ###### Abstract Frequency conversion of dissipative solitons associated with the generation of broadband optical frequency combs having a tooth spacing of hundreds of giga-hertz is a topical challenge holding the key to practical applications in precision spectroscopy and data processing. The work in this direction is underpinned by fundamental problems in nonlinear and quantum optics. Here, we present the dissipative two-colour bright-bright and dark-dark solitons in a quasi-phase-matched microresonator pumped for the second-harmonic generation in the near-infrared spectral range. We also found the breather states associated with the pulse front motion and collisions. The soliton regime is found to be typical in slightly phase-mismatched resonators, while the phase-matched ones reveal broader but incoherent spectra and higher-order harmonic generation. Soliton and breather effects reported here exist for the negative tilt of the resonance line, which is possible only via the dominant contribution of second-order nonlinearity. ## I Introduction Optical frequency comb generation in microresonators has attracted significant attention in recent years [1; 2]. The key results in this area are the demonstration of temporal dissipative Kerr solitons [3] and octave-spanning combs suitable for self-referencing [4; 5; 6]. These developments have enabled a broad range of applications, such as optical clocks, coherent optical communication, exoplanet detection and many others, see, e.g., [7; 8; 9]. Beyond becoming an outstanding test-bed for dissipative solitons, nonlinear and quantum effects in microresonators have made a profound impact in such interdisciplinary areas as pattern formation [10], synchronization of oscillators [11; 12], light crystals and topological physics in space and time [13; 14; 15; 16; 17]. Dissipative bright solitons and associated frequency combs in microresonators possessing Kerr nonlinearity require anomalous group-velocity dispersion (GVD) at the pump wavelength [1; 3], and the dark ones are observed in the normal GVD regime [18]. Simultaneous bright and dark Kerr soliton pairs spectrally located on different sides of the zero GVD wavelength have also been recently observed [19]. Normal-dispersion Kerr resonators with the modulated circumference of the inner ring, which couples forward and backward waves, have been used to demonstrate the continuum of bright and dark dissipative Kerr soliton states [20]. The need to expand the family of microresonator combs in the visible and mid-infrared ranges stimulates interest in modelocking involving simultaneous harmonic generation, e.g., using \(\chi^{(2)}\), i.e., second-order, nonlinearity. \(\chi^{(2)}\) effect allows generating combs at twice or half of the pump frequency at the comparatively low input powers [21; 22; 23; 24; 25; 26; 27]. An attractive feature of microresonator \(\chi^{(2)}\) combs is their immediate octave width, which offers a pathway to compact self-referencing arrangements in the integrated setups [6]. Reliable generation of \(\chi^{(2)}\) solitons in microresonators remains a challenge. The existence of such solitons assumes, as a necessary condition, mutual modelocking of the two groups of modes located around the pump frequency and either half- or second-harmonic [28]. One of the obstacles is therefore the large accumulated dispersion across the octave bandwidth. 
As a result, the group-velocity difference between the two modal groups dominates the nonlinear frequency shifts, which complicates the generation of solitons. Also, the spectral non-equidistance of neighbouring mode pairs in microresonators is very substantial if compared to fiber-loop, open multi-mirror, and other types of low-repetition-rate and low-finesse resonators, where modelocking using \(\chi^{(2)}\)-effects has been demonstrated [29; 30; 31; 32]. Ref. [33] provides an overview of theoretical studies of \(\chi^{(2)}\) solitons in resonators from the 1990s to the present day. Lithium niobate is one of the favoured \(\chi^{(2)}\) materials to use in nano-fabrication for nonlinear and quantum optics applications [34]. However, lithium niobate and the other \(\chi^{(2)}\) materials possess appreciable \(\chi^{(3)}\), i.e., Kerr, nonlinearity. In particular, the prior soliton demonstrations in the thin-film LiNbO\({}_{3}\) resonators [35, 36] have been attributed to the Kerr effect. Also, recent experiments with comb generation in AlN microresonators have revealed the strong competition between \(\chi^{(2)}\) and Kerr effects [37, 38]. The AlN microresonator used for the parametric down-conversion has allowed observation of bright solitons in the infrared signal accompanied by the non-localised modelocked waveform in the near-infrared pump [39]. Thus, the challenge of observing the two-colour bright-bright or dark-dark frequency-comb solitons in the \(\chi^{(2)}\)-mediated high-repetition-rate microresonator frequency conversion has so far remained unresolved. Our present work demonstrates the dissipative two-colour solitons and breathers in a quasi-phase-matched microresonator pumped for second-harmonic generation in the near-infrared spectral range. The soliton regime is found to be typical for phase-mismatched resonators, while phase-matched ones reveal broader but incoherent spectra and higher-order harmonic generation. Positive phase-mismatching by less than one free spectral range induces tilting of the resonance line towards negative detunings, which is possible only via the dominant contribution of second-order nonlinearity. ## Results Here, we study the second-harmonic generation from the infrared (1550 nm) to near-infrared (780 nm) spectrum in the periodically poled thin-film LiNbO\({}_{3}\) microresonator. Our experimental setup is illustrated in Fig. 1. Our resonator radius is 70 \(\mu\)m, which provides a high repetition rate, \(\simeq 290\) GHz. We use the quasi-phase-matching grating to provide a large controllable positive phase mismatch, whose interplay with the \(\chi^{(2)}\) nonlinearity makes the resonance peak tilt towards negative detunings, see Fig. 2(a). Below, we present measurements of the optical and radio-frequency (RF) spectra, which we interpret in terms of the existence of bright-bright and dark-dark two-colour soliton pairs and breathers. We demonstrate that the bright and dark solitons merge into a single family continuous under variation of the system parameters. The merging becomes possible through angular periodicity and small ring sizes. The blurred difference between the bright and dark solitons manifests itself in the measured and modelled periodic expansion and shrinking of the solitons. Our experiment deals with the case when the pump experiences large anomalous GVD, and the second harmonic is in the large normal GVD range.
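The direction of the resonance tilt can be illustrated with a toy steady-state model of intracavity second-harmonic generation. The sketch below is our own minimal single-mode model with made-up normalized parameters; it is not the coupled-mode system of the Methods, but it shows how a positive mismatch pushes the peak of the intra-resonator pump power towards negative detunings.

```python
# Toy steady-state ("CW") model of intracavity SHG (minimal sketch, normalized
# illustrative parameters; NOT the Methods coupled-mode equations):
#   0 = -(kappa_a/2 + 1j*delta)*A   + 1j*g*conj(A)*B + F
#   0 = -(kappa_b/2 + 1j*delta_b)*B + 1j*g*A**2,   with delta_b = 2*delta - eps.
# Eliminating B gives a cubic for P = |A|^2; for eps > 0 its peak sits at delta < 0.
import numpy as np

kappa_a, kappa_b, g, eps, F = 1.0, 2.0, 1.0, 10.0, 1.2

def intracavity_powers(delta):
    u = kappa_a / 2 + 1j * delta
    v = g**2 / (kappa_b / 2 + 1j * (2 * delta - eps))
    coeffs = [abs(v)**2, 2 * (u * np.conj(v)).real, abs(u)**2, -abs(F)**2]
    roots = np.roots(coeffs)
    return [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]

detunings = np.linspace(-4, 4, 801)
peak = max((max(intracavity_powers(d)), d) for d in detunings)
print(f"resonance peak: P = {peak[0]:.2f} at delta = {peak[1]:.2f} (negative tilt)")
```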
Despite this, the pump and second-harmonic solitons are of the same type, i.e., if one is bright, so is the other, which appears to be a situation not previously encountered in the resonator and modelocking contexts. Detailed numerical analysis guides our interpretation of the data. The width of the resonator ridge is 1.8 \(\mu\)m, and the vertical dimensions are 410 nm and 590 nm to the air-LiNbO\({}_{3}\) and LiNbO\({}_{3}\)-SiO\({}_{2}\) interfaces, respectively. A bus waveguide is specifically designed for the simultaneous telecom and near-visible light coupling. Following the design principle elaborated in [40], the waveguide width, wrap-around angle and resonator-waveguide coupling gap are optimized to be 1.8 \(\mu\)m, 60\({}^{\circ}\), and 400 nm, respectively. The resonator spectrum near the pump, \(\zeta=a\), and second-harmonic, \(\zeta=b\), is approximated by \(\omega_{\mu\zeta}=\omega_{0\zeta}+\sum_{n}D_{n\zeta}\mu^{n}/n!\). Here, \(\omega_{0a}\) and \(\omega_{0b}\) are the resonator frequencies with the mode numbers \(M\) and \(2M+Q\), respectively, where \(2\pi R/Q\) is the poling period. \(Q\) equals 150 in the resonator sample used in the experiments described below. \(\mu=0,\pm 1,\pm 2,\dots\) is the relative mode number. The phase mismatch is characterised by the parameter \(\varepsilon\) [33], \[\varepsilon=2\omega_{0a}-\omega_{0b}, \tag{1}\] which is determined by \(Q\) and temperature tuning. The resonator repetition rates are \(D_{1a}/2\pi=286.24\) GHz and \(D_{1b}/2\pi=289.24\) GHz, differing by 3 GHz. Second-order dispersion is large anomalous near the pump, \(D_{2a}/2\pi=14\,\mathrm{MHz}\), and large normal near the second harmonic, \(D_{2b}/2\pi=-18\,\mathrm{MHz}\). Linewidths are rounded to \(\kappa_{a}/2\pi=600\,\mathrm{MHz}\) and \(\kappa_{b}/2\pi=1.2\,\mathrm{GHz}\). The other parameters, as well as the coupled-mode equations, are described in Methods.

Figure 1: **Experimental setup and resonator dispersion.** (a) Scanning-electron-microscopy image of the lithium niobate microresonator. The radius of the microresonator is 70 \(\mu\)m, corresponding to 286 GHz repetition rate. (b) Photograph of the chip. (c) Measured (blue circles) and computed (blue and red lines) integrated dispersion, \(\omega_{\mu\zeta}-\omega_{0\zeta}-D_{1\zeta}\mu\), vs \(\mu\). Blue marks the infrared pump, \(\zeta=a\), and red marks the near-infrared second harmonic, \(\zeta=b\). (d) Measurement setup: EDFA: erbium-doped fiber amplifier; WDM: wavelength-division multiplexer; ESA: electrical spectrum analyser; OSA: optical spectrum analyzer; FBG: fiber Bragg grating.

The microresonator chip is placed on a piezo-positioning stage with a standard laser temperature controller set at \(130\,^{\circ}\)C, limiting the variations to less than \(0.01\,^{\circ}\)C and providing \(\varepsilon/2\pi=95\,\mathrm{GHz}\) for the pump around \(1550\,\mathrm{nm}\). The pump is coupled into the microresonator using an aspheric lens (numerical aperture \(=0.6\)) and out of the waveguide with a butt-coupled fiber. The pump is amplified in an erbium-doped fiber amplifier, whose output power is set at \(27\,\mathrm{dBm}\). The coupling losses for the \(1550\,\mathrm{nm}\) and \(780\,\mathrm{nm}\) light are estimated to be \(7\)-\(8\,\mathrm{dB}\)/facet and \(12\)-\(13\,\mathrm{dB}\)/facet, respectively. The output spectra are recorded using two grating-based optical spectrum analyzers covering \(350\)-\(1750\,\mathrm{nm}\) and \(1500\)-\(3400\,\mathrm{nm}\).
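For orientation, the quoted numbers translate directly into the comb-scale quantities used below. The short sketch here uses only the values stated in the text, together with the quadratic truncation of the mode expansion, which is our simplification.

```python
# Sketch: poling period, repetition-rate walk-off and integrated dispersion from
# the quoted parameters (D_1/2pi and D_2/2pi are given in Hz; quadratic truncation
# of omega_{mu,zeta} is our simplifying assumption).
import numpy as np

R, Q = 70e-6, 150                      # resonator radius (m) and poling-grating order
D1a, D2a = 286.24e9, 14e6              # pump band (anomalous GVD)
D1b, D2b = 289.24e9, -18e6             # second-harmonic band (normal GVD)

print(f"poling period 2*pi*R/Q = {2*np.pi*R/Q*1e6:.2f} um")
print(f"repetition-rate difference D1b - D1a = {(D1b - D1a)/1e9:.2f} GHz")

mu = np.arange(-50, 51).astype(float)
Dint_a = 0.5 * D2a * mu**2             # integrated dispersion of the pump band
Dint_b = 0.5 * D2b * mu**2             # and of the second-harmonic band
print(f"integrated dispersion at |mu| = 50: {Dint_a[-1]/1e9:+.1f} GHz (pump), "
      f"{Dint_b[-1]/1e9:+.1f} GHz (second harmonic)")
```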
The generated dual-band frequency combs are spectrally separated using a wavelength-division multiplexer. The comb's optical and RF spectra are characterized using the grating-based optical spectrum analyzer and electrical spectrum analyzer, respectively. The resolution bandwidth of \(100\,\mathrm{kHz}\) is utilized for the RF noise measurement. Detuning, \(\delta=\omega_{0a}-\omega_{p}\), between the laser frequency, \(\omega_{p}\), and the resonance at \(\omega_{0a}\), is scanned from its negative (blue detuned) to the positive (red detuned) values. The comb generation occurs when the pump frequency moves towards the resonance and makes the intra-resonator power exceed the modulational instability threshold [41]. It triggers the simultaneous growth of sidebands around the pump and its second harmonic, which further develops into the dual-band comb; see the experimental and numerical spectra in Fig. 3. To compute the regions of instabilities of the single-mode, i.e., continuous-wave (CW), operation relative to the generation of the \(\pm\mu\) pairs, we apply the approximation-free part of the formalism developed in Ref. [41], see Fig. 2(a). The parameter space in Fig. 2(a) is span by the pump detuning \(\delta\) and intra-resonator pump power in the \(\mu=0\) mode, \(|a_{0}|^{2}\). The CW states are shown with the magenta lines. When the CW crosses into an instability tongue, it becomes unstable relative to the respective \(\pm\mu\) mode pair. Two crossing points of the yellow (\(\mu=0\) instability) and magenta lines limit the range of the CW bistability for a given on-chip laser power, \(\mathcal{W}\). The negative direction of the tilt of the resonance curve, see Fig. 2, is determined by the \(\chi^{(2)}\) effect and \(\varepsilon>0\). Hence, solitons and breathers reported below for negative detunings, \(\delta<0\), are attributed to the \(\chi^{(2)}\) interaction. Kerr nonlinearity is accounted for in all our simulations, see Methods, and plays only a corrective role since the dominant \(\chi^{(3)}\) effect would cause the opposite, i.e., positive tilt of the resonance well known from the theory and observations of Kerr solitons [1; 3]. The power conversion efficiency of the infrared (\(1550\,\mathrm{nm}\)) frequency comb is defined as \(\mathcal{W}_{\mathrm{IR}}/\mathcal{W}\), where \(\mathcal{W}_{\mathrm{IR}}\) is the total power in all infrared (IR) comb lines excluding the central one. The measured conversion increases from \(\sim 5\%\) to \(\sim 25\%\) as the pump detuning increases, which could be further improved by optimizing the extraction efficiency in the infrared. The improvement in conversion with growing \(\delta\) is evident from the measured and simulated spectra, where the central peak first dominates over the infrared comb, see Fig. 3(f), and then blends with it, see Figs. 3(g)-3(h). By solving the coupled-mode equations we have found a family of the stationary modelocked pulses, i.e., solitons, associated with the observed spectra, see Figs. 2(b) and 3, and determined the corresponding soliton repetition rate \(\widetilde{D}_{1}\neq D_{1}\). The soliton branch splits from the unstable high power CW state, follows the snaking trajectory and ends on the lower CW state. The snake line in Fig. 2(b) starts and ends at the points where the CW magenta line Fig. 
2(a) becomes unstable relative to the generation of the \(\mu=\pm 1\) sidebands. Figure 2: **Nonlinear single-mode solutions and soliton families.** (a) Magenta lines show the single-mode (continuous-wave (CW)) solutions numerically computed for the on-chip powers \(\mathcal{W}=60\,\mathrm{mW}\) and \(300\,\mathrm{mW}\). Blue-yellow colours show the CW instability boundaries relative to the generation of the \(\pm\mu\) sideband pairs. \(\mu\) values are indicated where possible. (b) A family of the soliton states (red is unstable and black is stable) and CW state (black is stable and magenta is unstable) vs detuning: \(\varepsilon/2\pi=95\,\mathrm{GHz}\), pump wavelength is \(1552\,\mathrm{nm}\). Pulse profiles near the upper and lower CW states correspond to the two-colour dark-dark and bright-bright solitons, respectively. The pulse profiles in the infrared and NIR are practically the same, while the infrared power is around one order of magnitude smaller. The snaking soliton line in Fig. 2(b) fits the definition of collapsed snaking used to describe a sequence of bifurcations of dark localised structures in the Kerr models with normal dispersion [42]; see also the earlier results in, e.g., Ref. [43]. A feature of our resonator sample is that the pulse size is comparable with the ring circumference. Therefore, periodic boundary conditions make the dark soliton transform into the bright one after several turns of the snake, see Figs. 3(u)-3(y). The above-mentioned solitons with the high conversion into the NIR comb correspond to the nearly vertical region of the snake trajectory. The duty cycles of the respective pulses in Figs. 3(u)-3(y) match the measured conversion efficiency. The low (-90 dB) noise levels in the RF spectra are characteristic of the high degree of coherence typical for dissipative multi-mode solitons, see Figs. 3(e), 3(j). While doing the detuning scan and before entering the soliton regime, we first observed the characteristic three-peak spectra, see Fig. 4. The corresponding RF spectra are characterised by several well-defined peaks, see Figs. 4(c)-4(f), and compare with the similar RF spectra of breathers in Kerr microresonators [44; 45]. Numerical analysis reveals that such regimes correspond to two wavefronts moving in the ring resonator. These fronts eventually meet to create a pulse, which then starts expanding again, and the cycle repeats periodically; see the numerical spectra and the space-time dynamics in Figs. 4(g)-4(i) and Fig. 5, respectively. Our experiments have also revealed the trade-off between the soliton regimes for the resonator samples with large \(\varepsilon\) and the generation of the high-bandwidth incoherent combs and higher-order harmonics for the phase-matched resonators with \(\varepsilon\) close to zero, achieved by tuning the temperature to \(T=30\,^{\circ}\)C. The low-noise operation becomes inaccessible for \(\varepsilon\) close to zero. Experimentally recorded combs in the \(\varepsilon=0\) resonator feature broad-bandwidth incoherent spectra centred around 1560 nm (pump), 780 nm (2nd harmonic), 520 nm (3rd harmonic) and 390 nm (4th harmonic), which are plotted in blue, orange, green and purple in Fig. 6.
The measured on-chip pump-to-second-harmonic-comb conversion efficiency is around 20.7%, which is mainly limited by the bus waveguide-microring coupling condition at both near-infrared and near-visible wavelength bands. The inset shows the visible light emission from the resonator captured using a CCD camera. Figure 3: **Experimental and numerical data for soliton states.** Panels (a-i) show four pairs of the low-noise soliton spectra experimentally measured with the increase of the pump detuning, \(\delta\). Panels (e) and (j) show the experimental RF spectra with the \(-90\) dB noise levels representative for the simultaneous soliton formation at 1550 nm and 780 nm. Black lines in (e) and (j) mark background electrical noise. Spectral envelopes shown with black lines in (a-d) and (f-i) are computed numerically and correspond to points (i)-(iv) in Fig. 2(b). Panels (k-t) show numerical soliton spectra and (u-y) show the respective pulse shapes at the points (i)-(v) marked in Fig. 2(b) and between the third and fourth columns here. Transition from the dark to bright solitons is clear from (u-y), which follow the collapsed snaking trajectory in Fig. 2(b). Left and right vertical axes in (u-y) mark power for the 1550 nm and 780 nm pulses, respectively. ## Discussion Our observations demonstrate two-colour dissipative solitons in the thin-film periodically poled LiNbO\({}_{3}\) microresonator with the \(\sim 300\,\mathrm{GHz}\) pulse repetition rate. A short resonator circumference limits the soliton number to one, makes possible the merging of the bright and dark soliton families, and plays a role in the front-motion instability triggering breather dynamics. Solitons, breathers, and the frequency comb generation reported here happen for the negative tilt of the resonance line, which is only possible via the dominant contribution of the second-order nonlinearity. Future research directions include, e.g., implementing resonance tilting towards positive detunings, engineering different combinations of dispersion signs, generation of shorter solitons and \(\chi^{(2)}\) soliton crystals. **Data availability** The data supporting the findings of this study are available from the corresponding authors on reasonable request. **Code availability** The codes for data processing are available from the authors on reasonable request. ## Methods **Device fabrication.** Devices are fabricated from a commercial lithium niobate (LN) on insulator wafer (supplied by NANOLN), in which a \(590\,\mathrm{nm}\) thick Z-cut LN layer sits on top of \(2\,\mathrm{\mu m}\) silicon dioxide on a silicon handle. The pattern is first defined using electron beam lithography (EBL) with a negative FOX-16 resist and subsequently transferred onto the LN layer using an optimized inductively coupled plasma reactive ion etching process with Ar\({}^{+}\) plasma. A thin layer of hafnium oxide is deposited on top of the fabricated photonic device using the atomic layer deposition technique; it serves as a protection layer against metal contamination induced during the poling process and, as a high-k material, also aids in confining the electric field for high-fidelity poling. The radial nickel electrodes are patterned concentrically on top of the LN microring via aligned EBL followed by a bi-layer lift-off process. An optimized poling sequence was then applied to create the desired poling pattern. Afterwards, the electrodes and the oxide interface were sequentially removed by wet etching.
Finally, the chip is cleaved to expose the waveguide facets for fiber-to-chip coupling. **Numerical Simulation.** Multimode intra-resonator pump field (\(1552\mathrm{nm}\), TM polarized) and its second Figure 4: **Experimental and numerical data for breathers.** (a,b) and (d,e) Experimentally measured infrared and NIR spectra found on approach to the soliton range. The corresponding RF spectra in (c) and (f) reveal onset of modelocking via formation of the soliton breather. Optical and RF spectra corresponding to the numerically found breathers are shown in (g-i). \(\delta/\kappa_{a}=-1.64\), which is left from the snake turns in Fig. 2(b) and soliton data in Fig. 3. harmonic (TM polarized) are expressed via their mode expansions as [33] \[\begin{split}& Ae^{iM\vartheta-i\omega_{p}t}+c.c.,\ A=\sum_{\mu}a_{\mu}(t)e^{i\mu \theta},\\ & Be^{i(2M+Q)\vartheta-i2\omega_{p}t}+c.c.,\ B=\sum_{\mu}b_{\mu}(t )e^{i\mu\theta},\\ &\theta=\vartheta-\widetilde{D}_{1}t.\end{split} \tag{2}\] Here, \(\vartheta=(0,2\pi]\) is the angular coordinate in the laboratory frame, \(\theta\) is the coordinate in the reference frame rotating with the rate \(\widetilde{D}_{1}\), \(Q=150\) is the number of the poling periods, \(M=515\) and \(\mu=0,\pm 1,\pm 2,\dots\) is the relative mode number. \(a_{\mu}\), \(b_{\mu}\) are the amplitudes of the pump and second-harmonic modes. The resonator spectrum around pump and second-harmonic is approximated as \[\begin{split}&\omega_{\mu\zeta}=\omega_{0\zeta}+\mu D_{1\zeta}+ \tfrac{1}{2}\mu^{2}D_{2\zeta}+\tfrac{1}{3!}\mu^{3}D_{3\zeta}+\tfrac{1}{4!}\mu ^{4}D_{4\zeta},\\ &\zeta=a,b,\end{split} \tag{3}\] where, \(D_{1\zeta}\) are the linear repetition rates, \(D_{2\zeta}\) are the second order dispersions, and \(D_{3\zeta}\), \(D_{4\zeta}\) are the third and fourth order ones. \(D_{1b}-D_{1a}\) is the walk-off parameter, i.e., the repetition rate difference. The values of the dispersion coefficients are specified in Fig. 1. The pump laser, \(\omega_{p}\), is tuned around the 1552.3nm wavelength and targets the \(\omega_{0a}\) resonance. The pump detuning is defined as \[\delta=\omega_{0a}-\omega_{p}. \tag{4}\] Coupled-mode equations governing the evolution of \(a_{\mu}(t)\), \(b_{\mu}(t)\) include both \(\chi^{(2)}\) and \(\chi^{(3)}\) nonlinearities [33; 37]. 
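Before the coupled-mode equations are written out below, the following minimal NumPy sketch illustrates how the mode amplitudes \(a_{\mu}\), \(b_{\mu}\) of Eq. (2) translate into the real-space envelopes \(|A|^{2}\), \(|B|^{2}\) plotted in Fig. 5; the Gaussian spectra used here are placeholders, not the computed soliton solutions.

```python
import numpy as np

n_modes = 256
mu = np.arange(-n_modes // 2 + 1, n_modes // 2 + 1)       # mu = -127, ..., 128
theta = np.linspace(0, 2 * np.pi, 1024, endpoint=False)   # rotating-frame angle

# Placeholder spectra: smooth comb envelopes standing in for actual soliton solutions.
a_mu = np.exp(-(mu / 20.0) ** 2) * np.exp(1j * 0.01 * mu**2)
b_mu = 0.3 * np.exp(-(mu / 15.0) ** 2)

def field(amplitudes, mu, theta):
    """A(theta) = sum_mu a_mu exp(i mu theta), cf. Eq. (2)."""
    return (amplitudes[:, None] * np.exp(1j * np.outer(mu, theta))).sum(axis=0)

A = field(a_mu, mu, theta)        # pump envelope around 1550 nm
B = field(b_mu, mu, theta)        # second-harmonic envelope around 780 nm

intensity_pump = np.abs(A) ** 2   # |A|^2, as shown in Fig. 5(a)
intensity_sh = np.abs(B) ** 2     # |B|^2, as shown in Fig. 5(b)
```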
The equations have been derived under the assumption that the \(e^{iQ\vartheta}\) Fourier component of the quasi-phase-matching grating, \(\chi^{(2)}G(\vartheta)=\chi^{(2)}G(\vartheta+2\pi/Q)\), provides the required phase-matching [33], \[\begin{split} i\partial_{t}a_{\mu}=\delta_{\mu a}a_{\mu}& -\frac{i\kappa_{a}}{2}\big{(}a_{\mu}-\widehat{\delta}_{\mu,0} \mathcal{H}\big{)}\\ &-\gamma_{2a}\sum_{\mu_{1}\mu_{2}}\widehat{\delta}_{\mu,\mu_{1} -\mu_{2}}b_{\mu_{1}}a_{\mu_{2}}^{*}\\ &-\gamma_{3a}\sum_{\mu_{1}\mu_{2}\mu_{3}}\widehat{\delta}_{\mu, \mu_{1}+\mu_{2}-\mu_{3}}a_{\mu_{1}}a_{\mu_{2}}a_{\mu_{3}}^{*}\\ &-2\gamma_{3a}\sum_{\mu_{1}\mu_{2}\mu_{3}}\widehat{\delta}_{\mu, \mu_{1}+\mu_{2}-\mu_{3}}a_{\mu_{1}}b_{\mu_{2}}b_{\mu_{3}}^{*},\\ i\partial_{t}b_{\mu}=\delta_{\mu b}b_{\mu}&-\frac{i \kappa_{b}}{2}b_{\mu}\\ &-\gamma_{2b}\sum_{\mu_{1}\mu_{2}}\widehat{\delta}_{\mu,\mu_{1} +\mu_{2}}a_{\mu_{1}}a_{\mu_{2}}\\ &-\gamma_{3b}\sum_{\mu_{1}\mu_{2}\mu_{3}}\widehat{\delta}_{\mu, \mu_{1}+\mu_{2}-\mu_{3}}b_{\mu_{1}}b_{\mu_{2}}b_{\mu_{3}}^{*}\\ &-2\gamma_{3b}\sum_{\mu_{1}\mu_{2}\mu_{3}}\widehat{\delta}_{\mu, \mu_{1}+\mu_{2}-\mu_{3}}b_{\mu_{1}}a_{\mu_{2}}a_{\mu_{3}}^{*}.\end{split} \tag{5}\] Here, \(\widehat{\delta}_{\mu,\mu^{\prime}}=1\) for \(\mu=\mu^{\prime}\) and is zero otherwise. \(\mathcal{H}\) is the pump parameter, \(\mathcal{H}^{2}=\eta\mathcal{F}\mathcal{W}/2\pi\), where \(\mathcal{W}\) is the laser power, and \(\mathcal{F}=D_{1a}/\kappa_{a}\) is finesse [33]. \(\eta\) is the coupling coefficient, which was used as the fitting parameter, \(\eta=0.33333\). \(\delta_{\mu\zeta}\) are the modal detuning parameters in the rotating reference frame, \[\delta_{\mu a} =(\omega_{\mu a}-\omega_{p})-\mu\widetilde{D}_{1}, \tag{6}\] \[\delta_{\mu b} =(\omega_{\mu b}-2\omega_{p})-\mu\widetilde{D}_{1},\] Figure 6: **Broadband incoherent spectra in phase-matched microresonator.** Measurements of the broadband incoherent frequency comb generation spanning across four octaves in the phase-matched microresonator, \(\varepsilon=0\). The on-chip pump power is \(\mathcal{W}=100\,\mathrm{mW}\). Figure 5: **Numerically computed breathers.** Space-time evolution of the two-colour breather state, which spectra are shown in Figs. 4(g)-4(i). (a) is the 1550nm field intensity, i.e., \(\left|A\right|^{2}\) vs \(t\) and \(\theta\), and (b) is the 780nm field, \(\left|B\right|^{2}\), see Eq. (2) in Methods. where \(\delta_{0a}=\delta\), \(\delta_{0b}=2\delta-\varepsilon\) and \(\varepsilon\) is the phase mismatch parameter [33], \[\varepsilon=2\omega_{0a}-\omega_{0b}=2\frac{cM}{Rn_{M}}-\frac{c(2M+Q)}{Rn_{2M+Q}}. \tag{7}\] Here, \(c\) is the vacuum speed of light, \(R\) is the resonator radius, \(n_{M}\) is the effective refractive index experienced by the resonator mode with the number \(M\). The value of \(\varepsilon\) can be controlled by the temperature and pump wavelength. \(T=130^{o}\)C yields \(\varepsilon/2\pi\approx 95\)GHz and was set to get the soliton and breather generation in Figs. 2-4. \(T=30^{o}\)C gives \(\varepsilon\approx 0\) and was used to generate the incoherent multi-octave spectra in Fig. 5. \(\gamma_{2\zeta}\) and \(\gamma_{3\zeta}\) are parameters specifying strength of the second and third-order nonlinear effects [33]. 
Using a simplifying assumption that the effective mode area, \(S\), does not disperse, we estimate \(\gamma_{2\zeta}\) as \[\gamma_{2\zeta}=\frac{d\omega_{0\zeta}q}{3n_{0}^{2}},\ q=\sqrt{\frac{2\mathcal{Z}_{vac}}{Sn_{0}}},\ \zeta=a,b, \tag{8}\] where \(n_{0}=2.2\) is the linear refractive index, \(\omega_{0a}/2\pi=193\)THz, \(\omega_{0b}/2\pi=386\)THz, \(\mathcal{Z}_{vac}=1/\epsilon_{vac}c=377\) V\({}^{2}\)/W is the free space impedance, and the averaged effective area is \(S\approx 1.5\mu\)m\({}^{2}\). These values yield \(q\approx 15\times 10^{6}\)W\({}^{-1/2}\)V/m. \(d\sim\chi^{(2)}\) is the relevant element of the reduced \(\chi^{(2)}\) tensor, \(d\approx 20\)pm/V, giving \(\gamma_{2a}/2\pi\approx 4\)GHz/\(\sqrt{\text{W}}\) and \(\gamma_{2b}/2\pi\approx 8\)GHz/\(\sqrt{\text{W}}\). Kerr parameters \(\gamma_{3\zeta}\) are estimated using the results derived in Ref. [46], \[\gamma_{3\zeta}=\frac{\omega_{0\zeta}n_{2}}{2Sn_{0}}, \tag{9}\] where \(n_{2}=9\times 10^{-20}\)m\({}^{2}\)/W (\(\chi^{(3)}=1.6\times 10^{-21}\)m\({}^{2}\)/V\({}^{2}\)) is the Kerr coefficient of LiNbO\({}_{3}\), giving \(\gamma_{3a}/2\pi\approx 2.5\)MHz/W and \(\gamma_{3b}/2\pi\approx 5\)MHz/W. According to the estimates based on the comparison of the nonlinear resonance shifts induced by the \(\chi^{(2)}\) and \(\chi^{(3)}\) terms [41], the latter are expected to play a notable role for \(|\varepsilon|\) becoming close to and exceeding \(\varepsilon_{\text{cr}}\), \[\varepsilon_{\text{cr}}=\gamma_{2a}\gamma_{2b}/\gamma_{3a}\approx 2\pi\times 2\text{THz}, \tag{10}\] which corresponds to a mismatch by about six modes in a resonator with the \(300\)GHz repetition rate, and is much larger than \(\varepsilon/2\pi\simeq 0.1\) THz in our resonator. Typical time-dependent simulations of Eq. (5) were performed using the fourth-order Runge-Kutta method applying \(\widetilde{D}_{1}=D_{1a}\). Stationary soliton profiles were found using the Newton method after \(\partial_{t}\) was set to zero and \(\widetilde{D}_{1}\) was treated as one of the unknowns. The value of \(\widetilde{D}_{1}\) after the calculations was close, but not equal, to \(D_{1a}\). The typical number of modes around \(\omega_{p}\) and \(2\omega_{p}\) used in the modelling was 256, i.e., \(\mu=-127,\ldots,0,\ldots,128\). To differentiate between stable and unstable solitons we have analysed the linear stability of the soliton family. We perturbed the time-independent soliton amplitudes, \(\hat{a}_{\mu}\), \(\hat{b}_{\mu}\) (\(\partial_{t}\hat{a}_{\mu}=\partial_{t}\hat{b}_{\mu}=0\)) with small perturbations, \(\varepsilon_{a\mu}(t)=x_{a\mu}(t)+y_{a\mu}^{*}(t)\) and \(\varepsilon_{b\mu}(t)=x_{b\mu}(t)+y_{b\mu}^{*}(t)\), i.e., \[a_{\mu}(t)=\hat{a}_{\mu}+x_{a\mu}(t)+y_{a\mu}^{*}(t),\] \[b_{\mu}(t)=\hat{b}_{\mu}+x_{b\mu}(t)+y_{b\mu}^{*}(t), \tag{11}\] and then linearised Eq. (5). By substituting \(\{x_{a\mu}(t),y_{a\mu}(t),x_{b\mu}(t),y_{b\mu}(t)\}=\{X_{a\mu},Y_{a\mu},X_{b\mu},Y_{b\mu}\}\,e^{\lambda t}\), we have reduced the linearised differential equations to the algebraic \((4\times 256)\times(4\times 256)\) eigenvalue problem, which was solved numerically [41]. The soliton is stable if all Re\(\lambda<0\). Stability of the CW state, \(a_{\mu\neq 0}=0\), \(b_{\mu\neq 0}=0\), was found from the simpler four-by-four eigenvalue problem [41]. In this case, each eigenvalue \(\lambda\) is attributed to a particular pair of \(\pm\mu\) sidebands producing its own instability boundary; these boundaries are plotted in Fig. 2(a).
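As an illustration of the numerical procedure just described, the sketch below evaluates the right-hand side of Eq. (5) and performs one fourth-order Runge-Kutta step. The convolution sums are computed as products in \(\theta\)-space via FFT (an implementation choice, not stated in the text, and shown here without dealiasing), dispersion is truncated at second order, and the rounded parameter values quoted above are used; this is a minimal sketch, not the production code behind Figs. 2-5.

```python
import numpy as np

two_pi = 2 * np.pi
N = 256                                    # modes per band
mu = np.fft.fftfreq(N, d=1.0 / N)          # FFT-ordered relative mode numbers (0..127, -128..-1)

# Rounded parameters quoted in the text, in angular units (rad/s).
kappa_a, kappa_b = two_pi * 600e6, two_pi * 1.2e9
D1a, D1b = two_pi * 286.24e9, two_pi * 289.24e9
D2a, D2b = two_pi * 14e6, -two_pi * 18e6
eps = two_pi * 95e9                        # phase mismatch
gamma2a, gamma2b = two_pi * 4e9, two_pi * 8e9     # per sqrt(W)
gamma3a, gamma3b = two_pi * 2.5e6, two_pi * 5e6   # per W

to_theta = lambda c: np.fft.ifft(c) * N    # A(theta) = sum_mu a_mu exp(i mu theta)
to_mu = lambda F: np.fft.fft(F) / N        # back to Fourier coefficients

def rhs(a, b, delta, H, D1_tilde=D1a):
    """Right-hand sides of Eq. (5); the delta-constrained sums become products in theta-space."""
    # Modal detunings, Eqs. (3) and (6), truncated at second-order dispersion (assumption).
    d_a = delta + (D1a - D1_tilde) * mu + D2a * mu**2 / 2
    d_b = (2 * delta - eps) + (D1b - D1_tilde) * mu + D2b * mu**2 / 2
    A, B = to_theta(a), to_theta(b)
    pump = np.zeros(N, complex)
    pump[0] = H                              # Kronecker delta at mu = 0
    da = -1j * (d_a * a - 0.5j * kappa_a * (a - pump)
                - gamma2a * to_mu(B * np.conj(A))
                - gamma3a * to_mu(np.abs(A)**2 * A)
                - 2 * gamma3a * to_mu(A * np.abs(B)**2))
    db = -1j * (d_b * b - 0.5j * kappa_b * b
                - gamma2b * to_mu(A * A)
                - gamma3b * to_mu(np.abs(B)**2 * B)
                - 2 * gamma3b * to_mu(B * np.abs(A)**2))
    return da, db

def rk4_step(a, b, dt, delta, H):
    """One fourth-order Runge-Kutta step, as used for the time-dependent simulations."""
    k1 = rhs(a, b, delta, H)
    k2 = rhs(a + 0.5 * dt * k1[0], b + 0.5 * dt * k1[1], delta, H)
    k3 = rhs(a + 0.5 * dt * k2[0], b + 0.5 * dt * k2[1], delta, H)
    k4 = rhs(a + dt * k3[0], b + dt * k3[1], delta, H)
    a_new = a + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    b_new = b + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return a_new, b_new
```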
2307.04593
DWA: Differential Wavelet Amplifier for Image Super-Resolution
This work introduces Differential Wavelet Amplifier (DWA), a drop-in module for wavelet-based image Super-Resolution (SR). DWA invigorates an approach recently receiving less attention, namely Discrete Wavelet Transformation (DWT). DWT enables an efficient image representation for SR and reduces the spatial area of its input by a factor of 4, the overall model size, and computation cost, framing it as an attractive approach for sustainable ML. Our proposed DWA model improves wavelet-based SR models by leveraging the difference between two convolutional filters to refine relevant feature extraction in the wavelet domain, emphasizing local contrasts and suppressing common noise in the input signals. We show its effectiveness by integrating it into existing SR models, e.g., DWSR and MWCNN, and demonstrate a clear improvement in classical SR tasks. Moreover, DWA enables a direct application of DWSR and MWCNN to input image space, reducing the DWT representation channel-wise since it omits traditional DWT.
Brian B. Moser, Stanislav Frolov, Federico Raue, Sebastian Palacio, Andreas Dengel
2023-07-10T14:35:12Z
http://arxiv.org/abs/2307.04593v1
# DWA: Differential Wavelet Amplifier for Image Super-Resolution ###### Abstract This work introduces Differential Wavelet Amplifier (DWA), a drop-in module for wavelet-based image Super-Resolution (SR). DWA invigorates an approach recently receiving less attention, namely Discrete Wavelet Transformation (DWT). DWT enables an efficient image representation for SR and reduces the spatial area of its input by a factor of 4, the overall model size, and computation cost, framing it as an attractive approach for sustainable ML. Our proposed DWA model improves wavelet-based SR models by leveraging the difference between two convolutional filters to refine relevant feature extraction in the wavelet domain, emphasizing local contrasts and suppressing common noise in the input signals. We show its effectiveness by integrating it into existing SR models, e.g., DWSR and MWCNN, and demonstrate a clear improvement in classical SR tasks. Moreover, DWA enables a direct application of DWSR and MWCNN to input image space, reducing the DWT representation channel-wise since it omits traditional DWT. Keywords:Differential Wavelet Amplifier Image Super-Resolution. ## 1 Introduction Image Super-Resolution (SR) has an impressive legacy in Computer Vision (CV) yet still presents an exhilarating challenge [21, 31]. SR is a task of enhancing Low-Resolution (LR) images to High Resolution (HR). It is challenging because many High Resolution (HR) images can correspond to a given Low-Resolution (LR) image, rendering the task mathematically ill-posed. In recent years, deep learning has fueled rapid development in SR, leading to tremendous progress [6, 7]. While many techniques have improved the overall quality of image reconstructions, there remains a pressing need for methods capable of producing high-frequency details, particularly when dealing with high magnification ratios [24]. Addressing this issue is crucial for the continued advancement of SR. Influenced by achievements on other CV tasks, recent research focused on trending approaches like Transformer-based networks [27, 16, 28], Denoising Diffusion Probabilistic Models [23, 15, 24] or Generative Adversarial Networks [29, 30]. Despite astonishing reconstruction capabilities, they often lack an explicit focus on generating high-frequency details, i.e., local variations. This work aims to advance the field of SR by exploring wavelet-based networks. Unfortunately, this technique has received less attention despite its significant potential [21]. We seek to provide a fresh perspective and revive research by re-evaluating these approaches. Discrete Wavelet Transformation (DWT) enables an efficient image representation without losing information compared to its naive spatial representation, i.e., traditional RGB format. It does so by separating high-frequency details in distinct channels and reducing the spatial area of input image representation by a factor of 4. Therefore, a smaller receptive field is required to capture the input during feature extraction. Using DWT like in DWSR [9] and MWCNN [17] reduces the overall model size and computational costs while performing similarly to state-of-the-art image SR architectures. This work introduces a new Differential Wavelet Amplifier (DWA) module inspired by differential amplifiers from electrical engineering [2]. Differential amplifiers increase the difference between two input signals and suppress the common voltage shared by the two inputs, called Common Mode Rejection (CMR) [11]. 
In other words, it mitigates the impact of noise (e.g., electromagnetic interference, vibrations, or thermal noise) affecting both source inputs while retaining valuable information and improving the integrity of the measured input signal. Our proposed DWA layer adapts this idea to deep learning and can be used as a drop-in module to existing SR models. This work shows its effectiveness as exemplary for wavelet-based SR approaches. DWA leverages the difference between two convolutional filters with a stride difference to enhance relevant feature extraction in the wavelet domain, emphasizing local contrasts and suppressing common noise in the input signals. We demonstrate the effectiveness of DWA through extensive experiments and evaluations, showing improved performance compared to existing wavelet-based SR models without DWA: DWSR with DWA shows overall better performance w.r.t. PSNR and SSIM, and MWCNN with DWA achieves better SSIM scores with comparable PSNR values on the testing datasets Set5 [4], Set14 [32], and BSDS100 [20]. Taken together, our work makes the following key contributions: * Introduction of Differential Wavelet Amplifier (DWA): a novel module that leverages the difference between two convolutional filters horizontally and vertically in a wavelet-based image representation, which is applicable as drop-in addition in existing network architectures. * Comprehensive evaluation demonstrating the improved performance by using DWA on popular SR datasets such as Set5 [4], Set14 [32], and BSDS100 [20] by adding DWA to existing wavelet-based SR models, namely, DWSR [9] and MWCNN [17]. * Experimental analysis showing that DWA enables a direct application of DWSR and MWCNN to the input space by avoiding the DWT on the input image. This application reduces the input channel-wise to 3 instead of 12 channels for RGB images while keeping the spatial reduction benefit of DWT. * Visual examination of reconstructions showcasing that the DWSR with the DWA module captures better distinct edges and finer details, which are also closer to the ground truth residuals. ## 2 Background This chapter provides comprehensive background information on 2D Discrete Wavelet Transform (2D-DWT), how SR models (DWSR [9] and MWCNN [17]) use it, and related work to Differential Wavelet Amplifiers (DWA). Additionally, we introduce differential amplifiers from electrical engineering, which inspired our proposed method DWA. ### Discrete Wavelet Transform in SR The 2D Discrete Wavelet Transform (2D-DWT) decomposes an image into four unique sub-bands with distinct frequency components: a low-frequency approximation sub-band and three high-frequency detail sub-bands representing horizontal, vertical, and diagonal details. Let \(x\left[n\right]\in\mathbb{R}^{N}\) be a signal. The 1D Discrete Wavelet Transformation (1D-DWT) with Haar wavelet passes the input signal first through a half-band high-filter \(h\left[n\right]\) and a low-pass filter \(l\left[n\right]\). Next, half of the sample is eliminated according to the Nyquist rule [9]. The wavelet coefficients are calculated by repeating the decomposition to each output coefficient iteratively [26]. In the case of images, it applies \(h\left[n\right]\) and \(l\left[n\right]\) in different combinations, resulting in four function applications. The DWSR [9] SR model exploits the wavelet domain and gets the DWT representation of the interpolated LR image as input. DWSR is composed of 10 convolution layers that are applied sequentially. 
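For concreteness, a minimal PyTorch sketch of the single-level Haar decomposition described above, splitting an image into the average and three detail sub-bands; this is a generic Haar implementation used for illustration, not the released code of DWSR or MWCNN.

```python
import torch

def haar_dwt2d(x: torch.Tensor):
    """Single-level Haar 2D-DWT.
    x: (B, C, H, W) with even H, W.  Returns (LL, LH, HL, HH), each (B, C, H/2, W/2)."""
    a = x[..., 0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[..., 0::2, 1::2]  # top-right
    c = x[..., 1::2, 0::2]  # bottom-left
    d = x[..., 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2   # low-frequency average sub-band
    lh = (-a - b + c + d) / 2  # detail sub-band (high-pass along the row index)
    hl = (-a + b - c + d) / 2  # detail sub-band (high-pass along the column index)
    hh = (a - b - c + d) / 2   # diagonal detail sub-band
    return ll, lh, hl, hh

def haar_idwt2d(ll, lh, hl, hh):
    """Inverse transform: exactly reconstructs the input of haar_dwt2d."""
    B, C, H, W = ll.shape
    x = ll.new_zeros(B, C, 2 * H, 2 * W)
    x[..., 0::2, 0::2] = (ll - lh - hl + hh) / 2
    x[..., 0::2, 1::2] = (ll - lh + hl - hh) / 2
    x[..., 1::2, 0::2] = (ll + lh - hl - hh) / 2
    x[..., 1::2, 1::2] = (ll + lh + hl + hh) / 2
    return x

# An interpolated LR image (B, 3, H, W) becomes a (B, 12, H/2, W/2) stack of sub-bands,
# i.e., the spatial area is reduced by a factor of 4 without losing information.
img = torch.rand(1, 3, 64, 64)
subbands = torch.cat(haar_dwt2d(img), dim=1)
assert torch.allclose(haar_idwt2d(*haar_dwt2d(img)), img, atol=1e-6)
```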
It adds the interpolated LR input as residual for the final reconstruction step, which results in learning only the sparse residual information between the LR and HR domains. MWCNN [17] exploits multi-level DWT (multiple applications of DWT) and utilizes a U-Net architecture [22]. DWT replaces all downsizing steps, and the inverse operation of DWT replaces all upsampling steps. Ultimately, it uses the interpolated LR image as a residual connection for the final prediction. The standard MWCNN setup consists of 24 convolution layers. One caveat of DWSR and MWCNN in learning the residual is that they must translate its rich information input to sparse representation, e.g., the average band. To ease the burden, we present a Differential Wavelet Amplifier, which directly transforms the input into sparse representations inspired by differential amplifiers introduced next. ### Differential Amplifier An electronic amplifier is a standard electrical engineering device to increase a signal's power [2]. One type of electronic amplifier is the differential amplifier that increases the difference between two input signals and suppresses the common voltage shared by the two inputs [14]. Given two inputs \(V_{in}^{-},V_{in}^{+}\in\mathbb{R}^{N}\) and the differential gain of the amplifier \(A_{d}\in\mathbb{R}\), the output \(V_{out}\) is calculated as \[V_{out}=A_{d}\left(V_{in}^{+}-V_{in}^{-}\right) \tag{1}\] The purpose of differential amplifiers is to suppress common signals or noise sources that are present in multiple input channels while retaining valuable information. In the literature, this is called Common Mode Rejection (CMR) and is a critical property in many electrical engineering applications, particularly in systems that measure small signals in the presence of noise or interference, e.g., electromagnetic interference or thermal noise [11]. Hence, using CMR improves the signal-to-noise ratio, overall system performance, and signal integrity since the system can focus on the relevant differential signals. ### Differential Convolutions Closest to our work is Sargul et al. [25], which applies differential convolutions, i.e., the difference of two convolution layers, to emphasize contrasts for image classification, which is inherently different to image generation tasks such as image SR. Despite this, they do not consider a stride difference vital for capturing variations. Knutsson et al. [13] theoretically examine a normalized version of differential convolutions also with no stride difference. Due to the time of publication, they did not try it in the case of deep learning-based image SR. Newer applications like Canh et al. [5] consider learnable parameters to turn the Difference of Gaussians (DoG) [18] into a learnable framework, but has the same caveat: As Knutsson concluded, their approaches can be interpreted as a standard convolution weighted with the local energy minus the "mean" operator acting on the "mean" data, i.e., a more elaborate convolution operation. A similarity could also be seen in the approach of residual connections of ResNets [10] when the kernel parameters have a negative sign. However, residual connections are different since they force a convolution layer to learn to extract the sparse details that are not apparent in the input. In contrast, our proposed method with Differential Wavelet Amplifier (DWA) explicitly produces sparse details by design due to the subtraction operator. 
Therefore, DWA does not have to learn what input information should be removed for the residual information. It can focus on relevant features that persist when the stride convolution does not detect the same feature, thereby emphasizing local contrast. ## 3 Differential Wavelet Amplifier (DWA) This section presents our proposed Differential Wavelet Amplifier (DWA) module. Inspired by differential amplifiers in electrical engineering, DWA is designed to operate in the wavelet domain and exploits the difference between two input signals to improve the performance of image SR methods based on wavelet predictions.DWA is applied separately in the horizontal and vertical axis of the input image. In each direction, we perform two convolutions with a stride distance in one direction for both axis (from left to right, from top to bottom, as in MDLSTMs [8]), allowing a fine-grained feature extraction and emphasizing local contrasts while suppressing the common mode in the input, similar to CMR in electrical engineering. Figure 1 visualizes all processes involved in DWA. Let \(\mathbf{x}\in\mathbb{R}^{w\times h\times c_{in}}\) be an input image or feature map with \(c_{in}\) channels. We define \(\psi\left(\mathbf{x},\left(i,j\right)\right):\mathbb{R}^{w\times h\times c_{in} }\times\mathbb{N}^{2}\rightarrow\mathbb{R}^{k\cdot k\times c_{in}}\) as a function that extracts \(k\cdot k\) points around a spatial position \(\left(i,j\right)\). We can then express the resulting feature maps for the horizontal \(\mathbf{H}\left(\mathbf{x}\right)\) and vertical \(\mathbf{V}\left(\mathbf{x}\right)\) axis as \[\begin{split}\mathbf{H}\left(\mathbf{x}\right)_{i,j}& =f\left(\psi\left(\mathbf{x},\left(i,j\right)\right);\theta_{1} \right)-f\left(\psi\left(\mathbf{x},\left(i+s,j\right)\right);\theta_{2} \right),\\ \mathbf{V}\left(\mathbf{x}\right)_{i,j}&=f\left( \psi\left(\mathbf{x},\left(i,j\right)\right);\theta_{3}\right)-f\left(\psi \left(\mathbf{x},\left(i,j+s\right)\right);\theta_{4}\right),\end{split} \tag{2}\] where \(f:\mathbb{R}^{k\cdot k\times c_{in}}\rightarrow\mathbb{R}^{c_{f}}\) is a convolution operation with parameters \(\theta_{n}\) for \(0<n<4\), \(k\times k\) the kernel size and \(s\in\mathbb{N}\) a pre-defined stride difference. As a result, the local variance is captured in one direction for both axes, similar to MDLSTMs [8]: from left to right with parameters \(\theta_{1}\) and \(\theta_{2}\) and from top to bottom with parameters \(\theta_{3}\) and \(\theta_{4}\). We obtain two distinct feature maps that capture complementary input image information and provide richer feature representations for the wavelet-based SR task. The input is directly translated to sparse representations, which reduces the distance to residual target objectives in networks that use residual connections for final prediction. We concatenate the resulting feature maps alongside the input to ensure no information is lost during the DWA processing. This combination creates a comprehensive set of feature maps that retains the original input information while incorporating the directional features obtained from both axes. More formally: \[g\left(\mathbf{x}\right)=\mathbf{x}\odot\sigma\left(H\left(\mathbf{x}\right) \odot V\left(\mathbf{x}\right)\right), \tag{3}\] where \(\odot\) is a channel-wise concatenation operator and \(\sigma\) is a non-linear function like sigmoid, \(\tanh\) or ReLU [1]. 
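A minimal PyTorch sketch of Eqs. (2) and (3), reflecting our reading of them rather than the authors' released code: each direction uses two convolutions with separate parameters \(\theta_{n}\), and the stride difference \(s\) is realised by shifting one of the two feature maps before subtraction. The final channel-mapping convolution of Eq. (4), introduced next in the text, would be one more Conv2d applied to the concatenated output.

```python
import torch
import torch.nn as nn

class DWA(nn.Module):
    """Differential Wavelet Amplifier, Eqs. (2)-(3): two differential convolutions with a
    spatial offset s, concatenated with the input."""
    def __init__(self, c_in: int, c_f: int, k: int = 3, s: int = 1):
        super().__init__()
        self.s = s
        # theta_1 ... theta_4 of Eq. (2): four independent convolutions.
        self.f1 = nn.Conv2d(c_in, c_f, k, padding=k // 2)
        self.f2 = nn.Conv2d(c_in, c_f, k, padding=k // 2)
        self.f3 = nn.Conv2d(c_in, c_f, k, padding=k // 2)
        self.f4 = nn.Conv2d(c_in, c_f, k, padding=k // 2)

    @staticmethod
    def _shift(t: torch.Tensor, dx: int, dy: int) -> torch.Tensor:
        """Return u with u[..., i, j] = t[..., i + dx, j + dy] (dx, dy >= 0), zero-filled at the border."""
        u = torch.zeros_like(t)
        h, w = t.shape[-2:]
        u[..., : h - dx, : w - dy] = t[..., dx:, dy:]
        return u

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.f1(x) - self._shift(self.f2(x), self.s, 0)  # Eq. (2): offset s on the first index
        v = self.f3(x) - self._shift(self.f4(x), 0, self.s)  # Eq. (2): offset s on the second index
        # Eq. (3): concatenate the input with sigma(H, V); sigma = ReLU, one of the listed options.
        return torch.cat([x, torch.relu(torch.cat([h, v], dim=1))], dim=1)

dwa = DWA(c_in=12, c_f=32)             # e.g., the 12 DWT sub-band channels of an RGB image
out = dwa(torch.rand(1, 12, 48, 48))   # -> (1, 12 + 64, 48, 48), fed to the final conv of Eq. (4)
```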
The concatenated feature map is fed into an additional convolution layer \(f_{final}:\mathbb{R}^{k\cdot k\times\left(c_{in}+2\cdot c_{f}\right)} \rightarrow\mathbb{R}^{c_{final}}\) with parameters \(\theta_{final}\), which maps the channel size after concatenation to a desired target channel size \(c_{final}\) such that our module can easily be incorporated into existing models: \[\text{DWA}\left(\mathbf{x}\right)_{i,j}=f_{final}\left(\psi\left(g\left( \mathbf{x}\right),\left(i,j\right)\right);\theta_{final}\right) \tag{4}\] Figure 1: Visualization of DWA. It takes the difference of two convolutional filters with a stride difference of at least 1, vertically and horizontally. Next, it concatenates the input with the horizontal and vertical feature maps. In the end, it applies a final convolution. An SR model utilizing this DWA module exploits the comprehensive feature map to learn the complex relationships between LR and HR images, ultimately reconstructing the HR image with reduced noise. By employing the DWA, we aim to harness the benefits of wavelet domain processing and the difference between two convolutional filters. We demonstrate the effectiveness of our approach through extensive experiments and evaluations in the following sections. ### Direct Application of DWA (DWA Direct) One way to circumvent additional computation steps is to apply DWA directly to the image space, omitting DWT and learning the transition between image and frequency space implicitly via DWA. Thus, the interpolation of the input, which effectively adds no additional information since it generates only approximated values, can be reduced by half for networks like DWSR or MWCNN. Consequently, the network is better adapted to the given values of the LR input. In the experiments, we evaluate this alternative approach called DWA Direct and show that it further enhances the performance of DWSR and MWCNN. ## 4 Experiments We evaluate our proposed DWA module by integrating it into the wavelet-based SR models DWSR and MWCNN. We begin this section by describing the experiments. Next, we discuss the results quantitatively and qualitatively. We show the effectiveness of DWA and that a direct application of wavelet-based SR models with DWA to image space is feasible without forfeiting reconstruction quality. ### Experimental Setup We applied widely-used SR datasets to evaluate our method. In addition, we utilized standard augmentation techniques such as rotation, horizontal and vertical flipping. For testing, we employed the datasets Set5 [4], Set14 [32], and BSDS100 [20]. For training, we used different settings for DWSR and MWCNN to match the original works for a fair comparison, as detailed in the following. In all experiments, we train using the Adam optimizer [12] with a learning rate of \(10^{-4}\) and \(L2\) regularization of \(10^{-8}\) on a single A100 GPU. Moreover, we use a learning rate decay schedule, which reduces the learning rate by \(20\%\) every \(20\) epochs. **Ablation Study:** We use DIV2K [3] and follow the standard procedure by extracting sub-images of \(192\times 192\) for training. We iterate for \(40\) epochs over the training dataset. Since we compare with DWSR, we use the \(L1\)-loss as the learning objective, as reported by the authors of DWSR. **DWSR-Scenario:** We use DIV2K [3] like in the ablation study, but we train for \(100\) epochs as reported in DWSR.
**MWCNN-Scenario:** We collect \(800\) images from DIV2K [3], \(200\) images from BSD [20] and \(4,744\) images from WED [19] and train for \(100\) epochs. Contrary to DWSR, we adapt the \(L2\)-loss like the authors of MWCNN. For sub-image extraction, we use a size of \(240\times 240\) to match the training settings of MWCNN. ## 5 Results This section presents the quantitative and qualitative analysis of this work. It shows that incorporating the DWA module into DWSR improves the performance in every dataset and for all scaling factors. Moreover, we consistently improve the SSIM scores by implementing DWA into MWCNN and achieve similar PSNR results. This section starts with an ablation study to investigate different striding settings and the effect of combining DWA with DWSR for the direct application and the regular DWT case (see Section 3.1). Next, we examine the performance scores of our DWA module on classical SR datasets with DWSR and MWCNN. Finally, we visually compare the quality of the reconstructions. #### 5.0.1 Ablation Study Table 1 shows the impact of different striding settings for DWSR with DWA for 2x and 4x scaling. We observe an improvement for striding settings greater than 0, significantly for PSNR and slightly for SSIM. The differences between striding settings greater than 0 are minimal, with a slight decrease for larger striding sizes. Nonetheless, they consistently outperform DWA with no stride difference. Thus, having a stride difference to capture local variations more effectively benefits the overall performance of DWSR. We further investigate the impact of various model configurations, DWSR with or without the DWA module, in a direct application or without (see Section 3.1). Figure 2 presents the results, where two graphs display the PSNR and SSIM values [21], respectively, for each method. We apply the ablation study with different model depths, ranging from 6 to 18, instead of using a standard depth of 10 for DWSR. As a result, DWSR with DWA or DWA Direct consistently outperforms the DWSR baseline across all model depths. This demonstrates the effectiveness of incorporating the DWA module as the first layer in the DWSR framework. Moreover, DWA Direct outperforms DWA applied to the DWT of the input at greater model depths. Furthermore, we observe a considerable performance drop in DWSR Direct without using the DWA module compared to all other evaluated methods. This indicates that the DWA module is crucial in enabling the Direct approach, as its absence significantly degrades performance. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Scale & Metric & no stride & s=1 & s=2 & s=3 \\ \hline \multirow{2}{*}{2x} & **PSNR**\(\uparrow\) & 31.8314 & **31.8660** & 31.8598 & 31.8588 \\ & **SSIM**\(\uparrow\) & 0.9058 & **0.9061** & 0.9060 & 0.9059 \\ \hline \multirow{2}{*}{4x} & **PSNR**\(\uparrow\) & 27.2870 & **27.3048** & 27.2927 & 27.2872 \\ & **SSIM**\(\uparrow\) & 0.7457 & **0.7471** & 0.7464 & 0.7466 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of different striding settings on BSDS100 (2x and 4x scaling). #### 5.0.2 Performance Table 2 summarizes PSNR and SSIM scores when applying the DWA module to DWSR and MWCNN for classical SR datasets on different scaling factors for a longer training span. We observe that incorporating the DWA module into DWSR improves the performance in every dataset and for all scaling factors.
For MWCNTs with DWA, a similar observation can be made, especially for the SSIM scores, which show overall the best performances. However, it has slightly decreased PSNR values for some cases, e.g., for scaling factor 3. Both applications, DWSR with DWA and MWCNTs with DWA, are applied directly on the input image space, omitting a DWT of the input. \begin{table} \begin{tabular}{l|c|c|c|c|c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Scale} & \multicolumn{2}{c|}{DWSR} & MWCNN & DWA Direct & DWA Direct \\ & & [9] & [17] & [DWSR] & [MWCNN] \\ & & PSNR/SSIM & PSNR/SSIM & PSNR/SSIM & PSNR/SSIM \\ \hline \multirow{2}{*}{Set5 [4]} & 2x & 37.43 / 0.9568 & 37.91 / 0.9600 & 37.79 / 0.9645 & **37.99** / **0.9652** \\ & 3x & 33.82 / 0.9215 & **34.18** / 0.9272 & 33.85 / 0.9310 & 34.09 / **0.9329** \\ & 4x & 31.39 / 0.8833 & 32.12 / 0.8941 & 31.76 / 0.8898 & **32.16** / **0.9054** \\ \hline \multirow{2}{*}{Set14 [32]} & 2x & 33.07 / 0.9106 & **33.70** / 0.9182 & 33.38 / 0.9237 & **33.70** / **0.9265** \\ & 3x & 29.83 / 0.8308 & **30.16** / 0.8414 & 29.90 / 0.8504 & 30.12 / **0.8545** \\ & 4x & 28.04 / 0.7669 & 28.41 / 0.7816 & 28.31 / 0.7928 & **28.70** / **0.8012** \\ \hline \multirow{2}{*}{BSDS100 [20]} & 2x & 31.80 / 0.8940 & **32.23** / 0.8999 & 32.01 / 0.9080 & 32.21 / **0.9102** \\ & 3x & n.a. & **29.12** / 0.8060 & 28.79 / 0.8174 & 28.93 / **0.8211** \\ & 4x & 27.25 / 0.7240 & 27.62 / 0.7355 & 27.38 / 0.7503 & **27.63** / **0.7573** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of DWSR, MWCNTs, and DWA Direct with DWSR and MWCNTs architecture on Set5, Set14, and BSDS100. Note that PSNR [dB] is a logarithmic scale, and SSIM reflects correlations (with values ranging from -1 to 1) [21]. Figure 2: Results of ablation study on BSDS100 with scaling factor 2x. We tested different configurations: Baseline, Direct (application on the image space), DWA, and DWA Direct (application on the image space). #### 4.2.3 Visual Comparison Figure 3 displays the ground truth HR image alongside the DWSR and DWA reconstructions. DWSR and DWA perform reasonably well in reconstructing the images. However, the DWA reconstructions exhibit more accurate and sharp details, particularly in the zoomed-in regions. Since the added bicubic interpolation of the LR image in the reconstruction process provides a robust base prediction, we also present the residual images, which are the differences between the bicubic interpolations and the ground truth images, to highlight the performance difference between both approaches. Figure 3: Comparison of an HR ground truth image (BSDS100, 2x scaling), DWSR, and DWA. First row: the entire image space of the HR image and the corresponding reconstructions obtained by the SR models. Second row: zoomed-in regions within the images from the first row. Third row: residual image representing the difference between the LR and HR images. As a result, the DWA model captures edges and details closer to the ground truth residuals, as opposed to the DWSR model (also regarding color). These residual images are the learning targets of the models to improve the reconstruction quality beyond interpolation. By comparing the residual images, we can see more clearly that the DWA model captures better distinct edges and finer details, which are also closer to the ground truth residuals, as opposed to the DWSR model. 
It has more substantial edges and finer points in the residual images, which are also closer in color (see red colored lines of DWSR reconstruction in Figure 3 as a comparison). This observation aligns with our quantitative results, where DWA outperforms DWSR regarding various performance metrics. To provide deeper insights into our proposed models, Figure 4 presents feature maps generated by the DWSR and DWA Direct models after the first layer. To ensure diversity, we selected the top five channels from each method based on the highest sum of distances between pairwise differences of all channels. Our analysis reveals that although DWSR operates on the frequency space, it still remains similar to the LR input and fails to capture the desired target residual. In contrast, DWA Direct extracts local contrasts and variations more effectively from the image space and performs better in mapping the target residual. ## 6 Conclusion and Future Work In this work, we presented a novel Differential Wavelet Amplifier (DWA) module, which can be used as a drop-in module to existing wavelet-based SR models. We showed experimentally on Set5, Set14, and BSDS100 for scaling factors 2, 3, and 4 that it improves the reconstruction quality of the SR models DWSR and MWCNN while enabling an application of them to the input image space directly without harm to performance. This module captures more distinct edges and finer details, which are closer to the ground truth residuals, which wavelet-based SR models usually learn. This work is an opportunity to seek further advancements for SR based on frequency-based representations. For future work, an exciting research avenue would be to explore ways to incorporate DWA on different DWT levels in MWCNN instead of only applying it initially. Figure 4: Feature maps of DWSR and DWA Direct. DWA Direct extracts local contrasts and variations more effectively, closer than DWSR to the residual target. ## Acknowledgments This work was supported by the BMBF projects SustainML (Grant 101070408) and by Carl Zeiss Foundation through the Sustainable Embedded AI project (P2021-02-009).
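As a closing illustration of the drop-in usage discussed above, the sketch below wires a DWA-style head into a small DWSR-like residual stack; the depth, channel widths, and the DWA class referenced in the comment are illustrative assumptions, not the exact configurations used in the experiments.

```python
import torch
import torch.nn as nn

class DWSRStyleNet(nn.Module):
    """A small DWSR-style residual CNN.  `head` is any drop-in feature extractor
    (e.g., a DWA layer); the body is a plain stack of 3x3 convolutions predicting the
    residual that is added back onto the (interpolated or sub-band) input."""
    def __init__(self, head: nn.Module, head_out: int, channels: int = 64,
                 depth: int = 10, io_channels: int = 12):
        super().__init__()
        self.head = head
        body = [nn.Conv2d(head_out, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            body += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        body += [nn.Conv2d(channels, io_channels, 3, padding=1)]
        self.body = nn.Sequential(*body)

    def forward(self, x):
        # Residual learning: output = input + predicted sparse details.
        return x + self.body(self.head(x))

# Baseline with an identity head; a DWA head (hypothetical DWA class from the sketch above)
# would be plugged in the same way, e.g. DWSRStyleNet(head=DWA(c_in=12, c_f=32), head_out=76).
baseline = DWSRStyleNet(head=nn.Identity(), head_out=12)
y = baseline(torch.rand(1, 12, 48, 48))
```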
2310.09478
MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning
Large language models have shown their remarkable capabilities as a general interface for various language-related applications. Motivated by this, we target to build a unified interface for completing many vision-language tasks including image description, visual question answering, and visual grounding, among others. The challenge is to use a single model for performing diverse vision-language tasks effectively with simple multi-modal instructions. Towards this objective, we introduce MiniGPT-v2, a model that can be treated as a unified interface for better handling various vision-language tasks. We propose using unique identifiers for different tasks when training the model. These identifiers enable our model to better distinguish each task instruction effortlessly and also improve the model learning efficiency for each task. After the three-stage training, the experimental results show that MiniGPT-v2 achieves strong performance on many visual question-answering and visual grounding benchmarks compared to other vision-language generalist models. Our model and codes are available at https://minigpt-v2.github.io/
Jun Chen, Deyao Zhu, Xiaoqian Shen, Xiang Li, Zechun Liu, Pengchuan Zhang, Raghuraman Krishnamoorthi, Vikas Chandra, Yunyang Xiong, Mohamed Elhoseiny
2023-10-14T03:22:07Z
http://arxiv.org/abs/2310.09478v3
# MiniGPT-v2: Large Language Model As a Unified Interface for Vision-Language Multi-task Learning ###### Abstract Large language models have shown their remarkable capabilities as a general interface for various language-related applications. Motivated by this, we target to build a unified interface for completing many vision-language tasks including image description, visual question answering, and visual grounding, among others. The challenge is to use a single model for performing diverse vision-language tasks effectively with simple multi-modal instructions. Towards this objective, we introduce MiniGPT-v2, a model that can be treated as a unified interface for better handling various vision-language tasks. We propose using unique identifiers for different tasks when training the model. These identifiers enable our model to better distinguish each task instruction effortlessly and also improve the model learning efficiency for each task. After the three-stage training, the experimental results show that MiniGPT-v2 achieves strong performance on many visual question-answering and visual grounding benchmarks compared to other vision-language generalist models. Our model and codes are available at [https://minigpt-v2.github.io/](https://minigpt-v2.github.io/). ## 1 Introduction Multi-modal Large Language Models (LLMs) have emerged as an exciting research topic with a rich set of applications in vision-language community, such as visual AI assistant, image captioning, visual question answering (VQA), and referring expression comprehension (REC). A key feature of multimodal large language models is that they can inherit advanced capabilities (e.g., logical reasoning, common sense, and strong language expression) from the LLMs [32; 49; 50; 8]. When tuned with proper vision-language instructions, multi-modal LLMs, specifically vision-language models, demonstrate strong capabilities such as producing detailed image descriptions, generating code, localizing the visual objects in the image, and even performing multi-modal reasoning to better answer complicated visual questions [59; 26; 55; 53; 7; 10; 58; 6; 60]. This evolution of LLMs enables interactions of visual and language inputs across communication with individuals and has been shown quite effective for building visual chatbots. However, learning to perform multiple vision-language tasks effectively and formulating their corresponding multi-modal instructions present considerable challenges due to the complexities inherent among different tasks. For instance, given a user input _"tell me the location of a person"_, there are many ways to interpret and respond based on the specific task. In the context of the referring expression comprehension task, it can be answered with one bounding box location of the person. For the visual question-answering task, the model might describe their spatial location using human natural language. For the person detection task, the model might identify every spatial location of each human in a given image. To alleviate this issue and towards a unified approach, we propose a task-oriented instruction training scheme to reduce the multi-modal instructional ambiguity, and a vision-language model, MiniGPT-v2. Specifically, we provide a unique task identifier token for each task. For example, we provide a _[vqa]_ identifier token for training all the data samples from the visual question answering tasks. In total, we provide six different task identifiers during the model training stages. 
Our model, MiniGPT-v2, has a simple architecture design. It directly takes the visual tokens from a ViT vision encoder [12] and project them into the feature space of a large language model [50]. For better visual perception, we utilize higher-resolution images (448x448) during training. But this will result in a larger number of visual tokens. To make the model training more efficient, we concatenate every four neighboring visual tokens into a single token, reducing the total number by 75%. Additionally, we utilize a three-stage training strategy to effectively train our model with a mixture of weakly-labeled, fine-grained image-text datasets, and multi-modal instructional datasets, with different training focus at each stage. To evaluate the performance of our model, we conducted extensive experiments on diverse vision-language tasks, including (detailed) image/grounded captioning, vision question answering, and visual grounding. The results demonstrate that our MiniGPT-v2 can achieve SOTA or comparable performance on diverse benchmarks compared to previous vision-language generalist models, such as MiniGPT-4 [59], InstructBLIP [10], LLaVA [26] and Shikra [7]. For example, our MiniGPT-v2 outperforms MiniGPT-4 by 21.3%, InstructBLIP by 11.3%, and LLaVA by 11.7% on the VSR benchmark [25], and it also performs better than the previously established strong baseline, Shikra, in most validations on RefCOCO, RefCOCO+, and RefCOCOg. Our model establishes new state-of-the-art results on these benchmarks among vision-language generalist models, shown in Fig. 1. ## 2 Related Work We briefly review relevant works on advanced large language models and multi-modal LLMs for visual aligning. **Advanced Large Language Models (LLMs).** Early-stage models such as GPT-2 [38] and BERT [11] are foundation models trained on web-scale text datasets, marking a breakthrough in the NLP field. Following the success of foundation models, LLMs with higher capacity and increased training data are developed, including GPT-3 [4], Megatron-turing NLG [46], PaLM [9], Gopher [39], Figure 1: Our MiniGPT-v2 achieves state-of-the-art performances on a broad range of vision-language tasks compared with other generalist models. Chinchilla [16], OPT [57], and BLOOM [41]. Most recently, the efforts have been focused on refining LLMs to work effectively with human instruction and feedback. Representative works in this direction are InstructGPT [34] and ChatGPT [32], which demonstrate strong capabilities such as answering a diverse range of language questions, engaging in conversations with humans, and learning to perform complex tasks like writing refinement and coding assistant. Concurrent with these advancements of LLMs is the rise of LLaMA [49] language models. To enable human instruction following abilities similar to ChatGPT, some works attempt to finetune the LLaMA model with additional high-quality instruction datasets [1]. Examples of these models include Alpaca [47], Vicuna [8], and MPT [48]. Some other open-sourced language models that learned from the human feedback data, such as Falcon [35] and LLaMA-2 [50], have also been introduced to the NLP community with impressive performance. **Visual Aligning with LLMs.** With the remarkable generalization abilities of LLMs, interesting studies have extended LLMs to multi-modal domains by aligning visual inputs with LLMs. 
Early works such as VisualGPT [5] and Frozen [51] used pre-trained language models to improve vision-language models on image captioning and visual question answering. This initial exploration paved the way for subsequent vision-language research such as Flamingo [2] and BLIP-2 [22]. More recently, GPT-4 has been released and demonstrates many advanced multi-modal abilities, e.g., generating website code based on handwritten text instructions. Those demonstrated capabilities inspired other vision-language LLMs, including MiniGPT-4 [59] and LLaVA [26], which align the image inputs with a large language model, Vicuna [8], using proper instructional tuning. These vision-language models also showcase many advanced multi-modal capabilities after the alignment. Recent works, such as Vision-LLM [53], Kosmos-2 [36], Shikra [7], and our concurrent work, Qwen-VL [3], also demonstrate that multi-model LLMs models can also perform visual grounding by generating the text format of bounding boxes through language model. ## 3 Method We start by introducing our vision-language model, MiniGPT-v2, then discuss the basic idea of a multi-task instruction template with task identifiers for training, and finally adapt our task identifier idea to achieve task-oriented instruction tuning. ### Model Architecture Our proposed model architecture, MiniGPT-v2, is shown in Fig. 2. It consists of three components: a visual backbone, a linear projection layer, and a large language model. We describe each component as follows: **Visual backbone.** MiniGPT-v2 adapts the EVA [12] as our visual backbone model backbone. We freeze the visual backbone during the entire model training. We train our model with the image resolution 448x448, and we interpolate the positional encoding to scale with a higher image resolution. **Linear projection layer.** We aim to project all the visual tokens from the frozen vision backbone into the language model space. However, for higher-resolution images such as 448x448, projecting all the image tokens results in a very long-sequence input (e.g., 1024 tokens) and significantly lowers the training and inference efficiency. Hence, we simply concatenate 4 adjacent visual tokens in the embedding space and project them together into one single embedding in the same feature space of the large language model, thus reducing the number of visual input tokens by 4 times. With this operation, our MiniGPT-v2 can process high-resolution images much more efficiently during the training and inference stage. Figure 2: **Architecture of MiniGPT-v2.** The model takes a ViT visual backbone, which remains frozen during all training phases. We concatenate four adjacent visual output tokens from ViT backbone and project them into LLaMA-2 language model space via a linear projection layer. **Large language model.** MiniGPT-v2 adopts the open-sourced LLaMA2-chat (7B) [50] as the language model backbone. In our work, the language model is treated as a unified interface for various vision-language inputs. We directly rely on the LLaMA-2 language tokens to perform various vision-language tasks. For the visual grounding tasks that necessitate the generation of spatial locations, we directly ask the language model to produce textual representations of bounding boxes to denote their spatial positions. 
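The token reduction described above can be illustrated with a short PyTorch sketch; the ViT and LLaMA embedding widths used below are placeholders, and grouping "adjacent" tokens as consecutive positions in the flattened sequence is our assumption.

```python
import torch
import torch.nn as nn

class VisualProjector(nn.Module):
    """Concatenate every 4 neighbouring ViT output tokens and project them into the
    language-model embedding space with a single linear layer (cf. Fig. 2)."""
    def __init__(self, vit_dim: int = 1408, llm_dim: int = 4096, group: int = 4):
        super().__init__()
        self.group = group
        self.proj = nn.Linear(vit_dim * group, llm_dim)

    def forward(self, vis_tokens: torch.Tensor) -> torch.Tensor:
        # vis_tokens: (batch, n_tokens, vit_dim); n_tokens must be divisible by `group`.
        b, n, d = vis_tokens.shape
        grouped = vis_tokens.reshape(b, n // self.group, self.group * d)
        return self.proj(grouped)            # (batch, n_tokens / 4, llm_dim)

# A 448x448 image with 14x14 patches gives 32*32 = 1024 tokens; after grouping, only 256
# visual embeddings reach the language model, a 75% reduction in sequence length.
tokens = torch.rand(2, 1024, 1408)
print(VisualProjector()(tokens).shape)       # torch.Size([2, 256, 4096])
```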
### Multi-task Instruction Template When training a single unified model for multiple different tasks such as visual question answering, image caption, referring expression, grounded image caption, and region identification, the multi-modal model might fail to distinguish each task by just aligning visual tokens to language models. For instance, when you ask "Tell me the spatial location of the person wearing a red jacket?", the model can either respond you the location in a bounding box format (e.g., \(<\) X\({}_{left}><\) Y\({}_{top}><\) X\({}_{right}><\) Y\({}_{bottom}>\)) or describe the object location using natural language (e.g., upper right corner). To reduce such ambiguity and make each task easily distinguishable, we introduce task-specific tokens in our designed multi-task instruction template for training. We now describe our multi-task instruction template in more details. **General input format.** We follow the LLaMA-2 conversation template design and adapt it for the multi-modal instructional template. The template is denoted as follows, _[INST] \(<\)Img\(>\) \(<\) ImageFeature\(>\) \(<\)/Img\(>\) [Task Identifier] Instruction [/INST]_ In this template, _[INST]_ is considered as the user role, and _[/INST]_ is considered as the assistant role. We structure the user input into three parts. The first part is the image features, the second part is the task identifier token, and the third part is the instruction input. **Task identifier tokens.** Our model takes a distinct identifier for each task to reduce the ambiguity across various tasks. As illustrated in Table 1, we have proposed six different task identifiers for visual question answering, image caption, grounded image captioning, referring expression comprehension, referring expression generation, and phrase parsing and grounding respectively. For vision-irrelevant instructions, our model does not use any task identifier token. **Spatial location representation.** For tasks such as referring expression comprehension (REC), referring expression generation (REG), and grounded image captioning, our model is required to identify the spatial location of the referred objects accurately. We represent the spatial location through the textual formatting of bounding boxes in our setting, specifically: "\(\{<X_{left}><\) Y\({}_{top}><\) X\({}_{right}><\) Y\({}_{bottom}>\}\)". Coordinates for X and Y are represented by integer values normalized in the range [0,100]. \(<\) X\({}_{left}>\) and \(<\) Y\({}_{top}>\) denote the x and y coordinate top-left corner of the generated bounding box, and \(<\) X\({}_{right}>\) and \(<\) Y\({}_{bottom}>\) denote the x and y coordinates of the bottom-right corner. ### Multi-task Instruction Training We now adapt our designed multi-task instruction template for instruction training. The basic idea is to take instruction with task-specific identifier token as input for task-oriented instruction training of MiniGPT-v2. When input instructions have task identifier tokens, our model will become more prone to multiple-task understanding during training. We train our model with task identifier instructions for better visual alignment in three stages. The first stage is to help MiniGPT-v2 build broad vision-language knowledge through many weakly-labeled image-text datasets, and high-quality fine-grained vision-language annotation datasets as well (where we will assign a high data sampling ratio for weakly-labeled image-text datasets). 
The second stage is to improve the model with only fine-grained data for multiple tasks. \begin{table} \begin{tabular}{l c c c c c c} Tasks & VQA & Caption & Grounded Caption & REC & REG & Object Parsing and Grounding \\ \hline Identifiers & [vqa] & [caption] & [grounding] & [refer] & [identify] & [detection] \\ \end{tabular} \end{table} Table 1: Task identifier tokens for 6 different tasks, including visual question answering, image captioning, grounded image captioning, referring expression comprehension (REC), referring expression generation, and object parsing and grounding (where the model extracts objects from the input text and determines their bounding box locations). The third stage is to finetune our model with more multi-modal instruction and language datasets so that it answers diverse multi-modal instructions better and behaves as a multi-modal chatbot. The datasets used for training at each stage are listed in Table 2. **Stage 1: Pretraining.** To have broad vision-language knowledge, our model is trained on a mix of weakly-labeled and fine-grained datasets. We give a high sampling ratio to the weakly-labeled datasets to gain more diverse knowledge in the first stage. For the weakly-labeled datasets, we use LAION [42], CC3M [44], SBU [33], and GRIT-20M from Kosmos-2 [36], which provides data for referring expression comprehension (REC), referring expression generation (REG), and grounded image captioning. For fine-grained datasets, we use datasets like COCO caption [24] and Text Captions [45] for image captioning, and RefCOCO [20], RefCOCO+ [56], and RefCOCOg [29] for REC. For REG, we restructured the data from RefCOCO and its variants, reversing the order from phrase \(\rightarrow\) bounding boxes to bounding boxes \(\rightarrow\) phrase. For VQA datasets, our training takes a variety of datasets, such as GQA [19], VQA-v2 [14], OCR-VQA [31], OK-VQA [30], and AOK-VQA [43]. **Stage 2: Multi-task training.** To improve the performance of MiniGPT-v2 on each task, we only focus on using fine-grained datasets to train our model at this stage. We exclude the weakly-supervised datasets such as GRIT-20M and LAION from stage-1 and update the data sampling ratio according to the frequency of each task. This strategy enables our model to prioritize high-quality aligned image-text data for superior performance across various tasks. **Stage 3: Multi-modal instruction tuning.** Subsequently, we focus on tuning our model with more multi-modal instruction datasets and enhancing its conversation ability as a chatbot. We continue using the datasets from the second stage and add instructional datasets, including LLaVA [26], the Flickr30k dataset [37], our constructed mixing multi-task dataset, and the language dataset Unnatural Instructions [17]. We give a lower data sampling ratio to the fine-grained datasets from stage-2 and a higher data sampling ratio to the new instruction datasets. **- LLaVA instruction data.** We add the multi-modal instruction tuning datasets, including the detailed descriptions and complex reasoning from LLaVA [26], with 23k and 58k data examples respectively. **- Flickr30k.** After the second-stage training, our MiniGPT-v2 can effectively generate grounded image captions. Nevertheless, these descriptions tend to be short and often cover only a small number of visual objects.
This is because the GRIT-20M dataset from Kosmos-2 [36], which our model was trained with, features a limited number of grounded visual objects in each caption, and our model lacks proper multi-modal instruction tuning to teach it to recognize more visual objects. To improve this, we fine-tune our model using the Flickr30k dataset [37], which provides more contextual grounding of entities within its captions. We prepare the Flickr30k dataset in two distinct formats for training our model to perform grounded image captioning and a new task, "object parsing and grounding": 1) **Grounded image caption.** We select captions with a minimum of five grounded phrases, containing around 2.5k samples, and we directly instruct the model to produce the grounded image caption, e.g., _a <p>wooden table</p>{<X_left><Y_top><X_right><Y_bottom>} in the center of the room._ 2) **Object parsing and grounding.** This new task is to parse all the objects from an input caption and then ground each object. To enable this, we use the task identifier _[detection]_ to differentiate this capability from other tasks. Also, we use Flickr30k to construct two types of instruction data: caption \(\rightarrow\) grounded phrases and phrase \(\rightarrow\) grounded phrase, containing around 2.5k and 3k samples respectively. Then we prompt our model with the instruction: _[detection] description_, and the model will directly parse the objects from the input image description and also ground the objects into bounding boxes. \begin{table} \begin{tabular}{l l c c c} \hline \hline Data types & Dataset & Stage 1 & Stage 2 & Stage 3 \\ \hline Weakly-labeled & GRIT-20M (REC and REG), LAION, CC3M, SBU & ✓ & ✗ & ✗ \\ Grounded caption & GRIT-20M & ✓ & ✗ & ✗ \\ Caption & COCO caption, Text Captions & ✓ & ✓ & ✓ \\ REC & RefCOCO, RefCOCO+, RefCOCOg, Visual Genome & ✓ & ✓ & ✓ \\ REG & RefCOCO, RefCOCO+, RefCOCOg & ✓ & ✓ & ✓ \\ VQA & GQA, VQAv2, OCR-VQA, OK-VQA, AOK-VQA & ✓ & ✓ & ✓ \\ Multimodal instruction & LLaVA dataset, Flickr30k, Multi-task conversation & ✗ & ✗ & ✓ \\ Language dataset & Unnatural Instructions & ✗ & ✗ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 2: The training datasets used for our model's three-stage training. **- Mixing multi-task dataset.** After extensive training with single-round instruction-answer pairs, the model might not handle multiple tasks well during multi-round conversations since the context becomes more complex. To alleviate this situation, we create a new multi-round conversation dataset by mixing the data from different tasks. We include this dataset in our third-stage model training. **- Unnatural instruction.** The conversation abilities of the language model can degrade after extensive vision-language training. To fix this, we add the language dataset Unnatural Instructions [17] to our model's third-stage training to help recover its language generation ability. ## 4 Experiments In this section, we present experimental settings and results. We primarily conduct experiments on (detailed) image/grounded captioning, visual question answering, and visual grounding tasks, including referring expression comprehension. We present both quantitative and qualitative results. **Implementation details.** Throughout the entire training process, the visual backbone of MiniGPT-v2 remains frozen. We focus on training the linear projection layer and efficiently finetuning the language model using LoRA [18].
With LoRA, we finetune \(\mathcal{W}_{q}\) and \(\mathcal{W}_{v}\) via low-rank adaptation. In our implementation, we set the rank \(r=64\). We trained the model with an image resolution of 448x448 during all stages. During each stage, we use our designed multi-modal instructional templates for the various vision-language tasks. **Training and hyperparameters.** We use the AdamW optimizer with a cosine learning rate scheduler to train our model. In the initial stage, we train on 8xA100 GPUs for 400,000 steps with a global batch size of 96 and a maximum learning rate of 1e-4. This stage takes around 90 hours. During the second stage, the model is trained for 50,000 steps on 4xA100 GPUs with a maximum learning rate of 1e-5, adopting a global batch size of 64, and this training stage lasts roughly 20 hours. \begin{table} \begin{tabular}{l c|c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Model types} & \multicolumn{3}{c}{RefCOCO} & \multicolumn{3}{c}{RefCOCO+} & \multicolumn{3}{c}{RefCOCOg} & \multirow{2}{*}{Avg} \\ & & val & test-A & test-B & val & test-A & test-B & val & test \\ \hline UNINEXT & \multirow{2}{*}{Specialist models} & 92.64 & 94.33 & 91.46 & 85.24 & 89.63 & 79.79 & 88.73 & 89.37 & 88.90 \\ G-DINO-L & & 90.56 & 93.19 & 88.24 & 82.75 & 88.95 & 75.92 & 86.13 & 87.02 & 86.60 \\ \hline VisionLLM-H & - & 86.70 & - & - & - & - & - & - & - \\ OFA-L & & 79.96 & 83.67 & 76.39 & 68.29 & 76.00 & 61.75 & 67.57 & 67.58 & 72.65 \\ Shikra (7B) & \multirow{2}{*}{Generalist models} & 87.01 & 90.61 & 80.24 & 81.60 & 87.36 & 72.12 & 82.27 & 82.19 & 82.93 \\ Shikra (13B) & \multirow{2}{*}{Generalist models} & 87.83 & 91.11 & 81.81 & **82.89** & **87.79** & 74.41 & 82.64 & 83.16 & 83.96 \\ Ours (7B) & & **88.69** & **91.65** & **85.33** & 79.97 & 85.12 & **74.45** & **84.44** & **84.66** & **84.29** \\ Ours (7B)-chat & & 88.06 & 91.29 & 84.30 & 79.58 & 85.52 & 73.32 & 84.19 & 84.31 & 83.70 \\ \hline \hline \end{tabular} \end{table} Table 4: **Results on referring expression comprehension tasks. Our MiniGPT-v2 outperforms many VL-generalist models including VisionLLM [53], OFA [52] and Shikra [7] and reduces the accuracy gap compared to specialist models including UNINEXT [54] and G-DINO [27].** \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Method & Grounding & OKVQA & GQA & \begin{tabular}{c} VSR \\ (zero-shot) \\ \end{tabular} & \begin{tabular}{c} IconVQA \\ (zero-shot) \\ \end{tabular} & \begin{tabular}{c} VizWiz \\ (zero-shot) \\ \end{tabular} & \begin{tabular}{c} HM \\ (zero-shot) \\ \end{tabular} \\ \hline Flamingo-9B & ✗ & 44.7 & - & 31.8 & - & 28.8 & 57.0 \\ BLIP-2 (13B) & ✗ & 45.9 & 41.0 & 50.9 & 40.6 & 19.6 & 53.7 \\ InstructBLIP (13B) & ✗ & - & 49.5 & 52.1 & 44.8 & 33.4 & 57.5 \\ MiniGPT-4 (13B) & ✗ & 37.5 & 30.8 & 41.6 & 37.6 & - & - \\ LLaVA (13B) & ✗ & 54.4 & 41.3 & 51.2 & 43.0 & - & - \\ Shikra (13B) & ✓ & 47.2 & - & - & - & - & - \\ Ours (7B) & ✓ & 56.9 & **60.3** & 60.6 & 47.7 & 32.9 & 58.2 \\ Ours (7B)-chat & ✓ & **57.8** & 60.1 & **62.9** & **51.5** & **53.6** & **58.8** \\ \hline \hline \end{tabular} \end{table} Table 3: **Results on multiple VQA tasks. We report top-1 accuracy for each task. Grounding column indicates whether the model incorporates visual localization capability. The best performance for each benchmark is indicated in bold.**
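For reference, the LoRA configuration described in the implementation details above (rank \(r=64\) applied to the query and value projection weights \(\mathcal{W}_{q}\) and \(\mathcal{W}_{v}\)) could be set up roughly as follows with the Hugging Face peft library. The target-module names, alpha, and dropout values are our assumptions; the paper only specifies the rank and the targeted weights.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the LLaMA-2-chat (7B) backbone; the checkpoint name is illustrative.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

lora_cfg = LoraConfig(
    r=64,                                  # rank reported in the paper
    lora_alpha=16,                         # assumed value
    lora_dropout=0.05,                     # assumed value
    target_modules=["q_proj", "v_proj"],   # W_q and W_v of the attention blocks
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the low-rank adapters remain trainable
```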
For the last stage, training is executed for another 35,000 steps on 4xA100 GPUs, using a global batch size of 24 and maintaining the same maximum learning rate of 1e-5; this training stage took around 7 hours. ### Quantitative Evaluation **Dataset and evaluation metrics.** We evaluate our model across a range of VQA and visual grounding benchmarks. For VQA benchmarks, we consider OKVQA [43], GQA [19], visual spatial reasoning (VSR) [25], IconVQA [28], VizWiz [15], and HatefulMemes (HM) [21]. For visual grounding, we evaluate our model on the RefCOCO [20], RefCOCO+ [56], and RefCOCOg [29] benchmarks. To evaluate VQA benchmarks, we use an open-ended approach with a greedy decoding strategy. We evaluate each VQA question with the following instruction template: _"[vqa] question"_. Following the previous method [10], we evaluate the performance by matching the model's response to the ground truth and reporting top-1 accuracy. For visual grounding benchmarks, we use the template _"[refer] give me the location of Referring expression"_ for each referring expression comprehension question, and a predicted bounding box is considered correct if the IoU between the prediction and the ground truth is higher than 0.5. **Visual question answering results.** Table 3 presents our experimental results on multiple VQA benchmarks. Our results compare favorably to baselines including MiniGPT-4 [59], Shikra [7], LLaVA [26], and InstructBLIP [10] across all the VQA tasks. For example, on OKVQA, our MiniGPT-v2 outperforms MiniGPT-4, Shikra, LLaVA, and BLIP-2 by 20.3%, 10.6%, 3.4%, and 11.9%, respectively. These results indicate the strong visual question answering capabilities of our model. Furthermore, we find that our MiniGPT-v2 (chat) variant shows higher performance than the version trained after the second stage. On OKVQA, VSR, IconVQA, VizWiz, and HM, MiniGPT-v2 (chat) outperforms MiniGPT-v2 by 0.9%, 2.3%, 4.2%, 20.7%, and 0.6%. We believe that the better performance can be attributed to the improved language skills during the third-stage training, which benefit visual question comprehension and response, especially on VizWiz with its 20.7% top-1 accuracy increase. **Referring expression comprehension results.** Table 4 compares our model to baselines on REC benchmarks. Our MiniGPT-v2 shows strong REC performance on RefCOCO, RefCOCO+, and RefCOCOg, performing better than other vision-language generalist models. MiniGPT-v2 outperforms OFA-L [52] by over 8% accuracy across all tasks of RefCOCO/RefCOCO+/RefCOCOg. Compared with a strong baseline, Shikra (13B) [7], our model still shows better results, e.g., 84.29% vs. 83.96% accuracy on average. These results provide direct evidence of the competitive visual grounding capabilities of MiniGPT-v2. Although our model underperforms specialist models, the promising performance indicates its growing competence in visual grounding. \begin{table} \begin{tabular}{l c c c} \hline \hline Method & CHAIR\({}_{I}\)\(\downarrow\) & CHAIR\({}_{S}\)\(\downarrow\) & Len \\ \hline MiniGPT-4 & 9.2 & 31.5 & 116.2 \\ mPLUG-Owl & 30.2 & 76.8 & 98.5 \\ LLaVA & 18.8 & 62.7 & 90.7 \\ MultiModal-GPT & 18.2 & 36.2 & 45.7 \\ MiniGPT-v2 (long) & 8.7 & 25.3 & 56.5 \\ MiniGPT-v2 (grounded) & 7.6 & 12.5 & 18.9 \\ MiniGPT-v2 (short) & **4.4** & **7.1** & **10.3** \\ \hline \hline \end{tabular} \end{table} Table 6: **Results on hallucination. We evaluate the hallucination of MiniGPT-v2 with different instructional templates and output three versions of captions for evaluation. For the “long” version, we use the prompt _generate a brief description of the given image_. For the “grounded” version, the instruction is _[grounding] describe this image in as detailed as possible_. For the “short” version, the prompt is _[caption] briefly describe the image_.** \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & OKVQA & GQA & VizWiz & VSR & IconVQA & HM & Average \\ \hline Ours w/o task identifier & 50.5 & 53.4 & 28.6 & 57.5 & 44.8 & 56.8 & 48.6 \\ Ours & **52.1** & **54.6** & **29.4** & **59.9** & **45.6** & **57.4** & **49.8** \\ \hline \hline \end{tabular} \end{table} Table 5: Task identifier ablation study on VQA benchmarks. Using task identifiers during model training improves overall VQA performance across multiple VQA benchmarks. **Ablation on task identifier.** We conduct ablation studies on the effect of the task identifier on the performance of MiniGPT-v2. We compare our model with the variant without using task identifiers on VQA benchmarks. Both models were trained on 4xA100 GPUs for 24 hours with an equal number of training steps for multiple vision-language tasks. Results in Table 5 demonstrate the performance on multiple VQA benchmarks and consistently show that training with task identifiers benefits the overall performance of MiniGPT-v2. Specifically, our MiniGPT-v2 with task-oriented instruction training achieves a 1.2% top-1 accuracy improvement on average. These ablation results validate the clear advantage of adding task identifier tokens and support the use of task identifiers for multi-task learning efficiency. Figure 3: **Examples for various multi-modal capabilities of MiniGPT-v2.** We showcase that our model is capable of completing multiple tasks such as referring expression comprehension, referring expression generation, detailed grounded image captioning, visual question answering, detailed image description, and directly parsing phrases and grounding from a given input text. **Hallucination.** We measure the hallucination of our model on image description generation and compare the results with other vision-language baselines, including MiniGPT-4 [59], mPLUG-Owl [55], LLaVA [26], and MultiModal-GPT [13]. Following the methodology from [23], we use CHAIR [40] to assess hallucination at both object and sentence levels. As shown in Table 6, we find that our MiniGPT-v2 tends to generate image descriptions with reduced hallucination compared to other baselines. We have evaluated three types of prompts in MiniGPT-v2. First, we use the prompt _generate a brief description of the given image_ without any specific task identifier, which tends to produce more detailed image descriptions. Then we provide the instruction prompt _[grounding] describe this image in as detailed as possible_ for evaluating grounded image captions. Lastly, we prompt our model with _[caption] briefly describe the image_. With these task identifiers, MiniGPT-v2 is able to produce a variety of image descriptions with different levels of hallucination. As a result, all three instruction variants have lower hallucination than our baselines, especially with the task specifiers _[caption]_ and _[grounding]_. ### Qualitative Results We now provide qualitative results for a complementary understanding of our model's multi-modal capabilities. Some examples can be seen in Fig. 3.
Specifically, we demonstrate various abilities in the examples, including a) object identification; b) detailed grounded image captioning; c) visual question answering; d) referring expression comprehension; e) visual question answering under a task identifier; f) detailed image description; g) object parsing and grounding from an input text. More qualitative results can be found in the Appendix. These results demonstrate that our model has competitive vision-language understanding capabilities. Moreover, note that we train our model with only a few thousand instruction samples on the object parsing and grounding task at the third stage, and our model can effectively follow the instructions and generalize to the new task. This indicates that our model has the flexibility to adapt to many new tasks. Note that our model still occasionally shows hallucinations when generating image descriptions or visual groundings; e.g., our model may sometimes produce descriptions of non-existent visual objects or generate inaccurate locations for grounded objects. We believe that training with more high-quality aligned image-text data and integrating a stronger vision backbone or large language model hold the potential to alleviate this issue. ## 5 Conclusion In this paper, we introduce MiniGPT-v2, a multi-modal LLM that can serve as a unified interface for vision-language multi-task learning. To develop a single model capable of handling multiple vision-language tasks, we propose using distinct identifiers for each task during training and inference. These identifiers help our model easily differentiate the various tasks and also improve its learning efficiency. Our MiniGPT-v2 achieves state-of-the-art results across many visual question answering and referring expression comprehension benchmarks. We also found that our model can efficiently adapt to new vision-language tasks, which suggests that MiniGPT-v2 has many potential applications in the vision-language community.
2308.04316
Geodesic complexity of a cube
The topological (resp. geodesic) complexity of a topological (resp. metric) space is roughly the smallest number of continuous rules required to choose paths (resp. shortest paths) between any points of the space. We prove that the geodesic complexity of a cube exceeds its topological complexity by exactly 2. The proof involves a careful analysis of cut loci of the cube.
Donald M. Davis
2023-08-08T15:08:32Z
http://arxiv.org/abs/2308.04316v1
# Geodesic complexity of a cube ###### Abstract. The topological (resp. geodesic) complexity of a topological (resp. metric) space is roughly the smallest number of continuous rules required to choose paths (resp. shortest paths) between any points of the space. We prove that the geodesic complexity of a cube exceeds its topological complexity by exactly 2. The proof involves a careful analysis of cut loci of the cube. Key words and phrases:Geodesic complexity, topological robotics, geodesics, cut locus, cube 2000 _Mathematics Subject Classification_: 53C22, 52B10, 55M30 ## 1. Introduction In [5], Farber introduced the concept of the _topological complexity_, \(\operatorname{TC}(X)\), of a topological space \(X\), which is the minimal number \(k\) such that there is a partition \[X\times X=E_{1}\sqcup\cdots\sqcup E_{k}\] with each \(E_{i}\) being locally compact and admitting a continuous function \(\phi_{i}:E_{i}\to P(X)\) such that \(\phi_{i}(x_{0},x_{1})\) is a path from \(x_{0}\) to \(x_{1}\). Here \(P(X)\) is the space of paths in \(X\) with the compact-open topology, and each \(\phi_{i}\) is called a motion-planning rule. If \(X\) is the space of configurations of one or more robots, this models the number of continuous rules required to program the robots to move between any two configurations. In [7], Recio-Mitter suggested that if \(X\) is a metric space, then we require that the paths \(\phi_{i}(x_{0},x_{1})\) be minimal geodesics (shortest paths) from \(x_{0}\) to \(x_{1}\), and defined the _geodesic complexity_, \(\operatorname{GC}(X)\), to be the smallest number \(k\) such that there is a partition \[X\times X=E_{1}\sqcup\cdots\sqcup E_{k}\] with each \(E_{i}\) being locally compact and admitting a continuous function \(\phi_{i}:E_{i}\to P(X)\) such that \(\phi_{i}(x_{0},x_{1})\) is a minimal geodesic from \(x_{0}\) to \(x_{1}\).1 Each function \(\phi_{i}\) is called a _geodesic motion-planning rule_ (GMPR). Footnote 1: Recio-Mitter’s definition of GC\((X)=k\) involved partitions into sets \(E_{0},\ldots,E_{k}\), which, for technical reasons, has become the more common definition of concepts of this sort, but we prefer here to stick with Farber’s more intuitive formulation. One example discussed by Recio-Mitter in [7] was when \(X\) is (the surface of) a cube. It is well-known that here TC\((X)=\) TC\((S^{2})=3\), and he showed that GC\((X)\geq 4\). In this paper we prove that in this case GC\((X)=5\). **Theorem 1.1**.: _If \(X\) is a cube, then GC\((X)=5\)._ For comparison, in [3] the author proved that for a regular tetrahedron \(T\), GC\((T)=4\) or \(5\), but was not able to establish the precise value. Here again TC\((T)=\) TC\((S^{2})=3\). Our work relies heavily on the work of the author and Guo in [4], where they analyzed the isomorphism classes as labeled graphs of cut loci on the cube. In Section 2, we review the relevant parts of that work. In Section 3, we prove that GC\((X)\leq 5\) by constructing five explicit geodesic motion planning rules. In Section 4, we prove GC\((X)\geq 5\), using methods similar to those used in [7] and [3]. ## 2. Background on cut loci of a cube In this section we present background material, mostly from [4], regarding cut loci for a cube. The _cut locus_ of a point \(P\) on a polyhedron is the closure of the set of points \(Q\) such that there is more than one shortest path (minimal geodesic) from \(P\) to \(Q\). The cut locus is a labeled graph with corner points of the polyhedron labeling the leaves and perhaps other vertices. 
Two labeled graphs are isomorphic if there is a graph bijection between them preserving labels. We let \(\mathbf{L}\) denote the isomorphism class of a cut locus. Figure 2.1, from [4], shows the partition of a face of a cube into 193 connected subsets with constant \(\mathbf{L}\). Figure 2.2, also from [4], is a reparametrized version of the regions in the left quadrant of Figure 2.1. **Figure 2.1. Decomposition of a face into subsets on which L is constant** **Figure 2.2. Regions in left quadrant of Figure 2.1** In [4] we listed, in stylized form, the \(\mathbf{L}\) for the various regions, but here, as we are interested in continuity of motion-planning rules, we are concerned about other aspects, such as the placement of edges of the cut locus with respect to one another. The cut loci are found by the method of star unfolding and Voronoi diagrams, as developed in [1] and [6]. We will use the same numbering of the corner points of the cube as was used in [4] and appears in Figure 2.3, also taken from [4], which, for future reference, includes an example of the cut locus of the midpoint of edge 5-8. **Figure 2.3. A cube with labeled corner points, and the cut locus for the middle point of an edge highlighted** In [4], we explain how the diagram on the right side of Figure 2.4 is obtained, depicting in bold red the cut locus of the point \(P\) in the left side of Figure 2.4. The numbers at half of the vertices of the polygon correspond to the corner points in Figure 2.3, and the labels \(P_{1},\ldots,P_{8}\) at the other vertices are different positions of the point \(P\) in an unfolding of the cube. Every point of the cube occurs exactly once inside or on the 16-gon in Figure 2.4, except that some occur on two boundary segments, and \(P\) occurs eight times. For example, the region in the right side of the 16-gon in Figure 2.4 bounded above and below by the segments coming in from the vertices labeled 6 and 7, on the right by \(P_{5}\), and on the left by the short vertical segment \(I\) is all the points that are closer to the \(P_{5}\) version of point \(P\) than to the others. This is called the Voronoi cell of \(P_{5}\). The segment \(I\) is equally close to versions \(P_{1}\) and \(P_{5}\). There are two equal minimal geodesics from \(P\) to points on \(I\); one crosses the segment connecting corner points 1 and 4, while the other crosses the segment connecting 6 and 7. It is proved in [4] that the top and bottom halves of cut loci of the cube can be considered separately. Although all the regions in Figure 2.2 have distinct \(\mathbf{L}\), some have isomorphic top halves. For example, as can be seen in [4, Figure 2.2], regions \(F\), \(E\), \(I\), \(C\), and \(H\) all have isomorphic top halves. We combine these here into a single region, which we will also call \(F\). Similarly regions \(D\), \(B\), and \(I^{\prime}\) in Figure 2.2 have the same top half of \(\mathbf{L}\) and are combined into a single region, \(D\). Also \(D^{\prime}\) and \(E^{\prime}\) combine to form \(D^{\prime}\), \(F^{\prime}\) and \(G^{\prime}\) combine to \(F^{\prime}\), and \(A\), \(G\), and \(H^{\prime}\) combine into \(A\). This simplifies Figure 2.2 into our schematic Figure 2.5, which only concerns top halves of \(\mathbf{L}\). We will discuss bottom halves later in this section. Figure 2.4. Voronoi cells and cut locus of \(P\) **Figure 2.5. 
Regions with same top half of cut locus** There are also curves \(DF\), \(FA\), \(DD^{\prime}\), \(D^{\prime}F^{\prime}\), and \(F^{\prime}A\) bounding these combined regions. There is also \(*\), the intersection point, and the left edge \(\mathcal{E}\). In Figure 2.6, we depict the top half of the cut loci for these regions, with arrows indicating convergence of points in a region to points in its boundary, in each of which an edge of the graph is collapsed. The bottom half of the cut locus of a point in a region \(R\) in Figure 2.2 is obtained from the top half of the cut locus of the vertical reflection of the point, which is in reflected region \(R^{\prime}\), by inverting it and applying the permutation \((1\ 4)(2\ 3)(5\ 8)(6\ 7)\) to the labels. The collecting of several regions of Figure 2.2 into a single region with the same bottom half of \(\mathbf{L}\) is essentially a vertical flip of what was done in forming Figure 2.5 for top halves. For example, the vertical reflection of the region \(D^{\prime}\) of Figure 2.5 contains regions \(D\) and \(E\) of Figure 2.2, and its cut locus bottom is as in Figure 2.7. **Figure 2.7. Cut locus bottom of flip of region \(D^{\prime}\) of Figure 2.5** Each region in the top quadrant of Figure 2.1 is obtained from the corresponding region in the left quadrant by a clockwise rotation of \(\pi/2\) around the center of the square. The cut locus of the new region is obtained from that of the old one by applying the permutation (1 4 3 2)(5 8 7 6) to the labels and then rotating the resulting figure \(\pi/2\) counter-clockwise. In Figure 2.8 we show the cut locus of points in region \(A\), in the rotated region \(A_{R}\), and in the half-diagonal separating them. **Figure 2.8. Cut locus of rotation of region** In [4], we were only concerned about isomorphism type as a graph, but here we care about the relative positions of the labeled arms. ## 3. Geodesic motion planning rules In this section, we construct five geodesic motion-planning rules for the cube. The remainder of this section is devoted to the proof of the following result. **Theorem 3.1**.: _If \(X\) is the cube, then \(X\times X\) can be partitioned into five locally-compact subsets \(E_{i}\) with a GMPR \(\phi_{i}\) on each._ We define \(E_{1}\) to be the set of pairs \((P,Q)\) such that there is a unique minimal geodesic from \(P\) to \(Q\), and let \(\phi_{1}(P,Q)\) be that path. It is well-known ([2, Chapter 1, 3.12 Lemma]) that such a function is continuous. Note that a corner point \(V\) at a leaf of the cut locus graph of a point \(P\) is not in the cut locus, so these \((P,V)\) are in \(E_{1}\). We define the _multiplicity_ of \((P,Q)\) (or of just \(Q\) if \(P\) is implicit) to be the number of distinct minimal geodesics from \(P\) to \(Q\). If \(Q\) is on an edge (resp. is a vertex) of the cut locus graph of \(P\), then the multiplicity of \((P,Q)\) equals \(2\) (resp. the degree of the vertex). We define \(E_{2}\) to be the set of all \((P,Q)\) of multiplicity \(2\). The points \(Q\) will, for the most part, be interiors of edges of the cut locus graph. It also includes any degree-\(2\) vertex, such as vertex \(2\) in the cut locus of \(\mathcal{E}\) in Figure 2.6. The function \(\phi_{2}\) is defined using an orientation of the cube; i.e., a continuous choice of direction of rotation around each point. The cut locus of \(P\) varies continuously with \(P\), unless \(P\) is a corner point. We will deal with the case with \(P\) a corner point later. 
The cut loci of points in a quadrant is a tree consisting of two parts connected by a segment parallel to the edge of the quadrant. See, for example, the cut loci of points in regions \(A\) and \(A_{R}\) pictured in Figure 2.8. For a \(3\)-dimensional example, see Figure 2.3. For points on the diagonals separating quadrants, the connecting "segment" consists of a single point. Think of rotating the cut locus around the center of that segment in the direction given by the orientation. We define \(\phi_{2}(P,Q)\) to be the geodesic from \(P\) to \(Q\) which approaches \(Q\) in the direction of the rotation. We will deal with the connecting segments shortly. In Figures 3.2 and 3.3, we add to Figures 2.8 and 2.6 red dots on the edges of several cut loci indicating the side from which \(Q\) should be approached if the orientation is clockwise. **Figure 3.2. Direction for \(\phi_{2}\) for some cut loci** **Figure 3.3. Direction for \(\phi_{2}\) in some top halves** Regarding the connecting segments, note that each edge of the cube bounds two quadrants, and all cut loci in those two quadrants have parallel connecting segments. Arbitrarily make a uniform choice of a side of these segments. Let \(\phi_{2}(P,Q)\) for \(Q\) in those connecting segments be the minimal geodesic from \(P\) to \(Q\) which approaches \(Q\) from the selected side. Because the quadrants are bounded by diagonals in which the connecting points of cut loci halves are vertices of degree 4 and so are not part of \(E_{2}\), compatibility of the GMPRs for connecting segments in distinct quadrant-pairs is not an issue. The cut locus of a corner point consists of the three edges and three diagonals emanating from the opposite corner point. Although it is not the case that the cut loci vary continuously with \(P\) as \(P\) approaches a corner point, we show that our defining \(\phi_{2}\) using rotation around a central point is still continuous at the corner point. In Figure 3.4, we depict the cut loci of corner point \(V_{8}\) and of points \(P\) close to \(V_{8}\) along the 5-8 edge, along the curve \(DE\) in Figure 2.2, and along the diagonal, adorned with red dots indicating the direction from which the side should be approached using \(\phi_{2}\) For \(P\) on the edge, or \(DE\), or the diagonal approaching \(V_{8}\), the points \(Q\) in the cut locus of \(P\) on the segment emanating from vertex number 8 approach a point \(Q_{0}\) which is not in the cut locus of \(V_{8}\). Then \((V_{8},Q_{0})\) is in \(E_{1}\), and so we don't have to worry about the limit of \(\phi_{2}(P,Q)\). The set \(E_{3}\) consists of the 56 points \((P,Q)\) such that \(Q\) is a vertex of the cut locus of \(P\) of degree 5 or 6. Since this is a discrete set, the function \(\phi_{3}\) can be defined arbitrarily. Eight of these points have \(P\) a corner point of the cube and \(Q\) the opposite corner point. The cut locus of a corner point was depicted in the left side of Figure 3.4. Another point in \(E_{3}\) has \(P\) equal to the point \(*\), which was introduced in Figure 2.5. The top half of its cut locus was shown in Figure 2.6; we show its entire cut locus in Figure 3.5. **Figure 3.5. Cut locus of \(*\)** For \(P=*\) and \(Q\) the indicated degree-5 vertex, we place \((P,Q)\) in \(E_{3}\). The vertical reflection \(*^{\prime}\) of \(*\) has cut locus a reindexed vertical reflection of Figure 3.5, and we place \((*^{\prime},Q^{\prime})\), where \(Q^{\prime}\) is its degree-5 vertex, in \(E_{3}\). Each quadrant has two analogous points in \(E_{3}\). 
There are 24 quadrants, so 48 such points altogether. Two more sets, \(E_{4}\) and \(E_{5}\), are required for \((P,Q)\) with \(Q\) a vertex of degree 3 or 4 of the cut locus of \(P\). In Figure 3.6, we depict this for \(Q\) in the top half of cut loci of points \(P\) in the left quadrant of the 5678 face. Because the degree-5 vertex of \(*\) has been placed in \(E_{3}\), we need not worry about continuity as \(*\) is approached. We place in \(E_{4}\) all \((P,Q)\) in which \(Q\) can be approached from the 2-5 region, and depict them by solid disks. In \(E_{5}\) we place those \((P,Q)\) not in \(E_{4}\) which can be approached from the 2-6 region, and depict them by open circles. The cases, in \(D\), \(F\), and \(DF\), where \(Q\) cannot be approached from the 2-5 or 2-6 regions are placed in \(E_{4}\) or \(E_{5}\) as indicated. Note that the degree-2 vertex when \(P\) is on the edge \(\mathcal{E}\) is in \(E_{2}\), which was already considered. The GMPRs \(\phi_{4}\) and \(\phi_{5}\) choose the minimal geodesic from \(P\) to \(Q\) which approach \(Q\) from region 2-5, 2-6, or 1-5. Each arrow in Figure 3.6 represents points \(P\) in a region approaching points in its boundary. A segment in a cut locus shrinks to a point. Continuity of each separate function \(\phi_{i}\) should be clear. All quadrants of all faces are handled similarly, using permutations of corner-point numbers. In particular, if \(P\) is in the analogue of the large region \(A\) in any quadrant, and \(Q\) is a vertex of the cut locus of \(P\), then \((P,Q)\) is in \(E_{4}\). Since regions \(A\) are the only regions abutting a diagonal, (see Figures 2.1 or 2.5) if, for the degree-\(3\) and degree-\(4\) vertices \(Q\) of the cut locus of points \(P\) in the diagonals of the quadrants, we place \((P,Q)\) in \(E_{5}\), then there is no worry about continuity of \(\phi\) functions at these points, as long as we make consistent choices. The cut locus of the center of a quadrant has four arms emanating from a central vertex, with a degree-\(2\) vertex on each arm. In the \(5678\) face, it is obtained from the cut locus of the diagonal Figure 3.6. Approach to vertices of cut loci pictured in Figure 2.8 by collapsing the arms from 1 and 3 to a point. We make an arbitrary choice of \(\phi_{5}(P,Q)\) when \(Q\) is the degree-4 vertex of the center \(P\) of a face, and then choose \(\phi_{5}(P,Q)\) compatibly when \(Q\) is the degree-4 vertex of points \(P\) on the diagonals of the face. In the paragraph following Figure 2.6, we described how bottom halves of cut loci are determined from top halves of cut loci, and we put these \((P,Q)\) with \(Q\) a vertex of degree 3 or 4 in the bottom half of the cut locus of \(P\) in sets \(E_{i}\) with GMPRs \(\phi_{i}\), \(4\leq i\leq 5\), analogously to what was done for the top halves. The cube is composed of twelve regions such as that in Figure 3.7, each bounded by half-diagonals of faces, and symmetrical about an edge of the cube. For cut-locus vertices of degree 3 or 4, the GMPRs on the diagonals are in separate sets from those on the \(A\)-regions abutting them, and so the twelve regions can be considered separately. Once we have defined the GMPRs for the region containing the 5-8 edge, GMPRs on the other regions can be defined similarly, using permutations of corner-point numbers. **Figure 3.7. 
A subset of the cube** The cut locus of a point \(\widetilde{P}\) on the left half of Figure 3.7 is obtained from that of its horizontal reflection by applying the permutation \((1\ 6)(4\ 7)\) and flipping horizontally. In Figure 3.8, we show top halves of cut loci for points in the reflection of the edge \(\mathcal{E}\) and of the regions abutting it, together with their GMPRs for vertices of degree \(\geq 3\). Note that \(\mathcal{E}=\widetilde{\mathcal{E}}\), so they have the same cut loci, but the depictions of them from the star unfolding are different depending on whether they are the left or right edge. **Figure 3.8. Horizontal reflection** The sets \(E_{i}\) and functions \(\phi_{i}\), \(4\leq i\leq 5\), for the left side of Figure 3.7 are defined like those on the primed (or unprimed) version on the right side, with \(2\) and \(5\) interchanged. Compare \(\widetilde{D^{\prime}}\) (resp. \(\widetilde{D}\)) in Figure 3.8 with \(D\) (resp. \(D^{\prime}\)) in Figure 3.6. This completes the proof of Theorem 3.1. ## 4. Lower bound In this section we prove the following result, which is the lower bound in Theorem 1.1. The method is similar to that developed by Recio-Mitter in [7] and applied by the author in [3]. **Theorem 4.1**.: _If \(X\) is a cube, it is impossible to partition \(X\times X\) into sets \(E_{i}\), \(1\leq i\leq 4\), with a GMPR \(\phi_{i}\) on \(E_{i}\)._ Proof.: Assume such a decomposition exists. Note that the specific \(E_{i}\) of the previous section are not relevant here. Let \(V_{i}\) be the corner point numbered \(i\) in our treatment of the cube. The cut locus of \(V_{8}\) is as in the left side of Figure 3.4. It consists of edges from \(V_{2}\) to corner points 1, 3, and 6, and diagonals from \(V_{2}\) to corner points 4, 5, and 7. Let \(E_{1}\) be the set containing \((V_{8},V_{2})\), and suppose \(\phi_{1}(V_{8},V_{2})\) is the geodesic passing between \(V_{3}\) and \(V_{4}\). Other cases can be handled in the same way, using a permutation of corner points. Points \(P\) on the curve \(DE\) of Figure 2.2 have top half of cut loci as in Figure 4.2. (This is part of the curve \(DF\) in Figure 2.5.) **Figure 4.2. Top half of cut locus of points on curve \(DE\)** Let \(Q\) be the vertex of degree 4, and \(\alpha\), \(\beta\),\(\gamma\), and \(\delta\) the four regions of approach to \(Q\), as indicated in the figure, which varies with \(P\). As \(P\) approaches \(V_{8}\) along \(DE\), Figure 4.2 approaches the top half of the cut locus of \(V_{8}\) (Figure 3.4); the segment from \(Q\) to 2 shrinks to the point \(V_{2}\), and the other vertical segment collapses, too. Suppose there were a sequence of points \(P_{n}\) on \(DE\) approaching \(V_{8}\) with \(Q_{n}\) the point \(Q\) in Figure 4.2 and \((P_{n},Q_{n})\in E_{1}\). Then \(\phi_{1}(P_{n},Q_{n})\) would approach \(\phi_{1}(V_{8},V_{2})\), but this is impossible, since they pass through different regions. Therefore there must be a sequence \(P_{n}\) on \(DE\) approaching \(V_{8}\) for which \((P_{n},Q_{n})\) is in a different set, \(E_{2}\), and restricting further, we may assume that \(\phi_{2}(P_{n},Q_{n})\) all pass through the same region, \(\alpha\), \(\beta\), \(\gamma\), or \(\delta\). Points in region \(D\) have top half of cut locus as in Figure 4.3. See Figure 2.6. Let \(Q_{\alpha}\) and \(Q_{\gamma}\) be the indicated vertices in Figure 4.3. If \(\phi_{2}(P_{n},Q_{n})\) passes through region \(\alpha\) (resp. 
\(\gamma\)) in Figure 4.2, consider a sequence of points \(P_{n,m}\) in region \(D\) approaching \(P_{n}\), and let the associated cut-locus points \(Q_{n,m}\) be \(Q_{a}\) (resp. \(Q_{\gamma}\)). Such a sequence \((P_{n,m},Q_{n,m})\) cannot have a convergent subsequence in \(E_{2}\), since, if it did, reindexing, \(\phi_{2}(P_{n,m},Q_{n,m})\to\phi_{2}(P_{n},Q_{n})\), but paths going to \(Q_{\alpha}\) (resp. \(Q_{\gamma}\)) cannot approach a path passing through region \(\alpha\) (resp. \(\gamma\)) in Figure 4.2. So we may restrict to points \((P_{n,m},Q_{n,m})\) not in \(E_{2}\), and restricting further, we may assume they are all in the same \(E_{i}\). If \(i=1\), then \((P_{n,n},Q_{n,n})\)2 would approach \((V_{8},V_{2})\) and would have \(\phi_{1}(P_{n,n},Q_{n,n})\to\phi_{1}(V_{8},V_{2})\), which is impossible since these paths pass through different regions.3 Thus all \((P_{n,m},Q_{n,m})\) must be in either \(E_{3}\) or \(E_{4}\), and we may assume they are all in \(E_{3}\). Footnote 2: This should really be \((P_{n,n^{\prime}},Q_{n,n^{\prime}})\) for some \(n^{\prime}\geq n\), but we will simplify the notation as we have here and subsequently. Footnote 3: Here, as in many other parts of this proof, when we say “pass through” a region, we mean, of course, that the portion of the curve as it approaches the limit point passes through the region. A similar argument works if all \(\phi_{2}(P_{n},Q_{n})\) pass through region \(\beta\) or \(\delta\) in Figure 4.2, using points \(P_{n,m}\) in region \(E\) of Figure 2.2 approaching \(P_{n}\), and \(Q_{n,m}\) the points \(Q_{\beta}\) or \(Q_{\delta}\) in Figure 4.4, which depicts the top half of the cut locus of points in region \(E\) of Figure 2.2. (Region \(E\) of Figure 2.2 is part of region \(F\) of Figure 2.5.) Thus we conclude that all \((P_{n,m},Q_{n,m})\) are in \(E_{3}\), regardless of whether \(\phi_{2}(P_{n},Q_{n})\) passed through \(\alpha\), \(\beta\), \(\gamma\), or \(\delta\). **Figure 4.4. Top half of cut locus of points in region \(E\)** Suppose \(\phi_{2}(P_{n},Q_{n})\) pass through region \(\alpha\) in Figure 4.2, and \(Q_{n,m}\) were the points \(Q_{\alpha}\) in Figure 4.3. An argument similar to the one that we will provide works if \(\alpha\) is replaced by \(\beta\), \(\gamma\), or \(\delta\). All that matters is that the vertex \(Q_{\alpha}\) (or its analogue) has degree 3. In Figure 4.5, we isolate the relevant portion of Figure 4.3, with \(Q_{n,m}\) at the indicated vertex. **Figure 4.5. A portion of Figure 4.3** We may assume, after restricting, that all \(\phi_{3}(P_{n,m},Q_{n,m})\) pass through the same one of the three regions in Figure 4.5, which we call region \(R\). For a sequence \(Q_{n,m,\ell}\) approaching \(Q_{n,m}\) on the edge not bounding \(R\), \((P_{n,m},Q_{n,m,\ell})\) cannot have a convergent subsequence in \(E_{3}\), since \(\phi(P_{n,m},Q_{n,m,\ell})\) cannot pass through \(R\). Restricting more, we may assume that all \((P_{n,m},Q_{n,m,\ell})\) are in the same \(E_{i}\), with \(i\neq 3\). If \(i=2\), then \(\phi_{2}(P_{n,m},Q_{n,m,m})\) approaches \(\phi_{2}(P_{n},Q_{n})\), but geodesics from \(P_{n,m}\) to points close to \(Q_{\alpha}\) in Figure 4.3 ultimately are above the arm from \(Q\) to corner point 6 in Figure 4.2, while \(\phi_{2}(P_{n},Q_{n})\) is below it. (Recall that the cut locus in Figure 4.2 is approached by those in Figure 4.3.) So \(i\neq 2\). 
Also, \(i\) cannot equal 1, because if so, \(\phi_{1}(P_{n,n},Q_{n,n,n})\rightarrow\phi_{1}(V_{8},V_{2})\), but the latter is between vertices 3 and 4 in the lower half of the cut locus. Therefore \(i=4\). We may assume, after restricting, that all the \(\phi_{4}(P_{n,m},Q_{n,m,\ell})\) come from the same side of the edge in Figure 4.5 which contains the points \(Q_{n,m,\ell}\). Choose points \(Q_{n,m,\ell,k}\) in the complement of the cut locus of \(P_{n,m}\) on the opposite side of the edge, and converging to \(Q_{n,m,\ell}\). Restricting, we may assume that all \((P_{n,m},Q_{n,m,\ell,k})\) are in the same \(E_{i}\). Note that \(\phi_{i}(P_{n,m},Q_{n,m,\ell,k})\) is the unique geodesic between these points. This \(i\) cannot equal \(4\) since \(\phi_{i}(P_{n,m},Q_{n,m,\ell,k})\) and \(\phi_{4}(P_{n,m},Q_{n,m,\ell})\) approach the edge from opposite sides. It cannot equal \(3\) since \(\phi_{i}(P_{n,m},Q_{n,m,\ell,\ell})\) and \(\phi_{3}(P_{n,m},Q_{n,m})\) approach the vertex in Figure 4.5 from different regions. It cannot equal \(2\) since \(\phi_{i}(P_{n,m},Q_{n,m,m,m})\) and \(\phi_{2}(P_{n},Q_{n})\) approach the vertex in Figure 4.2 from different regions. And, it cannot equal \(1\) since \(\phi_{i}(P_{n,n},Q_{n,n,n,n})\) and \(\phi_{1}(V_{8},V_{2})\) approach vertex \(2\) from different regions. Therefore a fifth \(E_{i}\) is required. \(\quad\blacksquare\)
2302.09669
On The Fine Tuning and Physical Origin of Line-Locked Absorption Systems in Active Galaxies
Line locking (LL) of absorption line systems is a clear signature of the dynamical importance of radiation pressure force in driving astrophysical flows, with recent findings suggesting that it may be common in quasars exhibiting multiple intrinsic narrow absorption-line (NAL) systems. In this work we probe the phase space conducive to LL and follow the detailed kinematics of those systems that may lock at the velocity separation of the CIV $\lambda\lambda 1548.19,1550.77$ doublet. We find that a small volume of the phase-phase admits LL, suggesting a high-degree of fine-tuning between the physical properties of locked systems. The stability of LL against quasar luminosity variations is quantified with implications for the long-term variability amplitude of quasars and the velocity-separation statistic between multiple NAL systems. The high occurrence of LL by the CIV doublet implies that the hidden extreme-UV emission from quasars is unlikely to be significantly under-estimated by current models. Further, the ratio of the LL velocity to the outflow velocity may serve as a powerful constraint on the composition of the accelerating medium. We conclude that LL poses significant challenges to current theories for the formation of non-intervening NAL systems, and speculate that it may be a manifestation of expanding circumstellar shells around asymptotic giant branch (AGB) stars in the quasar-host bulge.
T. R. Lewis, D. Chelouche
2023-02-19T20:43:29Z
http://arxiv.org/abs/2302.09669v1
# On The Fine Tuning and Physical Origin of Line-Locked Absorption Systems in Active Galaxies ###### Abstract Line locking (LL) of absorption line systems is a clear signature of the dynamical importance of radiation pressure force in driving astrophysical flows, with recent findings suggesting that it may be common in quasars exhibiting multiple intrinsic narrow absorption-line (NAL) systems. In this work we probe the phase space conducive to LL and follow the detailed kinematics of those systems that may lock at the velocity separation of the C iv \(\lambda\lambda 1548.19,1550.77\) doublet. We find that a small volume of the phase-phase admits LL, suggesting a high-degree of fine-tuning between the physical properties of locked systems. The stability of LL against quasar luminosity variations is quantified with implications for the long-term variability amplitude of quasars and the velocity-separation statistic between multiple NAL systems. The high occurrence of LL by the CIV doublet implies that the hidden extreme-UV emission from quasars is unlikely to be significantly under-estimated by current models. Further, the ratio of the LL velocity to the outflow velocity may serve as a powerful constraint on the composition of the accelerating medium. We conclude that LL poses significant challenges to current theories for the formation of non-intervening NAL systems, and speculate that it may be a manifestation of expanding circumstellar shells around asymptotic giant branch (AGB) stars in the quasar-host bulge. Galaxy winds -- Photoionization -- Quasars -- Quasar absorption line spectroscopy -- Radiative transfer + Footnote †: journal: ApJ 0000-0002-3002-8885]Tiffany R. Lewis 0000-0002-4880-0886]Doron Chelouche ## 1 Introduction Gaseous outflows are ubiquitous in quasars, and are manifested as blueshifted resonance-line absorption with respect to the quasars' restframe (Crenshaw et al., 2003; Vestergaard, 2003; Ganguly & Brotherton, 2008, see also Zakamska & Greene, 2014; Leung et al., 2019 for the detection of quasar outflows in emission). These are commonly detected in the rest UV through X-ray energies, and span a velocity range of \(10^{3}-10^{5}\,\mathrm{km\,s^{-1}}\)(Crenshaw et al., 2003; Kriss et al., 2018; Reeves et al., 2020). The outflow phenomenon is intimately linked to the physics of quasars and the supermassive black holes that power them (Brennan et al., 2018). Further, absorption-line phenomenology implies significant amounts of metal-rich material that is expelled from the compact quasar environs, and may reach galactic and intergalactic scales (Arav et al., 2018). As such, the study of quasar outflows has implications for galaxy formation (Fabian, 2012; Fiore et al., 2017; Rose et al., 2018; Chen et al., 2022), and the properties of the circum-/inter-galactic medium (Gaspari et al., 2013; Kauffmann et al., 2017; Barai et al., 2018; Liu et al., 2018). The phenomenology associated with quasar absorption line systems is vast. Some systems appear narrow with velocity dispersions \(\lesssim 10^{2}\,\mathrm{km\ s^{-1}}\), and may consist of several distinct kinematic components (Culliton et al., 2019; Chen et al., 2019). Other systems exhibit broad (\(\sim 10^{4}\,\mathrm{km\ s^{-1}}\)) absorption line (BAL) profiles, which may be broken into several narrower kinematic components, but often have smoother appearances (Rodriguez Hidalgo et al., 2013). 
For the latter type, the large velocity spread, the detection of partial-coverage effects (implying small sizes with respect to the background continuum emitting region), and the occasional time-variability of the troughs (Gibson et al., 2010), imply an association with the inner quasar engine. In contrast, several distinct origins exist for narrow absorption-line (NAL) systems (Misawa et al., 2007; Chen et al., 2018): some are associated with material dispersed over cosmological scales, while others, particularly those with velocities \(\lesssim 20,000\,\mathrm{km\,s^{-1}}\) with respect to the quasar rest-frame, are likely associated with the quasar and its host galaxy (Foltz et al., 1986; Nestor et al., 2008), as is indeed supported by time-variability (Narayanan et al., 2004; Wise et al., 2004; Lu et al., 2018) and partial coverage effects (Crenshaw et al., 2003). The physics of quasar NAL outflows is poorly understood. For some systems, especially in low-luminosity sources where the gas outflows with velocities \(\lesssim 10^{3}\,{\rm km}\,\,{\rm s}^{-1}\), it has been suggested that the absorbers are cool condensations perhaps embedded in a hot and compact, thermally expanding wind (Chelouche and Netzer, 2005). Another explanation associates the outflowing gas with a more extended, dust-driven medium (Williamson et al., 2020). For NAL systems observed at higher velocities, it has been suggested that a fast wind with a high kinetic energy can shock the ambient interstellar medium, and push clouds to their observed velocities (Faucher-Giguere et al., 2012; Waters et al., 2017; Zeilig-Hess et al., 2020). Alternative explanations for high-velocity multi-component absorption systems suggest a compact origin in an accretion disk, whose emission drives a wind by means of radiation pressure force, largely due to line and continuum absorption (Kashi et al., 2013; Nomura et al., 2013; Higginbottom et al., 2014; Quera-Bofarull et al., 2020). The latter scenario is supported by the phenomenon of line locking (LL), which is the focus of the present work. Some variants of the aforementioned scenarios include also the effect of magnetic fields that can assist to launch the gas, collimate it, and promote the survival of cool condensations against evaporation and hydrodynamic instabilities (de Kool and Begelman, 1995; Everett, 2005, in the context of broad-line flows). Understanding which of the above mechanisms is relevant to which type of NAL systems has significant implications for feedback and accretion-disk science (Laha et al., 2021, and references therein). The paper is organized as follows: in SS2 we outline the properties and physics of line-locked systems. The steady-state conditions conducive to line-locking are explored in SS3. The kinematics of line-locked systems are further explored in SS4, and further constraints on the available phase-space for line-locking are outlined. The discussion follows in SS5, where the implications of our results for outflow models are provided. A summary is provided in SS6. ## 2 Line-Locking Line-locking (LL) is a term describing a state in which the observed velocity difference between distinct kinematic absorption components along our sightline equals the velocity-separation of known atomic transitions (Fig. 1). For example, Hamann et al. (2011) reported multiple NAL systems toward a particular source, which are separated by the velocity difference of the C iv doublet (see also Lin and Lu, 2020, 2020). 
LL is perhaps the clearest manifestation of the fact that radiative driving of gas is dynamically important in the astrophysical context (Goldreich and Sargent, 1976, and below). ### The phenomenology of LL NAL systems The absorption spectra of quasars are usually complex, with many NAL components present. Therefore, the detection of LL was historically limited to a small number of sources and its reliability and interpretation were, for many years, subject to much debate (Boroson et al., 1978; Sargent and Boroson, 1977; Drew, 1978; Perry et al., 1978). With the advance of large-scale high-resolution spectroscopy, many more LL systems were discovered (Tripp et al., 1997; Srianand and Petitjean, 2000; Srianand et al., 2002; Simon and Hamann, 2010; Ganguly et al., 2013; Chen et al., 2019), and it was concluded that a velocity separation of \(\simeq 500\,{\rm km}\,\,{\rm s}^{-1}\), which corresponds to the doublet separation of CIV\(\,\lambda\lambda 1548.19,1550.77\), is relatively common (Scargle et al., 1970; Burbidge and Burbidge, 1975). This was recently confirmed for the quasar population as a whole (Bowler et al., 2014; Lu and Lin, 2019; Mas-Ribas, 2019; Mas-Ribas and Mauland, 2019; Chen et al., 2021). Recently, statistical evidence for LL due to the Si iv\(\,\lambda\lambda\,1393.76,1402.77\) doublet, at a separation of \(\simeq 1900\,{\rm km}\,\,{\rm s}^{-1}\), was also reported (Lu and Lin, 2019, see also Foltz et al., 1987; Srianand et al., 2002). LL which corresponds to the velocity difference between other transitions, such as O vi (Ganguly et al., 2003, 2013), N v (Srianand et al., 2002; Ganguly et al., 2003; Veilleux et al., 2022), and O vi-to-Ly\(\beta\)(Ganguly et al., 2013), has been sporadically reported although it is not yet clear whether the small number statistics results from poor spectral resolution of large surveys, is due to chance coincidence in some studies, or results from a physical effect. ### The physics of LL systems The conditions for LL were first described by Milne (1926) in the context of radiation pressure acceleration of atoms in stars. Mushotzky et al. (1972) coined the term "line-locking", and applied it to radiatively accelerated gas in quasars (see also Scargle, 1973). A more complete treatment of LL was outlined by Braun and Milgrom (1989), which we now follow and extend. Figure 1: A model for line-locking wherein two clouds are exposed to ionizing radiation from the left. The shielded cloud (cloud 2) has a higher acceleration than the shielding cloud (cloud 1), and is able to accelerate to higher velocities (compare the upper and lower panels) until the absorption-line troughs overlap in velocity space, its acceleration decreases to the point where the two clouds’ accelerations are equal, and a line-locked position in acheived (lower drawing). Consider two clouds that share the same sightline to a source of radiation and are accelerating away from it1, with one of the clouds (cloud 1) shadowing the other (cloud 2). The shadow is wavelength dependent owing to the nature of absorption line cross-sections. If cloud 2 has some contribution to its total acceleration from a radiation pressure force term, \(a_{\rm rad}(\lambda)\), which is wavelength (i.e., velocity) dependent, then line-locking will ensue provided Footnote 1: LL can also occur between decelerating clouds; we do not consider such a scenario in the present work. 
\[a\frac{\delta v_{ll}}{v}<a_{2}-a_{1}<\delta a_{\rm rad}, \tag{1}\] where \(a_{1}\) is the radiative acceleration on cloud 1, \(a_{2}\) is the radiative acceleration on cloud 2, \(v\) is the velocity of cloud 2 away from the source, and \(\delta v_{ll}\) is the difference in the velocities of clouds 1 and 2 where line-locking occurs, which corresponds to the wavelength separation of the C iv absorption features. The term \(\delta a_{\rm rad}\) is the difference in the radiative acceleration between the state in which cloud 2 is out of the line-locked position (i.e., outside the shadow of absorption-line troughs due to cloud 1) and the state in which it is maximally shadowed by it, i.e., aligned in velocity space with the center of the absorption-line trough due to cloud 1. In the latter position, line-driving is reduced due to shadowing (i.e., \(\delta a_{\rm rad}>0\)). The inequality on the right-hand side of Eq. 1 means that the acceleration difference, \(a_{2}-a_{1}\), flips sign between the shadowed and de-shadowed states so that the system can relax to an intermediate state, where \(a_{2}=a_{1}\), and a fixed velocity difference between the clouds, \(\delta v\), can be maintained so that \(\delta v\simeq\delta v_{ll}\), where the latter is the LL velocity, which is set by atomic physics. The left-hand condition in Eq. 1 is set by the global kinematics of the outflow, whose outflow velocity \(v\) is the average velocity of the two clouds, and is given, to within a factor of order unity, by the ratio of the following dynamical timescales: the time it takes for cloud 2 to develop a relative velocity difference with respect to cloud 1, \(\delta v_{ll}/(a_{2}-a_{1})\), and the outflow dynamical time, \(v/a\), where \(a\equiv(a_{1}+a_{2})/2\) is the average acceleration of the outflow, which is well defined for nearly co-spatial clouds of similar properties (see below) for which \((a_{2}-a_{1})/a\ll 1\). For LL to be reached for clouds whose initial velocity difference is \(\ll\delta v_{ll}\), it is required that \((\delta v_{ll}/v)/(a/(a_{2}-a_{1}))<1\). ### The properties of LL-NAL C iv systems Reliable constraints on the physics of LL systems are scant. In what follows we focus on the most common LL signatures in NAL systems that correspond to a velocity separation of \(\delta v_{ll}\simeq 500\,{\rm km\ s^{-1}}\) due to the CIV\(\,\lambda\lambda 1548.19,1550.77\). For a particular system (J 2123-005), Hamann et al. (2011) concluded that the gas is highly ionized, with O VI being the abundant ionization state of oxygen, and implying an ionization parameter2, \(U\lesssim 1\) for a typical type-I quasar spectral energy distribution (SED), with a gas column density of \(\sim 10^{15}\,{\rm cm^{-2}}\), and a slightly super-solar metallicity with \(Z\simeq 2Z_{\odot}\) (\(Z_{\odot}\) is the solar composition). These authors also concluded that partial coverage effects are important and may be transition-dependent. This means that the clouds have sizes which are comparable to, or smaller than those that characterize the continuum emitting region in this source. For continuum emission originating from a standard accretion disk, the authors estimated absorber scales of \(\sim 0.01\,{\rm pc}\) or smaller. The detection of variability in that system implied gas densities \(>5000\,{\rm cm^{-3}}\) based on recombination-timescale arguments. An upper limit on the density of \(\sim 10^{8}\,{\rm cm^{-3}}\) was deduced from the lack of discernible acceleration during the campaign (Hamann et al., 2011).
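Referring back to Eq. 1, the two-sided condition is easy to evaluate for any assumed pair of accelerations; the sketch below does just that, with purely illustrative numbers (none of them are fitted to J 2123-005 or any other system).

```python
def admits_line_locking(a1, a2, v, delta_a_rad, dv_ll=500.0):
    """Two-sided LL condition of Eq. 1 for a pair of co-spatial clouds.

    a1, a2      : accelerations of the shielding/shielded cloud (any consistent units)
    v           : outflow velocity [km/s]
    delta_a_rad : drop in cloud 2's acceleration when fully shadowed by cloud 1
    dv_ll       : locking velocity set by atomic physics [km/s]
    """
    a_mean = 0.5 * (a1 + a2)
    return a_mean * dv_ll / v < (a2 - a1) < delta_a_rad

# Illustrative only: a 10% acceleration contrast, a 15% shadowing drop, v = 10^4 km/s.
print(admits_line_locking(a1=1.0, a2=1.1, v=1.0e4, delta_a_rad=0.15))   # True
```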
The above constraints imply a length scale for the absorbing material along our sightline of \(10^{-7}-10^{-3}\,{\rm pc}\), which when combined with partial coverage arguments, implies a spray of many small spherical cloudlets or a highly flattened sheet configuration for the outflow, with an aspect ratio of \(10^{-5}-10^{-1}\). That such small structures exist in quasar outflows has been proposed in the context of BAL flows (e.g., Hall et al., 2007). The location of the outflowing material is rather poorly constrained but likely lies beyond the broad line region (BLR) and within the host galaxy's bulge. Footnote 2: The ionization parameter, \(U\), is the ratio of the ionizing photon density to the electron density in the medium. Large statistical samples (Bowler et al., 2014) demonstrated that there is dust associated with sightlines exhibiting LL systems, which leads to finite reddening at the level of \(E(B-V)\simeq 0.005\,{\rm mag}\) per kinematic system. For dust typical of the interstellar medium (ISM), this corresponds to Figure 2: The relative contribution of the C IV\(\,\lambda\lambda 1548.19,1550.77\) doublet to the total radiation pressure force for a dusty medium. Brighter shades mark regions in phase space where the contribution is higher (see colorbar). Solid white lines mark iso-optical-depth contours with the optical depth at line center denoted in logarithmic units. The red solid (blue-dashed) curve marks the ridge in phase space over which \(R\) is maximal when \(U\) (\(N\)) is the independent variable. The solid green point marks the deduced model parameters from Hamann et al. (2011), while the empty green circle is a rough conversion of their results for the column per thermal width. \(A_{V}\sim 0.015\) mag. If the gas composition of NAL absorbers is comparable to that of the galactic ISM, then such extinction levels correspond to column densities of \(\gtrsim 10^{19}\,{\rm cm}^{-2}\)(Guver & Ozel, 2009), which are not too different from the values reported by Hamann et al. (2011). These findings support an origin for the outflowing gas beyond the sublimation radius, in agreement with findings of Hamann et al. (2011) based on an independent line of arguments. The data presented by Bowler et al. (2014) provide further clues into the structure of the region that leads to multi-component NAL absorption. After correcting for spurious signals and the contribution of intervening systems, these authors (see also Chen et al., 2021) find that the number of intrinsic NAL systems of a given multiplicity drops rapidly with the number of components found, such that the number of systems \(\mathcal{N}_{i}\) of multiplicity \(i\) (after averaging over the full velocity range), satisfies \(\mathcal{N}_{1}\mathcal{N}_{3}/\mathcal{N}_{2}^{2}\simeq 0.8\pm 0.3\), which is consistent with the independent occurrence of clouds along the sightlines. Put differently, the coherence length of the medium that leads to NALs appears to be shorter than the physical separation between absorption components. Whether this applies also for LL-NAL systems is unclear although current statistics point to a \(\gtrsim 50\%\) of multiple NALs, which are associated with the quasar being line-locked (Bowler et al., 2014). ## 3 The Phase Space of LL Systems The problem of the coupled dynamics of LL systems depends on the physical properties of each component, and therefore spans a multidimensional phase space. To simplify its treatment we first consider single cloud configurations, which do not admit LL. 
### Single-cloud Configurations Observations of LL systems imply that the C iv\(\lambda\lambda 1548.19,1550.77\) contribution to the acceleration of the clouds is dynamically important, and it is clear from Eq. 1 that larger values of \(\delta a_{\rm rad}\) are more conducive to LL. Crudely, the observed visual extinctions (Bowler et al., 2014) imply that \(\sim 1.5\%\) of the total radiative momentum carried by the quasar radiation field is deposited in the gas by dust, which is the dominant opacity agent. For saturated line-locked absorption, the total flux absorbed is \(<\delta v_{ll}/c\sim 0.15\%\). Therefore, the line-locked transition can contribute \(<10\%\) of the total radiation pressure force. More accurate estimations require detailed calculations that take into account the SED of the quasar continuum and all opacity and scattering agents of the accelerating gas, as we next outline. Radiative acceleration is tightly linked to the composition of the accelerating medium, as well as to its ionization and thermal state, and its column. Throughout this work we assume isochoric clouds that are exposed to a typical type-I quasar SED (Hamann et al., 2011). The gas composition is set to twice the solar metal-to-gas value with an ISM-like dust-to-metals ratio (Hamann et al., 2011; Bowler et al., 2014, but see Wu et al. (2010) for higher values in NAL systems). Given the dilute nature of NALs with respect to typical critical densities of important transitions, the ionization and thermal state of the gas are fully determined by the ionization parameter and the cloud's hydrogen column density, \(N\). Given the low opacity associated with LL systems, isobaric cloud solutions should not lead to appreciably different results from those for isochoric ones. Likewise, models that include versions of radiation pressure confinement (Chelouche & Netzer, 2001; Baskin et al., 2014; Stern et al., 2014) will not exhibit significant compressions for optically thin, highly-ionized dusty media, as deviations of the radiation pressure force from the mean value are moderate across the cloud. The thermal and ionization state of the clouds is self-consistently calculated here using the cloudy C17.01 photoionization code (Ferland et al., 2017). The total radiative acceleration was taken from cloudy's output and includes all major bound-bound, bound-free, and free-free processes, as well as scattering by electrons and dust. Figure 3: The phase-space available for LL, as delineated by \(\mathcal{R}\), with colored regions being characterized by \(R\geq 1\) values (see the colorbars for color coding). _Left:_ The allowed column-density phase-space under the assumption that the ionization parameter of the shielding cloud (cloud 1) is fixed. Several cases are depicted for different values of log(\(U_{2}/U_{1}\)), which are denoted next to each colored surface. Discontinuities in the allowed phase space for a given log(\(U_{2}/U_{1}\))-value are the result of finite grid resolution. _Right:_ The allowed ionization-parameter phase-space (colored regions) under the assumption that the column-density of the shielding cloud is fixed, and for several values of log(\(N_{2}/N_{1}\)), which are denoted next to each curve. Overlaid in dotted lines are trajectories along which luminosity perturbations occur. The green diagonal crossing the allowed phase-space for LL marks the median ionization parameter end-points, where LL models may be maintained (see §4.2).
The radiative acceleration by individual transitions was calculated ab initio assuming thermal broadening (Chelouche & Netzer, 2003). Figure 2 shows the ratio, \(R\), of the radiation pressure force due to the C iv doublet to the total radiation pressure force across the relevant phase space. We find that this ratio is maximized for \(0.1\lesssim U\lesssim 1\), which is of order the observed values in J 2123-005 (Hamann et al., 2011). The observed column densities in this source are an order of magnitude larger than the calculated optimal columns for LL (\(\sim 10^{17.5}\,{\rm cm^{-2}}\)), but rough consistency is obtained when a correction is made for the suprathermal line broadening observed, which enhances the line-contribution to the radiative driving at larger columns (Fig. 2; see also Chelouche & Netzer, 2001). ### Two-cloud Configurations Turning next to two-cloud configurations, which admit LL, we note the similar outflow velocities of LL components, which satisfy \(\delta v_{ll}/v\ll 1\) (\(\sim 0.05\) for the source studied by Hamann et al., 2011), and the statistics of multiple NAL systems along our sightline, which is consistent with their independent occurrence (Bowler et al., 2014, and §2.3). These motivate a model in which the two clouds are physically independent, and are characterized by distinct values for \(U\) and \(N\), but are approximately co-spatial, at least to the degree that differences in the geometric flux attenuation factors may be neglected. We further assume that the \(a_{2}-a_{1}\) term in equation 1 is dominated by the difference in accelerations due to the radiation pressure force, and that the contribution of non-radiative terms, such as gravity and drag force, is negligible (but see Vilkoviskij et al., 1999 for the case of broad absorption line flows). To simplify the representation of the results, we take the limit \(\delta v_{ll}/v\to 0\), which is relaxed later on. This implies that our findings for the phase-space volume available for LL may be over-estimated (see §4). Continuum shielding of cloud 2 by cloud 1 is neglected, which is justified for much of the phase space given the low opacity of NALs. This approximation may be less accurate for some of the phase space, especially when large columns of low-ionization material are concerned, but should not affect the main conclusions presented in this work. Figure 4: The allowed LL phase-space for the shielded cloud (cloud 2), assuming particular properties for the shielding cloud (cloud 1), which are denoted by red points in the respective panels. Colored regions are characterized by \(R\geq 1\) values (see the colorbar for color coding). Note the similarity in the shape delineating the relevant phase space for LL across all panels. Overlaid are optical-depth contours for the C iv \(\lambda 1548\) transition from the shielded cloud. Also shown as a dashed blue line is a slope which roughly characterizes the dependence of the shielded column density on the ionization parameter for the allowed phase-space where the optical depth in the LL clouds is of order unity (shown only in the middle panel; see text).
With the aforementioned setup and using the force-multiplier formalism (Arav et al., 1994; Chelouche & Netzer, 2001, and references therein), equation 1 takes the compact form \[\mathcal{R}(U_{1},N_{1};U_{2},N_{2})\equiv\frac{\delta a_{\rm rad}}{a_{2}-a_{1 }}=\frac{\delta M_{\rm CIV}}{\delta M}>1, \tag{2}\] where the force multiplier, \(M\), is the ratio of radiation pressure force due to all absorption and scattering processes to that due to electron scattering so that \(a_{\rm rad}\equiv n_{e}\sigma_{T}LM/4\pi r^{2}\rho c\). Here, \(n_{e}(\rho)\) is the electron-number (gas-mass) density, \(\sigma_{T}\) the Thomson cross-section, \(L\) the bolometric luminosity of the quasar, \(r\) the distance of the clouds from the ionizing source, and \(c\) the speed of light. Therefore, \(\delta M_{\rm CIV}\propto\delta a_{\rm rad}\), and \(\delta M\propto a_{2}-a_{1}\). We calculate \(\mathcal{R}\) over a wide range of ionization parameters and column densities, and map the regions in the four-dimensional phase space where Eq. 2 is satisfied. The ionization parameter and column density volume probed here is motivated by recent surveys suggesting that non-intervening NAL systems cover a \(\gtrsim 3\) dex range in the column density of prominent ions (Ganguly et al., 2003; Fechner & Richter, 2009; Perrotta et al., 2016), and a substantial, \(>2\) dex range in ionization-parameter values (Ganguly et al., 2003; Culliton et al., 2019), which is echoed by theoretical calculations (Kurosawa et al., 2009; Zeilig-Hess et al., 2020). We find that the phase-space volume, where LL can occur, is \(\lesssim 1\%\) of the total phase-space volume considered here (logarithmic volumes are assumed throughout). Requiring that the optical depth in the C iv doublet exceeds unity so that its absorption signatures are clearly visible, reduces the fractional phase space volume to \(\lesssim 0.5\%\). Further extending the phase space probed in terms of ionization parameters and/or column densities, and including additional constraints on the flow kinematics (e.g. on the ratio of LL velocity to the bulk velocity of the outflow, or robustness against quasar flux variations; see below) substantially reduces the relative phase space conducive to LL. Relaxing the assumption of co-spatiality of the clouds does not qualitatively change the above conclusion. At face value, this statistic contrasts the observed occurrence rate of line-locked systems among multiple NALs (Bowler et al., 2014), and implies a physical process that greatly enhances LL. Below we quantify the requirements for LL to occur, and consider particular plane projections of the 4-dimensional phase space to map the implied correspondences between the LL clouds. #### 3.2.1 The column-density plane We next consider two (cospatial) clouds that have identical ionization parameters, and hence densities. We choose \(U=10^{-0.5}\), which is optimal for LL in our setup and consistent with the observations (Fig. 2 and Hamann et al., 2011). This scenario could arise, for example, in a thermally unstable medium, which bifurcates into cool and hot thermally stable phases (Mo & Miralda-Escude, 1996) with length-scales, hence columns, triggered by the perturbations' wavelengths. The phase space is shown in the left panel of Fig. 3. It is clear that for LL to be operating, the column densities of the clouds should be similar to \(<50\)% under optimal conditions, with \(N\lesssim 10^{17}\,{\rm cm}^{-2}\). 
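The mapping of Eq. 2 over the four-dimensional \([U_{1},N_{1},U_{2},N_{2}]\) grid described above can be organized along the following lines; note that the two force-multiplier functions below are analytic placeholders standing in for the tabulated cloudy output (and full blanketing of the C iv troughs is assumed), so the printed fraction only illustrates the bookkeeping and is not a result of this work.

```python
import itertools
import numpy as np

# Placeholder force multipliers; in practice these would be interpolated from the
# photoionization grid (total multiplier and the C IV doublet contribution).
def M_tot(logU, logN):
    return 2.0e3 * np.exp(-0.1 * (logU + 0.5) ** 2)

def M_civ(logU, logN):
    # C IV contribution peaking near log U ~ -0.5, log N ~ 17.5 (cf. Fig. 2)
    return 0.08 * M_tot(logU, logN) * np.exp(
        -0.5 * ((logU + 0.5) ** 2 + ((logN - 17.5) / 1.5) ** 2))

def admits_ll(u1, n1, u2, n2):
    """Eqs. 1-2 in force-multiplier form: 0 < dM < dM_CIV (full blanketing assumed)."""
    dM = M_tot(u2, n2) - M_tot(u1, n1)       # proportional to a2 - a1
    dM_civ = M_civ(u2, n2)                   # proportional to delta a_rad
    return 0.0 < dM < dM_civ

logU = np.linspace(-3.0, 1.0, 15)
logN = np.linspace(16.0, 20.0, 15)
combos = list(itertools.product(logU, logN, logU, logN))
frac = sum(admits_ll(*c) for c in combos) / len(combos)
print(f"fraction of the (logarithmic) grid admitting LL: {frac:.2%}")
```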
Averaging over the allowed phase space, column densities need to be similar to within 10% to allow for LL. Qualitatively similar results are obtained for other values of \(U\) (not shown). Allowing for \(U_{1}\neq U_{2}\), we find that the columns must be different for LL to work, so that \(a_{1}\simeq a_{2}\), and yet the phase space over which LL operates is confined to a relatively narrow strip in phase space from which columns cannot deviate by more than \(\simeq 50\)% (Fig. 3). This is especially true in cases where the shielded cloud (cloud 2) is less ionized than the shielding cloud (cloud 1). In that case, the column densities must follow a strict relation, with deviations of order a per cent or less, for LL to operate. #### 3.2.2 The ionization-parameter plane We now consider a scenario in which \(N_{1}=N_{2}\) (right panel of Fig. 3). We find that \(\mathcal{R}>1\) for \(U_{1}\simeq U_{2}\). In particular, for column densities leading to an optical depth of unity in the C iv\(\lambda\lambda 1548.19,1550.77\) lines, any density differences between the absorption systems must be at most \(50\)% for LL to operate, and often much smaller than that. Assuming \(N_{1}\neq N_{2}\), the allowed phase space defined in the \([U_{1},U_{2}]\) plane considerably shrinks; in cases in which \(N_{2}>N_{1}\), \(U_{1}\) and \(U_{2}\) must be fine-tuned to within a few per cent for LL to operate. Generally, the allowed phase-space is delineated by contours for which \(U_{2}\not\propto U_{1}\), with implications for LL stability over time (§4.2). #### 3.2.3 The ionization-parameter-column-density plane Figure 4 shows the phase-space available for LL when the properties of the shielding cloud, \([U_{1},N_{1}]\), are fixed at several observationally motivated values for which the implied C iv-doublet optical depth is in the range 0.1-10. The phase space defined by the shielded cloud, \([U_{2},N_{2}]\), for which Eq. 2 is satisfied, shows a similar (although not identical) behavior for the cases explored here. Specifically, all allowed phase space follows a ridge, which may be locally (crudely) approximated by a broken powerlaw form (\(N_{2}(U_{2})\propto U_{2}^{\eta}\)) followed by an abrupt cutoff at high values of \(U_{2}\). The cutoff results from the radiation pressure force decreasing with increasing ionization level so that even optically thin, low column-density gas cannot satisfy Eq. 2 beyond some value of \(U_{2}\). The powerlaw index is \(\eta\simeq-0.6\) for system properties similar to those found by Hamann et al. (2011) with optical depths \(\lesssim 10\), but becomes steeper (\(\eta<-1.0\)) for lower values of \(U_{2}\) due to the rapid increase in the opacity of helium and hydrogen. As noted before, a high level of fine-tuning, of order a per cent or less, is required between clouds when \(N_{2}>N_{1}\). Overall, a lower degree of fine-tuning of the clouds' properties is required when the optical depth in the C iv \(\lambda\lambda 1548.19,1550.77\) doublet is of order unity. ## 4 The Kinematics of LL Systems The above analysis is relevant for testing whether the observed properties of NALs can be maintained in a LL position under steady-state conditions, but does not reveal whether LL may be achieved in the first place, nor whether it may persist under time-varying conditions, such as near variable quasars. Here we treat the kinematic problem of two clouds by following their evolution from the launching point until the coasting phase sets in.
We focus on phase-space configurations that lead to LL. The coupled systems' kinematics follows from the equations of motion: \[\begin{split}\dot{v}_{1}=&\ \frac{x_{m}}{m_{p}}\frac{\sigma_{T}L}{4\pi r_{1}^{2}c}M_{1}(U_{1},N_{1})\\ \dot{v}_{2}=&\ \frac{x_{m}}{m_{p}}\frac{\sigma_{T}L}{4\pi r_{2}^{2}c}M_{2}(U_{2},N_{2};U_{1},N_{1},\delta v,\delta v_{ll})\end{split}, \tag{3}\] where \(\delta v=v_{2}-v_{1}\), \(x_{m}=n_{e}m_{p}/\rho\simeq 0.85\) for the assumed gas composition, and \(m_{p}\) is the proton mass. In the numerical solutions presented below, this set of equations is solved for the kinematics of each of the coupled clouds. To assist with the interpretation of the results we note that the equation of motion for the velocity difference, \(\delta v\equiv v_{2}-v_{1}\), between the clouds, in the limit of near co-spatiality (\(\delta r/r\ll 1\), where \(r=(r_{1}+r_{2})/2\) and \(\delta r=r_{2}-r_{1}\)) is given by \[\dot{\delta v}\simeq\frac{x_{m}}{m_{p}}\frac{\sigma_{T}LM}{4\pi r^{2}c}\left(\frac{\delta M}{M}-\frac{2\delta r}{r}\right), \tag{4}\] where we assumed that \(M_{1}\simeq M_{2}=M\) (i.e., the force multipliers characterizing the two clouds are similar to within a small correction). The term \(\delta M\equiv M_{2}-M_{1}\) consists of a sub-term, \(\delta M_{0}\), which does not depend on the clouds' relative velocity, and a sub-term, \(\delta M_{\delta v_{ll}}\), which is associated with line-blocking and is responsible for LL: \[\delta M=\delta M_{0}(U_{1},N_{1},U_{2},N_{2})+\delta M_{\delta v_{ll}}(\delta v). \tag{5}\] If the \(\delta M_{0}\)-term includes the full contribution from all absorption and scattering processes, including all relevant absorption lines - as would be the output of many photoionization codes - then the term \[\delta M_{\delta v_{ll}}(\delta v)=-\frac{1}{\tau_{e,2}}\frac{\nu L_{\nu}}{L}\mathcal{W}_{1}*\mathcal{W}_{2}, \tag{6}\] where '\(*\)' denotes convolution with respect to velocity3 and Footnote 3: \(\mathcal{W}_{1}*\mathcal{W}_{2}=\int_{-\infty}^{\infty}dv\,\mathcal{W}_{1}(v)\mathcal{W}_{2}(\delta v-\delta v_{ll}-v)\). \[\mathcal{W}_{i}(v)=\frac{1}{\sqrt{c}}\left[1-\exp\left(-\tau_{i}e^{-v^{2}/2\sigma_{i}^{2}}\right)\right], \tag{7}\] where \(\sigma_{i}\) is the thermal broadening velocity of cloud \(i\) (our photoionization calculations in §3 show that \(\sigma_{1}\simeq\sigma_{2}\) for the conditions most conducive to LL). In the above expression we assume for all practical purposes that \(\sigma_{i},\delta v_{ll}\ll c\), and that the optical depth for electron scattering from cloud 2, \(\tau_{e,2}\ll 1\). The optical depth at the line center, \(\tau_{i}\), is such that \(\tau_{1}\) is the optical depth at line center for the C iv \(\lambda 1548.19\) transition from cloud 1, and \(\tau_{2}\) is the optical depth at line center for the C iv \(\lambda 1550.77\) transition from cloud 2. A Gaussian dependence of the optical depth on the velocity offset from the line center was assumed, as is appropriate for metal NALs. The dependence of \(\delta M_{\delta v_{ll}}\) on the velocity separation between the clouds is shown in Fig. 5 for the case of equal optical depths in the relevant transitions and equal line broadening, \(\sigma_{1}=\sigma_{2}\equiv\sigma=10\,{\rm km\;s}^{-1}\), and for a fixed ratio between the optical depth at the line center and that for electron scattering. As discussed in §3 and shown in Fig. 5, the largest effect of line-blocking on the radiation pressure force is attained for optical depths of order unity and when \(\delta v=\delta v_{ll}\).
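A direct numerical transcription of Eqs. 6-7 is compact, and is sketched below for the parameters used in Fig. 5 (thermal width of \(10\,{\rm km\ s^{-1}}\), \(\nu L_{\nu}/L=0.25\), \(\tau_{e}=10^{-7}\tau\)); the velocity grid and the specific optical depths are illustrative choices rather than quantities taken from any fit.

```python
import numpy as np

C_KMS = 2.99792458e5                      # speed of light [km/s]

def W(v, tau0, sigma):
    """Eq. 7: saturation-corrected profile with a Gaussian optical-depth dependence."""
    return (1.0 - np.exp(-tau0 * np.exp(-v**2 / (2.0 * sigma**2)))) / np.sqrt(C_KMS)

def delta_M_ll(dv, tau1, tau2, sigma=10.0, tau_e2=1e-7, f_bol=0.25, dv_ll=499.6):
    """Eq. 6: line-blocking term as a function of the clouds' velocity separation dv."""
    v = np.linspace(-200.0, 200.0, 4001)            # [km/s]
    integrand = W(v, tau1, sigma) * W(dv - dv_ll - v, tau2, sigma)
    return -(f_bol / tau_e2) * integrand.sum() * (v[1] - v[0])

# Largest drop when the troughs overlap at the doublet separation (cf. Fig. 5):
for tau in (0.1, 1.0, 10.0, 100.0):
    print(tau, delta_M_ll(dv=499.6, tau1=tau, tau2=tau, tau_e2=1e-7 * tau))
```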
Specifically, for optical depths \(>100\), \(|\delta M_{\delta v_{ll}}|\sim 1\), and a high degree of fine-tuning of the clouds' properties is required to achieve LL since \(M\gtrsim 10^{3}\) (for dusty media). Figure 5: The drop in the radiation pressure force (force multiplier) due to line blanketing, \(\delta M_{\delta v_{ll}}\). Shown are calculations of Eq. 6 between clouds with identical optical depths in the line troughs, and identical line-broadening with \(\sigma=10\,{\rm km\;s}^{-1}\). A bolometric correction of \(\nu L_{\nu}/L=0.25\) was assumed. It is also assumed that \(\tau_{e}=10^{-7}\tau\), where \(\tau\) spans a range of values (see legend). The largest drop in radiation pressure force for each model is obtained when the systems overlap in velocity space at the doublet separation, \(\delta v_{ll}\simeq 500\,{\rm km\;s}^{-1}\). When comparing different models, clouds whose optical depths are of order unity lead to the largest acceleration differences between fully-blanketed and non-blanketed configurations. LL could occur if there exists \(0\leq\delta v\leq\delta v_{ll}\) where \(\delta M\) flips sign. For much of the relevant phase space, this velocity lies in the range \(450\leq\delta v\leq 500\,{\rm km\ s^{-1}}\); the range is asymmetric with respect to \(\delta v_{ll}\) since clouds that accelerate and develop an increasing velocity gap will lock first at \(\delta v\leq\delta v_{ll}\).4 In the limit of optically thin clouds of a fixed \(\tau_{e}\) with \(\sigma_{1}=\sigma_{2}\) and \(\delta v=\delta v_{ll}\), equation 6 simplifies to a quadratic dependence on the optical depth such that \(\delta M_{\delta v_{ll}}(\tau)\propto\tau_{1}\tau_{2}\), which is valid to within a factor of \(\sim 2\) also for \(\tau\simeq 1\). For \(\tau\gg 1\), \(\delta M_{\delta v_{ll}}\) has a square-root logarithmic dependence on the optical depth, which is reminiscent of the curve-of-growth. Footnote 4: Locking at \(\delta v>\delta v_{ll}\) is not a stable equilibrium for differentially accelerating clouds but is a stable configuration for differentially decelerating ones. ### Kinematic Solutions Under Stationary Conditions It is beyond the scope of this work to span the full range of solutions for LL systems, and we focus on those solutions that appear to be more relevant to the observed systems. To this end we consider the emergence of LL in objects similar to J2123-005 (Hamann et al., 2011), which are defined by a typical type-I quasar SED5 with a bolometric correction of \(\nu L_{\nu}(1500\,{\rm\AA})/L=0.25\) and \(\alpha_{ox}\simeq-1.9\) (Hamann et al., 2011), and with \(L=8\times 10^{47}\,{\rm erg~{}s}^{-1}\). Gravity is neglected, which is consistent with dusty media having \(M\sim 2000-3000\) over the observationally relevant phase space, and with luminous quasars emitting close to their Eddington rate (\(\Gamma_{\rm Edd}\simeq 1\)), so that radiation pressure acceleration is highly effective even when the bulge mass is taken into account at large distances and so long as \(M>200/\Gamma_{\rm Edd}\) (Kormendy & Ho, 2013), which may not be true for low-luminosity sources (Gavignaud et al., 2008). Footnote 5: Cloudy's AGN model was defined with the following parameterization: \(T=10^{5}\,{\rm K}\), \(\alpha_{ox}=-2\), \(\alpha_{uv}=-0.5\), and \(\alpha_{x}=-1\). Cloud dynamics is treated ballistically. That is, the clouds are considered as distinct and coherent entities whose interaction with the environment - e.g., with an ambient medium via drag forces - is minimal, and does not lead to cloud disruption.
Therefore, sonic/critical points in the solution are irrelevant. Further, we assume that the cloud properties (\(U,N\)) do not evolve with time. This assumption is not inherent to the model, but is employed for tractability of the problem given the multi-dimensional nature of the phase space. Lastly, special relativistic effects are ignored despite the high outflow velocities achieved by some models (e.g., Fig. 6 for systems originating from torus-scales). We first consider a model in which the clouds are launched from the dusty region that lies just beyond the broad-line region - the putative torus - which we set to be at 10 pc from the ionizing source (Burtscher et al., 2013). The model is characterized by \(U_{1}=10^{-0.61},~{}U_{2}=10^{-0.77}\) with \(N_{1}=N_{2}=10^{17}\,{\rm cm}^{-2}\), and falls within the phase-space conducive to LL (Figs. 3, 7). Calculations show that the clouds settle to a LL position within \(<10\%\) of their dynamical timescale, and remain so out to their coasting phase. There is a subtle decrease in \(\delta v\) with time owing to the growing radial distance, which results in \(\delta r/r\) being comparable to \(\delta M_{\delta v_{ll}}/M_{2}\), and resulting in a slight "climb" of the shielded cloud along the absorption-line wing to reach a refined LL position. A model identical to the above but with \(U_{1}=10^{-0.72}\) does not satisfy Eq. 1 since the dynamical time is too short for \(\delta v\simeq\delta v_{ll}\) to develop, and the clouds settle to \(\delta v\lesssim 400\,{\rm km~{}s}^{-1}\) at their coasting phase. An identical Figure 6: Full kinematic solutions for clouds around LL conditions. Two sets of models are considered, which pertain to a high-luminosity quasar (Hamann et al., 2011): clouds that are launched from dusty-torus scales (solid curves), and clouds that originate in the host’s bulge (dashed curves). In all cases \(N_{1}=N_{2}=10^{17}\,{\rm cm}^{-2}\) and \(U_{2}=10^{-0.77}\). Three parameterizations are considered for the torus model: \(U_{1}=10^{-0.61}\) (marked by ”0”), \(U_{1}=10^{-0.72}\) (marked by ”\(-\)”), and \(U_{1}=10^{-0.50}\) (marked by ”\(+\)”). Three parameterizations are considered for the bulge model: \(U_{1}=10^{-0.50}\) (marked by ”0”), \(U_{1}=10^{-0.61}\) (marked by ”\(-\)”), and \(U_{1}=10^{-0.39}\) (marked by ”\(+\)”). _Left:_ velocity profiles with the inset showing the relative radial distance accumulated as a function of velocity difference. _Middle:_ the velocity difference as a function of the outflow velocity. _Right:_ the relative acceleration of the clouds as a function of the velocity difference. Note that small changes in the cloud properties (of order 30% in \(U_{1}\)), which do not appreciably change the global clouds kinematics have a substantial effect on the ability to LL (see text). model but with \(U_{1}=10^{-0.5}\) does not settle to a LL position since the right-hand side of Eq. 1 is not satisfied, and despite the decrease in acceleration seen at \(\delta v\simeq 500\,{\rm km\ s^{-1}}\), the clouds experience a monotonic relative acceleration to settle into a \(\delta v\lesssim 2000\,{\rm km\ s^{-1}}\) at their coasting phase (Fig. 6). We next consider a model in which the clouds are launched from the host galaxy's inner bulge at a distance of 100 pc from the ionizing source. We first consider a model similar to the above but with \(U_{1}=10^{-0.5}\), which formally does not admit LL (see above and Fig. 7). 
Nevertheless, LL still occurs for bulge clouds since cloud 2 develops a non-negligible radial gap with respect to cloud 1, so that \(\delta r/r>\delta M/M\) (Eq. 4). In particular, the system reaches a steady-state with \(\delta v\lesssim\delta v_{ll}=500\,{\rm km\ s^{-1}}\) after \(\sim 30\)% of the dynamical time, whereupon the clouds accelerate nearly coherently. As the clouds move out, the radial gap between them increases up to \(\sim 2\%\) of the distance to the ionizing source, thereby leading to a relative deceleration phase (right panel of Fig. 6), and to a decreasing \(\delta v\) until a steady-state is reached with \(\delta v\simeq 480\,{\rm km\ s^{-1}}\). Any further radial gap increase has no effect on the gas kinematics. The same model but with \(U_{1}=10^{-0.39}\) does not lead to LL as the effect of line-blocking is too small to balance the relative radiative acceleration (\(\delta M>0\)), and \(\delta v\simeq 800\,{\rm km\ s^{-1}}\) is reached at the coasting phase. An identical model with \(U_{1}=10^{-0.61}\) fails to reach LL since the dynamical time to develop \(\delta v_{ll}\) is longer than the outflow time in this case. To conclude, the phase-space diagrams shown in Figs. 3 and 4 are indicative of the phase-space volume conducive to LL, and yet the exact range depends on the launching site of the clouds via the left-hand side of Eq. 1. #### 4.1.1 Kinematic constraints from LL systems LL introduces a further dynamical constraint, which can be used to recover some of the flow attributes, under the assumption that its observed properties are identical to those during the acceleration phase. Further assuming an optically thin medium, which is line-locked at a velocity separation \(\delta v_{ll}\) and has reached its terminal velocity, \(v_{\infty}\), the condition (Eq. 1) \[\frac{\delta v_{ll}}{v_{\infty}}<\frac{\delta M}{M}<\frac{|\delta M_{\delta v_{ll}}(\delta v=\delta v_{ll})|}{M}, \tag{8}\] for nearly co-spatial clouds translates to the following upper limit on the launching distance, \(r_{0}=100r_{100{\rm pc}}\) pc (Chelouche & Netzer, 2001, see Eq. 10 below), \[r_{100{\rm pc}}<10L_{48}\frac{\tau_{1}^{2}\tau_{2}^{2}}{M_{3}^{1/2}}\left(\frac{f_{\rm CIV}}{0.1}\frac{Z_{C}}{2}\right)^{2}\left(\frac{b_{L}}{4}\right)^{-2}T_{4} \tag{9}\] where \(L_{48}\equiv L/(10^{48}\,{\rm erg\ s^{-1}})\), the gas temperature, \(T=10^{4}T_{4}\) K, and \(M=10^{3}M_{3}\). The factor \(f_{\rm CIV}\) is the ionization fraction of C IV, and \(Z_{C}\) is the abundance of carbon relative to the solar composition (of cloud 2). In the above expression \(b_{L}\) is the bolometric correction with respect to the monochromatic UV luminosity (\(b_{L}\simeq 4\) for the chosen SED). It was assumed that \(\delta M_{0}<|\delta M_{\delta v_{ll}}|\) so that LL can be realized. For the particular case of J 2123-005, \(\tau_{1}\simeq\tau_{2}\simeq 1\), and assuming \(f_{\rm CIV}=0.1\) and \(Z_{C}=2\) (Hamann et al., 2011) and \(M_{3}=2\) (as verified by photoionization calculations), we obtain \(r_{100{\rm pc}}<5\) based on LL kinematics, which is consistent with the distance range reported by Hamann et al. (2011) of 5-1100 pc based on independent arguments.
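Equation 9 can be evaluated directly with the parameter values adopted above for J 2123-005; the sketch below does so (the small difference from the quoted \(r_{100{\rm pc}}<5\) presumably reflects rounding of the adopted inputs).

```python
def r_max_from_ll(L48, tau1, tau2, M3, f_civ=0.1, Z_C=2.0, b_L=4.0, T4=1.0):
    """Eq. 9: LL-based upper limit on the launching radius, in units of 100 pc."""
    return (10.0 * L48 * (tau1 * tau2) ** 2 / M3 ** 0.5
            * (f_civ / 0.1 * Z_C / 2.0) ** 2 * (b_L / 4.0) ** -2 * T4)

# J 2123-005-like inputs quoted in the text: L = 8e47 erg/s, tau_1 ~ tau_2 ~ 1, M_3 = 2.
print(r_max_from_ll(L48=0.8, tau1=1.0, tau2=1.0, M3=2.0))   # ~5.7 (x100 pc)
```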
These kinematic arguments complement launching distance estimations based on the global outflow kinematics, where the asymptotic velocity satisfies (Chelouche & Netzer, 2001), \[v_{\infty}\simeq 3\times 10^{4}L_{48}^{1/2}M_{3}^{1/2}r_{100{\rm pc}}^{-1/2} \,{\rm km\ s^{-1}}, \tag{10}\] and, conversely, the launching radius, \[r_{100{\rm pc}}\simeq 10v_{\infty,4}^{-2}L_{48}M_{3}, \tag{11}\] where \(v_{\infty}=10^{4}v_{\infty,4}{\rm km\ s^{-1}}\). Here it was assumed that \(M\) is constant along the acceleration path, which is reasonable given that radiation pressure acceleration by dust is less sensitive to the level of ionization of the gas and to its column density, so long as the medium is optically thin for dust absorption, which is justified for the column-density range observed. For the case of J 2123-005, LL estimates imply smaller scales by a factor of \(\gtrsim 3\) than implied by global outflow kinematics (\(r_{100{\rm pc}}\simeq 14\)), suggesting that the outflow and/or quasar properties may be different over dynamical times than those implied by current observations. #### 4.1.2 The role of continuum shielding It has been shown that substantial continuum shielding of quasar outflows can have significant dynamical effects (Murray et al., 1995; Chelouche & Netzer, 2003). Below we test whether extinguishing the continuum by a large column of neutral gas, which is external to the LL systems and lies along their sightline to the ionizing source, has a qualitative effect on the phase space available for LL. From Eq. 6 it is clear that given optical depths in the lines, \(\delta M_{\delta v_{ll}}\) inversely depends on the bolometric correction, which decreases when substantial (but Compton-thin) shielding columns are present. Still, revised bolometric factors are not expected to increase \(\delta M_{\delta v_{ll}}\) by more than a factor of \(\sim 2\). For the specific shielding scenarios simulated here, and an SED that peaks below the Lyman edge, changes to the bolometric correction are minor (\(\sim 10\)%). The ratio \(\tau_{2}/\tau_{e,2}\) also varies for substantial shielding columns since the relative fraction of ions at their maximum is typically lower and a wider range of ionization levels characterizes the ionized gas (Chelouche & Netzer, 2003). Our calculations show that the latter effect is dominant, and that shielding by large columns decreases the peak ratio of the C IV doublet radiation pressure force to \(\simeq 2\)% (compared to \(\lesssim 10\)% for the non-shielded case; see Fig. 2). In comparison, changes to the total radiation pressure force are at the \(\lesssim 10\)% level between the shielded and non-shielded scenarios due to the dominance of dust opacity and the SED chosen, with \(M\gtrsim 10^{3}\) in both cases. The effect of shielding on the phase space available for LL is studied in Fig. 7, where the optical depth in the doublet lines is of order unity for preset values of \(N_{1}=N_{2}\). Higher levels of shielding push the optimal phase-space range for LL to higher values of \(U_{1},~{}U_{2}\) (or, conversely, to lower densities), by as much as four orders of magnitude. However, the phase-space available for LL remains comparable in volume. Therefore, the effect of shielding does not alleviate the need for fine-tuning of the clouds properties to facilitate LL. 
### Quasar variability and LL Quasars vary on a wide range of timescales, and are characterized by a red power spectrum, such that \(P(\omega)\propto\omega^{-\alpha}\) with \(2<\alpha<3\) over hours-to-years timescales (de Vries et al., 2005; Smith et al., 2018). Therefore, much of the variance is at the lowest frequencies, with previous works suggesting substantial power on timescales of order \(\sim 10^{4}\) years (Keel et al., 2017), which are comparable to the outflow timescale: \[t_{\rm dyn}\equiv\frac{r_{0}}{v_{\infty}}\simeq 3\times 10^{3}\,L_{48}^{-1/2}r_{100{\rm pc}}^{1/2}M_{3}^{-1/2}\,{\rm years}, \tag{12}\] where the force multiplier, \(M=10^{3}M_{3}\). Another relevant timescale is the de-shadowing timescale over which the shielded cloud can accelerate relative to the shielding cloud by more than one thermal width, \(\sigma/\delta a_{\rm rad}\), which is shorter than \(t_{\rm dyn}\) by a factor of \(\sigma/v_{\infty}\). The effect of quasar variability is to move a system of clouds defined in the \([U_{1},~{}U_{2}]\) plane along \(45^{\circ}\) diagonals (Fig. 3). Therefore, a model that satisfies the conditions for LL under steady-state conditions may not do so if pushed by the fluctuating quasar flux to a region of phase-space that is not conducive to LL. Qualitatively, the larger the variability amplitude is, the more likely the system will be pushed away from LL equilibrium. To estimate the level of flux variations that may occur without disrupting LL, we consider three models for which \(\log(N_{1})=17,18,19\). For each of the models, we restrict the discussion to the range \(0.1N_{1}\leq N_{2}\leq 10N_{1}\), and analyze the phase-space in the \([U_{1},~{}U_{2}]\) plane in the following manner: for each set of \(N_{1},~{}N_{2}\) values, the phase space conducive to LL is calculated, which results in a simply connected surface in the \([U_{1},U_{2}]\) plane. Each surface may be transected by \(45^{\circ}\) diagonals of varying length, which is a measure of the peak-to-peak flux variation amplitude that maintains LL. The median length of all the transects is logged (see, for example, the green line in Fig. 3 for a particular set of models, whose length corresponds to \(\simeq 1\) dex), and used as a measure for the flux variability that may be tolerated by a pre-existing LL system. We note, however, that alternative measures may be defined, although these are less likely to be realized unless particular conditions are met (e.g., for systems that are nearly identical and lie along the diagonal). The above process is repeated for a range of \(N_{2}/N_{1}\) values, and for each of the \(N_{1}\) models defined above. We quantify the results by defining the median root-mean-square variability measure, \(\delta_{\rm RMS}^{\rm median}\), where we assume that the quasar luminosity variations are of a sinusoidal form so that \[L(t)=L_{0}[1+\delta{\rm sin}(\omega t+\phi)], \tag{13}\] where \(\delta<1\) is the variation amplitude, and \(\delta_{\rm RMS}\simeq 0.7\delta\). The angular velocity, \(\omega\equiv 2\pi/t_{\rm var}\), where \(t_{\rm var}\) is the period, and \(\phi\) is a random phase (see below). Figure 7: The allowed LL phase-space for shielded clouds. The allowed LL phase-space is shown for clouds with equal column densities under several degrees of shielding (denoted by the shielding columns of neutral cold gas; see legend). Enhanced shielding pushes the optimal phase space for LL to higher ionization parameters (lower density gas), and higher columns (see text), but the area available for LL remains comparable. The inset shows a blow-up of the phase space for non-shielded clouds when a condition on the dynamical times is added (see text), for several levels of \(\delta v_{ll}/v_{\infty}\). The particular cloud properties used in our kinematic analysis of §4.2 are denoted by colored stars. Quasar flux variations result in the clouds' properties tracing a diagonal line in the \([U_{1},U_{2}]\) plane, whose length is defined by the RMS level of the sinusoidal signal (color coded; see inset's legend with values given in RMS over mean units). Figure 8: Typical (median) peak-to-peak luminosity variations that are consistent with stable LL configurations, for three values of the column density of the shielding cloud (see legend in log units) and for a range of column densities of the shielded cloud (abscissa). For example, for cloud configurations with \(\log(N_{\rm H,shielding})=18\), \(\log(N_{\rm H,shielded})=17.5\), the median peak-to-peak variation (over the available phase space, as it appears in the right panel of Fig. 3) that can sustain stable LL is \(\simeq 0.2\). The results are shown in Fig. 8 as a function of \(N_{1}/N_{2}\) for several values of \(N_{1}\). It is clear that low column density configurations have a higher tolerance to flux variations of the source, and among those, configurations for which the clouds' columns are comparable (and hence their ionization parameters as well, as they lie along the main diagonal) are most robust. For larger columns, the system is relatively easy to disrupt from a LL equilibrium. Further, cloud configurations in which the shielding cloud has a higher column than the shielded cloud (i.e., \(N_{1}>N_{2}\)) are more resilient to luminosity fluctuations of the ionizing source. For the particular models shown in Fig. 8, \(\sim 30\)% flux variations are not expected to disrupt a pair of LL clouds with columns of order \(10^{17}\,{\rm cm}^{-2}\) (assuming \(\delta v_{ll}/v_{\infty}\to 0\)), but could easily disrupt a system whose columns are of order \(10^{19}\,{\rm cm}^{-2}\) unless the columns agree to better than \(\sim\)10%. We emphasize that the quoted results are likely upper limits on the flux variations that LL systems can tolerate, since \(\delta v_{ll}/v_{\infty}\) is finite and the phase space conducive to LL is accordingly more limited; see Fig. 7, where larger \(\delta v_{ll}/v_{\infty}\)-values substantially reduce the available phase space and \(\delta_{\rm RMS}\) (not shown). #### 4.2.1 Asymptotic inter-cloud kinematics As described above, in our simulations we assume a single sinusoidal mode of a given amplitude, frequency, and random phase. Motivated by the data for J 2123-005 (Hamann et al., 2011), we consider two plausible models for the kinematics of dusty clouds: a model in which the clouds are accelerated from torus scales (10 pc), and a model where they accelerate from bulge scales (100 pc). The terminal outflow velocity for clouds traveling ballistically with constant properties is that given by Eq. 10. The inter-cloud kinematics follows from the solution to equations 3 with \(L(t)\) given by Eq. 13. We assume the ionization-recombination timescales are the shortest in the problem so the gas thermal and ionization states are instantaneously set by \(L(t)\), which translates into time variation of \(U\). We assume \(N_{1}=N_{2}=10^{17}\,{\rm cm}^{-2}\), and \(U_{1}=10^{-0.5},\ U_{2}=10^{-0.77}\) (§4.1).
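A bare-bones transcription of this setup (Eqs. 3 and 13) is sketched below; the constant force multipliers, the Gaussian stand-in for the line-blocking term of Eq. 6, and the neglect of the \(U\)-response of \(M\) to the varying luminosity are all simplifying assumptions made only to keep the example short.

```python
import numpy as np
from scipy.integrate import solve_ivp

SIGMA_T, M_P, C = 6.652e-25, 1.673e-24, 2.998e10     # cgs constants
X_M, PC = 0.85, 3.086e18
L0, DV_LL = 8.0e47, 499.6e5                          # erg/s, cm/s

def L_of_t(t, delta=0.3, t_var=3.0e10):
    """Eq. 13 (random phase omitted in this toy)."""
    return L0 * (1.0 + delta * np.sin(2.0 * np.pi * t / t_var))

def M_eff(which, v1, v2):
    """Toy force multipliers; cloud 2 loses part of its multiplier when its C IV
    troughs overlap those of cloud 1 (a Gaussian stand-in for Eq. 6)."""
    if which == 1:
        return 2500.0
    dv = v2 - v1
    return 2550.0 - 80.0 * np.exp(-(dv - DV_LL) ** 2 / (2.0 * (1.0e6) ** 2))

def rhs(t, y):
    r1, v1, r2, v2 = y
    acc = lambda r, M: X_M * SIGMA_T * L_of_t(t) * M / (4.0 * np.pi * r**2 * M_P * C)
    return [v1, acc(r1, M_eff(1, v1, v2)), v2, acc(r2, M_eff(2, v1, v2))]

y0 = [100.0 * PC, 1.0e5, 100.0 * PC, 1.0e5]          # bulge-scale launch, ~1 km/s start
sol = solve_ivp(rhs, (0.0, 3.0e11), y0, max_step=1.0e8, rtol=1e-8)
print("asymptotic dv [km/s]:", (sol.y[3, -1] - sol.y[1, -1]) / 1.0e5)
```

Sweeping `delta` and `t_var` in such a toy reproduces the qualitative behaviour described next (locking for fast, small-amplitude variations; a spread of asymptotic \(\delta v\) otherwise), but none of the printed numbers should be read as model predictions.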
We track the velocity difference between the clouds, \(\delta v\), at all times, and log its asymptotic value as a function of \(\omega\) and \(\delta\) for torus clouds and for bulge clouds. As expected, luminosity variations on timescales much shorter than dynamical timescales (\(t_{\rm var}\ll t_{\rm dyn}\)) do not prevent clouds from attaining a LL position for \(\delta_{\rm RMS}=0.7\delta\leq 0.6\). In particular, the clouds' relative acceleration, \(\delta a\), and \(\delta v\) trace closed loops in phase space while accelerating coherently to high velocities from bulge scales (Fig. 9). Nevertheless, for luminosity variations that operate on timescales \(t_{\rm var}\gtrsim 0.1t_{\rm dyn}\), and for \(\delta_{\rm RMS}>0.2\), clouds do not settle, in most-to-all cases, to a LL position, with their relative velocity showing a bifurcation pattern. The \(\delta v\)-range increases with \(\delta_{\rm RMS}\). Qualitatively similar behavior is observed for torus clouds and for bulge clouds, although the \(\delta v\)-range in the latter is smaller on account of the smaller accelerations at large distances. For bulge clouds, \(\delta v<\delta v_{ll}\) is also observed on account of the smaller relative cloud acceleration in a fraction of the models, preventing them from reaching LL velocities. In the latter models, the system is more stable to luminosity variations occurring on \(t_{\rm var}\lesssim t_{\rm dyn}\), which is due to the fact that the system does not attain a line-locked configuration much before \(t_{\rm dyn}\) (see above). The insets of Fig. 9 also show the probability of a system of clouds achieving a LL configuration under the effect of varying quasar luminosity for torus and for bulge clouds, as a function of the variability timescale. Clearly, the systems are most susceptible to variations over dynamical timescales. Systems that achieve their LL state over shorter timescales with respect to dynamical timescales are more prone to be driven out of LL equilibrium. In particular, for significant \(\gtrsim 30\%\) variations over dynamical timescales, torus clouds with \(t_{\rm dyn}\sim 10^{3}\) years will all be driven out of LL equilibrium. Figure 9: Response of LL systems to quasar light variations (systems originating from the torus/bulge are shown in the left/right panel). The asymptotic value of \(\delta v\) between the systems is shown as a function of the quasar variability period (assumed sinusoidal), \(t_{\rm var}\), for several values of RMS variability (see the legend in the left panel, which applies to both panels). Systems can reach a LL position and maintain it when \(t_{\rm var}\) is much shorter than all dynamical timescales. At periods comparable to or larger than dynamical timescales, the clouds do not settle to a LL position, and the asymptotic \(\delta v\) distribution shows a bifurcation pattern. The probability, \(P\), for finding LL systems under quasar light variations of a given RMS amplitude with a period \(t_{\rm var}\) is shown in the lower insets of both panels (here we define LL systems as having \(450<\delta v<500\,{\rm km\,s^{-1}}\)). The upper inset in the right panel shows a typical solution for the response of LL clouds to periodic quasar variations (time flows along the blue curve, with the clouds settling to a state with \(\delta v\simeq 484.1\,{\rm km\,s^{-1}}\) and showing small velocity and relative-acceleration oscillations that correspond to an inverted "heart"-shaped curve). ### Why LL of the C iv doublet? The Bowler et al.
(2014) and Mas-Ribas (2019) studies show that LL at the velocity separation of the C iv\(\lambda\lambda 1548.19,1550.77\) doublet is common among multi-component intrinsic NALs, but find little evidence for substantial LL features at \(\delta v_{ll}<500\,{\rm km\ s^{-1}}\). While this could be partly attributed to the limited spectral resolution of large spectroscopic surveys, we are not aware of a large number of such cases found in high-resolution data. Conversely, LL with \(\delta v_{ll}>500\,{\rm km\ s^{-1}}\) has been sporadically detected due to transitions in the near UV (e.g., Srianand & Petitjean, 2000; Lu et al., 2018), but does not appear to be as common as the CIV doublet locking (Bowler et al., 2014). LL with velocity separations corresponding to far-UV (FUV) transitions, which lie at the presumed peak of quasar emission, has not, thus far, been robustly identified to the best of our knowledge. Theoretically, continuum absorption by resonance transitions drives the gas at the implied densities, with the contribution of absorption from excited levels being negligible. The wavelength distribution of resonance lines leads to a spectrum of velocity differences, which deviates from a random distribution due to atomic physics at velocity-separations \(\lesssim 10^{4}\,{\rm km\ s^{-1}}\) (Fig. 10). Focusing on a subset of atomic transitions, which is relevant for optically thin \(U\sim 1\) gas6, shows a similar behavior. Notably, the velocity difference of the CIV \(\lambda\lambda 1548,1550\) doublet is not expected to be the lowest velocity difference to line-lock. Below we aim to qualitatively address this issue for a few particular examples. Footnote 6: Here we take all transitions with oscillator strengths \(>0.1\) for all prominent ions of all abundant elements for which the ionization fraction is \(>0.1\), for a total of 167 transitions. The CIV \(\lambda\lambda 312.42,312.45\) doublet has oscillator strengths similar to those of the CIV \(\lambda\lambda 1548.19,1550.77\) doublet, so that their contributions to the radiation pressure force can be significant if the monochromatic FUV luminosity dominates. Specifically, for clouds whose initial velocity separation is negligibly small, one expects the systems to lock first at a velocity separation of \(\sim 30\,{\rm km\ s^{-1}}\). Realistically, however, the trough widths are comparable to this LL velocity difference (Hamann et al., 2011), hence random motions in the medium could mask out LL signatures or even prevent locking from occurring at such subsonic speeds. The OIV multiplet has its \(554.1\,{\rm\AA}\), \(554.5\,{\rm\AA}\) transitions separated by \(\simeq 237\,{\rm km\ s^{-1}}\), and oscillator strengths which make their contribution to the radiation pressure comparable to that of the CIV doublet. Such a velocity difference between line-locked systems has not been statistically uncovered by large surveys (Mas-Ribas, 2019), nor detected in large numbers in high-resolution data. This could imply that gas conditions - whether composition or ionization state - are less conducive to LL by OIV, or that the quasar SED is much softer than assumed here, perhaps due to continuum absorption shortward of the Lyman edge. Alternatively, the clouds may have an initial velocity difference that exceeds \(\simeq 237\,{\rm km\ s^{-1}}\) once accelerated along our sightline to the quasar.
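The velocity-difference spectrum underlying Fig. 10 is simple bookkeeping over a line list; a minimal version is sketched below with a six-transition placeholder list (approximate oscillator strengths) rather than the 167 transitions used in the text.

```python
from itertools import combinations

C_KMS = 299792.458

# (ion, rest wavelength [A], approximate oscillator strength) -- illustrative subset only.
lines = [
    ("C IV", 1548.19, 0.190), ("C IV", 1550.77, 0.095),
    ("Si IV", 1393.76, 0.513), ("Si IV", 1402.77, 0.255),
    ("N V", 1238.82, 0.156), ("N V", 1242.80, 0.078),
]

pairs = []
for (ion1, lam1, f1), (ion2, lam2, f2) in combinations(lines, 2):
    lam_blue, lam_red = sorted((lam1, lam2))
    dv = C_KMS * (lam_red - lam_blue) / lam_blue     # non-relativistic separation
    if dv < 1.0e4:                                   # range relevant for LL (cf. Fig. 10)
        pairs.append((dv, ion1, ion2))

for dv, ion1, ion2 in sorted(pairs):
    print(f"{ion1:>5s} - {ion2:<5s}  dv = {dv:7.1f} km/s")
```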
More generally, for two _identical_ clouds that are exposed to a flat \(\nu L_{\nu}\) (this is justified to within a factor of two in the range 700-20,000 Å for our chosen SED), it follows from equation 6 that the ratio between \(\delta M\) due to perfect LL by some multiplet \(X\) (when the troughs perfectly overlap in velocity space so that \(\delta v=\delta v_{ll}\)) to that due to the C iv doublet is \[\frac{\delta M_{\delta v_{ll},\mathrm{X}}}{\delta M_{\delta v_{ll},\mathrm{CIV}}}\simeq\left\{\begin{array}{cc}(\tau_{\mathrm{X}}/\tau_{\mathrm{CIV}})^{2}&\tau\ll 1\\ \sqrt{\ln(\tau_{\mathrm{CIV}})/\ln(\tau_{\mathrm{X}})}&\tau\gg 1\end{array}\right., \tag{14}\] where the optical depth (\(\tau\)) limits considered apply to both transitions. The limit \(\tau\gg 1\) is included for completeness and is less relevant as the contribution of very optically thick lines to \(M\) is small (e.g., Fig. 2). Here we neglected small thermal-broadening differences between different metal lines. Figure 10: Velocity difference (\(\delta v\)) distributions for resonance lines. The blue curve shows the \(\delta v\)-distribution for all transitions with an oscillator strength greater than 0.1. Deviations from a random distribution (dashed blue line) are seen at the low velocity end due to atomic physics, and at the high-velocity end due to special relativistic effects (see inset). Focusing only on transitions relevant to highly-ionized gas (see text) leads to a similar distribution (red line). In both cases, there are transitions which are theoretically able to LL at \(\delta v<500\,{\rm km\ s^{-1}}\). Introducing a spectral cutoff beyond the Lyman edge results in C iv\(\lambda\lambda 1548.19,1550.77\) being the first strong transition to LL. We numerically evaluate Eq. 6, and show the minimal velocity at which LL is expected to occur based on the following prescription: for each model defined by \([U,N]\), all multiplets that satisfy \[\frac{\delta M_{\delta v_{ll},\mathrm{X}}}{\delta M_{\delta v_{ll},\mathrm{CIV}}}\geq\beta, \tag{15}\] are included, and the minimal velocity at which LL occurs is associated with the multiplet having the smallest velocity separation. We first consider \(\beta=1\) as a threshold, with the underlying premise being that transitions with \(\delta M_{\delta v_{ll},\mathrm{X}}\)-values comparable to or larger than that of the C iv doublet are more likely to line-lock first if their \(\delta v_{ll}<500\,{\rm km\ s^{-1}}\), given the larger phase-space volumes associated with them7. The calculations imply that for much of the phase-space that appears to be relevant to line-locked systems (Hamann et al., 2011; Bowler et al., 2014) the C iv doublet is more likely to lock first (see Fig. 11). This is not the case, however, for low ionization systems with \(U\lesssim 10^{-2}\), which may lock at velocities of order the line broadening due to O iii transitions. High ionization (\(U>1\)), low column systems are more likely to lock at velocities that correspond to those of O iv transitions. Footnote 7: For (nearly) identical and co-spatial clouds, LL will occur at the minimal velocity separation between transitions, which is just above the effective thermal speed of the medium, and once the kinematic effects of pressure gradients subside. To qualitatively assess the degree to which composition and/or changes to the quasar SED could affect LL velocities by different transitions we resort to a very simplified prescription whereby only transitions that satisfy Eq.
15 with \(\beta\neq 1\) are included. In case no transitions are found that satisfy the criterion, or those that do satisfy it imply a \(\delta v_{ll}>500\,\mathrm{km\ s}^{-1}\) then the LL velocity is set to \(500\,\mathrm{km\ s}^{-1}\). For \(\beta=5\), this approach effectively boosts the relevance of the CIV \(\lambda\lambda 1548,1550\), which could be due to a relative suppression of the EUV flux of the quasar - perhaps due to continuum shielding - or due to enhanced carbon abundance in the gas (see SS5.2). Under this criterion, the phase space leading to \(\delta v_{ll}\simeq 500\,\mathrm{km\ s}^{-1}\) is enlarged, covering much of the relevant \([U,N]\) plane (Fig. 11). Changing the criterion to \(\beta=0.2\) includes many more transitions, which are able to LL at lower velocities (perhaps due to enhanced EUV flux or a reduced carbon abundance), and at no point in phase space does the system lock at \(\delta v_{ll}>240\,\mathrm{km\ s}^{-1}\). Specifically, systems with properties similar to those observed in J 2123-005 are then more likely to lock by the aforementioned O iv doublet. The fact that many LL systems are identified at \(\delta v_{ll}\simeq 500\,\mathrm{km\ s}^{-1}\) implies that the EUV hump cannot be significantly underestimated by our model, or that some mechanism exists, which forms clouds with an initial relative velocity which significantly exceeds the thermal speed, by as much as an order of magnitude before being accelerated along our sightline to the quasar. A more quantitative follow-up of LL kinematics involving all candidate transition for LL, which includes the effect of quasar variability and the potential hopping between different line-locked transitions, is beyond the scope of this work. ## 5 Discussion Figure 11: The estimated minimal line-locking for clouds of given identical properties, which remain fixed over the outflow dynamical timescales. In each panel a different set of absorption lines is considered, which satisfies Eq. 15 for different values of \(\beta\)_Left:_\(\beta=5\) results showing that locking at the C iv\(\lambda\lambda 1548.19,1550.77\) doublet separation occurs for much of the available phase space. _Middle:_ results for \(\beta=1\) that correspond to the standard model, imply that this simple model is not inconsistent with the observations and the implied gas properties, and that LL at \(500\,\mathrm{km\ s}^{-1}\) is expected. _Right:_ results for \(\beta=0.2\) predicted LL at lower velocities due to other multiplets (see text). Our results imply that the fractional phase-space volume conducive to LL in NAL systems is of order a per-cent or less, and therefore much smaller than implied by recent statistical studies of such systems (Bowler et al., 2014). This suggests fine-tuning of the clouds properties, which sets stringent constraints on their formation path, their evolution over dynamical timescales, and their environment. ### Implications for cloud-formation scenarios Below we consider a non-exhaustive set of models for the formation of outflowing absorption-line systems in quasars. #### 5.1.1 Velocity condensations It has been previously suggested that NALs are formed by condensations in velocity space of numerous optically-thin cloudlets spread in velocity space, which undergo line-locking, and accumulate at particular velocities (Milne, 1926; Scargle et al., 1970; Scargle, 1973). Therefore, the formation of line-locked systems in the context explored here is just one manifestation of a potentially more general phenomenon. 
Our calculations have shown that should cloudlets be formed having a range of densities and column-densities, and with C iv significantly contributing to the radiation pressure force, only a very small fraction of those clouds, of order a per-cent at most, would be able to line-lock. Further, the more optically thin the clouds are, the higher the degree of fine-tuning required for them to lock since \(a_{2}-a_{1}<\delta a_{\rm rad}\propto\tau^{2}\to 0\) at low opacity (Eq. 6 and related text in SS4). Such a scenario suggests then that the majority of the material should remain spread out in velocity space, and give rise to very shallow troughs. In that case, however, much higher values of reddening, at the level of \(E(B-V)\sim 1\,\)mag, would be observed for a dust-to-metals ratio typical of the local ISM, contrary to observations (Bowler et al., 2014). Further, if numerous cloudlets line-lock to make up discrete absorption components then the numbers of high-multiplicity systems will exceed those observed. We therefore consider this scenario unlikely.

#### 5.1.2 Turbulent media

It is intriguing that current density and location estimates for LL systems (as part of the more general NAL population) imply the presence of spatially compact and dense (\(\gtrsim 10^{3}\,{\rm cm}^{-3}\)) dusty clouds on hundreds of pc scales away from the central black hole. In non-active galaxies, such properties characterize molecular clouds. The formation of molecular clouds, and in particular their cores, is believed to arise from supersonic turbulence. In this scenario, significant compression occurs due to strong shocks, in which the post-shock compressed gas can significantly cool and condense. Here we assume that NAL systems are relics of molecular clouds, and follow the statistical properties of a turbulent medium from which they formed. The degree to which this assumption is realistic for an accelerated medium is unclear. Recent simulations of supersonic (isothermal) turbulence suggest that the density distribution is of the log-normal type, \(P(\rho)\propto\exp\left[-\left(\ln\rho-\left\langle\ln\rho\right\rangle\right)^{2}/2\tilde{\sigma}^{2}\right]\), over four orders of magnitude in (normalized) density (Kritsuk et al., 2007). Numerical studies find that \(\left\langle\ln\rho\right\rangle=-\tilde{\sigma}^{2}/2\), where \(\tilde{\sigma}^{2}=\ln\left(1+b^{2}\mathcal{M}^{2}\right)\) with \(b\lesssim 1\), and \(\mathcal{M}\) is the Mach number. With our density estimates relative to the mean ISM density implying \(\ln\!\rho\sim 8\), the column density distribution, \(N(\rho)\sim\rho P(\rho)\), lies on the decaying tail, such that \(N(\rho)\propto\rho\exp\left[-\left(\ln\!\rho\right)^{2}/2\right]\sim\rho^{1-\ln(\rho)/2}\sim\rho^{-3}\) (we assume \(\tilde{\sigma}^{2}\sim 1\) due to the logarithmic dependence on \(\mathcal{M}\), and \(\mathcal{M}<10\); Tofflemire et al., 2011). The deduced \(N(\rho)\) is very different from the one required for LL to operate, for which \(N(\rho)\propto\rho^{-\eta}\) with \(\eta<0\) (see Fig. 4 and SS3.2.3). It is therefore unlikely that LL systems originate from a turbulent ISM structure that is typical of (non-active) galaxies.
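As a quick check of the scaling quoted above, the following minimal sketch adopts the text's simplification (\(\tilde{\sigma}^{2}=1\), \(\left\langle\ln\rho\right\rangle\) neglected) and recovers the effective \(N(\rho)\propto\rho^{-3}\) behaviour at \(\ln\rho\sim 8\).

```python
# N(rho) ~ rho * exp[-(ln rho)^2 / 2] = rho^{1 - ln(rho)/2}  (see text)
def lnN(lnrho):
    return lnrho - 0.5 * lnrho**2   # ln of rho * exp[-(ln rho)^2 / 2]

# Effective power-law exponent over the density contrast relevant here
# (ln rho ~ 8 relative to the mean ISM density):
lnrho = 8.0
p_eff = (lnN(lnrho) - lnN(0.0)) / lnrho     # equals 1 - ln(rho)/2
print(f"N(rho) ~ rho^{p_eff:.1f} at ln(rho) ~ {lnrho:.0f}")   # rho^-3.0
```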
#### 5.1.3 Mechanically compressed and pushed ISM clouds

Recently proposed models for quasar outflows suggest that absorption line systems (including BALs) result from the compression and mechanical acceleration of the ISM on galactic scales by a fast and hot wind emanating from the active nucleus (Faucher-Giguere et al., 2012; Zeilig-Hess et al., 2020). This class of models often does not directly include the effect of radiation pressure force in lines on the cloud kinematics, but can be used to test whether the resulting condensations' properties are consistent with those required by LL arguments, which signify the dynamical importance of radiation pressure force. Specifically, Zeilig-Hess et al. (2020, see their Fig. 8) give predictions for the column density distribution of the compressed ISM clouds, whose velocity-dependent average declines with velocity by \(\sim 0.5\,\)dex over line-locking velocity separations. Under such conditions, LL is unlikely to materialize. Further, the statistics reported by Zeilig-Hess et al. (2020) imply that the probability for two clouds to have a velocity separation of \(<50\,{\rm km\;s^{-1}}\) (\(400-600\,{\rm km\;s^{-1}}\)) _and_ have their column densities similar to within 0.1 dex is \(\sim\) 13% (\(\sim\) 10%), hence significantly lower than the observed LL statistics. The above probability estimates from the simulations are very likely over-estimated since a large range of gas temperatures - i.e., densities - was assumed by Zeilig-Hess et al. (2020) to provide column-density predictions, while, as our calculations imply, fine-tuning of the clouds' density _and_ column-density is required. Nevertheless, it must be emphasized that the relevance of the Zeilig-Hess et al. (2020) simulations to high-velocity NALs has yet to be worked out since the velocity and column-density ranges included in those simulations are different from the observed ones, and the relevant radiation pressure force terms were not included in their work.

#### 5.1.4 Medium instabilities

Perhaps the greatest challenge of (radiation-) hydrodynamic instabilities in explaining the emergence of LL in quasars is the high level of fine-tuning required to facilitate LL between physical components of the outflowing medium. Therefore, drawing robust conclusions requires detailed numerical simulations, which are unavailable for the problem at hand. Here we make no attempt to do so, and resort instead to qualitative analytic arguments for a few cases of interest. Thermal instability is a plausible means to form discrete entities - "clouds" - of cool condensations from a more dilute and hot medium (Mo & Miralda-Escude, 1996; Brandenburg et al., 2007). For the chosen SED, the gas is thermally stable under isochoric conditions. Further, LL optimally occurs for ionization parameters which are also thermally stable under isobaric conditions. Marginal stability, but not formal instability, exists for \(10^{0.8}<U<10^{2}\) in our model, so that gas components that cover the temperature range \(3\times 10^{4}-2\times 10^{5}\) K may be in pressure equilibrium. Therefore, for the particular model explored here, thermal instability is unlikely to give rise to the observed condensations, and to the narrow range of system properties implied by LL considerations. Consider also a more dynamical scenario in which thermally unstable gas under isobaric conditions is exposed to a varying quasar flux with period \(t_{\rm var}\) (we neglect other perturbations in our highly simplified description).
In this case, gas whose cooling/heating timescales are short, as in our case, could settle to a stable thermal state if isobaric conditions are achieved in regions whose sound-crossing timescale satisfies \(t_{s}\sim(N/n)/c_{s}\sim t_{\rm var}\), where \(c_{s}\) is the sound speed in the hot medium with temperature \(T_{h}\), and \(N\) is the column density. This leads to the condensations' column densities satisfying \(N(t_{\rm var})\propto t_{\rm var}\). As quasars vary over a range of timescales, the column density distribution is not single valued, and would probably be broadened by a myriad of additional processes not included here. While the true effect must be calculated numerically, it is unclear how such a process would provide a high degree of fine-tuning of the clouds' properties. A further challenge for this model concerns reddening constraints. Specifically, the column density of the volume-filling hot medium from which the cool gas condenses, and with which it is in pressure equilibrium, is given by \(r_{0}n_{c}(T_{c}/T_{h})\sim 10^{21}r_{100{\rm pc}}\,{\rm cm}^{-2}\), where we assumed \(n_{c}\sim 10^{3}\,{\rm cm}^{-3}\) and \(T_{h}\sim 10^{6}\) K. This implies significant rest-frame visual extinctions of \(\sim 0.5\) mag toward quasars, which are not observed. Thus, unless dust formation occurs in-situ, thermal instability is an unlikely origin for high-velocity LL NALs. Another type of instability is the line-driven instability (LDI), which is thought to operate in the winds of massive stars (Owocki & Rybicki, 1984), and has been suggested as a possible scenario in the context of LL systems in quasars (Bowler et al., 2014). In this scenario, small velocity perturbations of the outflow lead to shadowing/de-shadowing effects among its different parts. These cause non-monotonic spatial variations in the radiation-pressure force, which lead to non-monotonic velocity fluctuations in the flow, hence to growing density stratification and to shocks. These result in a multiphase structure with typical length scales of order the Sobolev length scale of the flow, \(l_{\rm Sobo}\simeq\sigma/(dv/dr)\), where \(dv/dr\) is the (local) velocity gradient (Sundqvist et al., 2018). Unlike in stellar winds, the Sobolev length scale for NAL clouds is of order the entire cloud length, and the statistics of multiple NAL systems do not support a stellar-wind-like scenario. Further, considering published numerical calculations of LDI, it is not clear that the level of fine-tuning, which is required for LL to operate, may be reached (Sundqvist et al., 2018). In addition, we expect LDI to be less prominent in NALs, which are primarily driven by continuum processes (light absorption by dust) and are sensitive to the flow ionization level rather than merely to the line-opacity between different phases of the flow. Thus, it is unclear how relevant LDI is to NAL flows in quasars. It may be interesting to examine the development of Rayleigh-Taylor instability (RTI) at the leading (non-illuminated) face of a radiatively accelerated NAL cloud through a dilute ambient medium. In the self-similar phase of RTI (assuming one develops within \(t_{\rm dyn}\)), the mixing layer between the dense and dilute media expands with time, \(t\), such that its time-dependent scale-height \(h(t)\sim a_{\rm rad}t^{2}\), where we assume a high density contrast between the phases (Atwood number of order unity) and incompressibility, which are clear over-simplifications (Ristorcelli & Clark, 2004).
At its leading edge, the mixing layer therefore expands at a speed of \(v\sim a_{\rm rad}t\), and material - hereafter extrusions - whose velocity exceeds the thermal speed of the cloud will be de-shadowed, and hence able to accelerate more efficiently, extrude, and ultimately detach from the parent cloud. The column density of the extrusion is estimated here by the product of the density of the parent cloud medium (the degree to which this holds in reality needs to be verified by appropriate simulations) and the scale-height where de-shadowing occurs, \(h\sim(a_{\rm rad}/2)(\sigma/a_{\rm rad})^{2}\), which gives \[N\sim\frac{1}{M\sigma_{T}}\left(\frac{\mathcal{U}_{\rm gas}}{\mathcal{U}_{\rm rad}}\right)\sim 10^{18}\frac{n_{4}T_{4}r_{100{\rm pc}}^{2}}{L_{48}M_{3}}\,{\rm cm}^{-2}, \tag{16}\] where \(\mathcal{U}_{\rm gas}\) (\(\mathcal{U}_{\rm rad}\)) is the gas- (radiation-) energy density. Such columns are in the rough ballpark of the column densities found by Hamann et al. (2011), but it remains to be seen whether such a mechanism can consistently operate and lead to the fine-tuning required for LL.

#### 5.1.5 Radiation-pressure confined clouds

It has been shown that radiation pressure confined (RPC) gas can achieve a remarkably uniform structure regardless of the initial/boundary conditions imposed (Baskin et al., 2014; Stern et al., 2014), and hence is a promising candidate for producing clouds whose properties are highly correlated, as required by LL conditions. The radiation-to-gas pressure ratio is given by \[\frac{P_{\rm rad}}{P_{\rm gas}}=\frac{\tau_{e}M}{nk_{\rm B}T}\frac{L}{4\pi r^{2}c}\simeq 10^{-2}N_{17}U^{\gamma}, \tag{17}\] where the Compton optical depth, \(\tau_{e}\), is assumed to be \(\ll M^{-1}\), and the power-law index satisfies \(\gamma\simeq 0.7\), which incorporates the dependence of the total force multiplier on \(U\) and of the gas temperature on \(U\) over the relevant range (not shown). Here \(N_{17}\equiv N/10^{17}\,\mathrm{cm}^{-2}\), and we neglected the modest dependence of \(M\) on the column density for marginally optically thick media. Therefore, clouds whose properties are optimal for LL are characterized by \(P_{\mathrm{rad}}/P_{\mathrm{gas}}\ll 1\) and thus do not provide an indication of RPC dynamics. Taking into account the observed total columns per system and the supra-thermal line-broadening does not appreciably change our conclusions (SS3). Furthermore, steady-state RPC requires that the leading edge of the cloud is extremely optically thick (so that the bulk acceleration is negligible, as in the case of BLR clouds) or that there is ram pressure from the ambient medium, which balances the pressure of the compressed gas. The latter scenario, which may be relevant for NAL systems at large (Stern et al., 2014), encounters great difficulties in the context of LL since it necessitates extreme fine-tuning between disparate physical mechanisms, namely the radiation pressure force and the drag force. We therefore find the RPC scenario to be an unlikely explanation for NAL systems undergoing LL.

#### 5.1.6 Circumstellar AGB shells

The notion that some quasar outflows originate in continuous stellar winds and their contrails has been suggested by Scoville & Norman (1995). Here we qualitatively examine whether the large expanding circumstellar shells detected, for example, around many (carbon-rich) AGB stars (Hofner & Olofsson, 2018) could potentially be the origin of LL systems.
The model has several appealing attributes: a) it identifies an origin for dense, metal-rich and dusty gas components on galaxy bulge scales, b) it naturally fine-tunes the properties of the two seemingly distinct kinematic components by associating them with a common symmetric origin (a star), and c) it provides an initial velocity separation between the two kinematic components, which are identified with the approaching and receding sides of an expanding shell, thereby preventing LL of nearly identical systems at relative subsonic speeds (Fig. 12). In addition, it provides a natural explanation for the extremely high aspect ratio implied for some systems (Hamann et al., 2011).

Figure 12: A possible depiction of a circumstellar AGB shell evolution toward a line-locked system configuration. A compact geometrically thin shell is ejected during a thermal pulse of the AGB star (top panel) and expands to gradually cover the quasar continuum emission region while being accelerated by radiation pressure force due to illumination by the quasar (ISM interaction is neglected, perhaps due to ISM pre-evacuation by the quasar). The shell detaches from its origin, accelerates radially from the bulge with its leading edge developing an increasing velocity difference with respect to the trailing side, thereby stretching the shell along the radial direction to the quasar (middle panel). LL velocities are attained over dynamical timescales between the leading and trailing shell rims (lower panel). These drawings are qualitative at best, and the actual nebular shape need not be an ellipsoid (this uncertainty is denoted by dashed shape lines at long timescales). This may be particularly true at locations whose normal vector to the surface is perpendicular to the radial direction (to the quasar), which results in significantly different acceleration of the rims due to optical depth effects.

We emphasize that it is not our intention to quantify the level of symmetry that is required by AGB shells to facilitate LL and compare it to available data for nearby AGB shells, nor do we aim to carry out detailed hydrodynamic calculations to study the stability of an expanding shell configuration over dynamical timescales. It is also not within our scope to provide detailed spectral predictions for the absorption and extinction signatures across the electromagnetic spectrum from an ensemble of AGB shells along our sight-line to continuum region(s) in quasars.

Here we consider a qualitative model in which the outflow originates in the host galaxy bulge, whose size at \(z\sim 2\) is \(r_{b}\simeq 10^{21}M_{b,10}^{1/2}\,\mathrm{cm}\) (Shen et al., 2003; Bruce et al., 2014), where the bulge mass is \(M_{b}=10^{10}M_{b,10}\,\mathrm{M}_{\odot}\). We parameterize the launching radius of the outflow as \(r_{0}=\epsilon_{r}r_{b}\), where \(\epsilon_{r}\lesssim 1\) for a bulge origin of the outflow. In this case, the dynamical timescale of the NAL outflow satisfies \(t_{\mathrm{dyn}}\sim 10^{4}\epsilon_{r}r_{b,21}v_{\infty,4}^{-1}\,\mathrm{years}\), during which time it should be detectable as a NAL system, namely it should substantially cover the continuum emission region of the quasar and have a non-negligible optical depth in relevant UV transitions. Geometrically, the radius of the expanding AGB wind over dynamical timescales, neglecting ISM interaction, is \[r_{w}\sim\epsilon_{r}r_{b}\frac{v_{w}}{v_{\infty}}\simeq 10^{18}\epsilon_{r}r_{b,21}v_{w,10}v_{\infty,4}^{-1}\,\mathrm{cm}\simeq 10^{19}\epsilon_{r}^{3/2}\Gamma_{\mathrm{Edd}}^{-3/4}L_{48}^{1/4}v_{w,10}\,\mathrm{cm}, \tag{18}\] where the AGB wind speed is \(v_{w}=10v_{w,10}\,\mathrm{km\ s}^{-1}\). In the last step we used Eq. 10 (with \(M_{3}=3\)) and assumed a bulge-BH-mass relation such that \(M_{b}\sim 100M_{\mathrm{BH}}\) (Haring & Rix, 2004; Peng et al., 2006; Ding et al., 2020, but note that this does not apply to pseudo-bulge and pure-disk systems; Kormendy et al., 2011), and recast the expression in terms of the Eddington ratio of the quasar, \(\Gamma_{\mathrm{Edd}}\). Clearly, if the current quasar luminosity does not reflect the average luminosity over the dynamical time, or the gas acceleration changes markedly with distance aside from geometrical flux-dilution effects, then the last step may lead to erroneous conclusions about \(r_{w}\).
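As a rough numerical illustration of Eq. 18 and its sensitivity to the launching radius, the snippet below evaluates the last form of the equation for two illustrative values of \(\epsilon_{r}\); all fiducial parameter values here are for illustration only.

```python
# Evaluate the last form of Eq. 18 for illustrative fiducial parameters.
L_48 = Gamma_Edd = v_w_10 = 1.0

def r_w_cm(eps_r):
    # r_w ~ 1e19 * eps_r^(3/2) * Gamma_Edd^(-3/4) * L_48^(1/4) * v_w_10  [cm]
    return 1e19 * eps_r**1.5 * Gamma_Edd**-0.75 * L_48**0.25 * v_w_10

for eps_r in (1.0, 0.1):
    print(f"eps_r = {eps_r}: r_w ~ {r_w_cm(eps_r):.1e} cm")
# eps_r = 1.0 -> ~1e19 cm; eps_r = 0.1 -> ~3e17 cm
```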
Over dynamical timescales, the AGB wind could therefore fully cover the accretion disk, which has a half-light radius of \(r_{\rm SSY3}(\lambda)\sim 10^{16}L_{48}^{1/2}\lambda_{1550}^{4/3}\) (the inner disk boundary is ignored here), but may only partly cover the broad-line region, whose size is \(r_{\rm BLR}\simeq 3\times 10^{18}L_{48}^{1/2}\,\)cm (an optical-to-bolometric luminosity correction of 10 was assumed for the BLR size-luminosity relation of Bentz et al., 2013). This could give rise to partial-coverage effects in the absorption troughs due to the finite contribution of the BLR to the continuum signal (Chelouche et al., 2019, and references therein). Interestingly, the radial gap between the rims of the expanding shell, \(\delta r\), satisfies \(\delta r/r_{0}\sim\delta v_{ll}/v_{\infty}\sim 5\times 10^{-2}\lesssim|\delta M_{\delta v_{ll}}|/M\) for optical depth in the C iv transition of order unity (Fig. 5), thereby facilitating LL despite the distance gap developing between the kinematic components over dynamical timescales. This is especially true for high-metallicity but dust-poor gas. To absorb in the UV, the wind's ionization parameter should be of order unity (Hamann et al., 2011; Bowler et al., 2014) over dynamical timescales, hence on size-scales of order \(r_{w}\). For the chosen SED the following relation holds: \(U\simeq L_{48}n_{4}^{-1}r_{b,21}^{-2}\). This sets a requirement on the "instantaneous" mass-loss rate that leads to the expanding shell of \[\dot{M}\sim 4\pi\rho\epsilon_{r}^{2}r_{b}^{2}\frac{v_{w}^{3}}{v_{\infty}^{2}}\sim 6\times 10^{-3}\frac{\epsilon_{r}v_{w,10}^{3}}{U\Gamma_{\rm Edd}^{1/2}}L_{48}^{1/2}{\rm M}_{\odot}\;{\rm yr}^{-1}. \tag{19}\] In comparison, the maximal momentum-driven mass-loss rate that can be propelled by a star of luminosity \(L_{\star}\lesssim 10^{5}L_{\odot}\) (Ventura et al., 2018) is \(\dot{M}_{\rm max}\sim L_{\star}/(cv_{w})\sim 10^{-4}\,{\rm M}_{\odot}\;{\rm yr}^{-1}\) (this limit can increase by a factor of a few when multiple photon scatterings in an optically thick, non-porous medium are involved). Unless the AGB ejecta on large scales are characterized by \(v_{w,10}\ll 1\) (perhaps due to mass loading from the ISM), the model favors a more efficient radiative acceleration, such as due to a lower gas-to-dust ratio than the value adopted here of \(\simeq 100\) (see however Maercker et al., 2018), or due to a more compact launching region around the quasar than assumed here (e.g., \(\epsilon_{r}\lesssim 0.1\)), or their combination.
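A quick numerical check of the tension noted above evaluates Eq. 19 for illustrative fiducial parameters against the momentum-driven limit \(\dot{M}_{\rm max}\sim L_{\star}/(cv_{w})\); the specific parameter choices are for illustration only.

```python
# Compare Eq. 19 to the momentum-driven mass-loss limit for fiducial values.
L_SUN, M_SUN, C_CGS, YR = 3.83e33, 1.99e33, 3.0e10, 3.15e7

eps_r = U = Gamma_Edd = L_48 = v_w_10 = 1.0
Mdot_required = 6e-3 * eps_r * v_w_10**3 / (U * Gamma_Edd**0.5) * L_48**0.5   # Eq. 19 [Msun/yr]

L_star = 1e5 * L_SUN                       # erg/s, upper end of AGB luminosities
v_w = 10.0 * v_w_10 * 1e5                  # cm/s
Mdot_max = L_star / (C_CGS * v_w) * YR / M_SUN   # ~2e-4 Msun/yr

print(f"Eq. 19 requires  Mdot ~ {Mdot_required:.1e} Msun/yr")
print(f"Momentum-driven limit: Mdot_max ~ {Mdot_max:.1e} Msun/yr")
```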
Next we estimate the column density through the expanding shell at \(r_{w}\). Denoting the timescale of the thermal pulse during which the shell is ejected by \(t_{\rm tp}\), the shell thickness is \(\delta r\sim v_{w}t_{\rm tp}\), which remains constant during its expansion due to mass conservation (we neglect mass loading by the ISM or precursor-wind interaction; Mattsson et al., 2007). Denoting the ejected shell mass by \(M_{\rm s}=\dot{M}t_{\rm tp}\), the column density \(N\sim n\delta r\) satisfies \[N\sim\frac{M_{\rm s}}{4\pi m_{p}r_{b}^{2}}\frac{v_{\infty}^{2}}{v_{w}^{2}}\sim 6\times 10^{14}\epsilon_{r}^{-3}\Gamma_{\rm Edd}^{3/2}L_{48}^{-1/2}M_{\rm s,-3}\,{\rm cm}^{-2}, \tag{20}\] where \(M_{\rm s}=10^{-3}M_{\rm s,-3}\,{\rm M}_{\odot}\), with \(M_{\rm s,-3}\lesssim 10\) typical of detached AGB shells (Olofsson et al., 1996). To match the observed columns (\(\sim 10^{19}\,{\rm cm}^{-2}\); Ganguly et al., 2003; Hamann et al., 2011), the model (again) favors more compact launching regions satisfying \(\epsilon_{r}\lesssim 0.1\). Alternatively, the observed column might result from the confluence of many low-column systems. Nevertheless, this requires all of them to be fine-tuned to yield LL, which is highly improbable (see SS5.1.1).

The global covering fraction of AGB shells over the quasar sky is \(C_{g}\sim n_{\rm AGB}(t_{\rm dyn}/t_{\rm AGB})r_{w}^{2}r_{b}\), where we assumed that AGB shells survive for a dynamical time, and that all AGBs go through a thermal-pulse phase during their AGB-phase lifetime, \(t_{\rm AGB}\), whereby a single detached shell is ejected (see, however, Kastner & Wilson, 2021 for a discussion of multiple shell ejection events from AGB stars with periods of \(\gtrsim 10^{4}\) years). Noting that carbon-rich AGB stars are solar-like stars, we estimate \(n_{\rm AGB}=n_{\star}(t_{\rm AGB}/t_{\star})\), where \(t_{\star}\) is the lifetime on the main sequence and with \(n_{\star}=\epsilon_{b}M_{b}/(M_{\star}4\pi r_{b}^{3}/3)\). Here \(M_{\star}\) is the typical stellar mass, assumed without loss of generality to be solar, and hence \(t_{\star}\simeq 10^{10}\,\)years.8 The parameter \(\epsilon_{b}\) is the fraction of the bulge mass that is relevant for producing LL signatures (note that \(\epsilon_{r}\) and \(\epsilon_{b}\) are inter-dependent parameters via the density profile of the bulge; see below). With these definitions \[C_{g}\sim\frac{\epsilon_{b}M_{b}}{M_{\star}}\frac{t_{\rm dyn}}{t_{\star}}\left(\frac{v_{w}}{v_{\infty}}\right)^{2}\sim 50\epsilon_{b}\epsilon_{r}^{3/2}L_{48}^{3/4}\Gamma_{\rm Edd}^{-9/4}v_{w,10}^{2}. \tag{21}\] Taking \(\epsilon_{r}=0.1\) and \(\epsilon_{b}=0.01\) results in \(C_{g}\sim 0.01\), which is of order the observed value (Chen et al., 2021). Our choice of \(\epsilon_{b}\) is consistent with the presence of a compact nuclear star cluster (NSC, Neumayer et al., 2020), whose mass is \(\sim 10^{-3}-10^{-2}M_{b}\) in local sources (Georgiev et al., 2016, and references therein9). Our \(C_{g}\) estimate does not account for a time-dependent star-formation history, and depends on the survival time of accelerated AGB shells, as well as on the number of shells ejected during the AGB lifetime.

Footnote 8: The above estimates depend little on the assumed \(M_{\star}\) normalization, for a given stellar population, since the number of stars at mass \(M_{\star}\) is \(\propto\;M_{\star}^{-2.35}\) for a Salpeter initial mass function, while their lifetime is \(\propto\;M_{\star}^{-2.5}\).
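For orientation, the following minimal sketch evaluates the quoted scalings of Eqs. 20-21 for the parameter choices mentioned above (\(\epsilon_{r}=0.1\), \(\epsilon_{b}=0.01\), \(M_{\rm s,-3}=10\); other scalings set to unity); the values are illustrative and reproduce the order-of-magnitude estimates only.

```python
# Evaluate the quoted numerical forms of Eqs. 20-21 for fiducial parameters.
eps_r, eps_b = 0.1, 0.01
L_48 = Gamma_Edd = v_w_10 = 1.0
M_s_m3 = 10.0   # shell mass in units of 1e-3 Msun

N_shell = 6e14 * eps_r**-3 * Gamma_Edd**1.5 * L_48**-0.5 * M_s_m3            # Eq. 20 [cm^-2]
C_g = 50.0 * eps_b * eps_r**1.5 * L_48**0.75 * Gamma_Edd**-2.25 * v_w_10**2  # Eq. 21

print(f"N   ~ {N_shell:.1e} cm^-2   (observed columns ~1e19 cm^-2)")
print(f"C_g ~ {C_g:.2f}            (observed ~0.01; Chen et al. 2021)")
```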
A more realistic estimation awaits numerical simulations, which are beyond the scope of this work, and a more comprehensive comparison between model predictions and absorption signatures in the UV and X-ray range over the luminosity range that characterizes active galactic nuclei.

Footnote 9: We emphasize that the outflow scenario advocated here may operate in addition to other mechanisms that may lead to outflow phenomena from NSCs (Gohil & Ballantyne, 2018).

In the above model an expanding shell from an AGB star, which is driven along its sightline to the quasar, will dislocate with respect to its origin and eventually detach from its parent star (middle panel of Fig. 12). The shell radius at detachment, \(r_{d}\), is crudely given by \[r_{d}\sim 2\epsilon_{r}r_{b}\left(\frac{v_{w}}{v_{\infty}}\right)^{2}\sim 10^{15}\epsilon_{r}\Gamma_{\rm Edd}^{-1}\,{\rm cm}. \tag{22}\] For our adopted formalism to be consistent, we therefore require that \(r_{d}>R_{\rm AGB}\), the photospheric radius of a typical AGB star, which we take to be \(100R_{\odot}\) (Hofner & Olofsson, 2018). We next provide a first stab at mapping the quasar phase space where LL could occur according to this model. Motivated by the above considerations for the more relevant range of parameter values, we use \(\epsilon_{r}=0.03,\ \epsilon_{b}=0.02,\ v_{w,10}=2,\ M_{s,-3}=10,\) and \(U=1\), and consider the phase space spanned by the remaining parameters, namely \(L\) and \(\Gamma_{\rm Edd}\) (Eqs. 18-21). As for observational constraints, we require that \(\dot{M}<10^{-3}\,{\rm M}_{\odot}\ {\rm yr}^{-1}\) (set by local AGB physics), \(\delta v_{ll}/v_{\infty}<5\times 10^{-2}\) (set by Eq. 8; see Fig. 7), \(17<\log(N)<20\) (Hamann et al., 2011; Bowler et al., 2014), \(-2.5<\log(C_{g})<-0.5\) (Chen et al., 2021), and \(r_{w}\geq 10^{16}L_{48}^{1/2}\lambda_{1550}^{4/3}\) (full coverage of the UV-emitting disk). We also require that \(r_{d}>100R_{\odot}\) (see above). The allowed phase space is shown in Figure 13 and includes a substantial fraction of the Sloan Digital Sky Survey (SDSS) sources (Shen et al., 2011). We emphasize that the phase-space volume is sensitive to \(\epsilon_{r},\ \epsilon_{b},\ M_{s}\) and \(v_{w}\), all of which are rather uncertain, and some parameter combinations may void the model altogether. Generally, the model predicts that sources at the top range of the Eddington-rate distribution in a given luminosity bin are less likely to show LL since the covering fraction is low. In the bottom range of the Eddington-rate distribution, LL systems are characterized by low columns of gas, resulting in weaker absorbers, which may surface with high-resolution spectroscopic surveys, and a higher rate of occurrence per source due to the higher covering fractions implied. The source J 2123-005 formally lies outside the phase space predicted by the specific model considered here, which results from the assumed condition on the peak mass-loss rate from AGB stars. Objects in this range may show higher-ionization absorption. Lastly, fainter sources emitting at low Eddington rates are not expected to show LL systems unless carbon is highly over-abundant or the gas is dust-poor.

### 5.2 Observational tests of the theory and their implications

Our calculations indicate that the most probable configuration for LL is that of clouds with similar, but not strictly identical, properties, such as ionization and column density (and also gas composition). In particular, it is required that \(a_{2}\gtrsim a_{1}\) for LL to occur.
A challenge to this theory would be to find counter-examples, namely, that the faster (shielded) component has, for example, a higher column density and a higher ionization level than the low-velocity component, so that \(a_{2}<a_{1}\) (unless the system is at its coasting phase). The most likely phase space for LL by the C iv doublet to occur is for clouds with column densities of order \(10^{17}\,{\rm cm}^{-2}\). For clouds whose inner velocity dispersion is significantly above the thermal values, the optimal column density scales with the effective line width, at least so long as the continuum optical depth is smaller than unity. For the case of J 2123-005, a suprathermal line broadening of \(\sim 30\,{\rm km}\ {\rm s}^{-1}\) was found, which implies optimal columns for LL of \(N\lesssim 10^{18}\,{\rm cm}^{-2}\). This is within a factor of a few of the column-density estimates of Hamann et al. (2011). While higher-column clouds can experience LL, their properties need to be extremely fine-tuned, which imposes extremely tight constraints on the physical mechanism leading to cloud formation and controlling cloud stability over their acceleration timescales. Clouds with very different properties must occupy an extremely localized and fine-tuned range of the phase space to facilitate LL. Further, such line-locked cloud configurations can be easily disrupted by luminosity variations of the central source over dynamical timescales. Quantifying the ratios \(U_{1}/U_{2}\) and \(N_{1}/N_{2}\) for line-locked NALs, and comparing those to model predictions (e.g., Fig. 7), will shed light on the physics of such systems.

Figure 13: The allowed phase space for LL systems within the AGB-shell model. The colored fill patch is the allowed phase space of the model, and is determined by the observational properties of LL systems, and known circumstellar shells around (local) AGB stars (see text). Different boundaries delineating the allowed phase space are set by different conditions (see color-coding), with the relevant quantity and its gradient away from the colored region denoted next to each curve. The black symbol marks the phase-space position of J 2123-0050. We emphasize that different assumptions about \(\epsilon_{r},\ \epsilon_{b},\ M_{s}\) and \(v_{w}\) can substantially expand or shrink (or even void) the phase space available for LL. Gray points are quasars from the Shen et al. (2011) sample, demonstrating that many of them lie within the allowed phase-space of the particular model shown.

Our calculations indicate that for compositions of order the solar value with an ISM-like dust-to-metals mixture, the ratio between the radiation pressure force term giving rise to LL and the total radiation pressure force is of order a few per-cent, which by dynamical-time arguments implies that the expected \(dv_{ll}/v_{\infty}\) is of the same order. Finding systems for which the latter ratio is much higher - i.e., low-velocity systems experiencing LL - would imply that the gas composition may be significantly different than assumed, with the abundance of the element giving rise to LL being particularly enhanced. Indeed, Bowler et al. (2014) find evidence for LL in systems with \(dv_{ll}/v_{\infty}\gtrsim 0.17\), which might indicate an overabundance of carbon by an order of magnitude or more compared to the solar composition (see Karakas et al., 2022, who considered such models for AGB stars and noted the weak thermal pulses associated with them).
Alternatively, it could mean that some systems are relatively dust-poor so that the total radiation pressure force is lower - by roughly a factor of 10 for \(U=1\) and a column of \(10^{19}\,\mathrm{cm}^{-2}\) - and the relative contribution of the line-locked transitions to the total radiative acceleration is correspondingly higher, and hence larger \(dv_{ll}/v_{\infty}\) ratios may be reached, by roughly a factor of 3. Metal-poor massive stars are thought to have low dust-yields (Dell'Agli et al., 2019), which could explain the low terminal outflow velocities associated with some LL systems. It will be interesting to examine whether low-velocity LL systems show less reddening per their absorbing columns than high-velocity systems. The kinematic models employed here suggest that quasar variability over dynamical timescales can be disruptive for line-locked systems that have not yet reached their coasting phase. It would be interesting to check whether LL systems have a higher incidence in less variable sources, or to use dynamical arguments involving LL to constrain the structure function (power-spectrum) of quasars. Further, the \(\delta v\)-statistics of multiple quasar NALs could be used to check for a non-uniform distribution over velocity space, perhaps related to the predicted bifurcation patterns (Fig. 9). It is of considerable interest to search for LL at velocity separations corresponding to multiplets other than C iv \(\lambda\lambda 1548.19,1550.77\), and assess their probability, which could shed light on the kinematics of NAL systems. For example, finding ample LL systems at small velocity separations whose outflow velocities are moderate could indicate an evolutionary path by which NAL clouds hop from one line-locked position to the next as their properties change across their path, or due to quasar-flux variability. Conversely, not finding evidence for LL at small velocity separations could mean that clouds are formed with a finite velocity difference between them, ruling out, for example, Rayleigh-Taylor instability as the origin of multiple NAL systems. The proposed scenario in which circumstellar AGB envelopes may be the origin of LL systems should be further investigated by searching for commonalities between the metal and dust content of LL systems and those of shells around local AGB stars (having in mind the different redshift range probed in each case; Maraston et al., 2006), and specifically carbon-rich ones, which likely result from thermal pulses rather than from ejecta-ISM interaction. Further, calculation of the global covering factor of a population of circumstellar AGB shells, using realistic AGB population and evolution models, should be confronted with LL statistics. Importantly, numerical simulations must be performed to test for the stability of shells as they are accelerated through the dilute, perhaps pre-evacuated bulge medium in quasar hosts, and to realistically assess the probability of LL to occur along quasar sightlines (i.e., of \(a_{2}\gtrsim a_{1}\)). On the flip side, the study of LL systems can resolve small-scale phenomena in AGB ejecta via partial-covering effects of the quasar continuum source. This pencil-beam approach could shed light on the physics and composition of expanding AGB shells, and improve models for such objects in the local universe.
It may be interesting to check whether highly supersolar carbon abundances, which may be implied by the existence of systems with large values of \(dv_{ll}/v_{\infty}\) (Bowler et al., 2014, see above), may be reconciled with our understanding of local AGB ejecta. Lastly, the detection of multiple (\(>2\)) LL systems is a challenge to the simple AGB scenario outlined here, and it remains to be tested whether the spiral patterns of circumstellar material seen around local AGB stars due to binary interaction (Hofner & Olofsson, 2018) could provide a viable explanation for this phenomenon.

## 6 Summary

The emergence of line-locking (LL) of accelerating and outflowing NAL systems in quasars is studied by means of detailed photoionization and kinematic calculations. It is found that only a very small volume of the relevant phase space is conducive to LL, which appears to be at odds with recent findings for the relatively high occurrence of this phenomenon in multiple-component NALs. This implies a high degree of fine-tuning between the properties of apparently distinct absorption components, which sets stringent constraints on their formation scenarios. Motivated by available constraints on the ionization and thermal state of such systems, the conditions for LL are examined in detail over the relevant phase space, as well as the stability of such configurations against time variations of the quasar flux. We find that the properties of the line-locked NAL system in J 2123-005, which is perhaps the best-studied system of its kind, seem to be in agreement with the phase space optimal for LL due to the C iv \(\lambda\lambda 1548.19,1550.77\) doublet, after allowing for supra-thermal line-broadening. Further, the ratio of the LL velocity of \(\simeq 500\,\mathrm{km}\,\mathrm{s}^{-1}\) to the outflow velocity in this source is \(\sim 5\)%, and is qualitatively consistent with model predictions for the relative contribution of the C iv doublet transitions to the total radiative acceleration assuming a solar-like metal composition and dust-to-metals ratio. Nevertheless, for the clouds to develop a velocity difference leading to LL while being accelerated to their bulk outflow velocity requires extreme fine-tuning of their properties along their entire path, which occupies a negligibly small fraction of the phase-space volume.

The high degree of fine-tuning between the properties of LL NALs is surprising, and is inconsistent with most NAL formation scenarios, such as thermal instability, the mechanical compression and pushing of ISM clouds, velocity condensations or "attractors" due to the aggregated effect of LL, or radiation pressure confined clouds. The high degree of fine-tuning needs to be maintained over dynamical timescales of the flow, and is not merely viewed at the coasting phase, thus implying stable clouds whose properties do not vary dramatically over time, and certainly not independently of each other. This is difficult to materialize if non-radiative force terms (e.g., drag) are important since those require further tuning with respect to additional, independently varying physical processes. This suggests that line-locked NALs occur in extremely dilute environments, which may have been pre-evacuated by the quasar. This, however, has implications for the confinement of NAL systems that travel at highly (and slightly different) supersonic speeds, and yet retain their properties over dynamical times.
A scenario that associates line-locked systems with expanding circumstellar AGB shells in the quasar host is proposed, which naturally leads to finely tuned NAL properties, prevents LL at small velocity differences between the clouds, and is qualitatively consistent with the observational constraints for well-studied systems. Several predictions of the model are provided, and tests of the theory are outlined. If substantiated as a viable model for LL systems, it could provide a unique probe of _individual_ stellar phenomena in the hosts of quasars at high redshift, which can be used to shed light on their star-formation and metal-enrichment history, and on the properties of the ambient interstellar material in quasar hosts. Additionally, LL systems can be used to assess the mass-loss rate from individual AGB stars at epochs when the universe was much younger than today, and may provide a unique probe of the metallicity and gas-to-dust mixture in AGB ejecta before substantial mixing occurs. Additional numerical work is required to test the proposed scenario, which is beyond the scope of the present paper.

This research has been supported by grants from the Israeli Science Foundation, ISF (2398/19), and the German Research Foundation, DFG (CH71-34-3). TRL acknowledges additional support by the Zuckerman Foundation through a Zuckerman Postdoctoral Fellowship, as well as support by the NASA Postdoctoral Program. We thank an anonymous referee for constructive comments and suggestions. We are indebted to P. Goldreich and J. Everett for fruitful discussions in the early stages of this work, and thank M. Zeilig-Hess for helpful feedback. We thank G. Ferland and collaborators for creating and maintaining the cloudy photoionization code. Calculations were performed using high-performance computing facilities at the University of Haifa, which are funded in part by an ISF grant (2155/15).
2303.12606
Dynamic Partial Order Reduction for Checking Correctness against Transaction Isolation Levels
Modern applications, such as social networking systems and e-commerce platforms are centered around using large-scale databases for storing and retrieving data. Accesses to the database are typically enclosed in transactions that allow computations on shared data to be isolated from other concurrent computations and resilient to failures. Modern databases trade isolation for performance. The weaker the isolation level is, the more behaviors a database is allowed to exhibit and it is up to the developer to ensure that their application can tolerate those behaviors. In this work, we propose stateless model checking algorithms for studying correctness of such applications that rely on dynamic partial order reduction. These algorithms work for a number of widely-used weak isolation levels, including Read Committed, Causal Consistency, Snapshot Isolation, and Serializability. We show that they are complete, sound and optimal, and run with polynomial memory consumption in all cases. We report on an implementation of these algorithms in the context of Java Pathfinder applied to a number of challenging applications drawn from the literature of distributed systems and databases.
Ahmed Bouajjani, Constantin Enea, Enrique Román-Calvo
2023-03-22T14:45:04Z
http://arxiv.org/abs/2303.12606v3
# Dynamic Partial Order Reduction for Checking Correctness against Transaction Isolation Levels

###### Abstract.

Modern applications, such as social networking systems and e-commerce platforms are centered around using large-scale databases for storing and retrieving data. Accesses to the database are typically enclosed in transactions that allow computations on shared data to be isolated from other concurrent computations and resilient to failures. Modern databases trade isolation for performance. The weaker the isolation level is, the more behaviors a database is allowed to exhibit and it is up to the developer to ensure that their application can tolerate those behaviors. In this work, we propose stateless model checking algorithms for studying correctness of such applications that rely on dynamic partial order reduction. These algorithms work for a number of widely-used weak isolation levels, including Read Committed, Causal Consistency, Snapshot Isolation and Serializability. We show that they are complete, sound and optimal, and run with polynomial memory consumption in all cases. We report on an implementation of these algorithms in the context of Java Pathfinder applied to a number of challenging applications drawn from the literature of distributed systems and databases.

**Keywords:** Theory of computation Verification by model checking; Distributed computing models; Software and its engineering Formal software verification.

## 1 Introduction

The strongest isolation level, _serializability_, makes transactions appear to execute one after another, thereby exposing a single consistent version of the data to all clients at any point in time. However, serializability requires expensive synchronization and incurs a high performance cost. As a consequence, most storage systems use weaker isolation levels, such as _Causal Consistency_(Akkoorath and Bieniusa, 2016; Lamport, 1978; Lloyd et al., 2011), _Snapshot Isolation_(Berenson et al., 1995), _Read Committed_(Berenson et al., 1995), etc. for better performance. In a recent survey of database administrators (Pavlo, 2017), 86% of the participants responded that most or all of the transactions in their databases execute at Read Committed level. A weaker isolation level allows for more possible behaviors than stronger isolation levels. It is up to the developers then to ensure that their application can tolerate this larger set of behaviors. Unfortunately, weak isolation levels are hard to understand or reason about (Adya, 1999; Brutschy et al., 2017) and resulting application bugs can cause loss of business (Warszawski and Bailis, 2017).

**Model Checking Database-Backed Applications.** This paper addresses the problem of _model checking_ code for correctness against a given isolation level. _Model checking_(Clarke et al., 1983; Queille and Sifakis, 1982) explores the state space of a given program in a systematic manner and it provides high coverage of program behavior. However, it faces the infamous state explosion problem, i.e., the number of executions grows exponentially in the number of concurrent clients. _Partial order reduction_(POR) (Clarke et al., 1999; Godefroid, 1996; Peled, 1993; Valmari, 1989) is an approach that limits the number of explored executions without sacrificing coverage. POR relies on an equivalence relation between executions where e.g., two executions are equivalent if one can be obtained from the other by swapping consecutive independent (non-conflicting) execution steps. It guarantees that at least one execution from each equivalence class is explored.
_Optimal_ POR techniques explore exactly one execution from each equivalence class. Beyond this classic notion of optimality, POR techniques may aim for optimality by avoiding visiting states from which the exploration is blocked. _Dynamic_ partial order reduction (DPOR) (Flanagan and Godefroid, 2005) has been introduced to explore the execution space (and tracking the equivalence relation between executions) on-the-fly without relying on a-priori static analyses. This is typically coupled with _stateless_ model checking (SMC) (Godefroid, 1997) which explores executions of a program without storing visited states, thereby, avoiding excessive memory consumption. There is a large body of work on (D)POR techniques that address their soundness when checking a certain class of specifications for a certain class of programs, as well as their completeness and their theoretical optimality (see Section 8). Most often these works consider shared memory concurrent programs executing under a strongly consistent memory model. In the last few years, some works have studied DPOR in the case of shared memory programs running under weak memory models such as TSO or Release-Acquire, e.g. (Abdulla et al., 2017, 2016, 2018; Kokologiannakis et al., 2019). While these algorithms are sound and complete, they have exponential space complexity when they are optimal. More recently, Kokologiannakis et al. (2022) defined a DPOR algorithm that has a polynomial space complexity, in addition of being sound, complete and optimal. This algorithm can be applied for a range of shared memory models. While the works mentioned above concern shared memory programs, we are not aware of any published work addressing the case of database transactional programs running under weak isolation levels. In this paper, we address this case and propose new stateless model checking algorithms relying on DPOR techniques for database-backed applications. We assume that all the transactions in an application execute under the _same_ isolation level, which happens quite frequently in practice (as mentioned above, most database applications are run on the default isolation level of the database). Our work generalizes the approach introduced by (Kokologiannakis et al., 2022). However, this generalization to the transactional case, covering the most relevant isolation levels, is not a straightforward adaptation of [Kokologiannakis et al., 2022]. Ensuring optimality while preserving the other properties, e.g., completeness and polynomial memory complexity, is very challenging. Next, we explain the main steps and features of our work. **Formalizing Isolation Levels.** Our algorithms rely on the axiomatic definitions of isolation levels introduced by Biswas and Enea (2019). These definitions use logical constraints called _axioms_ to characterize the set of executions of a database (e.g., key-value store) that conform to a particular isolation level (extensible to SQL queries [Biswas et al., 2021]). These constraints refer to a specific set of relations between events/transactions in an execution that describe control-flow or data-flow dependencies: a program order po between events in the same transaction, a session order so between transactions in the same session1, and a write-read wr (read-from) relation that associates each read event with a transaction that writes the value returned by the read. These relations along with the events in an execution are called a _history_. 
A history describes only the interaction with the database, omitting application-side events (e.g., computing values written to the database).

Footnote 1: A session is a sequential interface to the storage system. It corresponds to what is also called a _connection_.

**Execution Equivalence.** DPOR algorithms are parametrized by an equivalence relation on executions, most often, Mazurkiewicz equivalence [Mazurkiewicz, 1986]. In this work, we consider a weaker equivalence relation, also known as _read-from equivalence_[Abdulla et al., 2019, 2018; Chalupa et al., 2018; Kokologiannakis et al., 2022, 2019; Kokologiannakis and Vafeiadis, 2020], which considers that two executions are equivalent when their histories are precisely the same (they contain the same set of events, and the relations po, so, and wr are the same). In general, reads-from equivalence is coarser than Mazurkiewicz equivalence, and its equivalence classes can be exponentially smaller than Mazurkiewicz traces in certain cases [Chalupa et al., 2018].

**SMC Algorithms.** Our SMC algorithms enumerate executions of a given program under a given isolation level \(I\). They are _sound_, i.e., enumerate only _feasible_ executions (admitted by the program under \(I\)), _complete_, i.e., they output a representative of each read-from equivalence class, and _optimal_, i.e., they output _exactly one_ complete execution from each read-from equivalence class. For isolation levels weaker than and including Causal Consistency, they satisfy a notion of _strong optimality_ which says that additionally, the enumeration avoids states from which the execution is "blocked", i.e., it cannot be extended to a complete execution of the program. For Snapshot Isolation and Serializability, we show that _there exists_ no algorithm in the same class (to be discussed below) that can ensure such a strong notion of optimality. All the algorithms that we propose are polynomial space, as opposed to many DPOR algorithms introduced in the literature. As a starting point, we define a generic class of SMC algorithms, called _swapping based_, generalizing the approach adopted by [Kokologiannakis et al., 2022, 2019], which enumerate histories of program executions. These algorithms focus on the interaction with the database assuming that the other steps in a transaction concern local variables visible only within the scope of the enclosing session. Executions are extended according to a generic scheduler function Next and every read event produces several exploration branches, one for every write executed in the past that it can read from. Events in an execution can be swapped to produce new exploration "roots" that lead to different histories. Swapping events is required for completeness, to enumerate histories where a read \(r\) reads from a write \(w\) that is scheduled by Next after \(r\). To ensure soundness, we restrict the definition of swapping so that it produces a history that is feasible by construction (extending an execution which is possibly infeasible may violate soundness). Such an algorithm is optimal w.r.t. the read-from equivalence when it enumerates each history exactly once. We define a concrete algorithm in this class that, in particular, satisfies the stronger notion of optimality mentioned above for every isolation level \(I\) which is _prefix-closed_ and _causally-extensible_, e.g., _Read Committed_ and _Causal Consistency_.
Prefix-closure means that every prefix of an \(I\)-consistent history is also \(I\)-consistent.

Figure 1. Program syntax (excerpt): \(x\in\mathsf{Vars}\quad a\in\mathsf{LVars}\quad\mathsf{Prog}:=\mathsf{Sess}\ \ |\ \ \mathsf{Sess}\ \|\ \mathsf{Prog}\)

Each transaction starts with a begin instruction and ends with commit or abort instructions, and its body contains instructions that access the database and manipulate a set \(\mathsf{LVars}\) of local variables. We use symbols \(a\), \(b\), etc. to denote elements of \(\mathsf{LVars}\). For simplicity, we abstract the database state as a valuation to a set \(\mathsf{Vars}\) of _global_ variables2, ranged over using \(x\), \(y\), etc. The instructions accessing the database correspond to reading the value of a global variable and storing it into a local variable \(a\) (\(a\coloneqq\mathsf{read}(x)\)), writing the value of a local variable \(a\) to a global variable \(x\) (\(\mathsf{write}(x,a)\)), or an assignment to a local variable \(a\) (\(a\coloneqq e\)). The set of values of global or local variables is denoted by \(\mathsf{Vals}\). Assignments to local variables use expressions \(e\) over local variables, which are interpreted as values and whose syntax is left unspecified. Each of these instructions can be guarded by a Boolean condition \(\phi(\vec{a})\) over a set of local variables \(\vec{a}\) (their syntax is not important). Our results assume bounded programs, as usual in SMC algorithms, and therefore, we omit other constructs like while loops. SQL statements (SELECT, JOIN, UPDATE) manipulating relational tables can be compiled to reads or writes of variables representing rows in a table (see for instance, (Biswas et al., 2021; Rahmani et al., 2019)).

Footnote 2: In the context of a relational database, global variables correspond to fields/rows of a table while in the context of a key-value store, they correspond to keys.

### 2.2 Isolation Levels

We present the axiomatic framework introduced by Biswas and Enea (2019) for defining isolation levels. Isolation levels are defined as logical constraints, called _axioms_, over _histories_, which are an abstract representation of the interaction between a program and the database in an execution.

#### 2.2.1. Histories

Programs interact with a database by issuing transactions formed of \(\mathsf{begin}\), commit, abort, read and write instructions. The effect of executing one such instruction is represented using an _event_ \(\langle e,type\rangle\) where \(e\) is an _identifier_ and _type_ is a _type_. There are five types of events: \(\mathsf{begin}\), commit, abort, \(\mathsf{read}(x)\) for reading the global variable \(x\), and \(\mathsf{write}(x,v)\) for writing value \(v\) to \(x\). \(\mathcal{E}\) denotes the set of events. For a read/write event \(e\), we use \(\mathit{var}(e)\) to denote the variable \(x\). A _transaction log_ \(\langle t,E,\mathsf{po}_{t}\rangle\) is an identifier \(t\) and a finite set of events \(E\) along with a strict total order \(\mathsf{po}_{t}\) on \(E\), called _program order_ (representing the order between instructions in the body of a transaction). The minimal element of \(\mathsf{po}_{t}\) is a \(\mathsf{begin}\) event. A transaction log with neither a commit nor an abort event is called _pending_. Otherwise, it is called _complete_. A complete transaction log with a commit event is called _committed_ and _aborted_ otherwise. If a commit or an abort event occurs, then it is maximal in \(\mathsf{po}_{t}\); commit and abort cannot occur in the same log. The set \(E\) of events in a transaction log \(t\) is denoted by \(\mathsf{events}(t)\).
Note that a transaction is aborted because it executed an abort instruction. Histories do not include transactions aborted by the database because their effect should not be visible to other transactions and the abort is not under the control of the program. For simplicity, we may use the term _transaction_ instead of transaction log. Isolation levels differ in the values returned by read events which are not preceded by a write on the same variable in the same transaction. We assume in the following that every transaction in a program is executed under the same isolation level. For every isolation level that we are aware of, if a read of a global variable \(x\) is preceded by a write to \(x\) in \(\mathsf{po}_{t}\), then it should return the value written by the last write to \(x\) before the read (w.r.t. \(\mathsf{po}_{t}\)). The set of \(\mathsf{read}(x)\) events in a transaction log \(t\) that are _not_ preceded by a write to \(x\) in \(\mathsf{po}_{t}\), for some \(x\), is denoted by \(\mathsf{reads}(t)\). Also, if \(t\) does _not_ contain an abort event, the set of \(\mathsf{write}(x,\_)\) events in \(t\) that are _not_ followed by other writes to \(x\) in \(\mathsf{po}_{t}\), for some \(x\), is denoted by \(\mathsf{writes}(t)\). If a transaction contains multiple writes to the same variable, then only the last one (w.r.t. \(\mathsf{po}_{t}\)) can be visible to other transactions (w.r.t. any isolation level that we are aware of). If \(t\) contains an abort event, then we define \(\mathsf{writes}(t)\) to be the empty set. This is because the effect of aborted transactions (their set of writes) should not be visible to other transactions. The extension to sets of transaction logs is defined as usual. Also, we say that a transaction log \(t\) _writes_ \(x\), denoted by \(t\) writes \(x\), when writes\((t)\) contains some write\((x,\_)\) event. A _history_ contains a set of transaction logs (with distinct identifiers) ordered by a (partial) _session order_ \(\mathsf{so}\) that represents the order between transactions in the same session. It also includes a _write-read_ relation (also called read-from) that defines read values by associating each read to a transaction that wrote that value. Read events do _not_ contain a value, and their return value is defined as the value written by the transaction associated by the write-read relation. Let \(T\) be a set of transaction logs. For a write-read relation \(\mathsf{wr}\subseteq\mathsf{writes}(T)\times\mathsf{reads}(T)\) and variable \(x\), \(\mathsf{wr}_{x}\) is the restriction of \(\mathsf{wr}\) to reads of \(x\), \(\mathsf{wr}_{x}=\mathsf{wr}\cap(\mathsf{writes}(T)\times\{e\mid e\text{ is a read}(x)\text{ event}\})\). We extend the relations \(\mathsf{wr}\) and \(\mathsf{wr}_{x}\) to pairs of transactions by \(\langle t_{1},t_{2}\rangle\in\mathsf{wr}\), resp., \(\langle t_{1},t_{2}\rangle\in\mathsf{wr}_{x}\), iff there exists a write\((x,\_)\) event \(w\) in \(t_{1}\) and a read\((x)\) event \(r\) in \(t_{2}\) s.t. \(\langle w,r\rangle\in\mathsf{wr}\), resp., \(\langle w,r\rangle\in\mathsf{wr}_{x}\). Analogously, \(\mathsf{wr}\) and \(\mathsf{wr}_{x}\) can be extended to tuples formed of a transaction (containing a write) and a read event. We say that the transaction log \(t_{1}\) is _read_ by the transaction log \(t_{2}\) when \(\langle t_{1},t_{2}\rangle\in\mathsf{wr}\).
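To fix intuitions, the following is a small illustrative encoding of the notions introduced above (events, transaction logs with their program order, \(\mathsf{reads}(t)\), \(\mathsf{writes}(t)\), and the lifting of \(\mathsf{wr}\) to transactions). It is only a sketch with names of our choosing; it is not the implementation used in the paper (which targets Java Pathfinder).

```python
from dataclasses import dataclass, field
from typing import Optional, List, Set, Tuple

@dataclass(frozen=True)
class Event:
    eid: int
    etype: str                  # 'begin', 'commit', 'abort', 'read', 'write'
    var: Optional[str] = None   # var(e) for read/write events

@dataclass
class TransactionLog:
    tid: str
    events: List[Event] = field(default_factory=list)  # listed in po_t order

    def aborted(self) -> bool:
        return any(e.etype == 'abort' for e in self.events)

    def reads(self) -> List[Event]:
        """read(x) events not preceded (in po_t) by a write to x."""
        written, out = set(), []
        for e in self.events:
            if e.etype == 'read' and e.var not in written:
                out.append(e)
            elif e.etype == 'write':
                written.add(e.var)
        return out

    def writes(self) -> List[Event]:
        """Last write to each variable; empty for aborted transactions."""
        if self.aborted():
            return []
        last = {}
        for e in self.events:
            if e.etype == 'write':
                last[e.var] = e
        return list(last.values())

def wr_on_transactions(wr: Set[Tuple[Event, Event]],
                       tr_of: dict) -> Set[Tuple[str, str]]:
    """Lift wr (pairs of write/read events) to pairs of transaction ids."""
    return {(tr_of[w].tid, tr_of[r].tid) for (w, r) in wr}
```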
Definition 2.1 ().: A _history_\(\langle T,\mathsf{so},\mathsf{wr}\rangle\) is a set of transaction logs \(T\) along with a strict partial _session order_ so, and a _write-read_ relation \(\mathsf{wr}\subseteq\mathsf{writes}(T)\times\mathsf{reads}(T)\) such that * the inverse of \(\mathsf{wr}\) is a total function, * if \((w,r)\in\mathsf{wr}\), then \(w\) and \(r\) are a write and respectively, a read, of the same variable, and * so \(\cup\mathsf{wr}\) is acyclic (here we use the extension of \(\mathsf{wr}\) to pairs of transactions). Every history includes a distinguished transaction writing the initial values of all global variables. This transaction precedes all the other transactions in so. We use \(h\), \(h_{1}\), \(h_{2}\), \(\ldots\) to range over histories. The set of transaction logs \(T\) in a history \(h=\langle T,\mathsf{so},\mathsf{wr}\rangle\) is denoted by \(\mathsf{tr}(h)\), and \(\mathsf{events}(h)\) is the union of \(\mathsf{events}(t)\) for \(t\in T\). For a history \(h\) and an event \(e\) in \(h\), \(\mathsf{tr}(h,e)\) is the transaction \(t\) in \(h\) that contains \(e\). Also, writes\((h)=\bigcup_{t\in\mathsf{tr}(h)}\) writes\((t)\) and reads\((h)=\bigcup_{t\in\mathsf{tr}(h)}\mathsf{reads}(t)\). We extend so to pairs of events by \((e_{1},e_{2})\in\mathsf{so}\) if \((\mathsf{tr}(h,e_{1}),\mathsf{tr}(h,e_{2}))\in\mathsf{so}\). Also, \(\mathsf{po}=\bigcup_{t\in T}\mathsf{po}_{t}\). #### 2.2.2. Axiomatic Framework A history satisfies a certain isolation level if there is a strict total order co on its transactions, called _commit order_, which extends the write-read relation and the session order, and which satisfies certain properties. These properties, called _axioms_, relate the commit order with the so and \(\mathsf{wr}\) relations in a history and are defined as first-order formulas of the form: Figure 2. Axioms defining isolations levels (all logical variables representing transactions, e.g., \(t_{1}\), are universally quantified). The reflexive and transitive, resp., transitive, closure of a relation \(rel\) is denoted by \(rel^{*}\), resp., \(rel^{+}\). Also, \(\circ\) denotes the composition of two relations, i.e., \(rel_{1}\circ rel_{2}=\{\langle a,b\rangle|\exists c.\langle a,c\rangle\in rel_{ 1}\wedge\langle c,b\rangle\in rel_{2}\}\). \[\forall x,\ \forall t_{1}\neq t_{2},\ \forall t_{3}.\] \[\langle t_{1},t_{3}\rangle\in\mathsf{wr}r_{x}\wedge t_{2}\ \text{writes}\ x\wedge\phi(t_{2},t_{3})\Rightarrow\langle t_{2},t_{1}\rangle \in\mathtt{co} \tag{1}\] where \(\phi\) is a property relating \(t_{2}\) and \(\tau\) (i.e., the read or the transaction reading from \(t_{1}\)) that varies from one axiom to another.3 Note that an aborted transaction \(t\) cannot take the role of \(t_{1}\) nor \(t_{2}\) in equation 1 as the set \(\mathsf{writes}(t)\) is empty. Intuitively, this axiom schema states the following: in order for \(\tau\) to read specifically \(t_{1}\)'s write on \(k\), it must be the case that every \(t_{2}\) that also writes \(k\) and satisfies \(\phi(t_{2},\tau)\) was committed before \(t_{1}\). The property \(\phi\) relates \(t_{2}\) and \(\tau\) using the relations in a history and the commit order. Figure 2 shows two axioms which correspond to their homonymous isolation levels: _Causal Consistency_ (CC) and _Serializability_ (SER). The conjunction of the other two axioms Conflict and Prefix defines _Snapshot Isolation_ (SI). 
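For small histories, the axiom schema (1) and the existential quantification over commit orders (formalized in Definition 2.2 below) can be checked directly. The sketch below encodes so, wr and co as sets of transaction-id pairs and is meant only as an executable reading of the axioms, not as an efficient checker; the helper names are our own.

```python
from itertools import permutations, product

def tclosure(rel):
    """Transitive closure rel^+ of a set of (tid, tid) pairs (naive fixpoint)."""
    rel, changed = set(rel), True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(rel), list(rel)):
            if b == c and (a, d) not in rel:
                rel.add((a, d)); changed = True
    return rel

def axiom_holds(phi, wr_x, writers, co):
    """Schema (1): (t1,t3) in wr_x, t2 != t1 writes x, phi(t2,t3)  =>  (t2,t1) in co."""
    return all((t2, t1) in co
               for x, pairs in wr_x.items()
               for (t1, t3) in pairs
               for t2 in writers.get(x, set()) - {t1}
               if phi(t2, t3))

def cc_holds(wr_x, writers, so, wr, co):
    causal = tclosure(so | wr)                      # (so U wr)^+
    return axiom_holds(lambda t2, t3: (t2, t3) in causal, wr_x, writers, co)

def ser_holds(wr_x, writers, so, wr, co):
    return axiom_holds(lambda t2, t3: (t2, t3) in co, wr_x, writers, co)

def satisfies(trans, so, wr, wr_x, writers, axiom):
    """Exists a total commit order co with so U wr included in co satisfying the axiom."""
    for perm in permutations(trans):
        pos = {t: i for i, t in enumerate(perm)}
        co = {(a, b) for a in perm for b in perm if pos[a] < pos[b]}
        if (so | wr) <= co and axiom(wr_x, writers, so, wr, co):
            return True
    return False
```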
_Read Atomic_ (RA) is a weakening of CC where \((\mathtt{so}\cup\mathsf{wr})^{+}\) is replaced with \(\mathtt{so}\cup\mathsf{wr}\). _Read Committed_ (RC) is defined similarly. Note that SER is stronger than SI (i.e., every history satisfying SER satisfies SI as well), SI is stronger than CC, CC is stronger than RA, and RA is stronger than RC. Footnote 3: These formulas are interpreted on tuples \(\langle h,\mathtt{co}\rangle\) of a history \(h\) and a commit order \(\mathtt{co}\) on the transactions in \(h\) as usual. For instance, the axiom defining Causal Consistency [10] states that for any transaction \(t_{1}\) writing a variable \(x\) that is read in a transaction \(t_{3}\), the set of \((\mathsf{wr}\cup\mathtt{so})^{+}\) predecessors of \(t_{3}\) writing \(x\) must precede \(t_{1}\) in commit order (\((\mathsf{wr}\cup\mathtt{so})^{+}\) is usually called the _causal_ order). A violation of this axiom can be found in Figure 3: the transaction \(t_{2}\) writing \(2\) to \(x\) is a \((\mathsf{wr}\cup\mathtt{so})^{+}\) predecessor of the transaction \(t_{3}\) reading \(1\) from \(x\) because the transaction \(t_{4}\), writing \(1\) to \(y\), reads \(x\) from \(t_{2}\) and \(t_{3}\) reads \(y\) from \(t_{4}\). This implies that \(t_{2}\) should precede in commit order the transaction \(t_{1}\) writing \(1\) to \(x\), which is inconsistent with the write-read relation (\(t_{2}\) reads from \(t_{1}\)). The Serializability axiom requires that for any transaction \(t_{1}\) writing to a variable \(x\) that is read in a transaction \(t_{3}\), the set of \(\mathtt{co}\) predecessors of \(t_{3}\) writing \(x\) must precede \(t_{1}\) in commit order. This ensures that each transaction observes the effects of all the \(\mathtt{co}\) predecessors. Definition 2.2 ().: For an isolation level \(I\) defined by a set of axioms \(X\), a history \(h=\langle T,\mathtt{so},\mathsf{wr}\rangle\)_satisfies \(I\)_ iff there is a strict total order \(\mathtt{co}\) s.t. \(\mathsf{wr}\cup\mathtt{so}\subseteq\mathtt{co}\) and \(\langle h,\mathtt{co}\rangle\) satisfies \(X\). A history that satisfies an isolation level \(I\) is called \(I\)-consistent. For two isolation levels \(I_{1}\) and \(I_{2}\), \(I_{1}\) is _weaker than \(I_{2}\)_ when every \(I_{1}\)-consistent history is also \(I_{2}\)-consistent. ### Program Semantics We define a small-step operational semantics for transactional programs, which is parametrized by an isolation level \(I\). The semantics keeps a history of previously executed database accesses in order to maintain consistency with \(I\). For readability, we define a program as a partial function \(\mathsf{P}:\mathsf{SessId}\rightharpoonup\mathsf{Sess}\) that associates session identifiers in \(\mathsf{SessId}\) with concrete code as defined in Figure 1 (i.e., sequences of transactions). Similarly, the session order \(\mathtt{so}\) in a history is defined as a partial function \(\mathtt{so}:\mathsf{SessId}\rightharpoonup\mathsf{Tlogs}^{*}\) that associates session identifiers with sequences of transaction logs. Two transaction logs are ordered by \(\mathtt{so}\) if one occurs before the other in some sequence \(\mathtt{so}(j)\) with \(j\in\mathsf{SessId}\). The operational semantics is defined as a transition relation \(\Rightarrow_{I}\) between _configurations_, which are defined as tuples containing the following: * history \(h\) storing the events generated by database accesses executed in the past, Figure 3. Causal Consistency violation. Boxes group events from the same transaction. 
* a valuation map \(\vec{\gamma}\) that records local variable values in the current transaction of each session (\(\vec{\gamma}\) associates identifiers of sessions with valuations of local variables), * a map \(\vec{B}\) that stores the code of each live transaction (mapping session identifiers to code), * sessions/transactions \(\mathsf{P}\) that remain to be executed from the original program. The relation \(\Rightarrow_{I}\) is defined using a set of rules as expected. Starting a new transaction in a session \(j\) is enabled as long as this session has no live transactions (\(\vec{\mathsf{B}}(j)=\epsilon\)) and results in adding a transaction log with a single begin event to the history and scheduling the body of the transaction (adding it to \(\vec{\mathsf{B}}(j)\)). Local steps, i.e., checking a Boolean condition or computation with local variables, use the local variable valuations and advance the code as expected. Read instructions of some global variable \(x\) can have two possible behaviors: (1) if the read follows a write on \(x\) in the same transaction, then it returns the value written by the last write on \(x\) in that transaction, and (2) otherwise, the read reads from another transaction \(t^{\prime}\) which is chosen non-deterministically as long as extending the current history with the write-read dependency associated to this choice leads to a history that still satisfies \(I\). Depending on the isolation level, there may not exist a transaction \(t^{\prime}\) the read can read from. For other instructions, e.g., commit and abort, the history is simply extended with the corresponding events while ending the transaction execution in the case of abort. An _initial_ configuration for program \(\mathsf{P}\) contains the program \(\mathsf{P}\), a history \(h=\langle\{t_{0}\},\emptyset,\emptyset\rangle\) where \(t_{0}\) is a transaction log containing writes that write the initial value for all variables, and empty current transaction code (\(\mathsf{B}=\epsilon\)). An execution of a program \(\mathsf{P}\) under an isolation level \(I\) is a sequence of configurations \(c_{0}c_{1}\ldots c_{n}\) where \(c_{0}\) is an initial configuration for \(\mathsf{P}\), and \(c_{m}\Rightarrow_{I}c_{m+1}\), for every \(0\leq m<n\). We say that \(c_{n}\) is \(I\)_-reachable_ from \(c_{0}\). The history of such an execution is the history \(h\) in the last configuration \(c_{n}\). A configuration is called _final_ if it contains the empty program (\(\mathsf{P}=\emptyset\)). Let \(\operatorname{hist}_{I}(\mathsf{P})\) denote the set of all histories of an execution of \(\mathsf{P}\) under \(I\) that ends in a final configuration. ## 3. Prefix-Closed and Causally-Extensible Isolation Levels We define two properties of isolation levels, prefix-closure and causal extensibility, which enable efficient DPOR algorithms (as shown in Section 5). ### Prefix Closure For a relation \(R\subseteq A\times A\), the restriction of \(R\) to \(A^{\prime}\times A^{\prime}\), denoted by \(R\downarrow A^{\prime}\times A^{\prime}\), is defined by \(\{(a,b):(a,b)\in R,a,b\in A^{\prime}\}\). Also, a set \(A^{\prime}\) is called \(R\)-downward closed when it contains \(a\in A\) every time it contains some \(b\in A\) with \((a,b)\in R\). A _prefix_ of a transaction log \(\langle t,E,\mathsf{po}_{t}\rangle\) is a transaction log \(\langle t,E^{\prime},\mathsf{po}_{t}\downarrow E^{\prime}\times E^{\prime}\rangle\) such that \(E^{\prime}\) is \(\mathsf{po}_{t^{\prime}}\)-downward closed. 
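Downward closure and transaction-log prefixes are simple to compute when \(\mathsf{po}_{t}\) is a total order; a minimal sketch with the same list encoding as before (our own, for illustration):

```python
def is_downward_closed(subset, order):
    """subset is order-downward closed: it contains a whenever it contains some b with (a, b) in order."""
    return all(a in subset for (a, b) in order if b in subset)

def log_prefixes(events):
    """All prefixes of a transaction log; with a total po_t (the list order) these are the list prefixes."""
    return [events[:k] for k in range(len(events) + 1)]

t = [('begin',), ('read', 'x'), ('write', 'y', 1), ('commit',)]
assert log_prefixes(t)[2] == [('begin',), ('read', 'x')]
```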
A _prefix_ of a history \(h=\langle T,\mathsf{so},\mathsf{wr}\rangle\) is a history \(h^{\prime}=\langle T^{\prime},\mathsf{so}\downarrow T^{\prime}\times T^{ \prime},\mathsf{wr}\downarrow T^{\prime}\times T^{\prime}\rangle\) such that every transaction log in \(T^{\prime}\) is a prefix of a different transaction log in \(T\) but carrying the same id, \(\mathsf{events}(h^{\prime})\subseteq\mathsf{events}(h)\), and \(\mathsf{events}(h^{\prime})\) is \((\mathsf{po}\cup\mathsf{so}\cup\mathsf{wr})^{*}\)-downward closed. For example, Figure 4. Explaining the notion of prefix of a history. **init** denotes the transaction log writing initial values. Boxes group events from the same transaction. the history pictured in Fig. 3(b) is a prefix of the one in Fig. 3(a) while the history in Fig. 3(c) is not. The transactions on the bottom of Fig. 3(c) have a wr predecessor in Fig. 3(a) which is not included. Definition 3.1 ().: An isolation level \(I\) is called _prefix-closed_ when every prefix of an \(I\)-consistent history is also \(I\)-consistent. Every isolation level \(I\) discussed above is prefix-closed because if a history \(h\) is \(I\)-consistent with a commit order co, then the restriction of co to the transactions that occur in a prefix \(h^{\prime}\) of \(h\) satisfies the corresponding axiom(s) when interpreted over \(h^{\prime}\). Theorem 3.2 ().: _Read Committed, Read Atomic, Causal Consistency, Snapshot Isolation, and Serializability are prefix closed._ ### Causal Extensibility We start with an example to explain causal extensibility. Let us consider the histories \(h_{1}\) and \(h_{2}\) in Figures 4(a) and 4(b), respectively, _without_ the events \(\mathsf{read}(y)\) and \(\mathsf{write}(y,2)\) written in blue bold font. These histories satisfy Read Atomic. The history \(h_{1}\) can be extended by adding the event \(\mathsf{read}(y)\) and the wr dependency \(\mathsf{wr}(\mathsf{init},\mathsf{read}(y))\) while still satisfying Read Atomic. On the other hand, the history \(h_{2}\)_can not_ be extended with the event \(\mathsf{write}(y,2)\) while still satisfying Read Atomic. Intuitively, if the reading transaction on the bottom reads \(x\) from the transaction on the right, then it should read \(y\) from the same transaction because this is more "recent" than \(\mathsf{init}\) w.r.t. session order. The essential difference between these two extensions is that the first concerns a transaction which is maximal in \((\mathsf{so}\cup\mathsf{wr})^{+}\) while the second no. The extension of \(h_{2}\) concerns the transaction on the right in Figure 4(b) which is a wr predecessor of the reading transaction. Causal extensibility will require that at least the \((\mathsf{so}\cup\mathsf{wr})^{+}\) maximal (pending) transactions can always be extended with any event while still preserving consistency. The restriction to \((\mathsf{so}\cup\mathsf{wr})^{+}\) maximal transactions is intuitively related to the fact that transactions should not read from non-committed (pending) transactions, e.g., the reading transaction in \(h_{2}\) should not read from the still pending transaction that writes \(x\) and later \(y\). Formally, let \(h=\langle T,\mathsf{so},\mathsf{wr}\rangle\) be a history. A transaction \(t\) is called \((\mathsf{so}\cup\mathsf{wr})^{+}\)-maximal in \(h\) if \(h\) does not contain any transaction \(t^{\prime}\) such that \((t,t^{\prime})\in(\mathsf{so}\cup\mathsf{wr})^{+}\). 
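Causal maximality, and the admissible wr sources when extending a maximal pending transaction with a read event (formalized as causal extensions below), can be computed from the (so ∪ wr)+ relation. The history encoding in this sketch, with so and wr as sets of transaction-id pairs and transaction logs as event lists, is our own simplification.

```python
def causal(so, wr):
    """(so U wr)^+ as a set of (tid, tid) pairs (naive transitive closure)."""
    rel, changed = set(so) | set(wr), True
    while changed:
        changed = False
        for (a, b) in list(rel):
            for (c, d) in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d)); changed = True
    return rel

def causally_maximal(t, trans, causal_rel):
    """t is (so U wr)^+-maximal: no transaction t2 with (t, t2) in the causal relation."""
    return not any((t, t2) in causal_rel for t2 in trans if t2 != t)

def causal_read_sources(logs, t, var, causal_rel):
    """Transactions from whose writes a new read(var) of pending transaction t may read:
    causal predecessors of t that write var. If t already writes var, the value is fixed
    by t's own latest write and no wr dependency is added."""
    if any(e[0] == 'write' and e[1] == var for e in logs[t]):
        return []
    return [src for src, evs in logs.items()
            if src != t and (src, t) in causal_rel
            and any(e[0] == 'write' and e[1] == var for e in evs)]
```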
We define a _causal extension_ of a pending transaction \(t\) in \(h\) with an event \(e\) as a history \(h^{\prime}\) such that: * \(e\) is added to \(t\) as a maximal element of \(\mathsf{po}_{t}\), * if \(e\) is a read event and \(t\)_does not_ contain a write to \(\mathit{var}(e)\), then \(\mathsf{wr}\) is extended with some tuple \((t^{\prime},e)\) such that \((t^{\prime},t)\in(\mathsf{so}\cup\mathsf{wr})^{+}\) in \(h\) (if \(e\) is a read event and \(t\)_does_ contain a write to \(\mathsf{var}(e)\), then the value returned by \(e\) is the value written by the latest write on \(\mathsf{var}(e)\) before \(e\) in \(t\); the definition of the return value in this case is unique and does not involve wr dependencies), * the other elements of \(h\) remain unchanged in \(h^{\prime}\). Figure 5. Explaining causal extensibility. \(\mathsf{init}\) denotes the transaction log writing initial values. Boxes group events from the same transaction. For example, Figure 6(b) and 6(c) present two causal extensions with a read(\(x\)) event of the transaction \(t_{4}\) in the history \(h\) in Figure 6(a). The new read event reads from transaction \(t_{1}\) or \(t_{3}\) which were already related by \((\texttt{so}\cup\texttt{wr})^{+}\) to \(t_{4}\). An extension of \(h\) where the new read event reads from \(t_{2}\) is _not_ a causal extension because \((t_{2},t_{4})\notin(\texttt{so}\cup\texttt{wr})^{+}\). Definition 3.3 ().: An isolation level \(I\) is called _causally-extensible_ if for every \(I\)-consistent history \(h\), every \((\texttt{so}\cup\texttt{wr})^{+}\)-maximal pending transaction \(t\) in \(h\), and every event \(e\), there exists a causal extension \(h^{\prime}\) of \(t\) with \(e\) that is \(I\)-consistent. Theorem 3.4 ().: _Causal Consistency, Read Atomic, and Read Committed are causally-extensible._ Snapshot Isolation and Serializability are _not_ causally extensible. Figure 6 presents a counter-example to causal extensibility: the causal extension of the history \(h\) that does _not_ contain the write(\(x,2\)) written in blue bold font with this event does not satisfy neither Snapshot Isolation nor Serializability although \(h\) does. Note that the causal extension with a write event is unique. (Note that both \(h\) and this causal extension satisfy Causal Consistency and therefore, as expected, this counter-example does not apply to isolation levels weaker than Causal Consistency.) ## 4. Swapping-based model checking algorithms We define a class of stateless model checking algorithms for enumerating executions of a given transactional program, that we call _swapping-based algorithms_. Section 5 will describe a concrete instance that applies to isolation levels that are prefix-closed and causally extensible. These algorithms are defined by the recursive function explore listed in Algorithm 1. The function explore receives as input a program \(\mathsf{P}\), an _ordered history_\(h_{<}\), which is a pair \((h,<)\) of a history and a total order \(<\) on all the events in \(h\), and a mapping locals that associates each event \(e\) in \(h\) with the valuation of local variables in the transaction of \(e\) (\(\texttt{tr}(h,e)\)) just before executing \(e\). For an ordered history \((h,<)\) with \(h=\langle T,\texttt{so},\texttt{wr}\rangle\), we assume that \(<\) is consistent with \(\texttt{po}\), \(\texttt{so}\), and \(\texttt{wr}\), i.e., \(e_{1}<e_{2}\) if \((\texttt{tr}(h,e_{1}),\texttt{tr}(h,e_{2}))\in(\texttt{so}\cup\texttt{wr})^{+}\) or \((e_{1},e_{2})\in\texttt{po}\). 
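This consistency requirement on \(<\) can be checked directly; a small sketch, with our own encoding in which `lt` is the set of ordered event pairs, `po` the program order on events, `ev2tr` maps events to their transactions, and so/wr are transaction-level relations:

```python
def order_consistent(lt, po, ev2tr, so, wr):
    """e1 < e2 must hold whenever (tr(e1), tr(e2)) is in (so U wr)^+ or (e1, e2) is in po."""
    rel, changed = set(so) | set(wr), True
    while changed:                                    # (so U wr)^+
        changed = False
        for (a, b) in list(rel):
            for (c, d) in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d)); changed = True
    for e1 in ev2tr:
        for e2 in ev2tr:
            if e1 != e2 and ((ev2tr[e1], ev2tr[e2]) in rel or (e1, e2) in po):
                if (e1, e2) not in lt:
                    return False
    return True
```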
Initially, the ordered history and the mapping locals are empty. The function explore starts by calling Next to obtain an event representing the next database access in some pending transaction of \(\mathsf{P}\), or a begin/commit/abort event for starting or ending a transaction. This event is associated to some session \(j\). For example, a typical implementation of Next would choose one of the pending transactions (in some session \(j\)), execute all local instructions until Figure 7. Two causal extensions of the history \(h\) on the left with the read(\(x\)) event written in blue. the next database instruction in that transaction (applying the transition rules if-true, if-false, and local) and return the event \(e\) corresponding to that database instruction and the current local state \(\gamma\). Next may also return \(\bot\) if the program finished. If Next returns \(\bot\), then the function Valid can be used to filter executions that satisfy the intended isolation level before outputting the current history and local states (the use of Valid will become relevant in Section 6). Otherwise, the event \(e\) is added to the ordered history \(h_{<}\). If \(e\) is a read event, then ValidWrites computes a set of write events \(w\) in the current history that are valid for \(e\), i.e., adding the event \(e\) along with the \(\mathsf{wr}\) dependency \((w,e)\) leads to a history that still satisfies the intended isolation level. Concerning notations, let \(h\) be a history where \(\mathsf{so}\) is represented as a function \(\mathsf{so}:\mathsf{SessId}\rightarrow\mathsf{Tlogs}^{*}\) (as in SS 2.3). For event \(e\), \(h\oplus_{j}e\) is the history obtained from \(h\) by adding \(e\) to the last transaction in \(\mathsf{so}(j)\) as the last event in \(\mathsf{po}\) (i.e., if \(\mathsf{so}(j)=\sigma;\langle t,E,\mathsf{po}_{t}\rangle\), then the session order \(\mathsf{so}^{\prime}\) of \(h\oplus_{j}e\) is defined by \(\mathsf{so}^{\prime}(k)=\mathsf{so}(k)\) for all \(k\neq j\) and \(\mathsf{so}(j)=\sigma;\langle t,E\cup\{e\},\mathsf{po}_{t}\cup\{(e^{\prime},e) :e^{\prime}\in E\}\rangle\)). This is extended to ordered histories: \((h,<)\oplus_{j}e\) is defined as \((h\oplus_{j}e,<\cdot e)\) where \(<\cdot e\) means that \(e\) is added as the last element of \(<\). Also, \(h\oplus_{j}(e,\mathrm{begin})\) is a history where \(\langle t,\{(e,\mathrm{begin})\},\emptyset\rangle\) with \(t\) a fresh \(\mathrm{id}\) is appended to \(\mathsf{so}(j)\), and \(h\oplus\mathsf{wr}(t,e)\) is defined by adding \((t,e)\) to the write-read of \(h\). ``` 1:functionexploreSwaps(P, \(h_{<}\), locals) 2:\(l\leftarrow\textsc{ComputeReorderings}(h_{<})\) 3:for all\((\alpha,\beta)\in l\)do 4:if\(\textsc{Optimality}(h_{<},\alpha,\beta,\mathrm{locals})\)then 5:explore(P,Swap(\(h_{<},\alpha,\beta,\mathrm{locals}))\) ``` **Algorithm 2**exploreSwaps Once an event is added to the current history, the algorithm may explore other histories obtained by re-ordering events in the current one. Such re-orderings are required for completeness. New read events can only read from writes executed in the past which limits the set of explored histories to the scheduling imposed by Next. Without re-orderings, writes scheduled later by Next cannot be read by read events executed in the past, although this may be permitted by the isolation level. 
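A schematic Python transcription of the recursion of Algorithms 1 and 2 is shown below. It only sketches the control flow described above: the bookkeeping helpers `Extend` and the parameter map `prm` are our own names, and the exact point at which exploreSwaps is invoked follows Algorithm 1 in the paper, with the placement shown here (after each extension) being one plausible reading.

```python
def explore(P, h, locals_, prm, output):
    """Sketch of Algorithm 1: extend the current ordered history by one event."""
    nxt = prm['Next'](P, h, locals_)
    if nxt is None:                              # Next returned bottom: the program finished
        if prm['Valid'](h):
            output(h, locals_)
        return
    j, e, gamma = nxt                            # session id, event, local state
    if e[0] == 'read':
        # one recursive branch per wr choice that keeps the history consistent
        for w in prm['ValidWrites'](h, e):
            h1, l1 = prm['Extend'](h, locals_, j, e, gamma, w)
            explore(P, h1, l1, prm, output)
            explore_swaps(P, h1, l1, prm, output)
    else:
        h1, l1 = prm['Extend'](h, locals_, j, e, gamma, None)
        explore(P, h1, l1, prm, output)
        explore_swaps(P, h1, l1, prm, output)

def explore_swaps(P, h, locals_, prm, output):
    """Sketch of Algorithm 2: explore re-ordered variants of the current history."""
    for (alpha, beta) in prm['ComputeReorderings'](h):
        if prm['Optimality'](h, alpha, beta, locals_):
            h2, l2 = prm['Swap'](h, alpha, beta, locals_)
            explore(P, h2, l2, prm, output)
```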
The function exploreSwaps calls ComputeReorderings to compute pairs of sequences of events \(\alpha,\beta\) that should be re-ordered; \(\alpha\) and \(\beta\) are _contiguous and disjoint_ subsequences of the total order \(<\), and \(\alpha\) should end before \(\beta\) (since \(\beta\) will be re-ordered before \(\alpha\)). Typically, \(\alpha\) would contain a read event \(r\) and \(\beta\) a write event \(w\) such that re-ordering the two enables \(r\) to read from \(w\). Ensuring soundness and avoiding redundancy, i.e., exploring the same history multiple times, may require restricting the application of such re-orderings. This is modeled by the Boolean condition called Optimality. If this condition holds, the new explored histories are computed by the function Swap. This function returns local states as well, which are necessary for continuing the exploration. We assume that Swap(\(h_{<},\alpha,\beta,\mathrm{locals}\)) returns pairs \((h^{\prime}_{<},\mathrm{locals}^{\prime})\) such that 1. \(h^{\prime}\) contains at least the events in \(\alpha\) and \(\beta\), 2. \(h^{\prime}\) without the events in \(\alpha\) is a prefix of \(h\), and 3. if a read \(r\) in \(\alpha\) reads from different writes in \(h\) and \(h^{\prime}\) (the \(\mathsf{wr}\) relations of \(h\) and \(h^{\prime}\) associate different transactions to \(r\)), then \(r\) is the last event in its transaction (w.r.t. \(\mathsf{po}\)). The first condition makes the re-ordering "meaningful" while the last two conditions ensure that the history \(h^{\prime}\) is feasible by construction, i.e., it can be obtained using the operational semantics defined in Section 2.3. Feasibility of \(h^{\prime}\) is ensured by keeping prefixes of transaction logs from \(h\) and all their \(\mathsf{wr}\) dependencies except possibly for read events in \(\alpha\) (second condition). In particular, for events in \(\beta\), it implies that \(h^{\prime}\) contains all their \((\mathsf{po}\cup\mathsf{so}\cup\mathsf{wr})^{*}\) predecessors. Also, the change of a read-from dependency is restricted to the last read in a transaction (third condition) because changing the value returned by a read may disable later events in the same transaction4. A concrete implementation of explore is called: * \(I\)_-sound_ if it outputs only histories in \(\operatorname{hist}_{I}(\operatorname{P})\) for every program \(\operatorname{P}\), * \(I\)_-complete_ if it outputs every history in \(\operatorname{hist}_{I}(\operatorname{P})\) for every program \(\operatorname{P}\), * _optimal_ if it does not output the same history twice, * _strongly optimal_ if it is optimal and never engages in fruitless explorations, i.e., explore is never called (recursively) on a history \(h\) that does not satisfy \(I\), and every call to explore results in an output or another recursive call to explore. ## 5. Swapping-based model checking for prefix-closed and causally-extensible isolation levels We define a concrete implementation of explore, denoted as explore-ce, that is \(I\)-sound, \(I\)-complete, and strongly optimal for any isolation level \(I\) that is prefix-closed and causally-extensible. The isolation level \(I\) is a parameter of explore-ce. The space complexity of explore-ce is polynomial in the size of the program. An important invariant of this implementation is that it explores histories with _at most one_ pending transaction and this transaction is maximal in session order. 
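Monitoring this invariant during exploration is straightforward; a minimal sketch, assuming transaction statuses and the session order are available as a map and a set of tid pairs (our encoding):

```python
def invariant_holds(status, so):
    """At most one pending transaction, and that transaction is maximal in session order."""
    pending = [t for t, s in status.items() if s == 'pending']
    if len(pending) > 1:
        return False
    return all(not any((p, t2) in so for t2 in status) for p in pending)
```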
This invariant is used to avoid fruitless explorations: since \(I\) is assumed to be causally-extensible, there always exists an extension of the current history with one more event that continues to satisfy \(I\). Moreover, this invariant is sufficient to guarantee completeness in the sense defined above of exploring all histories of "full" program executions (that end in a final configuration). Section 5.1 describes the implementations of Next and ValidWrites used to extend a given execution, Section 5.2 describes the functions ComputeReorderings and Swap used to compute re-ordered executions, and Section 5.3 describes the Optimality restriction on re-ordering. We assume that the function Valid is defined as simply \(\textsc{Valid}(h)\mathrel{\mathop{:}\mskip-4.0mu =}true\) (no filter before outputting). Section 5.4 discusses correctness arguments. ### Extending Histories According to An Oracle Order The function Next generates events representing database accesses to extend an execution, according to an _arbitrary but fixed_ order between the transactions in the program called _oracle order_. We assume that the oracle order, denoted by \(<_{\text{or}}\), is consistent with the order between transactions in the same session of the program. The extension of \(<_{\text{or}}\) to events is defined as expected. For example, assuming that each session has an id, an oracle order can be defined by an order on session ids along with the session order so: transactions from sessions with smaller ids are considered first and the order between transactions in the same session follows so. Next returns a new event of the transaction that is not already completed and that is _minimal_ according to \(<_{\text{or}}\). In more detail, if \(j,e,\gamma\) is the output of Next(\(\operatorname{P},h_{<}\), locals), then either: * the last transaction log \(t\) of session \(j\) (w.r.t. so) in \(h\) is pending, and \(t\) is the smallest among pending transaction logs in \(h\) w.r.t. \(<_{\text{or}}\) * \(h\) contains no pending transaction logs and the next transaction of sessions \(j\) is the smallest among not yet started transactions in the program w.r.t. \(<_{\text{or}}\). This implementation of Next is deterministic and it prioritizes the completion of pending transactions. The latter is useful to maintain the invariant that any history explored by the algorithm has at most one pending transaction. Preserving this invariant requires that the histories given as input to Next also have at most one pending transaction. This is discussed further when explaining the process of re-ordering events in Section 5.2. For example, consider the program in Figure 7(a), an oracle order which orders the two transactions in the left session before the transaction in the right session, and the history \(h\) in Figure 7(b). Since the local state of the pending transaction on the left stores 3 to the local variable \(a\) (as a result of the previous \(\operatorname{read}(x)\) event) and the Boolean condition in \(\mathtt{if}\) holds, Next will return the event \(\operatorname{write}(y,1)\) when called with \(h\). According to Algorithm 1, if the event returned by Next is not a read event, then it is simply added to the current history as the maximal element of the order \(<\) (cf. the definition of \(\oplus_{j}\) on ordered histories). If it is a read event, then adding this event may result in multiple histories depending on the chosen \(\operatorname{wr}\) dependency. 
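Before turning to the treatment of read events, the choice made by Next can be sketched as follows; the dictionary encoding of pending and not-yet-started transactions and the ranking function for \(<_{\mathrm{or}}\) are our own assumptions, and producing the concrete event (running the local steps up to the next database instruction) is left to the caller.

```python
def next_session(pending, not_started, oracle_rank):
    """Pick the session to advance, following the oracle order <_or.

    pending:     {tid: session_id} for pending transactions of the current history
                 (at most one, by the invariant, but the code tolerates any number);
    not_started: {tid: session_id} for program transactions not yet started;
    oracle_rank: tid -> position in <_or.  Returns a session id, or None (Next returns bottom)."""
    if pending:                                       # complete pending transactions first
        tid = min(pending, key=oracle_rank.get)
        return pending[tid]
    if not_started:
        tid = min(not_started, key=oracle_rank.get)
        return not_started[tid]
    return None
```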
For example, in Figure 9, extending the history in Figure 9 with the \(\operatorname{read}(x)\) event could result in two different histories, pictured in Figure 9 and 9, depending on the write with whom this read event is associated by \(\operatorname{wr}\). However, under CC, the latter history is inconsistent. The function ValidWrites limits the choices to those that preserve consistency with the intended isolation level \(I\), i.e., \[\textsc{ValidWrites}(h,e)\coloneqq\{t\ \in\operatorname{commTrans}(h)\mid h \oplus_{j}e\oplus\operatorname{wr}(t,e)\text{ satisfies }I\}\] where \(\operatorname{commTrans}(h)\) is the set of committed transactions in \(h\). ### Re-Ordering Events in Histories After extending the current history with one more event, explore may be called recursively on other histories obtained by re-ordering events in the current one (and dropping some other events). Re-ordering events must preserve the invariant of producing histories with at most one pending transaction. To explain the use of this invariant in avoiding fruitless explorations, let us consider the program in Figure 9 assuming an exploration under Read Committed. The oracle order gives priority to the transaction on the left. Assume that the current history reached by the exploration is the one pictured in Figure 9 (the last added event is \(\operatorname{write}(x,2)\)). Swapping \(\operatorname{write}(x,2)\) with \(\operatorname{read}(x)\) would result in the history pictured in Figure 9. To ensure that this swap produces a new history which was not explored in the past, the \(\operatorname{wr}_{x}\) dependency of \(\operatorname{read}(x)\) is changed towards the \(\operatorname{write}(x,2)\) transaction (we detail this later). By the definition of next (and the oracle order), this history shall be extended with \(\operatorname{read}(y)\), and this read event will be associated by \(\operatorname{wr}_{y}\) to the only available \(\operatorname{write}(y,\_)\) event from \(\operatorname{init}\). This is pictured in Figure 9. The next exploration step will extend the history with \(\operatorname{write}(y,2)\) (the only extension possible) which however, results Figure 8. A program with two sessions (a), a history \(h\) (b), and an extension of \(h\) with an event returned by Next (c). The so-edges from \(\operatorname{init}\) to the other transactions are omitted for legibility. We use edges labeled by or to represent the oracle order \(<_{\operatorname{or}}\). Events in gray are not yet added to the history. Figure 9. Extensions of a history by adding a read event. Events in gray are not yet added to the history. in a history that does _not_ satisfy Read Committed, thereby, the recursive exploration branch being blocked. The core issue is related to the history in Figure 10d which has a pending transaction that is _not_ (\(\mathrm{so}\cup\mathrm{wr}\))\({}^{*}\)-maximal. Being able to extend such a transaction while maintaining consistency is not guaranteed by Read Committed (and any other isolation level we consider). Nevertheless, causal extensibility guarantees the existence of an extension for pending transactions that are (\(\mathrm{so}\cup\mathrm{wr}\))\({}^{*}\)-maximal. We enforce this requirement by restricting the explored histories to have at most one pending transaction. This pending transaction will necessarily be (\(\mathrm{so}\cup\mathrm{wr}\))\({}^{*}\)-maximal. 
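Returning to the ValidWrites filter defined above, it is a direct filter over committed transactions that reuses a consistency oracle for the target isolation level (e.g., the checking algorithms of Biswas and Enea (2019)). The helper names in this sketch (`committed`, `extend_with_read`, `add_wr`, `satisfies_I`) are placeholders for operations defined earlier, not a fixed API.

```python
def valid_writes(h, j, e, committed, extend_with_read, add_wr, satisfies_I):
    """{ t in commTrans(h) | h extended with e in session j and wr(t, e) satisfies I }"""
    result = []
    for t in committed(h):
        candidate = add_wr(extend_with_read(h, j, e), t, e)
        if satisfies_I(candidate):
            result.append(t)
    return result
```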
To enforce histories with at most one pending transaction, the function \(\mathrm{ComputeReorderings}\), which identifies events to reorder, has a non-empty return value only when the last added event is commit (the end of a transaction)5. Therefore, in such a case, it returns pairs of some transaction log prefix ending in a read \(r\) and the last completed transaction log \(t\), such that the transaction log containing \(r\) and \(t\) are _not_ causally dependent (i.e., related by (\(\mathrm{so}\cup\mathrm{wr}\))\({}^{*}\)) (the transaction log prefix ending in \(r\) and \(t\) play the role of the subsequences \(\alpha\) and respectively, \(\beta\) in the description of \(\mathrm{ComputeReorderings}\) from Section 4). To simplify the notation, we will assume that \(\mathrm{ComputeReorderings}\) returns pairs \((r,t)\). Footnote 5: Aborted transactions have no visible effect on the state of the database so swapping an aborted transaction cannot produce a new meaningful history. \(\mathrm{ComputeReorderings}(h_{<})\coloneqq\{(r,t)\in\mathcal{E}\times T\mid r \in\mathsf{reads}(T)\wedge t\text{ writes var}(r)\wedge\mathrm{tr}(h,r)<t\) \(\wedge\ (\mathrm{tr}(h,r),t)\notin(\mathrm{so}\cup\mathrm{wr})^{*}\wedge t\) is complete and it includes the last event in \(<\) Figure 11. Re-ordering events. All so-edges from **init** to other transactions are omitted for legibility. The history order \(<\) is represented by the top to bottom order in each figure. Events in gray are deleted from the history. Figure 10. Example of inconsistency after swapping two events. All so-edges from **init** to the other transactions are omitted for legibility. The history order \(<\) is represented by the top to bottom order in each figure. Events in gray are not yet added to the history. For example, for the program in Figure (a)a and history \(h\) in Figure (b)b, ComputeReorderings(\(h\)) would return \((r_{1},t_{4})\) and \((r_{2},t_{4})\) where \(r_{1}\) and \(r_{2}\) are the \(\operatorname{read}\left(x\right)\) events in \(t_{1}\) and \(t_{2}\) respectively. For a pair \((r,t)\), the function Swap produces a new history \(h^{\prime}\) which contains all the events ordered before \(r\) (w.r.t. \(<\)), the transaction \(t\) and all its \((\texttt{so}\cup\operatorname{wr})^{*}\) predecessors, and the event \(r\) reading from \(t\). All the other events are removed. Note that the po predecessors of \(r\) from the same transaction are ordered before \(r\) by \(<\) and they will be also included in \(h^{\prime}\). The history \(h^{\prime}\) without \(r\) is a prefix of the input history \(h\). By definition, the only pending transaction in \(h^{\prime}\) is the one containing the read \(r\). The order relation is updated by moving the transaction containing the read \(r\) to be the last; it remains unchanged for the rest of the events. 
\(\texttt{Swap}(h_{<},r,t,\operatorname{locals})\coloneqq\left((h^{\prime}=(h \setminus D)\oplus\operatorname{wr}(t,r),<^{\prime}),\operatorname{ locals}^{\prime}\right)\), where \(\operatorname{locals}^{\prime}=\operatorname{locals}\downarrow\operatorname {events}(h^{\prime})\)\(D=\{e|r<e\land(\operatorname{tr}(h,e),t)\notin(\texttt{so}\cup \operatorname{wr})^{*}\}\) and \(<^{\prime}=\left(<\downarrow(\operatorname{events}(h^{\prime})\setminus \operatorname{events}(\operatorname{tr}(h^{\prime},r)))\right)\cdot \operatorname{tr}(h^{\prime},r)\) Above, \(h\setminus D\) is the prefix of \(h\) obtained by deleting all the events in \(D\) from its transaction logs; a transaction log is removed altogether if it becomes empty. Also, \(h^{\prime\prime}\oplus\operatorname{wr}(t,r)\) denotes an _update_ of the \(\operatorname{wr}\) relation of \(h^{\prime\prime}\) where any pair \((\_,r)\) is replaced by \((t,r)\). Finally, \(<^{\prime\prime}\cdot\operatorname{tr}(h^{\prime},r)\) is an extension of the total order \(<^{\prime\prime}\) obtained by appending the events in \(\operatorname{tr}(h^{\prime},r)\) according to program order. Continuing with the example of Figure 11, when swapping \(r_{1}\) and \(t_{4}\), all the events in transaction \(t_{2}\) belong to \(D\) and they will be removed. This is shown in Figure (d)d. Note that transaction \(t_{1}\) aborted in Figure (b)b while it will commit in Figure (d)d (because the value read from \(x\) changed). When swapping \(r_{2}\) and \(t_{4}\), no event but the commit in \(t_{2}\) will be deleted (Figure (c)c). ### Ensuring Optimality Simply extending histories according to Next and making recursive calls on re-ordered histories whenever they are \(I\)-consistent guarantees soundness and completeness, but it does not guarantee optimality. Intuitively, the source of redundancy is related to the fact that applying Swap on different histories may give the same result. As a first example, consider the program in Figure (a)a with \(2\) transactions that only read some variable \(x\) and \(2\) transactions that only write to \(x\), each transaction in a different session. Assume that explore reaches the ordered history in Figure (b)b and Next is about to return the second reading transaction. explore will be called recursively on the two histories in Figure (c)c and Figure (d)d that differ in the write that this last read is reading from (the initial write or the first write transaction). On both branches of the recursion, Next will extend the history with the last write transaction written in blue bold font. For both histories, swapping this last write with the first read on \(x\) will result in the history in Figure (e)e (cf. the definition of ComputeReorderings and Swap). Thus, both branches of the recursion will continue extending the same history and optimality is Figure 12. Re-ordering events versus optimality. We assume an oracle order orders transaction from left to right, top to bottom in the program. All transaction logs are history-ordered top to bottom according to their position in the figure. Events in gray are not yet added to the history. violated. The source of non-optimality is related to wr dependencies that are _removed_ during the Swap computation. The histories in Figure 11(c) and Figure 11(d) differ in the wr dependency involving the last read, but this difference was discarded during the Swap computation. 
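The two re-ordering primitives can be summarized in code as follows. This sketch only computes the candidate pairs of ComputeReorderings and the deleted set \(D\) of Swap, on a simplified encoding of our own (event attributes as dictionaries, the history order \(<\) as a list of event ids, and the (so ∪ wr)* relation as a set of transaction-id pairs); the remaining bookkeeping of Swap, i.e., redirecting the wr dependency of \(r\) and moving its transaction to the end of \(<\), follows the definitions above. For simplicity it also does not filter out reads preceded by a write to the same variable in their own transaction.

```python
def compute_reorderings(order, kind, var, trans_of, writes_vars, complete, causal_star):
    """Candidate pairs (r, t): r a read, t the just-completed transaction (owning the last
    event, a commit) that writes var(r) and is not causally related to r's transaction."""
    if not order or kind[order[-1]] != 'commit':
        return []
    t = trans_of[order[-1]]
    if t not in complete:
        return []
    return [(r, t) for r in order
            if kind[r] == 'read'
            and var[r] in writes_vars.get(t, set())
            and trans_of[r] != t
            and (trans_of[r], t) not in causal_star]

def swap_deleted(order, trans_of, causal_star, r, t):
    """The set D removed by Swap: events after r whose transaction is not a
    (so U wr)* predecessor of t (r itself is kept, re-reading from t)."""
    pos = {e: i for i, e in enumerate(order)}
    return {e for e in order
            if pos[e] > pos[r] and (trans_of[e], t) not in causal_star}
```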
To avoid this behavior, Swap is enabled only on histories where the discarded wr dependencies relate to some "fixed" set of writes, i.e., latest6 writes w.r.t. \(<\) that guarantee consistency by causal extensibility (see the definition of \(\mathsf{readLatest}_{I}(\_,\_,)\) below). By causal extensibility, a read \(r\) can always read from a write which already belongs to its "causal past", i.e., predecessors in \((\mathsf{so}\cup\mathsf{wr})^{*}\) excluding the wr dependency for \(r\). For every discarded wr dependency, it is required that the read reads from the latest such write w.r.t. \(<\). In this example, re-ordering is enabled only when the second read\((x)\) reads from the initial write; write\((x,2)\) does not belong to its "causal past" (when the wr dependency of the read itself is excluded). Footnote 6: We use latest writes because they are uniquely defined. In principle, other ways of identifying some unique set of writes could be used. The restriction above is not sufficient, because the two histories for which Swap gives the same result may not be generated during the same recursive call (for different wr choices when adding a read). For example, consider the program in Figure 12(a) that has four sessions each containing a single transaction. explore may compute the history \(h\) pictured in Figure 12(b). Before adding transaction \(t_{4}\), explore can re-order \(t_{3}\) and \(t_{2}\) and then extend with \(t_{4}\) and arrive at the history \(h_{1}\) in Figure 12(c). Also, after adding \(t_{4}\), it can re-order \(t_{1}\) and \(t_{4}\) and arrive at the history \(h_{2}\) in Figure 12(d). However, swapping the same \(t_{1}\) and \(t_{4}\) in \(h_{1}\) leads to the same history \(h_{2}\), thereby, having two recursive branches that end up with the same input and violate optimality. Swapping \(t_{1}\) and \(t_{4}\) in \(h_{1}\) should not be enabled because the read\((y)\) to be removed by Swap has been swapped in the past. Removing it makes it possible that this recursive branch explores that wr choice for read\((y)\) again. The Optimality condition restricting re-orderings requires that the re-ordered history be \(I\)-consistent and that every read deleted by Swap or the re-ordered read \(r\) (whose wr dependency is modified) reads from a latest valid write, cf. the example in Figure 12, and it is not already swapped, cf. 
the example in Figure 12 (the set \(D\) is defined as in Swap): \[\textsc{Optimality}(h_{<},r,t,\mathrm{locals})\coloneqq\ \text{the history returned by }\textsc{Swap}(h_{<},r,t,\mathrm{locals})\text{ satisfies }I\] \[\wedge\ \forall\,r^{\prime}\in\mathsf{reads}(h)\cap(D\cup\{r\}).\ \mathsf{readLatest}_{I}(h^{\prime}_{<},r^{\prime})\ \wedge\ \neg\,\textsc{swapped}(h_{<},r^{\prime})\] 
This canonical order is useful in future proof steps as it allows to extend several definitions to arbitrary histories that are not necessarily reachable, such as Optimality or swapped. 2. Define the notion of or-_respectfulness_, an invariant satisfied by every (partial) ordered history reached by the algorithm. Briefly, a history is or-respectful if it has only one pending transaction and for every two events \(e,e^{\prime}\) such that \(e<_{\mathrm{or}}e^{\prime}\), either \(e<e^{\prime}\) or there is a swapped event \(e^{\prime\prime}\) in between. 3. Define a deterministic function prev which takes as input a partial history (not necessarily reachable), such that if \(h\) is reachable, then prev\((h)\) returns the history computed by the algorithm just before \(h\) (i.e., the previous history in the call stack). Prove that if a history \(h\) is or-respectful, then prev\((h)\) is also or-respectful. 4. Deduce that if \(h\) is or-respectful, then there is a finite collection of or-respectful histories \(H_{h}=\{h_{i}\}_{i=0}^{n}\) such that \(h_{n}=h\), \(h_{0}=\emptyset\), and \(h_{i}=\textsc{prev}(h_{i+1})\) for each \(i\). The or-respectfulness invariant and the causal-extensibility of the isolation level are key to being able to construct such a collection. In particular, they are used to prove that \(h_{i}\) has at most the same number of swapped events as \(h_{i+1}\) and in case of equality, \(h_{i}\) contain exactly one event less than \(h_{i+1}\), which implies that the collection is indeed finite. 5. Prove that if \(h\) is or-respectful and prev\((h)\) is reachable, then \(h\) is also reachable. Conclude by induction that every history in \(H_{h}\) is reachable, as \(h_{0}\) is the initial state and \(h_{i}=\textsc{prev}(h_{i+1})\). The proof of strong optimality relies on arguments employed for \(I\)-completeness. It can be shown that if the algorithm would reach a (partial) history \(h\) twice, then for one of the two exploration branches, the history \(h^{\prime}\) computed just before \(h\) would be different from prev\((h)\), which contradicts the definition of prev\((h)\). In terms of time complexity, the explore-ce\((I)\) algorithm achieves polynomial time between consecutive outputs for isolation levels \(I\) where checking \(I\)-consistency of a history is polynomial time, e.g., RC, RA, and CC. ## 6. Swapping-based model checking for snapshot isolation and serializability For explore-ce, the part of strong optimality concerning _not_ engaging in fruitless explorations was a direct consequence of causal extensibility (of the isolation level). However, isolation levels such as SI and SER are _not_ causally extensible (see Section 3.2). Therefore, the question we investigate in this section is whether there exists another implementation of explore that can ensure strong optimality along with \(I\)-soundness and \(I\)-completeness for \(I\) being SI or SER. We answer this question in the negative, and as a result, propose an SMC algorithm that extends explore-ce by just filtering histories before outputting to be consistent with SI or SER. Theorem 6.1 ().: _If \(I\) is Snapshot Isolation or Serializability, there exists no explore algorithm that is \(I\)-sound, \(I\)-complete, and strongly optimal._ The proof of Theorem 6.1 defines a program with two transactions and shows that any concrete instance of explore in Alg. 1_cannot be both \(I\)_-complete and strongly optimal. 
Given this negative result, we define an implementation of explore for an isolation level \(I\in\{SI,SER\}\) that ensures optimality instead of strong optimality, along with soundness, completeness, and polynomial space bound. Thus, let explore-ce\((I_{0})\) be an instance of explore-ce parametrized by \(I_{0}\in\{\mathsf{RC},\mathsf{RA},\mathsf{CC}\}\). We define an implementation of explore for \(I\), denoted by explore-ce\({}^{*}(I_{0},I)\), which is exactly explore-ce\((I_{0})\) except that instead of Valid\((h):=true\), it uses \[\textsc{Valid}(h)\quad\coloneqq\quad h\text{ satisfies }I\] explore-ce\({}^{*}(I_{0},I)\) enumerates exactly the same histories as explore-ce\((I_{0})\) except that it outputs only histories consistent with \(I\). The following is a direct consequence of Theorem 5.1. Corollary 6.2 ().: _For any isolation levels \(I_{0}\) and \(I\) such that \(I_{0}\) is prefix-closed and causally extensible, and \(I_{0}\) is weaker than \(I\), explore-ce\({}^{*}(I_{0},I)\) is \(I\)-sound, \(I\)-complete, optimal, and polynomial space._ ## 7. Experimental Evaluation We evaluate an implementation of explore-ce and explore-ce\({}^{*}\) in the context of the Java Pathfinder (JPF) (Visser et al., 2004) model checker for Java concurrent programs. As benchmark, we use bounded-size client programs of a number of database-backed applications drawn from the literature. The experiments were performed on an Apple M1 with 8 cores and 16 GB of RAM. ### Implementation We implemented our algorithms as an extension of the DFSearch class in JPF. For performance reasons, we implemented an iterative version of these algorithms where roughly, inputs to recursive calls are maintained as a collection of histories instead of relying on the call stack. For checking consistency of a history with a given isolation level, we implemented the algorithms proposed by Biswas and Enea (2019). Our tool takes as input a Java program and isolation levels as parameters. We assume that the program uses a fixed API for interacting with the database, similar to a key-value store interface. This API consists of specific methods for starting/ending a transaction, and reading/writing a global variable. The fixed API is required for being able to maintain the database state separately from the JVM state (the state of the Java program) and update the current history in each database access. This relies on a mechanism for "transferring" values read from the database state to the JVM state. ### Benchmark We consider a set of benchmarks inspired by real-world applications and evaluate them under different types of client programs and isolation levels. _Shopping Cart (Sivaramakrishnan et al., 2015)_ allows users to add, get and remove items from their shopping cart and modify the quantities of the items present in the cart. _Twitter (Difallah et al., 2013)_ allows users to follow other users, publish tweets and get their followers, tweets and tweets published by other followers. _Courseware (Nair et al., 2020)_ manages the enrollment of students in courses in an institution. It allows to open, close and delete courses, enroll students and get all enrollments. One student can only enroll to a course if it is open and its capacity has not reached a fixed limit. _Wikipedia (Difallah et al., 2013)_ allows users to get the content of a page (registered or not), add or remove pages to their watching list and update pages. 
_TPC-C (TPC 2010)_ models an online shopping application with five types of transactions: reading the stock of a product, creating a new order, getting its status, paying it and delivering it. SQL tables are modeled using a "set" global variable whose content is the set of ids (primary keys) of the rows present in the table, and a set of global variables, one variable for each row in the table (the name of the variable is the primary key of that row). SQL statements such as INSERT and DELETE statements are modeled as writes on that "set" variable while SQL statements with a WHERE clause (SELECT, JOIN, UPDATE) are compiled to a read of the table's set variable followed by reads or writes of variables that represent rows in the table (similarly to (Biswas et al., 2021)). ### Experimental Results We designed three experiments where we compare the performance of a baseline model checking algorithm, explore-ce and explore-ce* for different (combinations of) isolation levels, and we explore the scalability of explore-ce when increasing the number of sessions and transactions per session, respectively. For each experiment we report running time, memory consumption, and the number of end states, i.e., histories of complete executions and in the case of explore-ce*, before applying the Valid filter. As the number of end states for a program on a certain isolation level increases, the running time of our algorithms naturally increases as well. The first experiment compares the performance of our algorithms for different combinations of isolation levels and a baseline model checking algorithm that performs no partial order reduction. We consider as benchmark five (independent) client programs8 for each application described above (25 in total), each program with 3 sessions and 3 transactions per session. Running time, memory consumption, and number of end states are reported in Fig. 14 as cactus plots [Brain et al. 2017]. Footnote 8: For an application that defines a number of transactions, a client program consists of a number of sessions, each session containing a sequence of transactions defined by the application. To justify the benefits of partial order reduction, we implement a baseline model checking algorithm DFS(CC) that performs a standard DFS traversal of the execution tree w.r.t. the formal semantics defined in Section 2.3 for CC (for fairness, we restrict interleavings so at most one transaction is pending at a time). This baseline algorithm may explore the same history multiple times since it includes no partial order reduction mechanism. In terms of time, DFS(CC) behaves poorly: it timeouts for 20 out of the 25 programs and it is less efficient even when it terminates. We consider a timeout of 30 mins. In comparison the strongly optimal algorithm explore-ce(CC) (under CC) finishes in in 3\({}^{\prime}\)26\({}^{\prime\prime}\) seconds in average (counting timeouts). DFS(CC) is similiar to explore-ce(CC) in terms of memory consumption. The memory consumption of DFS(CC) is 381MB in average, compared to 508MB for explore-ce(CC) (JPF forces a minimum consumption of 256MB). To show the benefits of _strong_ optimality, we compare explore-ce(CC) which is strongly optimal with "plain" optimal algorithms explore-ce*(\(I_{0},\texttt{CC}\)) for different levels \(I_{0}\). As shown in Figure 14(a), explore-ce(CC) is more efficient time-wise than every "plain" optimal algorithm, and the difference in performance grows as \(I_{0}\) becomes weaker. 
In the limit, when \(I_{0}\) is the trivial isolation level true where every history is consistent, explore-ce*(true, CC) timeouts for 20 out of Figure 14. Cactus plots comparing different algorithms in terms of time, memory, and end states. For readability, we use CC to denote explore-ce under CC, \(I_{1}+I_{2}\) stands for explore-ce*(\(I_{1},I_{2}\)), and true is the trivial isolation level where every history is consistent. Differences between CC, CC + SI and CC + SER are very small and their graphics overlap. Moreover, DFS(CC) denotes a standard DFS traversal of the semantics defined in Section 2.3. These plots exclude benchmarks that timeout (30 mins): 3 benchmarks for CC, \(\langle\texttt{SI},\texttt{CC}\rangle\) and \(\langle\texttt{SER},\texttt{CC}\rangle\) and 6, 17, 20 and 20 benchmarks timeout for \((\texttt{RA},\texttt{CC})\), \(\langle\texttt{RC},\texttt{CC}\rangle\), \(\langle\texttt{true},\texttt{CC}\rangle\) and DFS(CC) respectively. the 25 programs. The average speedup (average of individual speedups) of explore-ce(CC) w.r.t. explore-ce\({}^{*}\)(RA, CC), explore-ce\({}^{*}\)(RC, CC) and explore-ce\({}^{*}\)(true, CC) is 3, 18 and 15. respectively (we exclude timeout cases when computing speedups). All algorithms consume around 500MB of memory in average. For the SI and SER isolation levels that admit no strongly optimal explore algorithm, we observe that the overhead of explore-ce\({}^{*}\)(CC, SI) or explore-ce\({}^{*}\)(CC, SER) relative to explore-ce(CC) is negligible (the corresponding lines in Figure 14 are essentially overlapping). This is due to the fact that the consistency checking algorithms of Biswas and Enea (2019) are polynomial time when the number of sessions is fixed, which makes them fast at least on histories with few sessions. In our second experiment, we investigate the scalability of explore-ce when increasing the number of sessions. For each \(i\in[1,5]\), we consider 5 (independent) client programs for TPC-C and 5 for Wikipedia (10 in total) with \(i\) sessions, each session containing 3 transactions. We start with 10 programs with 5 sessions, and remove sessions one by one to obtain programs with fewer sessions. We take CC as isolation level. The plot in Figure (a)a shows average running time and memory consumption for each number \(i\in[1,5]\) of sessions. As expected, increasing the number of sessions is a bottleneck running time wise because the number of histories increases significantly. However, memory consumption does not grow with the same trend, cf. the polynomial space bound. Finally, we evaluate the scalability of explore-ce(CC) when increasing the number of transactions per session. We consider 5 (independent) TPC-C client programs and 5 (independent) Wikipedia programs with 3 sessions and \(i\) transactions per session, for each \(i\in[1,5]\). Figure (b)b shows average running time and memory consumption for each number \(i\in[1,5]\) of transactions per session. Increasing the number of transactions per session is a bottleneck for the same reasons. ## 8. Related Work **Checking Correctness of Database-Backed Applications.** One line of work is concerned with the logical formalization of isolation levels (Adya et al., 2000; Berenson et al., 1995; Biswas and Enea, 2019; Cerone et al., 2015; X3, 1992). Our work relies on the axiomatic definitions of isolation levels introduced by Biswas and Enea (2019), which have also investigated the problem of checking whether a given history satisfies a certain isolation level. 
Our SMC algorithms rely on these algorithms to check consistency of a history with a given isolation level. Another line of work focuses on the problem of finding "anomalies": behaviors that are not possible under serializability. This is typically done via a static analysis of the application code that builds a static dependency graph that over-approximates the data dependencies in all possible executions of the application (Bernardi and Gotsman, 2016; Cerone and Gotsman, 2018; Fekete et al., 2005; Gan et al., 2020; Jorwekar et al., 2007; Warszawski and Bailis, 2017). Anomalies with respect to a given isolation level then correspond to a particular class of cycles in this graph. Static dependency graphs turn out to be highly imprecise in representing feasible executions, leading to false positives. Another source of false positives is that an anomaly might not be a bug because the application may already be designed to handle the non-serializable behavior (Brutschy et al., 2018; Gan et al., 2020). Recent work has tried to address these issues by using more precise logical encodings of the application (Brutschy et al., 2017, 2018), or by using user-guided heuristics (Gan et al., 2020). Another approach consists of modeling the application logic and the isolation level in first-order logic and relying on SMT solvers to search for anomalies (Kaki et al., 2018; Nagar and Jagannathan, 2018; Ozkan, 2020), or defining specialized reductions to assertion checking (Beillahi et al., 2019, 2019). Our approach, based on SMC, does not generate false positives because we systematically enumerate only valid executions of a program, which allows us to check user-defined assertions. Several works have looked at the problem of reasoning about the correctness of applications executing under weak isolation and introducing additional synchronization when necessary (Balegas et al., 2015; Gotsman et al., 2016; Li et al., 2014; Nair et al., 2020). These are based on static analysis or logical proof arguments. The issue of repairing applications is orthogonal to our work. MonkeyDB (Biswas et al., 2021) is a mock storage system for testing storage-backed applications. While being able to scale to larger code, it has the inherent incompleteness of testing. As opposed to MonkeyDB, our algorithms perform a systematic and complete exploration of executions and can establish correctness at least in some bounded context, and they avoid the redundancy of enumerating equivalent executions multiple times. Such guarantees are beyond the scope of MonkeyDB. **Dynamic Partial Order Reduction.**Abdulla et al. (2017) introduced the concept of _source sets_ which provided the first strongly optimal DPOR algorithm for Mazurkiewicz trace equivalence. Other works study DPOR techniques for coarser equivalence relations, e.g., (Abdulla et al., 2019; Agarwal et al., 2021; Aronis et al., 2018; Chalupa et al., 2018; Chatterjee et al., 2019). In all cases, the space complexity is exponential when strong optimality is ensured. 
Other works focus on extending DPOR to weak memory models either by targeting a specific memory model (Abdulla et al., 2017, 2016, 2018; Norris and Demsky, 2013) or by being parametric with respect to an axiomatically-defined memory model (Kokologiannakis et al., 2022, 2019; Kokologiannakis and Vafeiadis, 2020). Some of these works can deal with the coarser reads-from equivalence, e.g., (Abdulla et al., 2018; Kokologiannakis et al., 2022, 2019; Kokologiannakis and Vafeiadis, 2020). Our algorithms build on the work of Kokologiannakis et al. (2022) which for the first time, proposes a DPOR algorithm which is both strongly optimal and polynomial space. The definitions of database isolation levels are quite different with respect to weak memory models, which makes these previous works not extensible in a direct manner. These definitions include a semantics for _transactions_ which are collections of reads and writes, and this poses new difficult challenges. For instance, reasoning about the completeness and the (strong) optimality of existing DPOR algorithms for shared-memory is agnostic to the scheduler (Next function) while the strong optimality of our explore-ce algorithm relies on the scheduler keeping at most one transaction pending at a time. In addition, unlike TruSt, explore-ce ensures that no swapped events can be swapped again and that the history order \(<\) is an extension of so\(\cup\)wr. This makes our completeness and optimality proofs radically different. Moreover, even for transactional programs with one access per transaction, where SER and SC are equivalent, TruSt under SC and explore-ce\({}^{*}(I_{0},\text{\tt SER})\) do not coincide, for any \(I_{0}\in\{\text{\tt RC},\text{\tt RA},\text{\tt CC}\}\). In this case, TruSt enumerates only SC-consistent histories at the cost of solving an NP-complete problem at each step while the explore-ce\({}^{*}\) step cost is polynomial time at the price of not being strongly-optimal. Furthermore, we identify isolation levels (SI and SER) for which it is impossible to ensure both strong optimality and polynomial space bounds with a swapping-based algorithm, a type of question that has not been investigated in previous work. ## 9. Conclusions We presented efficient SMC algorithms based on DPOR for transactional programs running under standard isolation levels. These algorithms are instances of a generic schema, called swapping-based algorithms, which is parametrized by an isolation level. Our algorithms are sound and complete, and polynomial space. Additionally, we identified a class of isolation levels, including \(\mathsf{RC},\mathsf{RA}\), and \(\mathsf{CC}\), for which our algorithms are strongly optimal, and we showed that swapping-based algorithms cannot be strongly optimal for stronger levels \(\mathsf{SI}\) and \(\mathsf{SER}\) (but just optimal). For the isolation levels we considered, there is an intriguing coincidence between the existence of a strongly optimal swapping-based algorithm and the complexity of checking if a given history is consistent with that level. Indeed, checking consistency is polynomial time for \(\mathsf{RC}\), \(\mathsf{RA}\), and \(\mathsf{CC}\), and \(\mathsf{NP}\)-complete for \(\mathsf{SI}\) and \(\mathsf{SER}\). Investigating further the relationship between strong optimality and polynomial-time consistency checks is an interesting direction for future work. ## Acknowledgements We thank anonymous reviewers for their feedback, and Ayal Zaks for shepherding our paper. 
This work was partially supported by the project AdeCoDS of the French National Research Agency. ## Data Availability Statement The implementation is open-source and can be found in (Bouajjani et al., 2023b).
2308.10363
Effectiveness of wealth-based vs exchange-based tax systems in reducing inequality
In the so-called ``fair'' models of peer-to-peer wealth exchanges, economic inequality tends to reach its maximum value asymptotically. This global trend is evident as the richest continuously accumulate a larger share of wealth at the expense of others. To address the mounting issue of inequality, different strategies of taxes and redistribution are commonly employed. Our study delves into the interplay between wealth and trade (consumption) tax bases, probing their impact on wealth distribution within wealth-conservative economies. The ultimate aim is to unearth an optimal framework that adeptly curbs inequality. Through a meticulous analysis of varying tax rates and the allocation of the collected tax to the most economically vulnerable strata, we unveil a compelling pattern resembling two distinct phases. These phases delineate the most effective systems for inequality mitigation. Our findings underscore the synergistic potential of amalgamating these tax systems, surpassing the individual efficacy of each. This synthesis beckons policymakers to weave together tax rates and precision-targeted redistribution, crafting tax systems that wield the potential for tangible and substantial reductions in economic disparity.
Thiago Dias, Sebastián Gonçalves
2023-08-20T20:44:10Z
http://arxiv.org/abs/2308.10363v1
# Effectiveness of wealth-based _vs_ exchange-based tax systems in reducing inequality ###### Abstract In the so-called "fair" models of peer-to-peer wealth exchanges, economic inequality tends to reach its maximum value asymptotically. This global trend is evident as the richest continuously accumulate a larger share of wealth at the expense of others. To address the mounting issue of inequality, different strategies of taxes and redistribution are commonly employed. Our study delves into the interplay between wealth and trade (consumption) tax bases, probing their impact on wealth distribution within wealth-conservative economies. The ultimate aim is to unearth an optimal framework that adeptly curbs inequality. Through a meticulous analysis of varying tax rates and the allocation of the collected tax to the most economically vulnerable strata, we unveil a compelling pattern resembling two distinct phases. These phases delineate the most effective systems for inequality mitigation. Our findings underscore the synergistic potential of amalgamating these tax systems, surpassing the individual efficacy of each. This synthesis beckons policymakers to weave together tax rates and precision-targeted redistribution, crafting tax systems that wield the potential for tangible and substantial reductions in economic disparity. ## 1 Introduction Empirical studies on the distribution of wealth of individuals in different economies demonstrate the presence of two classes. While the wealthiest 1% to 5% of the population follows a power-law distribution, the distribution of the remaining population fits a Gamma distribution [1; 2; 3; 4]. The increasing gap between the wealth held by each class is a worldwide concern. For instance, in 2020, 43% of the world's wealth was in the hands of the richest 1% of the population. Two years later, it increased to roughly 47% [5; 6]. Were this trend to continue indefinitely, a small group of people would possess all the economic resources while the rest would have nothing. A common measure of inequality is the Gini coefficient, \(G\), which takes into account the relative mean absolute difference of income or wealth of individuals or households. It can assume values from 0 (perfect equality) to 1, where only one individual retains all the available wealth. When \(G=1\), the system is said to be in the condensate state. The prospect of condensation is profoundly undesirable since it implies a situation where a large majority of the population becomes trapped in poverty and the wealth exchange is severely reduced. Such a scenario represents economic collapse [7; 8]. One approach that countries have adopted to combat the growth of inequality is the implementation of taxes and subsequent redistribution. Figure 1 presents the evolution of the mean Gini coefficient of the OECD (The Organization for Economic Cooperation and Development) countries from 2000 to 2020 before and after taxation and redistribution. Data was taken from reference [9]. It demonstrates that transfers of taxes lead to a reduction in the Gini coefficients by approximately 40%. This result highlights the importance of implementing effective taxation and transfer policies in the pursuit of a more equitable society. Various taxation systems can be observed globally, each with its own characteristics and implications [10; 11]. 
The history of the discussion regarding the efficiency and fairness of different taxation policies dates back to 1776, when Adam Smith published his book _The Wealth of Nations_ [12]. It has remained a subject of discussion through the centuries to this day [13; 14; 15; 16]. For example, Bradford and Toder [17] analyzed the equity and simplicity of consumption and capital income-based taxes, arguing that consumption tax does not hinder capital formation and savings due to its neutrality between current and future consumption. Other studies have found that consumption-based taxation is more redistributive than income-based taxation, including labor income [18; 19]. The distinctions between wealth and capital income taxation were explored by Guvenen _et al._, who concluded that collecting taxes from wealth is preferable to taxing capital income since it can enhance productivity, promote economic growth, and lead to redistributional gains. For the sake of simplicity, here we analyze the implications of exchange-based (equivalent to consumption-based) and wealth-based taxation systems. Figure 2 shows the percentages of taxes collected in 2020\({}^{1}\), including data for the World, OECD countries, and China [20]. The figure presents the relative contribution of various tax categories to the total tax revenues. It reveals that property-related taxes represent a lower share compared to the other forms. Conversely, it can be observed that taxes on goods and services, as well as on incomes and profits, are much higher and constitute the majority of the total taxes. The figure makes reference to other taxes, which are payroll, capital gain, and inheritance taxes, along with social security contributions. Footnote 1: The year 2020 was specifically chosen because it is the most recent and complete dataset accessible at the time we collected the data. Econophysics has proven to be very valuable for modeling economic systems using the tools provided by Statistical Physics. Through first principles and numerical simulations, empirical data have been successfully reproduced. Moreover, research in this field has shown that unregulated markets inevitably evolve toward condensation [21; 7; 22]. Most of the models consider money, properties, and capital (hereafter referred to as wealth, \(w\)) as an extensive variable attached to agents that can flow among them [23]. Recent studies have shown an anti-correlation between inequality and liquidity, the average wealth exchanged per unit of time. In general, the liquidity tends to zero when \(G\to 1\) [24; 25; 26]. Hence, the implementation of regulations to prevent condensation becomes crucial to maintain the dynamics of the economy [8]. This work primarily investigates the impact of wealth- and exchange-based taxation systems on inequality through agent-based simulations. We explore a combination of both to determine the most effective approach in reducing inequality within wealth-conservative systems. Before presenting these details, the model is explained in the next section. Figure 1: Evolution of the mean Gini coefficient of countries between 2000 and 2020, before and after taxation and redistribution, from OECD data [9]. ## 2 Kinetic exchange model with mixed taxation forms Our model consists of an ensemble of \(N\) interacting agents with wealth \(\{w_{i}\}\), \(i=1,2,\ldots,N\). Agents are also characterized by their risk-aversion factor \(\{\beta_{i}\}\), which sets the portion of wealth they are willing to put at stake during an exchange. 
At time \(t=0\), each agent is assigned a randomly distributed wealth and risk-aversion factor. While the \(\beta_{i}\) lie in the interval \([0,1)\) and remain fixed, the \(w_{i}\) must satisfy \(\sum_{i}^{N}w_{i}=1\) and are exchanged throughout the simulation. The minimum wealth required for an agent to engage in a trade is defined as \(w_{\min}=1\times 10^{-9}\). If an agent possesses less than \(w_{\min}\), its wealth is set to zero. A Monte-Carlo step (MCS) is the unit of time in which three processes take place: * _exchange of wealth_: pairs of agents, each with \(w_{i}>w_{\min}\), are randomly chosen and exchange part of their wealth; * _tax collection_: a fraction is taken from the agents' wealth and/or from the amount traded during the exchange; * _redistribution_: the tax collected is equally redistributed to the fraction \(\tau\) of the poorest agents, also referred to as the target. The yard-sale rule is employed to determine the amount exchanged between two agents, \(i\) and \(j\), ensuring a fair and equal opportunity for both parties [27; 28]. This rule defines the amount traded (\(\Delta w\)) as \[\Delta w=\min[(1-\beta_{i})w_{i},(1-\beta_{j})w_{j}]. \tag{1}\] Each trade is taxed a fraction \(\varepsilon\lambda\) (tax on exchanges), where \(\lambda\) is the total tax rate and \(\varepsilon\in[0,1]\) is a variable that accounts for the taxation system. With \(\varepsilon=1\), taxes are collected solely on the exchanges. For \(\varepsilon=0\), the taxes are applied just on the agents' wealth. If \(0<\varepsilon<1\), a combination of the two systems is used. Considering two agents, \(i\) and \(j\), and assuming the former wins the exchange, the after-trade wealths are \[w_{i}^{*} =w_{i}+(1-\varepsilon\lambda)\Delta w\] \[w_{j}^{*} =w_{j}-\Delta w, \tag{2}\] where \(w_{i(j)}^{*}\) represents the wealth of agent \(i(j)\) after, and \(w_{i(j)}\) before, the transaction. It is worth noting that although it may appear that the winner pays the tax, this is not entirely accurate, as the tax is typically included in the price of the goods or services being exchanged. After the trade, the fraction \((1-\varepsilon)\lambda\) is taken from the wealth of all the agents. Finally, the amount collected as taxes is distributed equally among the \(\tau N\) poorest agents. Figure 2: Tax collection and different taxation systems in 2020. Data is from reference [20]. The inequality is measured through the Gini coefficient [29], \[G_{k}(t)=\frac{1}{2N\sum_{i}w_{i}(t)}\sum_{ij}^{N}|w_{i}(t)-w_{j}(t)|. \tag{3}\] The index \(k\) in Eq. 3 indicates the taxation system in use. Specifically, \(G_{0}\) represents a wealth-based tax system, and \(G_{1}\) refers to a system where taxes are imposed only on trades or transactions. When no index is given, the Gini coefficient is calculated for a combined system of wealth and transaction taxes. The simulations were carried out with \(N=10^{4}\) interacting agents until equilibrium is achieved. We define equilibrium as the situation in which the mean time-step difference of \(G\) is less than \(1\times 10^{-8}\) for \(10^{3}\) MCS. The results are the average of \(10^{3}\) independent samples for each set of the parameters (\(\lambda\), \(\tau\), and \(\varepsilon\)). ## 3 Taxes and redistribution effect on inequality In this section, we present the results of the model described above with the different tax systems: first for the wealth-based system, then for the transaction-based system, and finally for the mixed system. 
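As a concrete reference for the dynamics defined by Eqs. (1)-(3), the following minimal NumPy sketch implements one possible reading of a Monte-Carlo step: \(N\) random yard-sale exchanges taxed at \(\varepsilon\lambda\), a wealth tax \((1-\varepsilon)\lambda\) applied once per step, and redistribution of the collected amount to the poorest fraction \(\tau\). The number of pair exchanges per step, the per-step scheduling of the wealth tax, the fixed number of steps, and the reduced system size are illustrative assumptions, not necessarily the exact choices used to produce the results below.

```python
import numpy as np

rng = np.random.default_rng(0)

def gini(w):
    # Gini coefficient of Eq. (3), evaluated with the equivalent sorted-form formula
    ws = np.sort(w)
    n = len(ws)
    i = np.arange(1, n + 1)
    return (2.0 * np.sum(i * ws)) / (n * np.sum(ws)) - (n + 1.0) / n

def monte_carlo_step(w, beta, lam, tau, eps, w_min=1e-9):
    # One MCS: N random yard-sale exchanges, tax collection, redistribution
    n = len(w)
    collected = 0.0
    for _ in range(n):
        active = np.flatnonzero(w > w_min)
        if len(active) < 2:
            break
        i, j = rng.choice(active, size=2, replace=False)
        dw = min((1 - beta[i]) * w[i], (1 - beta[j]) * w[j])   # stake, Eq. (1)
        win, lose = (i, j) if rng.random() < 0.5 else (j, i)   # fair coin picks the winner
        w[win] += (1 - eps * lam) * dw                         # Eq. (2), trade tax eps*lam withheld
        w[lose] -= dw
        collected += eps * lam * dw
    wealth_tax = (1 - eps) * lam * w                           # wealth tax on every agent
    w -= wealth_tax
    collected += wealth_tax.sum()
    target = np.argsort(w)[: max(1, int(tau * n))]             # poorest fraction tau
    w[target] += collected / len(target)
    w[w < w_min] = 0.0
    return w

N = 1000                                  # reduced from 10^4 for a quick demonstration
w = rng.random(N); w /= w.sum()           # total wealth normalized to 1
beta = rng.random(N)                      # quenched risk-aversion factors in [0, 1)
for _ in range(200):                      # in practice one iterates until G stabilizes
    w = monte_carlo_step(w, beta, lam=0.33, tau=0.3, eps=0.5)
print(f"G = {gini(w):.3f}")
```

Because every taxed amount is returned through redistribution, the sketch conserves total wealth by construction, mirroring the wealth-conservative setting of the model.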
We emphasize that wealth is conserved, so no wealth is created or destroyed during the simulations. ### Wealth-based taxes Models with taxes on wealth have already been studied in different works. Specifically, we mention references [26] and [28]. For comparative purposes, those results are reproduced here with our model. In Fig. 3, we show the dependence of the Gini coefficient on \(\lambda\) and \(\tau\). It can be observed that for a significant reduction in \(G\), both the tax rate and the fraction of agents participating in the redistributive process should be higher than 4--5%. Furthermore, Fig. 3 reveals a non-trivial relationship between \(G_{0}\), \(\lambda\), and \(\tau\). That is, for each collected tax rate \(\lambda\), there is an optimal target fraction \(\tau\) which produces the lowest Gini coefficient \(G_{0}\). Therefore, it is possible to find the line along which the variation of \(G_{0}(\lambda,\tau)\) is the steepest (we show that curve in Fig. 5(b)). Figure 3: Contour plot of equilibrium \(G_{0}\) (wealth tax base) as functions of the tax rate \(\lambda\) and the fraction \(\tau\) of poorest agents in log-log scale. ### Exchange-based taxes We present here the analysis of the effects of trade taxation (\(\varepsilon=1\)) on economic inequality measured by the Gini index \(G_{1}\), displayed in Fig. 4 as a function of \(\lambda\) and \(\tau\). Clearly, this taxation policy is inefficient when \(\lambda<0.25\), as \(G_{1}\) exceeds \(0.9\) for almost every \(\tau\) ranging from \(0.01\) to \(1\). Only when \(\lambda\gtrapprox 1/3\) can one see values of \(G_{1}\leq 0.5\). It is important to emphasize that even a Gini value of \(0.5\) indicates a considerable level of economic inequality. Clearly, as depicted in Fig. 4, this tax system also possesses an optimal target fraction corresponding to the point of least inequality (smallest Gini) for a given tax rate. However, somewhat in contrast to the corresponding plot of the wealth-based system (Fig. 3), the optimal line of \(G_{1}\) follows an almost horizontal path in the \(\lambda\)-\(\tau\) space, which means that there is an almost constant target fraction, independent of the tax rate. Figure 5 shows the impacts of those tax policies on the Gini coefficient with (a) universal redistribution, \(\tau=1\); and (b) optimal targeted redistribution. It is noteworthy that inequality in wealth-conservative economies is lower with wealth-based tax regardless of \(\lambda\) for both universal and optimal targeted redistributions. Taking as an example the case when the tax rate is similar to what OECD countries apply (\(\lambda\approx 1/3\)) and redistribution is targeted, one can see that \(G_{1}\) is approximately \(4.5\) times greater than \(G_{0}\). Note that a totally egalitarian society (\(G=0\)) is not achieved with the tax on trade even with \(\lambda=1\); such an outcome, although unrealistic, is possible when \(\varepsilon=0\) and \(\tau=1\). This leads to the question: "Is there a specific combination of tax rate and target fraction where one system is superior to the other?" We seek to address this question by illustrating the difference \(G_{1}-G_{0}\) for various pairs \((\lambda,\tau)\) in the top panel of Figure 6. Positive values of \(G_{1}-G_{0}\) indicate that wealth-based taxation is more effective, whereas negative values mean that trade taxation is preferable for diminishing inequality. One can see that for tax rates larger than \(0.2\), trade taxation has the greater impact, depending on the target. 
The lower panels displayed in Fig. 6 illustrate the relationship of \(G\) and \(\tau\) across three distinct tax rates. When \(\lambda=0.11\), clearly, there is no advantage of applying a transaction-based tax system as the Gini index stays almost at the maximum value no matter what the target fraction (\(\tau\)) is, while for that case, the Gini index changes dramatically in the wealth-based system, with Gini being lower than \(0.25\) if \(\tau\approx 0.25\). The other two plots (\(\lambda=0.32\) and \(0.64\)) indicate that exchange-based taxation is preferable for low redistribution target fractions: \(\tau<0.1\) in the middle panel, \(\tau<0.55\) in the right one. Figure 4: Contour plot of equilibrium \(G_{1}\) (consumption tax base) as functions of the tax rate \(\lambda\) and the fraction \(\tau\) of poorest agents in log-linear scale. Figure 5: Equilibrium Gini coefficients for (a) universal and (b) optimal targeted redistributions considering taxation on wealth (\(G_{0}\)) and on trade (\(G_{1}\)). Figure 6: Top: Difference between equilibrium \(G_{1}\) and \(G_{0}\) and its dependency on \(\lambda\) and \(\tau\). The orange region represents the combinations of tax rates and target fractions where wealth-based tax is more efficient than the transaction-based. The dashed line shows where both policies are equivalent regarding the reduction of inequality. Bottom: Gini coefficients for both taxation systems as functions of \(\tau\) for three different tax rates: \(\lambda=0.11\), \(0.32\), and \(0.64\). ### Wealth and transaction taxes The taxation system in real economies takes into account a combination of taxes on wealth and transactions. Accordingly, we considered mixtures of both and how they impact inequality. We vary \(\varepsilon\) from \(0\) to \(1\) for each target and tax rate. Interestingly, we find that for \(\lambda\geq 0.39\) there are combinations of \(\lambda\) and \(\tau\) in which this hybrid taxation is preferable. Representative \(G(\varepsilon,\tau)\) for two different tax rates are shown in Fig. 7. In the left panel (\(\lambda=0.16\)) one observes that the minimum Gini coefficient, \(G_{\min}\), is approximately \(0.14\), which occurs when solely wealth is taxed. Conversely, in the case when \(\lambda=0.44\) (right panel), \(G_{\min}\approx 0.099\) when \(\varepsilon\) remains between \(1/3\) and \(0.48\). This reduction in \(G_{\min}\) is an indication that mixed taxation policies may be the best option to reduce inequality. In both cases, the target for achieving the minima should be larger than \(50\%\) of the poorest agents. Most of the governmental projects, however, restrict the allocations to a very small portion of the poorest individuals, resulting in negligible efficiency in terms of Gini coefficient reduction. For a significant impact on reducing inequality, a broader target is necessary in both wealth and trade bases, as well as in a combination of the two. This ensures that the taxation approach results in real and substantial reductions in inequality. ## 4 Conclusion The trend of increasing economic inequality leads to a state of condensation, where the economy reaches its "thermal death", characterized by the absence of wealth flowing among individuals. This situation highlights the necessity for state intervention. Taxation and subsequent redistribution are perhaps the most common methods to impede rising inequality. 
Through agent-based simulations we examine the impact of trade and wealth bases on economic inequality in wealth-conservative economies. Our results demonstrate the existence of an optimal redistributional target, corresponding to the minimum \(G\) for each tax rate, in both taxation systems. Notably, wealth-based taxation outperforms taxation on exchange in universal and optimal targeted redistributions. For non-optimal targets we see a phase diagram that exhibits two different regions, each one related to the higher impact on lowering \(G\) for the respective taxation system. Considering combinations of wealth and trade tax bases, our results show that mixtures of the taxation systems can have greater impact in reducing the Gini coefficient. Specifically, we notice that for \(\lambda\geq 0.39\) collecting approximately \(1/3\) of the total tax rate on exchanges is necessary to achieve \(G_{\min}\). In the other cases, taxation solely on wealth is preferable in the context of minimizing inequality. Figure 7: Impact of the target \(\tau\) and the tax combination \(\varepsilon\) parameters on the Gini coefficient for two tax rates (\(\lambda=0.16\) and \(0.44\)) in combined taxation systems. In spite of its higher effectiveness in reducing inequality, wealth-based taxation contributes a smaller share to the total tax revenue compared to trade-based taxation, which encompasses consumption, income, and various forms of wealth transfers. Our findings underscore the importance of striking a balance between the two taxation systems to achieve effective reduction of economic inequality.
2307.06384
Machine learning accelerated discovery of corrosion-resistant high-entropy alloys
Corrosion has a wide impact on society, causing catastrophic damage to structurally engineered components. An emerging class of corrosion-resistant materials are high-entropy alloys. However, high-entropy alloys live in high-dimensional composition and configuration space, making materials designs via experimental trial-and-error or brute-force ab initio calculations almost impossible. Here we develop a physics-informed machine-learning framework to identify corrosion-resistant high-entropy alloys. Three metrics are used to evaluate the corrosion resistance, including single-phase formability, surface energy and Pilling-Bedworth ratios. We used random forest models to predict the single-phase formability, trained on an experimental dataset. Machine learning inter-atomic potentials were employed to calculate surface energies and Pilling-Bedworth ratios, which are trained on first-principles data fast sampled using embedded atom models. A combination of random forest models and high-fidelity machine learning potentials represents the first of its kind to relate chemical compositions to corrosion resistance of high-entropy alloys, paving the way for automatic design of materials with superior corrosion protection. This framework was demonstrated on AlCrFeCoNi high-entropy alloys and we identified composition regions with high corrosion resistance. Machine learning predicted lattice constants and surface energies are consistent with values by first-principles calculations. The predicted single-phase formability and corrosion-resistant compositions of AlCrFeCoNi agree well with experiments. This framework is general in its application and applicable to other materials, enabling high-throughput screening of material candidates and potentially reducing the turnaround time for integrated computational materials engineering.
Cheng Zeng, Andrew Neils, Jack Lesko, Nathan Post
2023-07-12T18:13:20Z
http://arxiv.org/abs/2307.06384v3
# Machine learning accelerated discovery of corrosion-resistant high-entropy alloys ###### Abstract Corrosion has a wide impact on society, causing catastrophic damage to structurally engineered components. An emerging class of corrosion-resistant materials are high-entropy alloys. However, high-entropy alloys live in high-dimensional composition and configuration space, making materials designs via experimental trial-and-error or brute-force ab initio calculations almost impossible. Here we develop a physics-informed machine-learning framework to identify corrosion-resistant high-entropy alloys. Three metrics are used to evaluate the corrosion resistance, including single-phase formability, surface energy and Pilling-Bedworth ratios. We used random forest models to predict the single-phase formability, trained on an experimental dataset. Machine learning inter-atomic potentials were employed to calculate surface energies and Pilling-Bedworth ratios, which are trained on first-principles data fast sampled using embedded atom models. A combination of random forest models and high-fidelity machine learning potentials represents the first of its kind to relate chemical compositions to corrosion resistance of high-entropy alloys, paving the way for automatic design of materials with superior corrosion protection. This framework was demonstrated on AlCrFeCoNi high-entropy alloys and we identified composition regions with high corrosion resistance. Machine learning predicted lattice constants and surface energies are consistent with values by first-principles calculations. The predicted single-phase formability and corrosion-resistant compositions of AlCrFeCoNi agree well with experiments. This framework is general in its application and applicable to other materials, enabling high-throughput screening of material candidates and potentially reducing the turnaround time for integrated computational materials engineering. **Keywords:** High-entropy alloy, Corrosion protection, Machine learning potential, Random forest classification ## 1 Introduction High-entropy alloys are generally defined as alloys comprising no fewer than four elements, with the percentage of each principal element between 5 at.% and 35 at.%. The high-entropy concept was coined by Cantor [1] and Yeh [2] for equiatomic alloys with no less than five elements in 2004, at almost the same time. The definition has been slightly extended to non-equimolar alloys with no less than four principal elements. This new class of materials has attracted increasing attention and has been found to display superior performance in mechanical properties [3, 4, 5, 6, 7], radiation resistance [8, 9] and corrosion resistance [10, 11, 12]. The high entropy of mixing usually leads to the formation of a disordered single phase for high-entropy alloys, such as face-centered cubic (FCC), body-centered cubic (BCC) and hexagonal close-packed (HCP) structures [13, 14]. The homogeneous single phase improves passivity. In addition, high-entropy alloys can consist of elements with high passivation potency such as nickel, chromium, aluminum and titanium, leading to high pitting corrosion resistance. Conventional corrosion-resistant alloys are mostly found by serendipity. Advances in physical theories, computational hardware and algorithms allow for rapid screening of candidate materials, paving the way for integrated computational materials engineering which aims to demystify the linkage between process, structure, property and performance. 
However, computational screening of corrosion-resistant alloys is challenging in that many factors can influence corrosion performance, including environmental conditions, chemical compositions and microstructures. Moreover, fundamental understanding of corrosion and various corrosion types adds more complexity to the material design. Recent works have been focused on building reliable databases for corrosion informatics, identifying reliable descriptors for corrosion performance and understanding the corrosion kinetics with multi-physics simulations [15, 16, 17]. Nyby et al. compiled a database for four types of alloys with an emphasis on six metrics used to describe their localized pitting corrosion [16]. Diao et al. collected a dataset for low-alloy steel and built machine learning models to predict their corrosion rate [18]. Roy et al. used machine learning algorithms to select the top three descriptors for prediction of the corrosion rates, including pH of the medium, halide concentration and composition of elements with the minimum reduction potential [19]. Taylor et al. identified a number of corrosion descriptors, such as cohesive energies, oxide formation energies and surface enrichment of passive elements, and related those descriptors to corrosion resistance with respect to surface passivation, dissolution and microstructure control [15]. Ke and Taylor reviewed the role of density functional theory (DFT) in modeling corrosion, and they pointed out corrosion metrics accessible by DFT, including oxygen and chloride adsorption energy, dissolution potential and surface energy [20]. Other computational methods based on peridynamics and phase-field modeling are often used to study the evolution of pitting corrosion [21, 17]. Unfortunately, the complexity of corrosion process makes it almost impossible to relate chemical compositions and microstructures of alloys directly to the corrosion performance. The vast composition and microstructure space of high-entropy alloys create complexity for the materials design problem. A workaround is multi-objective optimization based on empirical rules, which allows for screening material candidates with relative superior corrosion resistance. While some data-driven approaches and first-principles calculations exist to identify corrosion descriptors, those data-driven methods in nature lack physical insights and first-principles calculations are costly computationally. A physically meaningful and efficient approach to relating compositions with corrosion performance is still lacking. The objective of this work is to bridge the technical gap for locating high-entropy alloys with potential high corrosion resistance in the high-dimensional composition space, in particular for pitting corrosion. We focused on pitting corrosion because the rate of localized pitting corrosion can be faster than uniform corrosion by orders of magnitude, hence pitting corrosion is more critical in applications where it exists [16]. A complete picture of pitting corrosion requires multi-scale and multi-physics simulations to understand the formation of passive film, passive film breakdown and pit growth stability [22], which to the best of our knowledge are not available. Pitting corrosion resistance is empirically associated with the ability of alloys to form a passive film, protectiveness of the passive film and pitting growth rate when the passive film breaks down. 
In this work, we chose three corrosion metrics considered to be influential to pitting corrosion, including _single phase formability_, _Pilling-Bedworth ratio_ of passive elements, _surface energy_. A physics-informed machine learning (ML) framework was introduced to quantify the three corrosion metrics for a wide range of compositions of high-entropy alloys. The compositions with desired corrosion performances were identified by mapping out those corrosion metrics as a function of compositions. We tested this framework for AlCrFeCoNi high-entropy alloys because they belong to an emerging class of materials with superior mechanical properties and corrosion resistance [10, 23]. ## 2 Theories and Methods The aim of this work is to develop machine learning methods to quantify three metrics dictating pitting corrosion resistance of high-entropy alloys. The inputs of the machine learning models are the chemical compositions and the outputs are the corrosion metric. Random forest classification models trained on experimental data were used to decide whether a single phase is likely to form or not for a given chemical composition, whereas machine learning potentials trained on first-principles data were employed to calculate Pilling-Bedworth ratios and surface energies. In this section, we discuss in detail the qualitative relationship of the three metrics with pitting corrosion resistance and how to quantify each of them by machine learning models. The overall workflow of how to train the machine learning models is illustrated in Figure 1. ### Single phase formability The single phase formability was evaluated by a random forest classification model, also known as random forest classifier. It is crucial to form a homogeneous single phase for enhanced corrosion protection because it enhances passivity and prevents the fast galvanic corrosion. A physically rigorous approach to model single phase formation is thermodynamic modeling carried out with CALPHAD. The reliability of CALPHAD calculations is determined by the quality of experimental data as well as relevant first-principles calculations [24]. Instead of thermodynamic modeling, we used random forest models trained on experimental data to predict the probability of forming single phases for an arbitrary alloy composition. The experimental dataset was summarized by Yan et al. [25]. The workflow for single phase formability is shown in the top row of Figure 1. The raw data in total has 1807 entries and it takes input as the chemical compositions and output as the indicator for single phase formability. Single-phase alloys are labeled as '1', whereas multiple-phase alloys are labeled as '0'. It should be noted that phase formations are also dependent on the manufacturing processes and thermal history. It is assumed that most alloys found in this experimental dataset were processed with similar techniques and environmental conditions, and the remaining exceptions represent outliers and noise in the dataset, whose impact on the robustness of ML models will be diminished by a cross-validation strategy due to the averaging effect. Each input composition was converted to eight physical descriptors, as used by Yan et al [25], including atomic size difference, mixing enthalpy, mixing entropy, Pauli electronegativity difference, molar volume, bulk modulus, melting temperature and valence electron concentration. Next, random forest models were trained to relate the eight descriptors to the single phase formability. 
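The descriptor-to-label pipeline just described can be sketched with scikit-learn as below. The file name, the descriptor column names, and the hyperparameter grid are placeholders (the actual descriptors follow Yan et al. [25] and the tuned hyperparameters are given in the SI); the sketch only illustrates the held-out-test plus 5-fold cross-validation protocol, not the production model.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, GridSearchCV

# Hypothetical table: one row per alloy, eight physical descriptors plus a 0/1 label
# ("single_phase"); computing the descriptors from a composition is not shown here.
FEATURES = ["atomic_size_diff", "mixing_enthalpy", "mixing_entropy",
            "electronegativity_diff", "molar_volume", "bulk_modulus",
            "melting_temperature", "vec"]
data = pd.read_csv("hea_phase_dataset.csv")        # placeholder file name
X, y = data[FEATURES].to_numpy(), data["single_phase"].to_numpy()

# Hold out 20% for testing; tune hyperparameters with 5-fold CV on the remaining 80%.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=True, random_state=0)
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 20]},  # illustrative grid
    cv=5,
)
search.fit(X_tr, y_tr)
print("test accuracy:", search.best_estimator_.score(X_te, y_te))

# Probability of single-phase formation for a new descriptor vector
p_single = search.best_estimator_.predict_proba(X_te[:1])[0, 1]
```

Repeating the split with several random states, as described next, gives the uncertainty of the reported test accuracy.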
The trained random forest models are thus able to predict whether alloys will form a single phase or multiple phases for a given composition. For the purpose of model validation, we held out 20% of the entire dataset for testing, which was used to examine the prediction accuracy of the final model on unseen data, hence avoiding overfitting. Five-fold cross-validation was used on the remaining 80% of the dataset to tune the hyperparameters and to train the random forest classifier. The entire dataset was shuffled before splitting, with a given random state, into a test set and a dataset for cross-validation, and ten random states were used to estimate uncertainties due to data splitting. Details of each descriptor, exploratory data analysis (Figures S1-S3), and hyperparameters of random forest models are included in the supporting information (SI). ### Pilling-Bedworth ratio The Pilling-Bedworth ratios and surface energies were quantified by machine learning potentials, which are trained following the procedure outlined in the bottom row of Figure 1. Figure 1: Workflow of machine learning accelerated discovery of corrosion-resistant high-entropy alloys. The Pilling-Bedworth ratio (PBR) was used to describe the growth stress of an oxidation process. It describes the volume change due to oxidation on an alloy surface, which follows Eq. (1) with respect to the oxidation of a metallic element B. \[PBR_{\text{B}}=\frac{\text{Volume of a mole of B}_{\text{x}}\text{O}_{\text{y}}}{ \text{Volume of x moles of B in metal}} \tag{1}\] It is well accepted that when \(PBR<1\), the formed oxide offers no protection to the alloy surface. If \(1\leq PBR\leq 2\), the oxide forms a passive layer and prevents structural alloys from direct corrosion although some compression stresses develop inside the oxide. When \(PBR\gg 2\), the compression stresses become significant, causing the breakdown of the oxides. This simple analysis explains well why corrosion-resistant alloys typically contain Al, Zr, Ni, Ti, Fe or Cr, whose PBR values are larger than 1 and not much larger than 2 [12, 26, 27]. When it comes to the oxidation of alloys, one or more elements may oxidize and form passive layers. Hence we need to identify elements that are thermodynamically preferential for oxidation. We can then calculate the PBR of the identified passive element by analyzing the volume change due to its oxidation. Xu and Gao introduced methods to compute PBR for the oxidation of alloys [28]. There are two possible cases for PBR values of alloys, depending on the relative diffusion rate of the passive element in alloys versus that in oxides. Generally the diffusion rate of passive elements within alloys is much faster than the rate within oxides, so that alloy compositions near the surface can maintain a stoichiometry close to the original composition. For example, the diffusion coefficient of Cr in CoCrFeMnNi high-entropy alloys at 900 \({}^{\circ}\)C is about \(10^{-12}\) cm\({}^{2}\)/s while self-diffusion of Cr in Cr\({}_{2}\)O\({}_{3}\) has a coefficient on the order of \(10^{-21}\) to \(10^{-17}\) cm\({}^{2}\)/s [29, 30]. We recommend that readers consult the work of Xu and Gao [28] for calculation details of PBR for oxidation of alloys. For our benchmark material system AlCrFeCoNi, we examined the passivation of the Cr element although thermodynamic data, as tabulated in the SI, favors the formation of Al oxides over Cr oxides. 
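As an illustration of Eq. (1), the small helper below evaluates the Pilling-Bedworth ratio of an oxide B\({}_{x}\)O\({}_{y}\) from its molar mass and density and from the volume per B atom in the alloy (e.g., \(a^{3}/4\) for an FCC lattice whose lattice constant \(a\) is taken from an MTP relaxation). The numerical values in the example are indicative only and are not taken from the dataset used in this work.

```python
N_A = 6.022e23  # Avogadro's number [1/mol]

def pilling_bedworth_ratio(oxide_molar_mass, oxide_density, x_metal_atoms, metal_atomic_volume_A3):
    """Pilling-Bedworth ratio of Eq. (1) for an oxide B_x O_y.

    oxide_molar_mass       : g/mol of the oxide
    oxide_density          : g/cm^3 of the oxide
    x_metal_atoms          : number of B atoms per oxide formula unit (x)
    metal_atomic_volume_A3 : volume per B atom in the alloy, in Angstrom^3
    """
    v_oxide = oxide_molar_mass / oxide_density                         # cm^3 per mole of oxide
    v_metal = x_metal_atoms * metal_atomic_volume_A3 * 1e-24 * N_A     # cm^3 per x moles of B
    return v_oxide / v_metal

# Indicative example: Cr2O3 (molar mass ~152 g/mol, density ~5.22 g/cm^3) and Cr in an
# FCC lattice with a ~ 3.56 Angstrom, giving a ratio of roughly 2.1.
a = 3.56
print(pilling_bedworth_ratio(152.0, 5.22, x_metal_atoms=2, metal_atomic_volume_A3=a**3 / 4))
```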
Experimental data suggest that the addition of Al in CrFeCoNi alloys reduces passive film protection, implying that the major protection against pitting corrosion might be offered by the passivation of Cr [31]. Therefore, we consider the oxidation of Cr, which forms Cr\({}_{2}\)O\({}_{3}\) with a molar mass of 152 g/mol and a density of 5.22 g/cm\({}^{3}\). The volume of Cr in the alloy was calculated using an FCC crystal whose lattice parameters were obtained by machine learning potentials. ### Surface energy The role of crystallographic orientation and the corresponding surface energy in corrosion protection was first investigated by Song et al [32]. It is found that a densely packed surface with a lower surface energy could lead to a stronger bonding between surface atoms that impedes the dissolution of surface atoms into solution. The electrochemical dissolution rate \(I_{A}\) of a metal 'A' with an exposed crystal plane (h,k,l) at a temperature \(T\) follows the relation: \[I_{A,(h,k,l)}\propto\exp\left(\frac{\alpha\gamma_{(h,k,l)}}{RT}\right) \tag{2}\] where R is the gas constant, \(\gamma_{(h,k,l)}\) is the surface energy and \(\alpha\) is a transition coefficient to relate surface energy to dissolution activation energy. Ramachandran and Nosonovsky found that a lower surface energy leads to a more hydrophobic surface, and hydrophobic surfaces tend to show higher corrosion resistance [33]. In this work, we used surface energy as a metric to describe the trend of average dissolution of atoms on the crystallographic plane FCC(111) of AlCrFeCoNi alloys with different compositions. An FCC(111) facet was used because of its high stability over other types of facets. It is arguable that a higher surface energy is associated with a higher average dissolution rate, resulting in faster pitting growth, although a more rigorous treatment may need to take into account sequential atom-by-atom dissolution on a metallic surface constrained by broken passive films formed on top of it, which we elected not to consider for the sake of simplicity. The surface energy of a facet reads as: \[\gamma=\frac{E_{\rm slab}-E_{\rm bulk}}{2A} \tag{3}\] where \(E_{\rm slab}\) and \(E_{\rm bulk}\) are the respective potential energies of the FCC(111) facet and the bulk cell, and A is the exposed area of the facet. The bulk cells used to calculate surface energies are of L1\({}_{2}\) structures, and the surface structures are the putative most stable structures found by Markov chain Monte-Carlo (MCMC) simulations. All items in Eq. 3 were found by atomistic modeling using machine learning potentials. The details of MCMC simulations are provided in the SI. ### Machine learning potentials Potential energy surfaces (PESs) represent one-to-one mappings between atomic positions (\(\{R\}\)) and potential energy (\(E\)) of a material system. PESs provide a plethora of information for material systems. For example, local minima on PESs represent stable states and the minimum energy trajectory connecting two local minima indicates a fundamental reaction pathway. The most commonly used methods to build reliable PESs are DFT calculations. However, standard DFT calculations are limited to hundreds of atoms due to the formidable \(\mathcal{O}(M^{2-3})\) scaling with system sizes (M), such as numbers of basis sets, atoms or electrons [34]. It is thus computationally prohibitive to sample all points on _ab initio_ PESs. 
One should note that DFT, first-principles and _ab initio_ calculations are used interchangeably as they have the same meaning in this work. In the past decade, fitting _ab initio_ PESs with machine learning (ML) algorithms has gained increasing momentum, and the ML-fitted PESs are termed machine learning potentials (MLPs). Most MLPs rely on the nearsightedness principle [35], also known as _all chemistry is local_, implying that the total potential energy of a system with \(N\) atoms can be largely decomposed into a linear sum of all atomic contributions and each atomic contribution comes from the atom \(i\) interacting with neighboring atoms in a cutoff region, written as Eq. 4. \[E=E(\{R\})=\sum_{i=1}^{N}E_{i}=\sum_{i=1}^{N}E_{i}^{(\rm local)} \tag{4}\] Thanks to and only because of the nearsightedness principle, MLPs can be trained with small-size first-principles data while allowing for reliable predictions on much larger systems [36]. It should be noted that the nearsightedness of first-principles calculations and machine learning algorithms should be well aligned to strike a good balance between computational efficiency and prediction accuracy [37]. A variety of ML algorithms have proven effective in fitting _ab initio_ PESs, such as neural networks [36, 38], Gaussian processes [39] and kernel ridge regression [40]. MLPs find applications in many fields, ranging from small molecules to nanoparticle alloy catalysts and extended systems [41, 42, 43]. In this work, we employed a class of machine learning potentials termed moment tensor potentials (MTPs) for the high-entropy alloys, which were found to be superior to other types of MLPs when tested on single-element systems by various simulation tasks [44]. Readers should refer to the work of Shapeev for implementation details of MTPs [45, 46]. MTPs were trained with systematically generated training data for high-entropy alloys AlCrFeCoNi, as outlined in the bottom row of Figure 1. We primed the algorithm with FCC bulk and surface structures. For each of the initial structures, atomic positions and lattice geometry are simultaneously optimized to find the stable structure, the process of which is termed structure optimization, also known as relaxation. The structure optimization used the embedded atom method (EAM) developed by Farkas and Caro [47]. Starting with relaxed structures and using EAM, we also sampled a diverse pool of atomic configurations via molecular dynamics and Monte-Carlo simulations. The molecular dynamics simulations were used to perturb atomic positions, whereas the Monte-Carlo simulations were adopted to simulate exchanges of two different atoms. Electronic structure calculations with GPAW were performed to refine the energy and forces of a part of the EAM-sampled configurations [48]. Additionally, we carried out first-principles calculations for simple bulk and surface structures with numbers of elements ranging from one to five. Special quasi-random structures were generated using the tool in the alloy theoretic automated toolkit (ATAT) for simple bulk structures with more than two elements to best approximate a random solid solution [49, 50]. In total, 1569 first-principles structures were curated. We then trained MTPs upon those data, and we used the MTPs to carry out simulations needed to calculate PBRs for the oxidation of Cr and surface energies of FCC(111) facets. Atomic structures were created and manipulated with the Atomic Simulation Environment (ASE) [51] and LAMMPS [52]. 
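A minimal ASE-based sketch of the surface-energy evaluation in Eq. (3) is given below. It uses ASE's built-in EMT calculator on a single-element Ni slab purely as a stand-in, since the interface to the trained MTPs is configuration-specific; in this work the slab and bulk energies entering Eq. (3) are evaluated with the MTPs on MCMC-optimized alloy surfaces, not with EMT. The slab size and vacuum thickness below are illustrative choices.

```python
import numpy as np
from ase.build import bulk, fcc111
from ase.calculators.emt import EMT   # toy stand-in; the actual energies come from MTPs

EV_PER_A2_TO_J_PER_M2 = 16.0218       # unit conversion eV/Angstrom^2 -> J/m^2

def surface_energy_fcc111(element, a, size=(4, 4, 6), vacuum=10.0):
    """gamma = (E_slab - E_bulk) / (2A), Eq. (3), with the bulk reference scaled
    to the same number of atoms as the slab."""
    b = bulk(element, "fcc", a=a)
    b.calc = EMT()
    e_bulk_per_atom = b.get_potential_energy() / len(b)

    slab = fcc111(element, size=size, a=a, vacuum=vacuum)
    slab.calc = EMT()
    e_slab = slab.get_potential_energy()

    area = np.linalg.norm(np.cross(slab.cell[0], slab.cell[1]))   # in-plane cell area
    gamma = (e_slab - len(slab) * e_bulk_per_atom) / (2 * area)
    return gamma * EV_PER_A2_TO_J_PER_M2

# EMT value for a Ni(111) slab, in J/m^2; it will not reproduce the MTP/DFT numbers of Table 1.
print(surface_energy_fcc111("Ni", a=3.52))
```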
Computational settings of MTPs and GPAW calculations and details of the training data can be found in the SI. MTP enabled simulations to calculate relevant corrosion metrics are also elaborated in the SI. Scripts and notebooks for atomistic modeling and curation of training data will be supplied as a supporting dataset. ### Mapping corrosion metrics with respect to Al and Cr compositions in AlCrFeCoNi We tested the above methods on predicting the three corrosion metrics for AlCrFeCoNi high-entropy alloys. We varied the compositions of Al and Cr while equalizing remaining Fe, Co and Ni compositions. For a given composition, its single-phase formability was calculated by a random forest classifier and its Pilling-Bedworth ratio and surface energy were quantified by the MTPs. Therefore, we mapped the three corrosion metrics as a function of AlCrFeCoNi compositions, based on which we can identify composition regions with desired values for all corrosion metrics, which are potentially associated with superior relative corrosion resistance. We changed Al compositions in the range of 0-25 at.%, and Cr compositions in the range of 10-30 at.% (see Figure 3). The lower bound for the Cr composition was set as 10% because Cr is the passive element, and a percolation model for passivation of alloys suggests that the smallest amount of elements to enable passivation is around 10% [53]. In other words, an alloy only forms a continuous and protective passive film with the passive element being of no less than 10 at.%. For single-phase formability, an interval of 1% was used for both Al and Cr composition mesh grids as the inference by the trained random forest classifier for each composition took less than 1 second. For PBR\({}_{\rm Cr}\), an interval of 5% was used to find the lattice parameters of L1\({}_{2}\) bulk cells using MTPs. The lattice parameters of MTP-relaxed structures were then fitted by a linear regression as a function of Al and Cr compositions. The fitted function was used to calculate lattice parameters for arbitrary Al and Cr compositions. The volume of Cr will be used to calculate PBR\({}_{\rm Cr}\). In terms of surface energies, a composition interval of 5% was used to generate structures needed. ## 3 Results and discussion ### ML prediction accuracy and transferability ML models are often criticized for their poor transferability to data that are not existent in the training data set. As a result, it is of great importance to evaluate ML model performance before we deploy the models. #### 3.1.1 Random forest classifier The experimental dataset used to train the random forest classifier includes in total 1807 entries. The 1807 data points were split to 80% and 20% for cross-validation and test, respectively. Hence 361 data points were used for testing, and ten different random states for the data splitting were used to obtain the standard deviation of model prediction accuracy on the test set. The random forest classifier gave a prediction accuracy of 89% on the test set with a standard deviation of 1%. The best model with the highest accuracy on the test set was chosen for subsequent inferences. We also studied the feature importance using shapley values based on game theories [54]. Mixing entropy, atomic size difference and melting temperature were identified as the top three most important features, largely consistent with the work of Yan et al [25]. More feature importance results can be found in Figure S4 of SI. 
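The Shapley-value analysis can be reproduced in spirit with the shap package; the sketch below assumes the fitted classifier and descriptor matrix from the earlier single-phase-formability sketch (the names `search`, `X_tr`, and `FEATURES` are hypothetical carry-overs) and reduces the per-sample attributions to a global importance score. The return format of `shap_values` is version-dependent, hence the defensive handling; this only mirrors the analysis behind Figure S4.

```python
import numpy as np
import shap

model = search.best_estimator_            # fitted RandomForestClassifier from the earlier sketch
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X_tr)
# Older shap versions return a list with one array per class, newer ones a single 3-D array;
# keep the attributions for the "single phase" class (label 1) in either case.
sv = sv[1] if isinstance(sv, list) else sv[..., 1]

importance = np.abs(sv).mean(axis=0)      # mean |SHAP| per descriptor
for name, score in sorted(zip(FEATURES, importance), key=lambda t: -t[1]):
    print(f"{name:24s} {score:.3f}")
```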
#### 3.1.2 Moment tensor potentials The trained MTP gave \(\sim\)5 meV/atom for the average absolute difference of energy and 0.058 eV/Å for the average absolute difference of atomic forces. To further validate the MTPs, we compared predicted lattice constants of single-element FCC crystals to values by DFT. We also compared the predicted surface energies of single-element FCC(111) facets with DFT. The comparison is summarized in Table 1. \begin{table} \begin{tabular}{c c c|c c} \hline \hline & \multicolumn{2}{c|}{Lattice constant [Å]} & \multicolumn{2}{c}{Surface energy [J/m\({}^{2}\)]} \\ \hline Element & MTP & DFT & MTP & DFT \\ \hline Al & 4.08 & 4.04 & 0.77 & 0.86 \\ Cr & 3.62 & 3.62 & 2.61 & 2.65 \\ Fe & 3.44 & 3.46 & 2.49 & 2.45 \\ Co & 3.49 & 3.46 & 1.87 & 2.12 \\ Ni & 3.51 & 3.52 & 1.93 & 2.14 \\ \hline \hline \end{tabular} \end{table} Table 1: Lattice constants and FCC(111) surface energies for single-element structures: DFT _versus_ MTP. One can see that MTP-predicted lattice constants are close to DFT calculations, with relative deviations around 1%. In terms of FCC(111) surface energies, although large deviations exist for elements Ni, Co and Al, the relative order of surface energy magnitude by MTP is in accordance with that by DFT. We also compared the phase stability among various single-crystal structures, including FCC random alloys (FCC_A1), FCC L\(1_{2}\) ordered structures and BCC B2 structures. This comparison was used to test the ability of MTPs to predict the most stable phase of Al\({}_{x}\)(CrFeNiCo)\({}_{100-x}\) as a function of Al composition. Experimental observation and EAM-based calculations suggest that a low Al composition favors FCC-type phases while B2 phases are thermodynamically more stable at higher Al compositions [7, 47]. We calculated the cohesive energies of L1\({}_{2}\) and B2 for Al compositions up to 40%, with all Al in one sublattice and the remaining four elements randomly distributed. The FCC_A1 structures were generated by randomly placing the atoms in an FCC lattice. Figure 2 shows that at low Al contents (0-10%), L1\({}_{2}\) and FCC_A1 are both more stable than B2 phases. When Al compositions increase (10-20%), the ordered L1\({}_{2}\) becomes the most stable phase. In comparison, larger Al compositions (\(>20\%\)) favor the formation of ordered B2 phases, in good agreement with well-parameterized empirical potentials and experiments [7, 47]. The dashed line represents the most stable phases at each Al composition. One should note that the first-principles data used to train MTPs only consist of FCC structures. Despite not seeing any BCC structures, the MTPs accurately predicted the trends of phase stability that are consistent with experiments and first-principles data, indicating decent transferability of the MTPs. Figure 2: Cohesive energies of the FCC_A1, L1\({}_{2}\), and the ordered B2 phases for Al\({}_{x}\)(CrFeCoNi)\({}_{100-x}\) as a function of Al compositions. The most stable phases at all Al compositions are connected with dashed lines to guide the eyes. ### Variation of three metrics as a function of Al and Cr compositions in AlCrFeCoNi We examined the proposed machine learning framework by studying the three corrosion metrics for AlCrFeCoNi high-entropy alloys. This specific high-entropy system was used because of its superior corrosion performance and the availability of extensive experimental corrosion data, which were used to validate ML predictions [55, 56, 10]. The variations of three corrosion metrics are plotted in Figure 3 as a function of Al and Cr compositions in AlCrFeCoNi. Figure 3(a) shows results of single-phase formability. 
Although there is some noise in the training data as both multiple phases and single phase are found in the composition regions marked with black boxes, the random forest classifier accurately predicts 91% of all training data listed in Figure 3(a). Besides, one can see a decision boundary at around 10% Al composition to separate single-phase alloys and multiple-phase alloys. The decision boundary is mostly determined by Al compositions and Figure 2: Cohesive energies of the FCC_A1, L1\({}_{2}\), and the ordered B2 phases for Al\({}_{x}\)(CrFeCoNi)\({}_{100-x}\) as a function of Al compositions. The most stable phases at all Al compositions are connected with dashed lines to guide the eyes. slightly associated with the Cr composition. An increased Cr composition will marginally shift the boundary to a lower Al composition. The trend of this decision boundary semi-quantitatively agrees with experimental data summarized by Wu et al [57], as indicated by the grey line in Figure 3(a). However, the prediction on the high Al composition region is confounded by the noisy training data, implying high prediction uncertainties in this region. If a single-phase structure is formed, we are also interested in which type of single phase structures will be formed since experiments found FCC crystals to be more resistant to pitting corrosion than BCC crystals [10, 56]. We identified the type of single phase to form for a given composition by comparing the cohesive energies of L1\({}_{2}\) FCC and B2 BCC phases following a similar procedure describe in the Section ML prediction accuracy and transferability. The variation of single-phase structures across different Al and Cr compositions are presented in Figure S5. Figure 3(a) and Figure S5 jointly show that FCC phases form at low Al and Cr compositions while BCC phases are more likely to form at high Al and Cr compositions. Figure. 3(b) shows PBR\({}_{\rm Cr}\) for different Al and Cr compositions. Likewise, a major dependence on Al compositions and a minor dependence on Cr compositions are identified. PBR\({}_{\rm Cr}\) values in the entire composition space studied have a lower bound of 2.00 and a upper bound of 2.18, both of which are close to the PBR\({}_{\rm Cr}\) of pure Cr (2.04). According to the analysis by Bernstein [58] and Huntz et al. [59], the oxide stresses are related to PBR by \(\epsilon=\omega[(PBR)^{1/3}-1]\) where \(\omega\) is a correction factor around 0.18. Therefore, the oxide stresses across all Al and Cr compositions studied only show a negligible difference of around 0.5%. In contrast, the surface energies exhibit more significant variations over compositions, ranging from 1.92 to 2.24 J/m\({}^{2}\). Using an atomic density of \(1\times 10^{19}\) atoms/m\({}^{2}\) and a transition coefficient \(\alpha\) of 1/2, electrochemical dissolution rates given by Eq. 2 vary by 50 folds. High surface energies are concentrated at the regions with high Al and Cr compositions, while low surface energies are found with Cr contents around 18% and with either low or high Al contents. To understand surface energy dependency on Al and Cr, we studied the surface segregation of Al and Cr for each composition, and the results are depicted in Figure S6. 
It is thus inferred that the low surface energies at the right-bottom region of Figure 3(c) originate from Al segregation and Cr depletion, probably because an Al FCC(111) surface has a much lower surface energy (0.77 J/m\({}^{2}\)) than a Cr FCC(111) surface (2.61 J/m\({}^{2}\)), as shown in Table 1. The Cr depletion on the surface makes it difficult to form Cr\({}_{2}\)O\({}_{3}\) passive films. Thus, highly corrosion-resistant AlCrFeCoNi alloys can potentially be found with low Al contents and around 18% Cr contents, because alloys with these compositions tend to form a single phase and to exhibit low surface energies. The identified Al composition is consistent with experimental measurements [10, 31, 56]. Figure 3: Single phase formability (a), Pilling-Bedworth ratios for oxidation of Cr (b) and FCC(111) surface energies (c) as a function of Al and Cr compositions in Al\({}_{x}\)Cr\({}_{y}\)(FeCoNi)\({}_{100-x-y}\). The training data involving AlCrFeCoNi high-entropy alloys are included as scatter points in (a). Red squares and green circles in (a) represent single-phase and multiple-phase data, respectively. The grey dashed line in (a) is roughly the decision boundary by Wu et al [57]. The regions where multiple-phase and single-phase data nearly overlap are marked with black boxes in (a). ## 4 Conclusion and outlook A machine learning framework was proposed and developed to accelerate the discovery of corrosion-resistant high-entropy alloys. We demonstrated that the proposed framework can provide an accurate evaluation of relative corrosion resistance for a wide range of compositions of high-entropy alloys. The physics-informed framework consists of two machine learning approaches. One approach uses experimental data to train a random forest classifier for prediction of single-phase formability. The other approach uses first-principles data to develop robust machine learning potentials, allowing for fast downstream simulations to obtain corrosion metrics such as the Pilling-Bedworth ratio and surface energy. Current computational methods to understand the corrosion performance of alloys mostly rely on pure statistical fitting or first-principles calculations. Unlike statistical fitting, the random forest classifier encodes meaningful physical knowledge into the feature engineering process. In comparison with first-principles calculations, the machine learning potentials can significantly mitigate the computational overhead of massive first-principles calculations. This framework was tested on a specific class of high-entropy alloys, AlCrFeCoNi. The AlCrFeCoNi compositions were sampled by varying the Al and Cr compositions while enforcing the remaining Fe, Co and Ni compositions to be almost identical. The three corrosion metrics were evaluated on those sampled compositions, based on which the desired compositions for corrosion protection were identified. We found that low Al compositions and around 18% Cr compositions tend to form corrosion-resistant alloys, in satisfactory agreement with experimental observations. Although additional corrosion descriptors, such as cohesive energy and adsorption energies of oxygen and chloride, may be needed to provide a more comprehensive description of corrosion performance, the three simple corrosion metrics used in this work have proved effective in narrowing down the composition space for further selection. Our scheme is not limited to AlCrFeCoNi high-entropy alloys and corrosion properties.
The methodology can be easily adapted for other material applications where the relationship of chemical compositions with properties is sought after and where ML accelerated molecular simulations are indispensable for high-throughput screening of material candidates. For instance, ductility of alloys can be evaluated by stacking fault energies which can be calculated by using MLPs, and hardness can be estimated using machine learning regression on experimental dataset [60, 61]. One should note that most state-of-the-art machine learning potentials use element-specific features, which limit the transferability of MLPs. In other words, MLPs trained on certain elements cannot be applied to elements not existing in the training data. Moreover, a large amount of training structures are required to build reliable ML models for high-entropy systems. Developing a robust element-agnostic featurization method, and reducing numbers of representative images and features are promising future directions. Element-agnostic featurization methods are just emerging in recent years and needs further development. For example, there are methods using multipole expansions of the electron density around atoms [62] or graph representation of materials [63]. Current image and feature selection methods use linear correlations in the feature space [64]. More advanced methods may require understand the complex non-linear mapping between features and outputs (e.g. energy and forces). ### Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ### Acknowledgments This work was completed in part using the Discovery cluster, supported by the Research Computing team at Northeastern University. This work is partially supported by The Experiential AI Institute, Roux Institute, and the Alfond Foundation at Northeastern University. We thank Dr. Liang Qi (University of Michigan) for providing valuable technical feedback to our poster presented at ICME2023 conference in Orlando, Florida, USA. ## Supporting information Standard enthalpy of formation for oxides of certain metals, Computational settings of DFT calculations, Details of moment tensor potential (MTP), Random forest classifier for single phase formability, Fast sampling of configurations using the embedded atom method (EAM), First-principles data for MTP, MTP-enabled simulations for corrosion metrics, and Supporting results are included in the supporting information. ### Data and code availability All scripts and notebooks for simulation tasks and data analytics are saved in a private repository and will be publicized after this manuscript is accepted for publication. Data generated in this work are available from the corresponding authors upon reasonable requests.
2302.03422
Bimeromorphic geometry of LCK manifolds
A locally conformally K\"ahler (LCK) manifold is a complex manifold $M$ which has a K\"ahler structure on its cover, such that the deck transform group acts on it by homotheties. Assume that the K\"ahler form is exact on the minimal K\"ahler cover of $M$. We prove that any bimeromorphic map $M'\rightarrow M$ is in fact holomorphic; in other words, $M$ has a unique minimal model. This can be applied to a wide class of LCK manifolds, such as the Hopf manifolds, their complex submanifolds and to OT manifolds.
Liviu Ornea, Misha Verbitsky
2023-02-07T12:12:00Z
http://arxiv.org/abs/2302.03422v1
# Bimeromorphic geometry of LCK manifolds Liviu Ornea\({}^{1}\), Misha Verbitsky\({}^{2}\) \({}^{1}\)Liviu Ornea is partially supported by Romanian Ministry of Education and Research, Program PN-III, Project number PN-III-P4-ID-PCE-2020-0025, Contract 30/04.02.2021 \({}^{2}\)Misha Verbitsky is partially supported by the HSE University Basic Research Program, FAPERJ SEI-260003/000410/2023 and CNPq - Process 310952/2021-2. **Keywords:** Locally conformally Kahler, global Kahler potential, bimeromorphism, minimal model, normal variety. **2010 Mathematics Subject Classification:** 32H04, 53C55 **Abstract** A locally conformally Kahler (LCK) manifold is a complex manifold \(M\) which has a Kahler structure on its cover, such that the deck transform group acts on it by homotheties. Assume that the Kahler form is exact on the minimal Kahler cover of \(M\). We prove that any bimeromorphic map \(M^{\prime}\to M\) is in fact holomorphic; in other words, \(M\) has a unique minimal model. This can be applied to a wide class of LCK manifolds, such as the Hopf manifolds, their complex submanifolds and to OT manifolds. ###### Contents * 1 Introduction * 2 Preliminaries * 2.1 LCK manifolds * 2.2 LCK manifolds with potential * 3 Bimeromorphisms of LCK manifolds * 3.1 Normal varieties * 3.2 Fundamental group and holomorphic maps * 3.3 Manifolds bimeromorphic to an LCK manifold ## 1 Introduction The notion of a minimal model has been central in birational algebraic geometry since the nineteenth century. In 1901, Castelnuovo proved his famous contraction criterion, establishing that any birational contraction of smooth surfaces contracts rational curves of self-intersection (-1). This is when the theory of minimal surfaces was born: a surface is minimal if it cannot be contracted. Jointly with his student Enriques, Castelnuovo classified minimal surfaces in a series of works which extended into the 1920s. Kodaira extended the minimal model theory from projective surfaces to complex surfaces, showing that most results of Castelnuovo and Enriques apply also to compact complex surfaces. The minimal model program, due to Shigefumi Mori, generalizes the Castelnuovo-Enriques minimal model theory from surfaces to projective manifolds in higher dimension. Its version for Kahler threefolds was more recently completed by Horing and Peternell ([HP]). However, for non-Kahler complex manifolds in dimension \(>2\), not much is known about their bimeromorphic geometry. The simplest species of non-Kahler manifolds are LCK (locally conformally Kahler) manifolds. These are complex manifolds which admit a Kahler covering \(\tilde{M}\longrightarrow M\), with the deck transform group acting by homotheties. For compact \(M\), this geometry is sharply distinct from the Kahler geometry. Indeed, by Vaisman's theorem ([V2]), a compact LCK manifold does not admit a Kahler metric, unless the deck transform group acts on \(\tilde{M}\) by isometries. Surprisingly, the minimal model program for LCK manifolds (at least, for a significantly large class of LCK manifolds) is much simpler compared with that of Kahler or even projective manifolds. Let \(M^{\prime}\) be a compact complex variety bimeromorphic to an LCK manifold \(M\) which belongs to one of two major subclasses of LCK manifolds: LCK manifolds with potential or OT manifolds (Subsection 2.2). Then the map \(M^{\prime}\dashrightarrow M\) is holomorphic. In other words, \(M\) has a unique minimal model.
The proof is complex-geometric in nature, without using any of the state-of-the-art results of birational and bimeromorphic geometry. However, it seems to be possible to deduce this result from the weak factorization theorem ([M, Theorem 5-1-1]). Our proof works even in greater generality: let \(M\) be a compact complex manifold, and \(\tilde{M}\) its covering. If \(\tilde{M}\) admits a Kahler form which is exact, then any bimeromorphic map \(M^{\prime}\dashrightarrow M\) is holomorphic (Theorem 3.5). ## 2 Preliminaries We present briefly the notions we need from locally conformally Kahler geometry. For more details, examples and an up-to-date account of the results, please see [OV4]. ### LCK manifolds Locally conformally Kahler manifolds were defined by Izu Vaisman in [11]. In this paper, we shall not need the original definition, but the following equivalent one: **Definition 2.1**: ([11, Remark 2.9]) A Hermitian manifold \((M,I,\omega)\) is called **Locally Conformally Kahler** if it admits a Kahler cover \((\tilde{M},I,\tilde{\omega})\) with deck group \(\Gamma\) acting by homotheties with respect to the Kahler metric. The Hermitian form \(\omega\) is then called **an LCK form**. **Definition 2.2**: **:** Let \((M,I,\omega)\) be LCK. The very definition implies the existence of a group morphism \(\chi:\ \pi_{1}(M)\to\mathbb{R}^{>0}\), given by \(\chi(\gamma)=\frac{\gamma^{*}\tilde{\omega}}{\tilde{\omega}}\) for each \(\gamma\in\pi_{1}(M)\) viewed as a deck transformation of the Kahler universal cover. This group morphism is called **the homothety character**. **Definition 2.3**: **:** Let \((M,I,\omega)\) be an LCK manifold and \(\chi\) the associated homothety character. The **minimal Kahler cover** of \((M,I,\omega)\) is the Kahler cover associated to \(\ker\chi\subset\pi_{1}(M)\). **Example 2.4**: **:** Let \(\gamma:\ \mathbb{C}^{n}\longrightarrow\mathbb{C}^{n}\) be an invertible holomorphic contraction with apex at \(0\). Then the quotient \(H=\frac{\mathbb{C}^{n}\backslash 0}{\langle\gamma\rangle}\) is called **a Hopf manifold**. All Hopf manifolds are LCK. For the case \(\gamma\in\operatorname{GL}(n,\mathbb{C})\) (when we speak about "linear Hopf manifolds"), the proof was given gradually, in a series of papers: [1, 2]; for non-linear contractions, the proof appeared only recently, in [2]. **Example 2.5**: **:** Almost all the Inoue surfaces ([I]) are LCK ([T, B]). The Oeljeklaus-Toma (OT) manifolds, which are higher-dimensional generalizations of the Inoue surface \(S^{0}\), are LCK. For details, see [2, Chapter 22]. ### LCK manifolds with potential **Definition 2.6**: **:** Let \((M,I,\omega)\) be an LCK manifold. It is called **LCK with potential** if: **(i)**: The Kahler form \(\tilde{\omega}\) has a smooth, positive Kahler potential: \(\tilde{\omega}=dd^{c}\varphi\), with \(\varphi:\tilde{M}\longrightarrow\mathbb{R}^{>0}\), and **(ii)**: The deck group acts by positive homotheties with respect to the potential: \(\gamma^{*}\varphi=c_{\gamma}\varphi\), with \(c_{\gamma}\in\mathbb{R}^{>0}\), for all \(\gamma\in\Gamma\). By abuse of terminology, we say that "\(\omega\) has potential \(\varphi\)". **Remark 2.7**:: In general, a differential form \(\eta\) on \(\tilde{M}\) with the property that \(\gamma^{*}\eta=c_{\gamma}\eta\), \(c_{\gamma}\in\mathbb{R}\), for all \(\gamma\in\Gamma\), is called **automorphic**. In particular, for an LCK manifold, the Kahler form \(\tilde{\omega}\) on a Kahler covering is always automorphic.
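For concreteness, here is a standard illustration of Definition 2.6 on the simplest Hopf manifold (added here for the reader; it is not part of the original text). Take the diagonal linear contraction \(\gamma(z)=qz\) on \(\mathbb{C}^{n}\backslash 0\), with \(0<|q|<1\), and the potential \(\varphi(z)=|z|^{2}>0\), so that \(\tilde{\omega}=dd^{c}\varphi\) is the flat Kahler form. Then \[\gamma^{*}\varphi=|qz|^{2}=|q|^{2}\varphi,\] hence the deck group \(\langle\gamma\rangle\) acts by positive homotheties on the potential with \(c_{\gamma}=|q|^{2}\), and the quotient Hopf manifold is LCK with potential in the sense of Definition 2.6; in particular, \(\varphi\) and \(\tilde{\omega}\) are automorphic in the sense of Remark 2.7.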
**Remark 2.8**:: The Kahler forms on the universal covers of Inoue surfaces \(S^{0}\) and the Oeljeklaus-Toma manifolds do have positive, global potentials. However, these are not automorphic in the sense of Remark 2.7 (see [OT], [OV4, Chapter 22]). **Proposition 2.9**:: All smooth submanifolds of an LCK manifold with potential are LCK manifolds with potential. **Example 2.10**:: All Hopf manifolds (Example 2.4) are LCK with potential ([OV1, OV2, OV5]). All non-Kahler elliptic surfaces are LCK with potential ([B, VVO]). Proposition 2.9 admits the following partial converse: **Theorem 2.11**:: ([OV1, OV3]) Let \((M,I,\omega)\) be a compact LCK manifold with potential, \(\dim_{\mathbb{C}}M\geqslant 3\). Then \((M,I)\) admits a holomorphic embedding to a linear Hopf manifold. In conclusion, if we restrict to complex dimension at least \(3\), we can say that compact LCK manifolds with potential are smooth submanifolds of linear Hopf manifolds. If the Global Spherical Shell (GSS) conjecture (also called the Kato conjecture) is true, the same holds for dimension \(2\) ([OV3]). ## 3 Bimeromorphisms of LCK manifolds ### Normal varieties We shall need a result about normal varieties in the analytic category. Recall that a complex variety \(X\) is called **normal** if any locally bounded meromorphic function on an open subset \(U\subset X\) is holomorphic ([D, Definition II.7.4]). **Proposition 3.1**:: Let \(Z\) be a normal variety, and \(\varphi:\;Z_{1}\longrightarrow Z\) a holomorphic, closed map such that \(\varphi^{-1}(z)\) is finite for all \(z\) and bijective in a general point. Then \(\varphi^{-1}\) is holomorphic. **Proof:** For proper morphisms of algebraic varieties, this statement serves as one of the definitions of normality: \(Z\) is normal if any finite, birational, regular map \(Z_{1}\mathop{\longrightarrow}Z\) is an isomorphism. When \(f\) is bijective, this statement can be found in [R, Prop. 14.7] or in [GLS, Theorem 1.102]. By [GLS, Lemma 1.54], for any proper map \(\varphi:\ Z_{1}\mathop{\longrightarrow}Z\) such that \(\varphi^{-1}(z)\) is always finite, there exist an open neighbourhood \(U_{z}\) for each \(z\in Z\) such that \(\varphi^{-1}(U_{z})\) is a disjoint union of open subsets \(V_{1},...,V_{n}\); each of these would give a coordinate neighbourhood for some \(z_{i}\in\varphi^{-1}(z)\). Since \(\varphi\) is bijective in a general point, the number of open subsets obtained by application of this lemma is just \(1\); this implies that \(\varphi\) is bijective, and [GLS, Theorem 1.102] can be applied. ### Fundamental group and holomorphic maps **Proposition 3.2:** Let \(\varphi:\ M\dashrightarrow M_{1}\) be a bimeromorphic map of compact complex connected manifolds, and \(X\subset M\times M_{1}\) the closure of its graph.1 Then the natural projections \(X\mathop{\longrightarrow}M\) and \(X\mathop{\longrightarrow}M_{1}\) induce isomorphisms of the fundamental groups. Footnote 1: By definition of a meromorphic morphism, \(X\) is a complex subvariety of \(M\times M_{1}\). **Proof. Step 1:** Denote by \(\tilde{X}\) the resolution of singularities of \(X\). By [K1, SS7.8.1], the bimeromorphic holomorphic maps \(\tilde{X}\mathop{\longrightarrow}M_{1}\) and \(\tilde{X}\mathop{\longrightarrow}M\) induce isomorphisms on the fundamental groups. **Step 2:** We show that the variety \(X\) is normal. By definition, a normal variety is one where all locally bounded meromorphic functions are holomorphic. 
Note that the projections from \(X\) to \(M\) and to \(M_{1}\) are bimeromorphic; this allows us to interpret the meromorphic functions on \(X\) as meromorphic functions on \(M\) and \(M_{1}\). Any locally bounded meromorphic function \(f\) on \(X\) defines a locally bounded meromorphic function on the manifolds \(M\) and \(M_{1}\), which are smooth and hence normal. Therefore, \(f\) is holomorphic on \(M\) and on \(M_{1}\). This implies that \(f\) is the pullback of a holomorphic function on \(M\) and on \(M_{1}\), hence it is holomorphic on \(X\subset M\times M_{1}\). **Step 3:** The composition of the maps \(\pi_{1}(\tilde{X})\mathop{\longrightarrow}\pi_{1}(X)\mathop{\longrightarrow} \pi_{1}(M)\) (respectively \(\pi_{1}(\tilde{X})\mathop{\longrightarrow}\pi_{1}(X)\mathop{\longrightarrow} \pi_{1}(M_{1})\)) is an isomorphism by Step 1. There fore, to show that \(\pi_{1}(X)\mathop{\longrightarrow}\pi_{1}(M_{1})\) (respectively \(\pi_{1}(X)\mathop{\longrightarrow}\pi_{1}(M)\)) is an isomorphism, it would suffice to show that the natural map \(\pi_{1}(\tilde{X})\mathop{\longrightarrow}\pi_{1}(X)\) is surjective. Let \(M^{\circ}\subset M\) be the complement of the exceptional set of \(\varphi\). Clearly, the graph of \(\varphi\Big{|}_{{}_{M^{\circ}}}\) is dense and Zariski open in \(X\). Let \(U\subset Z\) be a Zariski open subset in a normal complex variety. Then the natural map \(\pi_{1}(U)\mathop{\longrightarrow}\pi_{1}(Z)\) is surjective ([SGA1, IX, Cor. 5.6], [C1, SS1.3], or [K2, Lemma 3.3]). We apply this argument to \(M^{\circ}\subset X\) and \(M^{\circ}\subset\tilde{X}\), and obtain that the natural maps \(\pi_{1}(M^{\circ})\mathop{\longrightarrow}\pi_{1}(\tilde{X})\) and \(\pi_{1}(M^{\circ})\mathop{\longrightarrow}\pi_{1}(X)\) are surjective. Then the natural map \(\pi_{1}(\tilde{X})\mathop{\longrightarrow}\pi_{1}(X)\) is also surjective. ### Manifolds bimeromorphic to an LCK manifold Let \(\chi:\ \pi_{1}(M)\mathop{\longrightarrow}\mathbb{R}^{>0}\) be the homothety character of the LCK structure on \(M\) (Definition 2.2); it is a homomorphism from the deck transform group of \(\tilde{M}\) to \(\mathbb{R}^{>0}\) taking a map \(\Psi:\ \tilde{M}\mathop{\longrightarrow}\tilde{M}\) to the scale factor \(\frac{\Psi^{*}\tilde{\omega}}{\tilde{\omega}}\). For our main result, we use the following proposition. **Proposition 3.3:** Let \((M,\omega)\) be a compact LCK manifold, \((\tilde{M},\tilde{\omega})\) its minimal Kahler cover (Definition 2.3), and \(Z\subset M\) a subvariety of positive dimension. Assume that the Kahler form \(\tilde{\omega}\) is exact. Then the image of \(\pi_{1}(Z)\) in \(\pi_{1}(M)\) contains an infinite cyclic subgroup. **Proof:** Denote by \(\tilde{Z}\subset\tilde{M}\) the cover of \(Z\) obtained by the homotopy lifting lemma. If the image of \(\pi_{1}(Z)\) in \(\pi_{1}(M)\) is finite, the variety \(\tilde{Z}\) is compact. This is impossible, because \(\tilde{Z}\) admits a Kahler form \(\tilde{\omega}\) which is exact, hence \(0=\int_{\tilde{Z}}\tilde{\omega}^{\dim_{\mathbb{C}}Z}=\operatorname{Vol}( \tilde{Z})>0\); a contradiction. By the same argument, the Kahler form \(\tilde{\omega}\) restricted to \(\tilde{Z}\) is not the pullback of a Kahler form on \(Z\). This implies that the deck transform group acts on \((\tilde{Z},\tilde{\omega}\Big{|}_{\tilde{Z}})\) by non-trivial homotheties, implying that \(\chi(\pi_{1}(Z))\subset\mathbb{R}^{>0}\) is non-trivial. Consider an element \(\gamma\in\pi_{1}(Z)\) such that \(\chi(\gamma)\in\mathbb{R}^{>0}\backslash 1\). 
Then \(\gamma\) is of infinite order; its image in \(\pi_{1}(M)\) is also of infinite order, because \(\chi\) is factorized through \(\pi_{1}(M)\). **Theorem 3.4:** Let \(M\), \(M_{1}\) be compact complex manifolds and \(\varphi:\ M_{1}\mathop{\longrightarrow}M\) a bimeromorphism. Assume that \(M\) is an LCK manifold, and \((\tilde{M},\tilde{\omega})\) its minimal Kahler cover; assume also that the Kahler form \(\tilde{\omega}\) on \(\tilde{M}\) is exact. Then \(\varphi\) is holomorphic. **Proof. Step 1:** Let \(X\subset M\times M_{1}\) be the graph of \(\varphi\). By definition, \(X\) is a complex subvariety of \(M\times M_{1}\) which projects to \(M\) and \(M_{1}\) bijectively in a general point. We denote by \(\sigma:\ X\longrightarrow M\), \(\sigma_{1}:\ X\longrightarrow M_{1}\) the projection maps. To prove Theorem 3.4, we need to show that \(\sigma_{1}^{-1}(z)\) is finite for all \(z\in M_{1}\). Then Theorem 3.4 follows from Proposition 3.1, because \(M_{1}\) is smooth, and therefore normal. **Step 2:** Assume, on the contrary, that for some \(z\in M_{1}\), its preimage \(Z_{1}:=\sigma_{1}^{-1}(z)\) is positive-dimensional. Since the projection of \(M\times M_{1}\) to \(M\) is bijective on the set \(M\times\{z\}\), the set \(Z_{1}\) projects to \(M\) holomorphically and bijectively. Let \(Z\subset M\) be the image of \(Z_{1}\) in \(M\). By Remmert's proper mapping theorem ([D, SS8.2]), \(Z\) is a complex subvariety in \(M\). **Step 3:** By Proposition 3.3, the image of \(\pi_{1}(Z)\) in \(\pi_{1}(M)\) contains an infinite order cyclic subgroup. Therefore, its image in \(\pi_{1}(X)=\pi_{1}(M)\) also contains an infinite order cyclic subgroup. This is impossible, because \(\pi_{1}(X)=\pi_{1}(M)=\pi_{1}(M_{1})\), and the projection of \(Z\) to \(M_{1}\) is a point. Since the above proof did not use the full strength of LCK geometry the same argument can be used to prove the following result, which might be independently useful. **Theorem 3.5:** Let \(M\), \(M_{1}\) be compact complex manifolds and \(\varphi:\ M_{1}\dashrightarrow M\) a bimeromorphism. Assume that \(M\) admits a cover which admits an exact Kahler form. Then \(\varphi\) is holomorphic. **Acknowledgements:** We thank Florin Ambro and Marian Aprodu for very useful discussions and for bibliographical hints. We are most of all grateful to Victor Vuletescu for finding an error in an early version of the paper.
2305.09816
Constraints on the spectral signatures of superconducting cosmic strings
If they exist, networks of superconducting cosmic strings are capable of injecting copious amounts of electromagnetic energy into the background over a broad range of frequencies. We study this injection both analytically, as well as numerically using the thermalization code CosmoTherm. With our refined analytic formalism, we update constraints from CMB spectral distortions by following the injection of entropy, as well as energy, on the amplitude of the $\mu$-distortion, leading to a significant improvement in those limits. Furthermore, we utilize the full shape of the distorted spectrum from CosmoTherm to include constraints from non-$\mu$, non-$y$ type distortions. Additionally, we use the outputs for the ionization history and global 21cm signal to derive and update constraints on string model parameters using measurements from other datasets. Analysis of CMB anisotropies provides the most stringent constraints, though with a slightly modified shape and strength when compared to previous results. Modifications of the reionization history provide new bounds in the high current domain, and we also find that the observations of the low-frequency radio background probe a small region of parameter space not explored by other datasets. We also analyze global $21$-cm constraints, and find that the inclusion of soft photon heating plays a crucial role, essentially removing any constraints in the considered parameter domain. Spectral distortion measurements from COBE/FIRAS are covered by other constraints, but our conservative forecast shows that a PIXIE-type satellite would probe important unexplored regions of parameter space.
Bryce Cyr, Jens Chluba, Sandeep Kumar Acharya
2023-05-16T21:43:54Z
http://arxiv.org/abs/2305.09816v1
# Constraints on the spectral signatures of superconducting cosmic strings ###### Abstract If they exist, networks of superconducting cosmic strings are capable of injecting copious amounts of electromagnetic energy into the background over a broad range of frequencies. We study this injection both analytically, as well as numerically using the thermalization code CosmoTherm. With our refined analytic formalism, we update constraints from CMB spectral distortions by following the injection of entropy, as well as energy, on the amplitude of the \(\mu\)-distortion, leading to a significant improvement in those limits. Furthermore, we utilize the full shape of the distorted spectrum from CosmoTherm to include constraints from non-\(\mu\), non-\(y\) type distortions. Additionally, we use the outputs for the ionization history and global 21cm signal to derive and update constraints on string model parameters using measurements from other datasets. Analysis of CMB anisotropies provides the most stringent constraints, though with a slightly modified shape and strength when compared to previous results. Modifications of the reionization history provide new bounds in the high current domain, and we also find that the observations of the low-frequency radio background probe a small region of parameter space not explored by other datasets. We also analyze global 21-cm constraints, and find that the inclusion of soft photon heating plays a crucial role, essentially removing any constraints in the considered parameter domain. Spectral distortion measurements from _COBE/FIRAS_ are covered by other constraints, but our conservative forecast shows that a _PIXIE_-type satellite would probe important unexplored regions of parameter space. keywords: Cosmology - Cosmic Microwave Background; Cosmology - Theory ## 1 Introduction We are well into an age of precision cosmology, with detailed observations being taken over much of the electromagnetic spectrum. At radio frequencies, an anomalous background is being uncovered (Fixsen et al., 2011; Dowell & Taylor, 2018) with no known astrophysical source. Meanwhile, data from the epoch of reionization is slowly building up, with a first claimed detection of the differential brightness temperature at cosmic dawn coming from the EDGES experiment (Bowman et al., 2018). We also have exquisite measurements of the cosmic microwave background (CMB) anisotropies from the Planck satellite (Planck Collaboration at al., 2020), as well as the frequency spectrum from _COBE/FIRAS_(Fixsen et al., 1996), which teaches us still further about our thermal history. One of the aims of these cosmological observations is to help us understand our origins. Here, we will consider how the spectral signatures of a network of superconducting cosmic strings can be probed using these observations. Cosmic strings are a class of topological defect that may form at the interface of cosmological phase transitions if the true vacuum manifold is degenerate, and not simply connected (see Brandenberger, 1994; Hindmarsh & Kibble, 1995; Vilenkin & Shellard, 2000, for comprehensive reviews). If they form, cosmic strings (also known as line defects) are nearly one dimensional objects with a small, but finite width. The interior of a string resembles the state of the universe as it was in the false vacuum, consisting of a condensate of scalar and gauge particles from this previously unbroken phase. 
The detection (or non-detection) of cosmic strings in the various observations gives us pieces of information about our thermal history that would otherwise be very challenging to infer. String models exhibit a property known as _scaling_, where the macroscopic properties of the string network can be described by one parameter. This parameter is known as the string tension (\(G\mu\)), and is related to the energy scale of the phase transition (\(\eta\)) through \(G\mu\simeq G\eta^{2}\), where \(G\) is Newton's gravitational constant. It has also been shown that some symmetry breaking patterns can imbue the strings with superconducting properties, leading to the generation of significant currents, \(\mathcal{I}\), as the string traverses the plasma (see Witten, 1985; Ostriker et al., 1986, for seminal examples). Superconducting cosmic string models are typically described by these two parameters, \(G\mu\) and \(\mathcal{I}\). As we will discuss below, a cosmic string network generally consists of a small number of long strings which run through our Hubble patch at any given time, as well as a distribution of smaller loops with curvature radii on all scales up to some \(O(0.1)\) fraction of the Hubble scale. Although these string loops act as seeds for density perturbations at early times, detailed observations of the microwave background have relegated them to only being a highly subdominant component of structure formation (Perivolaropoulos, 1995; Magueijo et al., 1996; Pen et al., 1997; Albrecht et al., 1997), requiring \(G\mu\lesssim 10^{-7}\). Even so, there are hints that the enhanced gravitational effects of string loops could play a role in the formation of supermassive black holes (Bramberger et al., 2015; Cyr et al., 2022). Superconducting loops are capable of producing copious amounts of gravitational (Vachaspati & Vilenkin, 1985) and electromagnetic radiation (Vilenkin & Vachaspati, 1987; Garfinkle & Vachaspati, 1987; Vachaspati, 2008) during every epoch after their formation. This has led to constraints on the \(G\mu\)-\(\mathcal{I}\) parameter space from a number of observations, including primordial CMB spectral distortions (Ostriker & Thompson, 1987; Tashiro et al., 2012), anisotropies (Tashiro et al., 2012), radio transient events (Cai et al., 2012; Miyamoto & Nakayama, 2013), gamma ray bursts (Babul et al., 1987), changes in the differential brightness temperature at cosmic dawn (Brandenberger et al., 2019, 2019), and more. In this work, we revisit some of the constraints considered by these authors using improved analytic estimates and the thermalization code CosmoTherm (Chluba & Sunyaev, 2012). We first recompute the spectral distortion signature, following an approximate Green's function approach as described in Chluba (2016). Our analytic approach improves upon the previous treatments in at least two ways. First, we compute the negative \(\mu\)-distortion generated by direct photon (entropy) injection into the pre-recombination plasma (Chluba, 2015), an effect that has been overlooked thus far but significantly alters the overall constraints. Second, we include a more precise treatment for the determination of the instantaneous spectrum of emitted photons. Following this, we implement the photon injection numerically into the thermalization code CosmoTherm.
This allows us to further strengthen our constraints by analyzing the full shape of the string induced spectral distortions, accounting for non-\(\mu\), non-\(y\) type deviations from the _COBE/FIRAS_ measurement (Fixsen et al., 1996). This implementation also allows us to efficiently compare against other independent datasets. Using the outputs from CosmoTherm, we generate constraints from the CMB anisotropies (Komatsu et al., 2011; Planck Collaboration et al., 2020), the radio synchrotron background (RSB) (Fixsen et al., 2011; Dowell & Taylor, 2018), the EDGES experiment (Bowman et al., 2018), and the optical depth to reionization as measured by the Planck Collaboration et al. (2020). We also generate a conservative forecast to a _PIXIE_-type experiment (Kogut et al., 2011, 2016; Chluba et al., 2021), assuming a fiducial sensitivity to energy release of \(\Delta\rho/\rho=10^{-8}\) after foreground marginalization. While the constraints we find from _COBE/FIRAS_ are less stringent than other datasets, it is important to stress that in principle, we have had access to this spectral distortion data since the late 90s. Had a more sophisticated analysis of the full shape of the _COBE/FIRAS_ data been possible at that time, the constraints that were derived would have been dominant for many years. This highlights the legacy value that _COBE/FIRAS_ still has today, and showcases the constraining power that a next generation space-based spectrometer such as _PIXIE_ could obtain for exotic energy injection scenarios. The rest of the paper is organized as follows: in Section 2, we describe the loop distribution model that we implement, and discuss the microphysics of gravitational and electromagnetic wave production. Section 3 discusses energy release rates for the string network, and reviews the simple estimates for the \(\mu\) and \(y\) parameters. Afterwards, in Section 4, we refine this estimate by including entropy release contributions, which changes the analytic constraint curve significantly. Section 5 describes how to implement this source term into CosmoTherm, and derives a useful expression for the instantaneous injection spectrum. In Sections 6 and 7, we discuss the output from CosmoTherm and illustrate the constraints we derive from multiple different datasets. We conclude in Section 8. Throughout the analytic derivations, we use natural units where \(\hbar=c=k_{\rm b}=1\). When presenting the resultant spectra in Sections 6 and onwards, we use more astrophysicist-friendly units. To minimize confusion, we include a particle physics to astrophysics conversion dictionary in Appendix B. ## 2 Number density of cosmic string loops If the universe undergoes a phase transition in which the true vacuum manifold permits cosmic string solutions, the Kibble mechanism states that a network of these defects will form (Kibble, 1980, 1982). Importantly, Kibble's mechanism only guarantees the existence of long cosmic strings, with curvature radius larger than the horizon scale. A second population of smaller, sub-horizon loops is populated by the intersections and self-intersections of the long strings. Numerical simulations in which the widths of cosmic strings are neglected (known as Nambu-Goto simulations) show that the distribution of sufficiently large loops follow a scaling solution (Vanchurin et al., 2006; Martins & Shellard, 2006; Ringeval et al., 2007; Lorenz et al., 2010; Blanco-Pillado et al., 2014). 
We should note that another class of simulations (the so-called Abelian-Higgs type) uses field-theoretic input to resolve the cores of these strings, and observes no significant loop production (Vincent et al., 1998; Moore et al., 2002; Hindmarsh et al., 2009, 2017). This has sparked numerous debates about the true nature of the string network on small scales, with no consensus being reached as of yet. One should not understate the numerical challenges that arise when attempting to perform cosmological simulations over such a wide range of scales, from the string core to the horizon at a given time. For this work, we assume that a scaling loop distribution is formed as indicated by the Nambu-Goto simulations. ### Formation in the Radiation Era We begin by examining the distribution of loops formed before matter-radiation equality. In most regions of the \(G\mu\)-\(\mathcal{I}\) parameter space, they are the dominant source of primordial spectral distortions, which makes them a useful case to study. Focusing now on the results from Nambu-Goto simulations, one finds that the differential number density of loops (in physical coordinates) with initial length \(L_{\rm i}\) is given by \[\frac{\mathrm{d}N}{\mathrm{d}L_{\rm i}}=\begin{cases}\frac{\alpha}{t^{4-p}L_{\rm i}^{p}}&(t\leq t_{\rm eq})\\ \frac{\alpha}{t_{\rm eq}^{4-p}L_{\rm i}^{p}}\left(\frac{t_{\rm eq}}{t}\right)^{2}&(t>t_{\rm eq}).\end{cases} \tag{1}\] Simulations performed in Blanco-Pillado et al. (2014) yield \(\alpha=0.18\) and \(p=5/2\). For \(t>t_{\rm eq}\) we simply redshift the loops which formed during the radiation era, and discuss the formation and evolution of matter-dominated loops in the next subsection. Simulations indicate that at a given time, \(t\), most string loops are formed with roughly the same initial radius, given by some fraction \(\beta\approx O(0.1)\) of the Hubble length, i.e., \(L_{\rm i}(t)\approx\beta t\). After formation, the loops oscillate with period \(T\approx L\), and develop transient substructures known as cusps and kinks. Their decay (in particular, the cusps) releases energy in the form of gravitational waves, photons, and exotic particles, causing the loop to shrink in size. All loops radiate gravitational waves with an oscillation-averaged power (Vachaspati & Vilenkin, 1985) \[P_{\rm g}\simeq\Gamma_{\rm g}G\mu^{2}\simeq 1.5\times 10^{18}\,\left[\frac{\Gamma_{\rm g}}{100}\right]\,\left[\frac{G\mu}{10^{-11}}\right]^{2}\,{\rm GeV}^{2}, \tag{2}\] where \(\Gamma_{\rm g}\) is a normalization factor \(\simeq O(100)\). In some symmetry breaking schemes, cosmic strings can acquire an electromagnetic current, \(\mathcal{I}\), and are said to be superconducting (see Witten, 1985, for a seminal example). A proper treatment of the current generation and dissipation on a cosmological network of loops is beyond the scope of this work, and so for simplicity we assume that all string loops carry the same time-independent current. Superconducting string loops also generate sizeable electromagnetic bursts at their cusps. This leads to an additional decay channel into photons (Vilenkin & Vachaspati, 1987; Cai et al., 2012), with \[P_{\gamma}\simeq\Gamma_{\gamma}\mathcal{I}\mu^{1/2}\simeq 3.8\times 10^{18}\ \left[\frac{\Gamma_{\gamma}}{10}\right]\left[\frac{G\mu}{10^{-11}}\right]^{1/2}\left[\frac{\mathcal{I}}{10^{4}\;\text{GeV}}\right]\;\text{GeV}^{2}, \tag{3}\] where \(\Gamma_{\gamma}\simeq\mathcal{O}(10)\) depends on the precise geometry of a loop.
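As a quick numerical cross-check of the scalings in Eqs. (2)-(3), the following minimal sketch (illustrative only, not taken from the paper) evaluates the two oscillation-averaged powers in natural units, taking \(G=1/m_{\rm pl}^{2}\) with \(m_{\rm pl}\simeq 1.22\times 10^{19}\) GeV and the fiducial normalizations \(\Gamma_{\rm g}=100\), \(\Gamma_{\gamma}=10\) quoted above.

```python
# Minimal sketch: oscillation-averaged loop powers of Eqs. (2)-(3), natural units.
import numpy as np

m_pl = 1.22e19                      # Planck mass [GeV]
G = 1.0 / m_pl**2                   # Newton's constant [GeV^-2]
Gamma_g, Gamma_gamma = 100.0, 10.0  # normalization factors quoted in the text

def loop_powers(Gmu, I_GeV):
    """Gravitational and electromagnetic powers [GeV^2] of a superconducting loop."""
    mu = Gmu / G                                  # string tension [GeV^2]
    P_g = Gamma_g * G * mu**2                     # Eq. (2)
    P_gamma = Gamma_gamma * I_GeV * np.sqrt(mu)   # Eq. (3)
    return P_g, P_gamma

# Fiducial parameters of Eqs. (2)-(3): G mu = 1e-11 and I = 1e4 GeV
P_g, P_gamma = loop_powers(1e-11, 1e4)
print(f"P_g ~ {P_g:.1e} GeV^2, P_gamma ~ {P_gamma:.1e} GeV^2")
```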
Superconducting strings gradually decay through these two main channels, with a total rate that is determined by \[\Gamma\,G\mu\simeq(P_{\text{g}}+P_{\gamma})/\mu\simeq\Gamma_{\text{g}}G\mu+ \Gamma_{\gamma}\mathcal{I}\mu^{-1/2}. \tag{4}\] By comparing the power emitted in both gravitational and electromagnetic waves, we can define a critical current, \[\mathcal{I}_{*}=\frac{\Gamma_{\text{g}}}{\Gamma_{\gamma}}G\mu^{3/2}\simeq 3.2 \times 10^{3}\;\text{GeV}\ \left[\frac{\Gamma_{\text{g}}}{10}\right]\ \left[\frac{\Gamma_{\gamma}}{10}\right]^{-1}\left[\frac{G\mu}{10^{-11}} \right]^{3/2}. \tag{5}\] For a given string tension, \(G\mu\), loops decay primarily into gravitational waves if \(\mathcal{I}<\mathcal{I}_{*}\), or into photons when \(\mathcal{I}\geq\mathcal{I}_{*}\). The dimensionless decay coefficient \(\Gamma\) can then be expressed as \[\Gamma=\Gamma_{\text{g}}\left(1+\frac{\mathcal{I}}{\mathcal{I}_{*}}\right). \tag{6}\] We note that \(\Gamma\) is a function of both the string tension and the current. Given the decay coefficient \(\Gamma\), the loop size then shrinks as \[L=L_{\text{i}}-\Gamma\,G\mu(t-t_{\text{i}}). \tag{7}\] Noting that \(t_{\text{i}}=L_{\text{i}}/\beta\leq t_{\text{eq}}\), we then have \(L_{\text{i}}=(L+\Gamma\,G\mu\,t)/(1+\lambda)\), where \(\lambda=\Gamma G\mu/\beta\) is the decay-rapidity parameter, a measure of how long after formation a given loop will exist before complete evaporation. Low values of \(\lambda\) describe long-lived loops, while for \(\lambda>1/\beta\) the loops decay within one oscillation, and the expressions for the oscillation-averaged power should be re-examined. A loop forming at \(t_{\text{i}}\) decays at cosmic time \(t_{\text{decay}}\) given by \[t_{\text{decay}}=\left(\frac{1+\lambda}{\lambda}\right)t_{\text{i}}. \tag{8}\] From this, it is easy to derive that the total lifetime of any given loop is \(t_{\text{lifetime}}=t_{\text{decay}}-t_{\text{i}}=t_{\text{i}}/\lambda\). At any given time, the initial loop lengths that contribute to background injections are \(L_{\text{i,min}}\leq L\leq\beta t_{\text{i}}\), where \(L_{\text{i,min}}=\beta\lambda t/(1+\lambda)<\beta t\). As \(\lambda\) increases we find that \(L_{\text{i,min}}\rightarrow\beta t\), such that the largest initial loops vanish rapidly. For typical choices of our parameters, \(G\mu\) and \(\mathcal{I}\), the loops are long-lived, implying \(t_{\text{decay}}\gg t_{\text{i}}\), although we will also discuss more extreme cases where fast loop decay is possible. Using \(L_{\text{i}}=(L+\Gamma\,G\mu\,t)/(1+\lambda)\), with Eq. (1) we can then write \[\frac{\text{d}N_{\text{loops}}}{\text{d}L}\bigg{|}_{\text{r}}=\frac{\alpha \,(1+\lambda)^{3/2}}{t^{3/2}(L+\Gamma G\mu\,t)^{5/2}}\times\begin{cases}1&(t \leq t_{\text{eq}})\\ (\frac{t_{\text{eq}}}{t})^{1/2}&(t>t_{\text{eq}}),\end{cases} \tag{9}\] where the distribution is defined for \(0\leq L\leq L_{\text{max}}(t)\). At \(t\leq t_{\text{eq}}\), one has \(L_{\text{max}}(t)=\beta t\). However, for \(t>t_{\text{eq}}\), the loops last sourced at \(t=t_{\text{eq}}\) have shrunken to \(L_{\text{max}}(t)=\beta t_{\text{eq}}[1+\lambda-\lambda t/t_{\text{eq}}]< \beta t_{\text{eq}}\) at time \(t\). This implies that the last loops sourced at \(t_{\text{eq}}\) only exists at \(t\leq t_{\text{end}}\equiv t_{\text{eq}}(1+\lambda)/\lambda\). Figure 1 highlights the region of parameter space which undergoes rapid decays (i.e. \(\lambda\geq 1/\beta\)). 
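The remaining network quantities of Eqs. (5)-(9) follow directly from these definitions; the sketch below (again only an illustration, not the authors' code) collects the critical current, the decay coefficient, the decay-rapidity parameter \(\lambda\) and the radiation-era loop distribution for a given \((G\mu,\mathcal{I})\), with the same fiducial constants as above and \(\alpha=0.18\), \(\beta\approx 0.1\).

```python
# Illustrative sketch: critical current, decay coefficient, decay-rapidity
# parameter and the radiation-era loop distribution, following Eqs. (5)-(9).
import numpy as np

m_pl = 1.22e19                      # Planck mass [GeV]
G = 1.0 / m_pl**2                   # [GeV^-2]
Gamma_g, Gamma_gamma = 100.0, 10.0
alpha, beta = 0.18, 0.1             # loop normalisation and sourcing scale

def decay_parameters(Gmu, I_GeV):
    """Return (I_*, Gamma, lambda) for a given string tension and current."""
    mu = Gmu / G
    I_star = (Gamma_g / Gamma_gamma) * G * mu**1.5   # Eq. (5)
    Gamma = Gamma_g * (1.0 + I_GeV / I_star)         # Eq. (6)
    lam = Gamma * Gmu / beta                         # decay-rapidity parameter
    return I_star, Gamma, lam

def dN_dL_radiation(L, t, Gmu, I_GeV, t_eq):
    """Radiation-era loop distribution of Eq. (9); valid for 0 <= L <= L_max(t)."""
    _, Gamma, lam = decay_parameters(Gmu, I_GeV)
    out = alpha * (1.0 + lam)**1.5 / (t**1.5 * (L + Gamma * Gmu * t)**2.5)
    return out if t <= t_eq else out * np.sqrt(t_eq / t)

I_star, Gamma, lam = decay_parameters(1e-11, 1e4)
print(f"I_* ~ {I_star:.1e} GeV, Gamma ~ {Gamma:.1f}, lambda ~ {lam:.1e}")
# A loop born at t_i survives for t_i/lambda and disappears at (1+lambda)/lambda * t_i,
# cf. Eq. (8); small lambda therefore corresponds to long-lived loops.
```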
For reference we also show the line for which \(t_{\text{end}}\simeq t_{0}\) (i.e., \(z_{\text{end}}=0\)) is equal to the age of the universe, \(t_{0}\), or \(\lambda\simeq 4\times 10^{-6}\). ### Formation in the Matter era For loops formed in the matter-dominated era, a very similar picture can be developed. Simulations show that these evolve according to \[\frac{\text{d}N}{\text{d}L_{\text{i}}}\simeq\frac{\alpha_{\text{m}}}{t^{2}L_{ \text{i}}^{2}}. \tag{10}\] We will assume that the sourcing scale in the matter-dominated era is the same as in the radiation dominated era, i.e., \(\beta_{\text{m}}t\simeq\beta t\). Then, at \(t\) the _smallest_ loop scale is \(L_{\text{min}}(t)=\beta[t_{\text{eq}}-\lambda(t-t_{\text{eq}})]\). This implies two regimes: at \(t_{\text{eq}}\leq t\leq t_{\text{end}}=t_{\text{eq}}(1+\lambda)/\lambda\) the minimal loop length fulfills \(0\leq L_{\text{min}}(t)\leq\beta t_{\text{eq}}\), while at \(t>t_{\text{end}}\) one has \(L_{\text{min}}(t)=0\). The loop distribution is then defined at \(L_{\text{min}}(t)\leq L\leq\beta t\). To fix the normalization, \(\alpha_{\text{m}}\), we assume that at \(t_{\text{eq}}\) the radiation-dominated loop distribution at the sourcing scale is the same as the matter-dominated one. This gives the condition \[\frac{\alpha_{\text{m}}}{t_{\text{eq}}^{2}L_{\text{i,eq}}^{2}}=\frac{\alpha}{t_ {\text{eq}}^{3/2}L_{\text{i,eq}}^{5/2}}\quad\to\quad\alpha_{\text{m}}=\alpha /\sqrt{\beta}\approx 0.57. \tag{11}\] with \(L_{\text{i,eq}}=\beta t_{\text{eq}}\). With these expressions we then have \[\frac{\text{d}N_{\text{loops}}}{\text{d}L}\bigg{|}_{\text{m}}=\frac{\alpha_{ \text{m}}\left(1+\lambda\right)}{t^{2}(L+\Gamma G\mu\,t)^{2}} \tag{12}\] at \(L_{\text{min}}(t)\leq L\leq\beta t\) with \(L_{\text{min}}(t)=0\) at \(t>t_{\text{end}}\). ## 3 Analytic Estimates - Energy Release The injection of non-thermal photons into the microwave background before recombination can lead to distortions in the frequency spectrum as the plasma attempts to thermalize this new radiation. The time-dependence of photon injection determines the spectral shape of this distortion, with roughly three eras emerging (Chluba, 2015). At high redshifts (\(z\gtrsim z_{\text{th}}\), with \(z_{\text{th}}\approx 2\times 10^{6}\)), injections are fully thermalized through a combination of Bremsstrahlung, Compton, and double Compton scattering, producing an average (unobservable) temperature increase. At lower redshifts, number changing processes freeze out. However, repeated Compton scattering with the background electrons drives the CMB photons towards kinetic equilibrium with a spectrum described by the Bose-Einstein distribution with a small chemical potential (\(\mu\)) for \(z\geq 3\times 10^{5}\)(Illarionov & Sunyaev, 1975; Burigana et al., 1991; Hu & Silk, 1993). At redshifts lower than this, Compton scattering becomes less efficient and the distorted spectrum may be analytically described by a \(y\)-distortion (Zeldovich & Sunyaev, 1969). The _COBE/FIRAS_ instrument remains state-of-the-art when it comes to upper bounds on these distortion parameters, providing \(|\mu|<9\times 10^{-5}\) and \(|y|<1.5\times 10^{-5}\) at \(2\sigma\)(Fixsen et al., 1996). Some improvements have been discussed in Gervasi et al. 
(2008b); Bianchini & Fabbian (2022); however, only in the future can we expect significant advances with _PIXIE_ (Kogut et al., 2011, 2016), BISOU (Maffei et al., 2021), COSMO (Masi et al., 2021), TMS (Rubino-Martin et al., 2020) or a spectrometer within the ESA Voyage 2050 space program (Chluba et al., 2021). As a network of superconducting cosmic strings decays, it is capable of injecting both significant energy and entropy into the background radiation. Previously, Tashiro et al. (2012a) computed analytic bounds on the string tension and current by considering the \(\mu\) and \(y\) distortions caused by a pure energy injection. As we will illustrate below, their analysis neglected a factor of \(\Gamma^{-1/2}\), an omission which yielded more stringent constraints at high currents than we find here. Our \(\mu\)-distortion estimates agree better with those presented in Miyamoto & Nakayama (2013); however, we will show that the simple energy release arguments become inaccurate at late times and for high currents, requiring a more detailed treatment of the distortion evolution using CosmoTherm. Additional simple analytic estimates for several astrophysical and cosmological observables can be found in Miyamoto & Nakayama (2013). In the following sections, we will further refine these analytic distortion estimates by including the resultant entropy injection, yielding a novel shape for the constraints on \(G\mu\) and \(\mathcal{I}\). Later on, we validate these approximations by comparing against the full numerical solution, computed with the thermalization code CosmoTherm. The numerical analysis allows us to go beyond the simple \(\mu\) and \(y\) estimates, by comparing the full spectral shape obtained with a cosmic string network to the residuals of the _COBE/FIRAS_ experiment. ### Pure Energy Injection - Radiation Loops To obtain analytic approximations for the amplitude of these distortion parameters, we apply a Green's function approach (method B of Chluba, 2016) in which \(\mu\) and \(y\) are determined through \[\mu\approx 1.401\int_{z_{\rm rec}}^{\infty}{\rm d}z\,\frac{1}{\rho_{\gamma}}\frac{{\rm d}Q}{{\rm d}z}\,\mathcal{J}_{\rm bb}(z)\,\mathcal{J}_{\mu}(z),\qquad y\approx\frac{1}{4}\int_{z_{\rm rec}}^{\infty}{\rm d}z\,\frac{1}{\rho_{\gamma}}\frac{{\rm d}Q}{{\rm d}z}\,\mathcal{J}_{\rm bb}(z)\,\mathcal{J}_{y}(z), \tag{13}\] where \(\mathcal{J}_{\rm bb}(z)\) is the distortion visibility function and \(\mathcal{J}_{\mu}(z)\), \(\mathcal{J}_{y}(z)\) are the branching ratios of the released energy into the \(\mu\) and \(y\) eras. The heating rate entering these integrals is obtained by averaging the electromagnetic power of Eq. (3) over the radiation-era loop distribution of Eq. (9), \({\rm d}Q/{\rm d}t|_{\rm r}=\int{\rm d}L\,({\rm d}N_{\rm loops}/{\rm d}L)|_{\rm r}\,P_{\gamma}(L)\).
However, in the very large current regime, one can also find regions where \(\lambda\gg 1\), such that the heating rate from radiation loops becomes \[\frac{\mathrm{d}Q}{\mathrm{d}t}\bigg{|}_{\mathrm{r}}^{\mathrm{high}}\approx\frac{\alpha\Gamma_{\gamma}}{G^{1/2}}\frac{\mathcal{I}}{\Gamma^{3/2}G\mu}\frac{\lambda^{1/2}}{t^{3}}=\frac{\alpha\Gamma_{\gamma}}{G^{1/2}}\frac{\mathcal{I}}{\Gamma(G\mu)^{1/2}\beta^{1/2}}\frac{1}{t^{3}}\approx\frac{\alpha\Gamma_{\gamma}}{G^{1/2}}\frac{\mathcal{I}_{*}}{\Gamma_{\rm g}(G\mu)^{1/2}\beta^{1/2}}\frac{1}{t^{3}}=\frac{\alpha_{\mathrm{m}}}{G}\frac{G\mu}{t^{3}} \tag{16}\] independent of \(\mathcal{I}\) at \(t\leq t_{\mathrm{eq}}\). Here we used \(\Gamma\simeq\Gamma_{\mathrm{g}}\mathcal{I}/\mathcal{I}_{*}\) for the decay rate and \(\alpha_{\mathrm{m}}=\alpha/\sqrt{\beta}\). This expression is not valid for \(t\geq t_{\mathrm{eq}}\), as for \(\lambda\gg 1\) the radiation loops fully evaporate at \(t_{\mathrm{end}}\approx t_{\mathrm{eq}}\). One should note that the electromagnetic power generated by a cusp decay (\(P_{\gamma}\)) is an oscillation-averaged quantity, as the loop must undergo at least one oscillation for cusp formation to take place. However, an implication of \(\lambda>1/\beta\) is that the loop should fully decay before a single oscillation can even take place. We therefore urge the reader to be cautious about statements made in regions of parameter space with \(\lambda>1/\beta\), which is indicated by the red region in Fig. 1. Inserting the injection rate into Eq. (13), and noting that the background energy density (in natural units) is \(\rho_{\gamma}=(\pi^{2}/15)T^{4}\), we find the following estimates for \(\mu\) and \(y\) from only the radiation loops \[\mu\simeq 5.5\times 10^{-6}\left[\frac{\zeta_{\rho}}{10^{11}\,\mathrm{GeV}}\right]\qquad(\lambda\ll 1) \tag{17a}\] \[y\simeq 1.3\times 10^{-6}\left[\frac{\zeta_{\rho}}{10^{11}\,\mathrm{GeV}}\right]\qquad(\lambda\ll 1), \tag{17b}\] where we introduced the variable \(\zeta_{\rho}=\mathcal{I}/(\Gamma^{3/2}G\mu)\), which describes the overall non-linear scaling of the total energy release with \(\mathcal{I}\) and \(G\mu\), provided the loop distribution is not near its evaporation redshift. Note that \(\Gamma\) is a function of both parameters. It can also be useful to phrase our constraints in terms of the fractional energy injection into the background, \(\Delta\rho_{\gamma}/\rho_{\gamma}\), which is given by \[\frac{\Delta\rho_{\gamma}}{\rho_{\gamma}}\bigg{|}_{\mathrm{r}}=\int_{z_{\mathrm{end}}}^{\infty}\mathrm{d}z\,\frac{1}{\rho_{\gamma}}\,\frac{\mathrm{d}Q}{\mathrm{d}z}\bigg{|}_{\mathrm{r}}\,\mathcal{J}_{\mathrm{bb}}(z). \tag{18}\] The fractional energy release from superconducting cosmic strings over the entire distortion window is therefore \[\frac{\Delta\rho_{\gamma}}{\rho_{\gamma}}\bigg{|}_{\mathrm{r}}\simeq 8.9\times 10^{-6}\left[\frac{\zeta_{\rho}}{10^{11}\,\mathrm{GeV}}\right]\qquad\qquad(\lambda\ll 1). \tag{19}\] This is the expression one should use in deriving constraints based purely on the total amount of energy release. However, it neglects the effects of partial Comptonization and also photon injection, which can change the character of the distortion significantly (Chluba, 2015). ### Energy Injection from Matter Loops At \(t\geq t_{\mathrm{eq}}\), loops that are formed during the matter-dominated era can also contribute.
To compute the energy release rate from these loops, we can follow the same steps as above but instead average over the loop distribution from the matter era: \[\frac{\mathrm{d}Q}{\mathrm{d}t}\bigg{|}_{\mathrm{m}}=\int_{L_{\mathrm{min}}}^{\beta t}\mathrm{d}L\,\left.\frac{\mathrm{d}N_{\mathrm{loops}}}{\mathrm{d}L}\right|_{\mathrm{m}}\,P_{\gamma}(L) \tag{20a}\] \[L_{\mathrm{min}}=\left[\beta t_{\mathrm{eq}}(1+\lambda-\lambda t/t_{\mathrm{eq}}),0\right]_{>}. \tag{20b}\] Here and below, we use the shorthand notation \([a,b]_{>}=\max\{a,b\}\) and \([a,b]_{<}=\min\{a,b\}\). Computation of this integral gives an energy injection rate \[\frac{\mathrm{d}Q}{\mathrm{d}t}\bigg{|}_{\mathrm{m}}=\frac{\alpha_{\mathrm{m}}\Gamma_{\gamma}}{G^{1/2}}\frac{\mathcal{I}\left(G\mu\right)^{1/2}}{\beta}\frac{1}{t^{3}}\times\begin{cases}\left[\frac{t}{t_{\mathrm{eq}}}-1\right]&(t_{\mathrm{eq}}\leq t\leq t_{\mathrm{end}})\\ \left[\frac{t_{\mathrm{end}}}{t_{\mathrm{eq}}}-1\right]&(t_{\mathrm{end}}<t),\end{cases} \tag{21}\] with \(t_{\mathrm{end}}=t_{\mathrm{eq}}\left(1+\lambda\right)/\lambda\). Note that for \(\lambda\lesssim 4\times 10^{-6}\), all matter loops survive until today (\(t_{\mathrm{end}}\geq t_{0}\)). In Fig. 3 we illustrate the contribution to the heating rate from cosmic string loops created during the matter dominated era. The overall amplitude of the signal is governed primarily by two factors: the number density of the distribution, and the smallest loop size, \(L_{\mathrm{min}}\), as is evident from Eq. (21). After matter-radiation equality, \(L_{\mathrm{min}}\) decreases as the loops oscillate and decay, leading to a strong increase in the injection rate, as can be seen on the right hand side of the knees in Fig. 3. At \(t_{\mathrm{end}}\) (the position of the knee), matter loops begin leaving the distribution as they fully evaporate, leading to a slowing of the injection rate. Since the distribution redshifts like matter, its contribution to the background energy density grows linearly in \(z\) relative to the CMB. Figure 3: Energy injection rate from loops which form in the matter dominated era. The amplitude of the signal is primarily governed by the number density of matter loops at any given time. A “knee” develops when \(t_{\mathrm{end}}\) is crossed for a given parameter set. The slowing of the growth of the injection rate below that redshift is because the distribution now exhibits a source-sink behaviour as the smallest loops fully evaporate, in contrast to just a source before that point. For large currents, we can again approximate the heating rate in the \(\lambda\gg 1\) regime: \[\frac{\mathrm{d}Q}{\mathrm{d}t}\bigg{|}_{\mathrm{m}}^{\mathrm{high}}\approx\frac{\alpha_{\mathrm{m}}\Gamma_{\gamma}}{G^{1/2}}\frac{\mathcal{I}\sqrt{G\mu}}{\beta\,t^{3}}\frac{1}{\lambda}\approx\frac{\alpha_{\mathrm{m}}}{G}\frac{G\mu}{t^{3}}. \tag{22}\] This is identical to Eq. (16), which is expected since fundamentally nothing special happens to the loop distribution at \(t=t_{\mathrm{eq}}\). For \(\lambda\gg 1\), all of the energy in the newly formed loops is immediately emitted. The steady rise in fractional energy injection shown in the \(\lambda\to\infty\) contour of Fig. 3 is obtained because the energy density in the loops redshifts as \(a^{-3}\) while the CMB photons redshift as \(a^{-4}\). We show the total contribution of both matter and radiation loops in Fig. 4. Interestingly, the fractional injection rate converges at late times for a wide range of parameters. This behaviour persists along
The slowing of the growth of the injection rate below that redshift is because the distribution now exhibits a source-sink behaviour as the smallest loops fully evaporate, in contrast to just a source before that point. contours of constant \(G\mu\) once \(\mathcal{I}\gg I_{*}\). In this current domination region of parameter space, adjustments to \(G\mu\) will linearly shift the amplitude of the late time emission signal, while in the gravitational wave regime (\(\mathcal{I}\leq I_{*}\)), the scaling of the signal is more complicated. The overall degradation of the signal with high currents is due to the fact that a smaller density of loops is available at any given time for these models. For \(\mathcal{I}\gg I_{*}\), we have we have \(\zeta_{\rho}\propto I^{-1/2}\), which encodes this rapid decay effect. With the injection function, we may now estimate the fractional energy release from matter loops. For small \(\lambda\) (equivalently, \(t_{\rm end}\gg t_{\rm eq}\)), we find that from \(z_{\rm rec}\leq z\leq z_{\rm eq}\), the energy release is \[\left.\frac{\Delta\rho_{\gamma}}{\rho_{\gamma}}\right|_{\rm m}\simeq 2.9\times 1 0^{-16}\left[\frac{\mathcal{I}}{10^{4}\,{\rm GeV}}\right]\left[\frac{G\mu}{10 ^{-10}}\right]^{1/2}\ \ \ \ \ (\lambda\ll 1). \tag{23}\] Recall that small values of \(\lambda\) imply a long lifetime for a given loop. Since most of the energy release comes from the smallest loops present in a given distribution, matter loops will always source a subdominant contribution to the primordial distortion signal relative to radiation loops for \(\lambda\ll 1\). As indicated in Fig. 3 and 4, matter loops begin to play more of a role in pre-recombination injection for \(\lambda\gtrsim 0.1\). While not a factor for primordial spectral distortions, low redshift emission from the matter loops can be constrained by the evolution of the ionization fraction. This emission may also play a role in our understanding of the EDGES and ARCADE-2 signals, as we will see below. In Fig. 5 we show the constraints on energy injection in this picture using the _COBE/FIRAS_ data (\(\Delta\rho/\rho\leq 6\times 10^{-5}\)), as well as a simple forecast from a _PIXIE_-like instrument (\(\Delta\rho/\rho\leq 2\times 10^{-8}\)). For _COBE/FIRAS_, the quoted energy release levels are consistent with the \(2\sigma\) errors on \(\mu\) and \(y\), while for the _PIXIE_-like setup we assume that a penalty from marginalization over foregrounds (Abitbol et al., 2017; Roti & Chluba, 2021) is already included. The constraints are nearly symmetric about the \(\mathcal{I}=I_{*}\) contour, with departures arising in the \(\lambda\simeq 1\) regime due to rapidly decaying loops. The work of Tashiro et al. (2012) presented a flat constraint in the \(\mathcal{I}\geq I_{*}\) region, a symptom of them missing a factor of \(\Gamma^{-1/2}\) in their equivalent expression of Eq. (19). Our energy injection constraints are roughly consistent with the analysis of Miyamoto & Nakayama (2013). In the next section, we include the effects of entropy injection which cause further significant modifications to these constraints, as can be seen in Fig. 11. ## 4 Analytic estimate - entropy release Energy release is not the only way one can disturb the CMB spectrum. An often overlooked fact of distortion theory is that the production of additional photons also perturbs the spectrum. 
In the \(\mu\) era, the total distortion should be expressed as (Hu, 1995; Chluba, 2015) \[\mu\simeq 1.401\left[\frac{\Delta\rho_{\gamma}}{\rho_{\gamma}}-\frac{4}{3} \frac{\Delta N_{\gamma}}{N_{\gamma}}\right]\,. \tag{24}\] Therefore, models which introduce direct photon production (such as this one) must necessarily take into account both energy and entropy injection when determining constraints. In fact, this entropy injection is capable of producing a net negative distortion signature. Of course, _COBE/FIRAS_ is only sensitive to \(|\mu|\), but as we will see below, there are regions of parameter space in which the constraints of Tashiro et al. (2012) and Miyamoto & Nakayama (2013) are significantly altered. There are also regions in parameter space that cannot be easily estimated analytically, since strong non-\(\mu\)/non-\(y\) distortion signals are created ( Chluba, 2015), requiring the numerical treatment presented below. To obtain a description for the photon source term, we adapt the work of Cai et al. (2012) who derived the power emitted in photons per unit frequency, per unit solid angle from a superconducting cosmic string cusp annihilation (averaged over an oscillation time): \[\frac{\mathrm{d}^{2}P_{\gamma}^{\rm c}}{\mathrm{d}\omega\mathrm{d}\Omega} \simeq\left(\frac{\Gamma_{\gamma}}{3}\right)\mathcal{I}^{2}L. \tag{25}\] The cusp is a highly relativistic object, and so the radiation is heavily beamed, with all emission in a solid angle \(\Omega=(\omega L)^{-2/3}\). Here, \(\omega\) is the frequency of emitted radiation. Integrating Eq. (25) over solid Figure 4: The total energy injection rate from a string loop distribution. For a constant \(G\mu\), the late time injection rate converges for \(\mathcal{I}\gg I_{*}\). In contrast, contours of current \(\mathcal{I}\) will experience amplitude shifts proportional to \(G\mu\) in the current-dominated regime. Figure 5: Solid lines indicate constraints on string parameters obtained by requiring \(\Delta\rho/\rho\leq 6\times 10^{-5}\) (\(2\sigma\) for _COBE/FIRAS_), and \(\Delta\rho/\rho\leq 2\times 10^{-8}\) (\(2\sigma\) forecast for _PIXIE_-type instrument). Statements in the \(\lambda\geq 1\) region of the plot should be viewed skeptically, as rapid decays may cause the injection framework to break down. The dash-dotted lines are constraints obtained by \(\mu\) distortions over the redshift range \(3\times 10^{5}\leq z\leq 2\times 10^{6}\), and may be directly compared to the energy+entropy constraints illustrated in Fig. 11. angle we find the instantaneous spectrum from a single loop \[\frac{\mathrm{d}^{2}E_{\gamma}^{\mathrm{c}}}{\mathrm{d}t\omega}\approx\left( \frac{\Gamma_{\gamma}}{3}\right)\frac{T^{2}L^{1/3}}{\omega^{2/3}},\qquad\qquad \frac{\mathrm{d}^{2}N_{\gamma}^{\mathrm{c}}}{\mathrm{d}t\omega}\approx\left( \frac{\Gamma_{\gamma}}{3}\right)\frac{T^{2}L^{1/3}}{\omega^{5/3}}. \tag{26}\] Here, \(E_{\gamma}^{\mathrm{c}}\) and \(N_{\gamma}^{\mathrm{c}}\) are the total energy and number of photons produced by a given loop. We introduced the factor of \(\Gamma_{\gamma}/3\) in these equations to properly match the total power produced by a loop with Eq. (3).1 The numerical prefactor depends on the precise shape of a loop, and so the \(\Gamma_{\gamma}\) factor averages over the geometries of a network of loops. Upon integration of the differential energy spectrum, one notices that the total power is dominated by the highest frequency photons produced at a cusp. Cai et al. 
(2012) have estimated this frequency to be \[\omega_{\mathrm{max}}\approx\frac{\mu^{3/2}}{T^{3}L}. \tag{27}\] Footnote 1: This can be easily confirmed by integrating over \(\omega\) between \(0\) and \(\omega_{\mathrm{max}}\), which yields \(\mathrm{d}E_{\gamma}^{\mathrm{c}}/\mathrm{d}t\approx\Gamma_{\gamma}\,T^{2}L^{1/3}\,\omega_{\mathrm{max}}^{1/3}=\Gamma_{\gamma}\,I\,\mu^{1/2}\), reproducing Eq. (3). The cusp covers a finite spatial extent on the string, and so this result is derived by requiring that the total energy released in a cusp decay does not exceed the rest-mass energy of this region.

### Total emission from loops

To obtain the number density injection from all loops, one may first determine the number of photons produced per unit time, and then average over the full loop distribution. For our purposes, it will be useful to introduce the dimensionless frequency as \(x=\omega/T(z)\). The photon spectrum from cosmic strings in principle extends to arbitrarily low energies2; however, low frequency photons are at risk of being absorbed by the plasma and converted into heat (Chluba, 2015; Bolliet et al., 2021). The survival probability, \(\mathcal{P}_{\mathrm{s}}(x,z)\approx\mathrm{e}^{-x_{\mathrm{c}}/x}\), tells us how many photons survive as a true entropy injection. Footnote 2: Plasma effects (i.e., the plasma frequency or the Razin effect) become important at frequencies well below the domain of interest. At high redshifts (\(z\gtrsim 10^{5}\)), we use the simple expression for the critical frequency (Chluba, 2014) \[x_{\mathrm{c}}\approx 8.6\times 10^{-3}\sqrt{\frac{1+z}{2\times 10^{6}}}\sqrt{1+\left[\frac{1+z}{3.8\times 10^{5}}\right]^{-2.344}}. \tag{28}\] This frequency is determined through the balance of Compton, double Compton (DC), and Bremsstrahlung (BR) effects. At later times (\(z\lesssim 10^{5}\)), the absorption probability is mainly determined by the free-free process, without much contribution from Compton scattering. In this regime, we estimate the absorption probability by finding the frequency at which the free-free optical depth is close to unity (Chluba, 2015; Bolliet et al., 2021). The two regimes are merged in Fig. 6 over the relevant redshift range. At recombination, the fraction of free electrons and protons drops substantially, leading to a sharp decrease in the absorption probability, which then remains constant until reionization begins. Integrating the right hand expression of Eq. (26) with the factor3 \(\mathcal{P}_{\mathrm{s}}(x,z)\approx\mathrm{e}^{-x_{\mathrm{c}}/x}\) yields the total number of photons produced per cosmic string which will contribute to an entropy injection Footnote 3: Although the free-free optical depth scales as \(\tau_{\mathrm{ff}}\propto\ln(2.25/x)/x^{2}\) at low frequencies, we keep our estimates simple and just replace the critical frequency with the free-free absorption frequency. This does not significantly alter the illustrations that are presented below. For the final constraints, we explicitly compute the distortion signal without these approximations. \[\frac{\mathrm{d}N_{\gamma}^{\mathrm{c}}}{\mathrm{d}t}=\frac{\Gamma_{\gamma}}{3}\frac{T^{2}L^{1/3}}{T^{2/3}}\int_{0}^{x_{\mathrm{max}}}\mathrm{d}x\,\frac{\mathrm{e}^{-x_{\mathrm{c}}/x}}{x^{5/3}}\] \[=\frac{\Gamma_{\gamma}}{3}\frac{T^{2}L^{1/3}}{(x_{\mathrm{c}}T)^{2/3}}\,\Gamma\!\left[\frac{2}{3},\frac{x_{\mathrm{c}}}{x_{\mathrm{max}}}\right].
\tag{29}\] For most reasonable choices of \(G\mu\) and \(I\), \(x_{\mathrm{c}}\ll x_{\mathrm{max}}\), where the incomplete gamma function is well approximated by \(\Gamma[2/3,x_{\mathrm{c}}/x_{\mathrm{max}}]\approx\Gamma[2/3]\approx 1.354\). However, as is evident from Eq. (27), large values of \(I\) and small values of \(G\mu\) serve to decrease \(x_{\mathrm{max}}\), implying that for those parameter choices, most of the produced photons are below \(x_{\mathrm{c}}\) and hence do not contribute as entropy. With Eq. (29), we can compute the entropy injection rate from the loop distribution in a similar way to the energy injection, \[\frac{\mathrm{d}N_{\gamma}}{\mathrm{d}t}=\int_{0}^{I_{\mathrm{up}}}\mathrm{d}L \frac{\mathrm{d}N_{\mathrm{loops}}}{\mathrm{d}L}\frac{\mathrm{d}N_{\mathrm{ cycles}}^{\mathrm{c}}}{\mathrm{d}t}. \tag{30}\] where \(L_{\mathrm{up}}\) depends on the case of interest (i.e. matter/radiation loops at a particular redshift). For clarity, let us first focus on loops created and evolving in the radiation-dominated era (i.e., \(t\leq t_{\mathrm{eq}}\)). The relevant integral then becomes \[\frac{\mathrm{d}N_{\gamma}}{\mathrm{d}t}=\frac{\alpha\Gamma_{\gamma}}{ 3}\frac{T^{2}(1+\lambda)^{3/2}}{(\Gamma G\mu)^{7/6}\,\nicefrac{{8}}{{3}}}\frac{ \Gamma[2/3]}{(x_{\mathrm{c}}T)^{2/3}}\times\mathcal{J}_{\mathrm{f}}(t_{\mathrm{ up}},\kappa) \tag{31a}\] \[\mathcal{J}_{\mathrm{f}}(t,\kappa)=\int_{0}^{I}\mathrm{d}t^{\prime}\,\frac{l^ {\prime\,1/3}}{(1+l^{\prime\,5/2}}\frac{\Gamma\left[\frac{2}{3},\kappa\,l^{ \prime}\right]}{\Gamma[2/3]}, \tag{31b}\] where we have substituted \(l=L/(\Gamma G\mu t)\) and \(l_{\mathrm{up}}=L_{\mathrm{up}}/(\Gamma G\mu t)\). We also used \(x_{\mathrm{c}}/x_{\mathrm{max}}=\kappa\,l\) with \(\kappa=x_{\mathrm{c}}\,T\,T^{2}\,\Gamma\,G\,t/\mu^{1/2}\). Figure 7 shows the general form of \(\mathcal{J}_{\mathrm{c}}(l,\kappa)\). Heuristically, \(\Gamma[2/3,\kappa\,l^{\prime\prime}]/\Gamma[2/3]\) acts as a normalized window function which closes rapidly for \(\kappa\,l^{\prime}\leq 1\). Smaller loops emit higher frequency photons, meaning that they are able to contribute to entropy injections more efficiently than the large loops. This is reflected by the fact that the argument of the window function depends on the length of any particular loop, as we illustrate in Fig. 8. For \(t>t_{\mathrm{eq}}\), we need to consider the evolution of loops created Figure 6: Evolution of the dimensionless critical frequency over redshift. Photons produced with a frequency \(x<x_{\mathrm{c}}\) are absorbed by the plasma quickly after their creation, and therefore do not contribute significantly to a photon number injection. The approximate expression in Eq. (28) is valid for \(z\gtrsim 10^{5}\), while below we use the pre-tabulated free-free absorption frequency to estimate the absorption probability. in the radiation dominated era as well as newly created loops. For the former, two changes occur. One is due to the change in the time-dependence of the evolution, which leads to an extra factor of \((t_{\rm eq}/t)^{1/2}\) and the other is due to the fact that the maximal loop scale now evolves from \(\beta l_{\rm eq}\) at \(t=t_{\rm eq}\) to \(\beta l_{\rm eq}(1+\lambda-\lambda t/t_{\rm eq})\) at \(t>t_{\rm eq}\). 
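The kernel \(\mathcal{J}_{\mathrm{r}}(l,\kappa)\) of Eq. (31b) is easily evaluated numerically. The sketch below assumes the integrand \(l^{\prime\,1/3}(1+l^{\prime})^{-5/2}\,\Gamma[2/3,\kappa l^{\prime}]/\Gamma[2/3]\) quoted above and uses purely illustrative arguments; it recovers \(\mathcal{F}_{\infty}\approx 0.623\) in the limit \(\kappa\to 0\), \(l\to\infty\), and shows how a finite \(\kappa\) closes the window for the larger loops.

```python
# Minimal numerical sketch of the entropy-injection kernel J_r(l, kappa) of Eq. (31b),
# J_r(l, kappa) = int_0^l dl' l'^(1/3) (1+l')^(-5/2) * Gamma[2/3, kappa l'] / Gamma[2/3].
# Illustrative arguments only; not the production integration routine.
from scipy.integrate import quad
from scipy.special import gammaincc   # regularized upper incomplete gamma Q(a, x)

def window(u):
    """Normalized window Gamma[2/3, u] / Gamma[2/3]; equals 1 at u=0 and closes for u >~ 1."""
    return gammaincc(2.0 / 3.0, u)

def J_r(l_up, kappa):
    """Loop-size integral of Eq. (31b), weighted by the absorption window."""
    integrand = lambda lp: lp ** (1.0 / 3.0) * (1.0 + lp) ** (-2.5) * window(kappa * lp)
    val, _ = quad(integrand, 0.0, l_up)
    return val

print(J_r(1e4, 0.0))                      # ~0.623, i.e. F_infinity
for kappa in (0.0, 0.1, 1.0, 10.0):       # growing kappa suppresses the large-loop contribution
    print(kappa, J_r(10.0, kappa))
```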
Put together, this yields \[\frac{{\rm d}N_{\gamma}}{{\rm d}t}\bigg{|}_{\rm m} =\frac{\alpha\Gamma_{\gamma}}{3}\frac{\mathcal{I}^{2}(1+\lambda) ^{3/2}}{(\Gamma G\mu)^{2/3}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \, allows us to quantify the \(\mu\) distortion by computing the relative energy and entropy injections during this era and using Eq. (24). Figure 10 illustrates the net buildup of the \(\mu\) distortion for some examples of string parameters. The figure indicates that the amplitude of a \(|\mu|\) distortion can be boosted substantially compared to the case where entropy injection is neglected, leading to more stringent constraints in some regions of parameter space. In Fig. 11 we show the \(\mu\) distortion constraints obtained by considering both entropy and energy injection from a distribution of string loops. The figure clearly shows a region of parameter space that is ruled out from entropy injection but was missed in the analysis of Tashiro et al. (2012) and Miyamoto & Nakayama (2013). For _PIXIE_, the left and right solid contours correspond to the usual energy injection constraints seen in Fig. 5, while the central contour is novel, and comes from entropy injection. For the _COBE/FIRAS_ results, the sensitivity to \(|\mu|\) is too weak to resolve the energy injection constraint at lower currents. The energy injection constraints appear weaker than in Fig. 5 simply because here we consider a smaller redshift window in which the injection is active. This choice is because for \(z\leq 3\times 10^{5}\), injected photons are not efficiently redistributed and hence Eq. (24) is not valid. To consider the spectral distortions produced below this redshift, one has to go beyond the simple analytic picture and solve the problem numerically. In the remainder of this work we compute and analyze these numerical solutions using CosmoTherm. ## 5 Numerical implementation in COSMOTEM In order to provide robust spectral distortion constraints, as well as to validate our analytic treatment, we now aim to implement source terms from cosmic strings into CosmoTherm (Chluba & Sunyaev, 2012), which is a state of the art cosmological thermalization code that allows explicitly treating multiple energy release and photon injection scenarios.4 Practically, CosmoTherm takes as input a redshift-dependent injection function describing the change in photon occupation number, and evolves it forward to determine the spectral distortion signature. To determine the total source term, we average Eq. (26) over the full loop distribution present at any given time. For clarity we first consider injection in the radiation era (i.e., cases with \(t\leq t_{\rm eq}\)), and then generalize to the matter-dominated era. 
Inserting the corresponding loop distribution, we then have Footnote 4: www.Chluba.de/CosmoTherm \[\left.\frac{\mathrm{d}^{2}E_{\gamma}}{\mathrm{d}t\mathrm{d}\omega} \right|_{\mathrm{t}} =\int_{0}^{L_{\rm max}}\mathrm{d}L\,\frac{\mathrm{d}^{2}E_{\gamma }^{2}}{\mathrm{d}t\mathrm{d}\omega}\,\frac{\mathrm{d}N_{\rm loops}}{\mathrm{d}L }\right|_{\mathrm{t}}\] \[=\left(\frac{\alpha\Gamma_{\gamma}}{3}\right)\frac{\Gamma^{2}}{t^ {3/2}}\frac{(1+\lambda)^{3/2}}{\omega^{2/3}}\int_{0}^{L_{\rm qp}(\omega)}\frac {\mathrm{d}L\,L^{1/3}}{(L+\Gamma G\mu)^{5/2}}, \tag{34}\] where the upper bound of the \(L\) integral depends on \(\omega\). Since the maximal frequency of the emission for a loop of length \(L\) is given by \(\omega_{\rm max}(L)=\mu^{3/2}/\left[\Gamma^{3}L\right]\)(Cai et al., 2012), at a given frequency \(\omega\) this means that a limiting length, \[L_{\rm lim}(\omega)=\frac{\mu^{3/2}}{\Gamma^{3}\omega}, \tag{35}\] should not be exceeded, implying \(L_{\rm qp}=\left[\beta t,L_{\rm lim}(\omega)\right]_{<}\). With the substitution \(l=L/\left[\Gamma G\mu t\right]\) we then find \[\left.\frac{\mathrm{d}^{2}E_{\gamma}}{\mathrm{d}t\mathrm{d}\omega}\right|_{ \mathrm{t}}=\left(\frac{\alpha\Gamma_{\gamma}}{3}\right)\frac{\Gamma^{2}\,\,(1+ \lambda)^{3/2}}{(\Gamma G\mu)^{7/6}\,t^{8/3}}\times\frac{\mathcal{F}_{t}(t_{ \rm qp})}{\omega^{2/3}}, \tag{36}\] where \(\mathcal{F}_{t}(l)=\mathcal{J}_{t}(l,0)\) with various approximations discussed in Appendix A. Since \(l_{\rm qp}\) scales strongly with frequency, the emission spectrum steepens at high frequencies. Also considering the evolution of loops from the radiation dominated era at \(t_{\rm eq}\leq t\leq t_{\rm end}\) we then find \[\left.\frac{\mathrm{d}^{2}E_{\gamma}}{\mathrm{d}t\mathrm{d}\omega} \right|_{\mathrm{t}} =\left(\frac{\alpha\Gamma_{\gamma}}{3}\right)\frac{\Gamma^{2}\,\,(1+ \lambda)^{3/2}}{(\Gamma G\mu)^{7/6}\,t^{8/3}}\frac{1}{\omega^{2/3}} \tag{37}\] \[\times\begin{cases}\mathcal{F}_{t}\left(\left[\frac{t_{\rm end}} {t_{\rm eq}}-1,l_{\rm lim}\right]_{<}\right)&(t\leq t_{\rm eq})\\ \mathcal{F}_{t}\left(\left[\frac{t_{\rm end}}{t}-1,l_{\rm lim}\right]_{<} \right)\left(\frac{t_{\rm eq}}{t}\right)^{1/2}&(t_{\rm eq}<t\leq t_{\rm end}).\end{cases}\] Figure 11: Constraints (\(2\sigma\)) obtained requiring \(|\mu|\leq 8.4\times 10^{-5}\) from _COBE/FIRAS_, and \(|\mu|\leq 2.8\times 10^{-8}\) for a _PIXIE_-like experiment. At high currents and string tensions, energy injection is responsible for sourcing a positive \(\mu\) distortion. However, as the string parameters are lowered, we see a large strip of parameter space that is ruled out due to excessive photon production, yielding a negative \(\mu\). Finally, _PIXIE_ is capable of constraining further regions of parameter space from energy injection at still smaller currents and tensions. Dashed-dotted lines correspond to negative \(\mu\) distortions. Figure 10: The buildup of a \(\mu\) distortion from energy and entropy injection for a range of cosmic string parameters. Solid lines represent positive \(\mu\) (where energy injection is dominant), while dashed lines show negative \(\mu\) (indicating entropy injection is the stronger process). The green contour illustrates an interesting case where a transition takes place between strong energy and strong entropy injection, which would lead to an anomalously low total \(\mu\). 
For the loops created in the matter-dominated era, we similarly find \[\left.\frac{\mathrm{d}^{2}E_{Y}}{\mathrm{d}t\mathrm{d}\omega_{\mathrm{ d}}}\right|_{\mathrm{m}} =\frac{\alpha_{\mathrm{m}}\Gamma_{Y}}{3}\frac{\mathcal{I}^{2}(1+ \lambda)^{3/2}}{(\Gamma G\mu)^{2/3}\,\mathcal{I}^{3/3}}\frac{1}{\omega^{2/3}} \tag{38}\] \[\times\begin{cases}\mathcal{F}_{\mathrm{m}}\left(\frac{t_{ \mathrm{end}}}{t}-1,\left[\frac{t_{\mathrm{end}}}{t_{\mathrm{eq}}}-1,l_{ \mathrm{lim}}\right]_{<}\right)&(t_{\mathrm{eq}}<t\leq t_{\mathrm{end}})\\ \mathcal{F}_{\mathrm{m}}\left(0,\left[\frac{t_{\mathrm{end}}}{t_{ \mathrm{eq}}}-1,l_{\mathrm{lim}}\right]_{<}\right)&(t_{\mathrm{end}}<t),\] with \(\mathcal{F}_{\mathrm{m}}\left(l_{a},l_{b}\right)=\mathcal{F}_{\mathrm{m}} \left(l_{a},l_{b},0\right)=\mathcal{F}_{\mathrm{m}}\left(l_{b}\right)- \mathcal{F}_{\mathrm{m}}\left(l_{b}\right)\) while \(l_{b}>l_{a}\) and zero otherwise. The expression for \(\mathcal{F}_{\mathrm{m}}\left(l\right)\) is given in Eq. (A4). This allows us to compute all the required emission spectra. As a simple example, let us approximate the spectrum of photons produced before matter-radiation equality. Starting from Eq. (36), we can use the approximation for \(\mathcal{F}_{\mathrm{r}}(l_{\mathrm{up}})\) found in Appendix A to obtain \[\left.\frac{\mathrm{d}^{2}E_{Y}}{\mathrm{d}t\mathrm{d}\omega_{\mathrm{d}}} \right|_{\mathrm{r}}\approx\left(\frac{\alpha\Gamma_{Y}}{3}\right)\frac{ \mathcal{I}^{2}(1+\lambda)^{3/2}}{\omega^{2/3}(\Gamma G\mu)^{7/6}\,\mathcal{I }^{8/3}}\frac{\mathcal{F}_{\infty}}{1+(l_{\mathrm{c}}/l_{\mathrm{up}})^{7/6}}, \tag{39}\] where \(\mathcal{F}_{\infty}=\Gamma\left(\frac{\gamma}{6}\right)\Gamma\left(\frac{ \gamma}{3}\right)/\sqrt{\pi}\approx 0.6232\) and \(l_{\mathrm{c}}\approx 1.314\) were determined by inspecting the asymptotic regimes of \(\mathcal{F}_{\mathrm{r}}(l_{\mathrm{up}})\) The approximation is valid to within \(10\%\) for \(l_{\mathrm{up}}\gtrsim 0.1\), and we discuss better approximations in the Appendix. For a given frequency and parameter set, \(l_{\mathrm{up}}\) can take on different values. Specifically, \[l_{\mathrm{up}} =\left\{\frac{t_{\mathrm{end}}}{t_{\mathrm{eq}}}-1=\frac{1}{ \lambda}\right. \left(\omega\leq\omega_{\mathrm{H}}\right) \tag{40}\] \[\left.\frac{L_{\mathrm{lim}}(\omega)}{\Gamma G\mu t}=\frac{1}{ \lambda}\frac{\omega_{\mathrm{H}}}{\omega}\right. \left(\omega>\omega_{\mathrm{H}}\right),\] where, \(\omega_{\mathrm{H}}\equiv\omega_{\mathrm{max}}(\beta t)=(G\mu)^{3/2}/[G^{3/2} \mathcal{I}^{3}\beta t]\). For most parameter values, \(\lambda\ll 1\), allowing us to simplify the spectrum as \[\left.\frac{\mathrm{d}^{2}E_{Y}}{\mathrm{d}t\mathrm{d}\omega} \right|_{\mathrm{r}} \approx\left(\frac{\alpha\Gamma_{Y}}{3}\right)\frac{\mathcal{I}^{ 2}(1+\lambda)^{3/2}}{\omega^{2/3}\,(\Gamma G\mu)^{7/6}\,\mathcal{I}^{8/3}} \begin{cases}\mathcal{F}_{\infty}&\omega\leq\omega_{\mathrm{H}}\\ \frac{\mathcal{F}_{\infty}}{1+(\omega/\omega_{\mathrm{eq}})^{7/6}}&\omega> \omega_{\mathrm{H}}\end{cases}\] \[\approx\left(\frac{\alpha\Gamma_{Y}}{3}\right)\frac{\mathcal{I}^ {2}(1+\lambda)^{3/2}}{\omega^{2/3}\,(\Gamma G\mu)^{7/6}\,\mathcal{I}^{8/3}} \frac{\mathcal{F}_{\infty}}{1+(\omega/\omega_{\mathrm{eq}})^{7/6}}. \tag{41}\] Here, we defined \(\omega_{\mathrm{k}}=\omega_{\mathrm{H}}/[\lambda l_{\mathrm{c}}]\). Going from the first to the second line, we note that for small \(\lambda\), we have \(\omega\leq\omega_{\mathrm{H}}\ll\omega_{\mathrm{k}}\). 
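The frequency dependence of Eq. (41) is a broken power law. The short sketch below (overall prefactor omitted, knee frequency chosen arbitrarily) verifies the two limiting logarithmic slopes discussed next.

```python
# Sketch of the shape of Eq. (41): dE/dt/domega ~ omega^(-2/3) / (1 + (omega/omega_k)^(7/6)),
# with the (alpha Gamma_gamma / 3) I^2 (1+lambda)^(3/2) / [(Gamma G mu)^(7/6) t^(8/3)] prefactor dropped.
import numpy as np

def spectrum_shape(omega, omega_k):
    """Frequency dependence of the approximate radiation-era emission spectrum."""
    return omega ** (-2.0 / 3.0) / (1.0 + (omega / omega_k) ** (7.0 / 6.0))

omega_k = 1.0                              # illustrative knee position (arbitrary units)
omega = np.logspace(-4, 4, 2001)
s = spectrum_shape(omega, omega_k)

# Logarithmic slope d ln S / d ln omega: -2/3 well below the knee, -2/3 - 7/6 = -11/6 above it
slope = np.gradient(np.log(s), np.log(omega))
print(slope[100], slope[-100])             # ~ -0.667 and ~ -1.833
```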
At low frequencies (\(\omega\ll\omega_{\mathrm{k}}\)), the spectrum decays as \(\omega^{-2/3}\). As we increase \(\omega\), we encounter a knee in the spectrum at \(\omega\approx\omega_{\mathrm{k}}\) after which the spectrum decays much more rapidly as \(\omega^{-11/6}\). The position of this knee can be seen for different values of the current and redshift in Fig. 12, where we illustrate the exact spectrum which we compute numerically. In Fig. 13 we separately show the contribution to the emission spectrum from loops formed in the matter and radiation eras for \(G\mu=10^{-11}\) and \(\mathcal{I}=4.2\times 10^{7}\) GeV. For \(z\gg z_{\mathrm{end}}\simeq 10\), the radiation loops dominate the contribution at all frequencies. As one approaches \(z\simeq z_{\mathrm{end}}\), the matter loops play a more important role, except at the highest frequencies. This is because the highest frequency photons are produced by the smallest loops, which still date back to the radiation era until \(z\) drops below \(z_{\mathrm{end}}\). Figure 12: The instantaneous spectrum of photons produced by a network of superconducting cosmic string loops as a function of the dimensionless frequency \(x=\hbar\omega/kT\). The left plot shows the response of the spectrum at constant \(G\mu\) and redshift, for variations of \(\mathcal{I}\). On the right we fix \(G\mu\) and \(\mathcal{I}\) to observe how the injection spectrum varies at different redshifts. For increasing current and decreasing redshift, the position of the knee moves to lower frequencies, indicating that fewer high energy photons are produced by the network. The knee position \(x_{\mathrm{k}}=\omega_{\mathrm{H}}(t)/[\lambda l_{\mathrm{c}}T\,]\) by indicated as dashed vertical lines. Figure 13: Contributions to the emission spectrum from loops formed in the matter era (dashed) and radiation era (solid) at different redshifts near \(z_{\mathrm{end}}\). At \(z\simeq z_{\mathrm{end}}\), the radiation loops play a dominant role only at the highest frequencies. At low frequencies, the contribution from matter and radiation loops is roughly equal at \(z=20\). Before this time, matter loops are subdominant for the entire frequency range for the chosen parameters. To validate this approximation, we can readily integrate over frequencies to find the total energy release rate \[\frac{\mathrm{d}Q}{\mathrm{d}t}\bigg{|}_{\mathrm{r}}=\frac{2}{3}\frac{\alpha \Gamma_{\gamma}}{G^{1/2}}\frac{\Gamma}{\Gamma^{3/2}G\mu}\frac{(1+\lambda)^{3/2 }}{t^{3}}\times\frac{3\pi\mathcal{F}_{\infty}}{\mathcal{I}_{\mathrm{c}}^{1/3} \cos(3\pi/14)}. \tag{42}\] This matches the exact result found in Eq. (15) to better than 3%. This small departure is caused by corrections to the total emission by means of our approximation to \(\mathcal{F}_{\mathrm{r}}(l)\). When using a refined approximation for \(\mathcal{F}_{\mathrm{r}}(l)\) in Eq. (23) and numerically integrating the function find agreement with the exact result at the level of \(\simeq 0.06\%\). For the photon number injection, the integral over frequency must be modulated by the probability factor, \(\mathrm{e}^{-\alpha\epsilon_{\mathrm{x}}/\omega}\), which makes it more difficult to tract analytically. For applications, we recommend simply taking the integral numerically. As input, CosmoTherm requires a time-dependent occupation number injection function. 
Schematically this can be done for expressions (36)-(38) by noting \[\frac{\mathrm{d}n_{\gamma}}{\mathrm{d}t}=\frac{\pi^{2}}{\omega^{3}}\frac{ \mathrm{d}^{2}E_{\gamma}}{\mathrm{d}t\mathrm{d}\omega}, \tag{43}\] where \(n_{\gamma}\) is the occupation number of the injected photons. We validated our implementations for \(\mathrm{d}^{2}E_{\gamma}/\mathrm{d}t\mathrm{d}\omega\) in multiple regimes, finding excellent agreement with the exact analytic expressions. ## 6 Numerical results After feeding in a source term of the form Eq. (43), CosmoTherm follows the direct evolution of photons with frequency \(x_{\mathrm{min}}\leq x\leq x_{\mathrm{max}}\)5. Heuristically, this code solves the coupled evolution equations for the photons and electrons on a finely spaced grid of redshift slices, which allows us to analyze spectral data at any time. In principle, the source term can take any form, making CosmoTherm a powerful and flexible tool to quickly and accurately assess the validity of many beyond the standard model (BSM) scenarios, such as decaying dark matter (Bolliet et al., 2021). Footnote 5: We usually use \(x_{\mathrm{min}}=10^{-8}\) and \(x_{\mathrm{max}}=150\) with 4000 grid points. ### Spectral information The main data product produced by CosmoTherm is the differential spectra \(\Delta I=I_{\mathrm{CT}}-I_{\mathrm{pl}}\), the difference between the numerically computed spectrum with source functions and a pure blackbody. Figure 14 illustrates the buildup of spectral distortions for a range of different redshifts and string parameters. In the top left panel, we note that even in the absence of an external source term, a non-zero distortion is observed. This comes from the fact that the adiabatic cooling of electrons is more rapid than the photons (\(T_{\mathrm{e}}\propto a^{-2}\) vs \(T_{\gamma}\propto a^{-1}\)). In reality, \(T_{\mathrm{e}}\simeq T_{\gamma}\) as the electrons are continuously heated by the photons through the process of Compton cooling, which is effective until \(z\simeq 10^{2}\). This process of energy extraction from the CMB produces the observed distortion in this panel. The remaining three panels show benchmark models with values of \(\mathcal{I}\) below, equal to, and above the critical current \(I_{\mathrm{c}}\). In each of these three cases, we can observe the buildup of a low frequency background of photons over time. Higher currents typically lead to stronger backgrounds, though the background is only significant for \(\omega\leq\omega_{\mathrm{k}}\) as can be seen by Fig. 12. This implies that low frequency data such as that from the ARCADE-2 (Fixsen et al., 2011) experiment will not be sensitive to arbitrarily large values of \(\mathcal{I}\). Positive \(\mu\) and \(y\) distortions are signified by an excess of photons at high frequencies, and a decrement at low frequencies when compared to a blackbody. For \(\mathcal{I}=10\) GeV and \(\mathcal{I}=\mathcal{I}_{\mathrm{s}}\), the distortion is primarily sourced by energy release, while for \(\mathcal{I}=10^{7}\) GeV, a strong entropy release generates a negative distortion. Radio observations and CMB experiments probe complementary regions of the induced cosmic string spectra, and with CosmoTherm we are able to utilize constraints from both datasets simultaneously. See Fig. 17, for additional illustration of the final distortion signals for these benchmark scenarios. Throughout this work we have chosen \(G\mu=10^{-11}\) as a fiducial value for illustrations. 
This choice has been made because these contours highlight all of the relevant physical effects. Here, we would like to discuss the changes one observes when varying \(G\mu\), and assuming that \(\lambda\ll 1\). First, we note that the energy injection rate scales as \(\mathrm{d}Q/\mathrm{d}t\propto\mathcal{I}/G\mu\) in the gravitational decay regime (GDR, \(\mathcal{I}\ll\mathcal{I}_{\mathrm{s}}\)), and as \(\mathrm{d}Q/\mathrm{d}t\propto(G\mu)^{5/4}/\mathcal{I}^{1/2}\) in the electromagnetic decay regime (EDR, \(\mathcal{I}\gg I_{\mathrm{s}}\)). This implies that the total energy injection is maximized for \(\mathcal{I}\simeq I_{\mathrm{s}}\), and that reducing \(G\mu\) causes a faster decay of the signal compared to increases in \(\mathcal{I}\) while in the EDR. This broadly explains the constraints presented in Fig. 5. In terms of direct photon production, the scaling is slightly more complicated. In the GDR, the overall amplitude of the spectrum follows \(\mathrm{d}^{2}N_{\gamma}/\mathrm{d}t\,\mathrm{d}\omega\propto\mathcal{I}^{2}/( G\mu)^{7/6}\), while in the EDR we have \(\mathrm{d}^{2}N_{\gamma}/\mathrm{d}t\,\mathrm{d}\omega\propto\mathcal{I}^{5/6} (G\mu)^{7/12}\). As expected, decreases of \(G\mu\) in the EDR cause a decrease in the amplitude. In contrast to the energy injection case, the total number of photons produced is not maximized on the \(\mathcal{I}=I_{\mathrm{s}}\) contour, but instead increases with \(\mathcal{I}^{5/6}\) in the EDR. However, the constraints on direct photon injection are sensitive to the precise frequencies of the produced photons. The position of the spectral knee determines an effective upper cutoff to the photon frequencies, and scales as \(\omega_{\mathrm{k}}\propto(G\mu)^{1/2}/\mathcal{I}^{3}\) in the GDR, and \(\omega_{\mathrm{k}}\propto(G\mu)^{2}/\mathcal{I}^{4}\) in the EDR. Thus, decreases in \(G\mu\) and increases in \(\mathcal{I}\) rapidly degrade this effective cutoff frequency. Once \(\omega_{\mathrm{k}}\leq\mathrm{MHz}\), the vast majority of the photons are produced outside of the sensitivity range of microwave and radio observations. ### Treating recombination history changes The emission produced by the cosmic string network also injects photons above the ionization thresholds of hydrogen and helium. This causes modifications to the recombination history of the Universe, which we treat approximately in this work. Specifically, photons outside the computational domain have to be added, leading to treatments discussed now. For the first treatment, we only include the total heating from photons outside of our computational domain, accounting for photons injected at both \(x\leq x_{\mathrm{min}}\) and \(x\geq x_{\mathrm{max}}\), but do not consider ionizations due to photons at frequency \(x\) above the atomic ionization thresholds. The energy density integrals are computed numerically at every time-step and then converted into heating of the baryonic matter. Since hotter electrons recombine less rapidly, this causes a delay of recombination. In the second treatment, we more carefully consider the interactions of photons above the atomic energy levels. 
For photons injected below the Lyman-\(\alpha\) line of hydrogen, we simply directly follow their evolution in the distortion domain, computing the Compton heating internally.6 At \(x\geq x_{\mathrm{I}\gg\alpha}\), we integrate the total energy density of photons and then use the method of Chen & Kamionkowski (2004) with refinements according to Chluba (2010) to add heating and ionizations to the recombination problem. At \(z\geq 3000\), we assume all the injected high-frequency energy is converted into heat. At \(z\leq 3000\), we do not add any photons at \(x\geq x_{\rm Ly-\alpha}\), assuming that these get efficiently absorbed and converted. A comparison of treatments one and two are shown in Fig. 15 with the orange and green contours respectively. In principle, we can also directly follow the evolution of photons in the Lyman continuum using CosmoTherm, though this also has limitations. While this process gives a more direct correspondence between the number of ionizing photons produced, and the total amount of ionizations (see Bolliet et al.2021, for details), it misses out on an important reprocessing effect. Namely, high energy photons (\(x\gg x_{\rm Ly-\alpha}\)) will both ionize and heat the background. This heating of the background can introduce important secondary ionizations, particularly in the high energy regime (e.g., Shull and van Steenberg 1985; Slatyer et al.2009; Valdes et al.2010) that can be missed otherwise. Bolliet et al. (2021) utilized this Lyman continuum treatment for generic decaying particle scenarios and at high energies found weaker bounds compared to different approaches employed by Capozzi et al. (2023) and Liu et al. (2021). The approximate treatment by Chen and Kamionkowski (2004) attempts to account for this by partitioning a fraction of the energy above the Lyman-\(\alpha\) threshold to use for ionizations, excitation and heating. We neglect the effect of excitations, which have been found to be minor (Galli et al.2013). A comparison of the free electron fraction for these two treatments can be found in Fig. 16, where we see that the pure Lyman continuum computation generically underestimates the effect. Prior treatments of the ionization history in the presence of strings (Tashiro et al.2012) utilized an incorrect spectral index for a large region of parameter space, which we correct here. Additionally, their analysis considered a simple photon counting procedure to derive their anisotropy constraints, which misses out on the reprocessing effect mentioned above. Ionization histories for our benchmark cases can be found in the right panel of Fig. 17. As mentioned above, our implementation is simplified and does not take into account complications of high-energy cascades (e.g., Figure 14: Snapshots of the distortion spectra \(\Delta I=I_{\rm CI}-I_{\rm pl}\) as output from CosmoTherm at different redshifts. Dashed lines represent a deficit of photons (a negative distortion) when compared with a blackbody. Top left: Output with no external heating sources. A small (negative) distortion is generated through the adiabatic cooling of electrons, which continuously extracts energy from the photon background. Top right: Inclusion of a weak source term from string loops. A positive \(\mu\) distortion is now generated through injection of energy at early times. At late times, a low-frequency spectrum of photons is generated. Bottom left: A string source with \(I=I_{\star}\), which produces a sizeable \(\mu\) distortion as well as a low frequency excess. 
Bottom right: String source with a high current. _COBE/FIRAS_ is capable of constraining this model based upon entropy injection. Slatyer et al., 2009; Hutsi et al., 2009; Slatyer, 2016), or most recently Liu et al. (2023a,b). However, it allows us to approximately follow the evolution of both the spectral distortions and the ionization fractions, which also can be used to compute the related 21-cm signals, as outlined in Acharya et al. (2022). Our treatment could be further improved by directly treating the ionizations Bolliet et al. (2021) and also adding secondary energetic particles. This would also allow us to go beyond the 'on-the-spot' approximation (Chen & Kamionkowski, 2004; Padmanabhan & Finkbeiner, 2005), but we leave a more detailed exploration to future work. ### Evolution at late stages At \(z\lesssim 500\), the effect of electron scattering on the evolution of the spectral distortions starts to become very small. We can therefore omit the broadening and shifting of distortions introduced at these late times. The photon evolution equation then simplifies, and we only have to include the \(y\)-distortion sources from differences in the electron and photon temperature, the emission and absorption of photons by the free-free process and the external photon source from cosmic strings. This greatly simplifies the calculation, as the evolution of the photon distribution in each frequency bin becomes independent. We confirmed that the results remain largely unaffected by this simplification. ### Soft photon heating and the global 21-cm signal As was recently discussed in Acharya et al. (2023), the presence of a sufficiently steep radio background produces an important backreaction effect on the 21-cm differential brightness temperature (\(\delta T_{\rm b}\)) at cosmic dawn. This brightness temperature is given by \[\delta T_{\rm b}=\frac{(1-\mathrm{e}^{-\tau_{21}})}{1+z}(T_{\rm s}-T_{\rm R}), \tag{44}\] where \(T_{\rm R}\) is the temperature of the background at 21-cm (usually assumed to be solely the CMB), \(\tau_{21}\) is the 21-cm optical depth, and \(T_{\rm s}\) is the spin temperature. The spin temperature is a measure of the ratio of hydrogen atoms in the triplet state relative to the singlet. Multiple prescriptions for calculations the evolution of \(T_{\rm s}\) can be found in the literature (Furlanetto, 2006; Hirata, 2006; Venumadhav et al., 2018). Overall, they can be expressed as \[T_{\rm s}^{-1}=\frac{x_{\rm R}T_{\rm R}^{-1}+x_{\rm c}T_{\rm m}^{-1}+x_{\alpha }T_{\alpha}^{-1}}{x_{\rm R}+x_{\rm c}+x_{\alpha}}. \tag{45}\] Here, \(T_{\rm R}\), \(T_{\rm m}\), and \(T_{\alpha}\) are the temperatures of the radiation, matter, and the colour temperature of the Lyman-\(\alpha\) radiation. Additionally, \(x_{\rm R}\), \(x_{\rm c}\), and \(x_{\alpha}\) are the radiative, collisional, and Wouthuysen-Field couplings respectively. During cosmic dawn, the dominant contribution to the spin temperature comes from the kinetic motion of the hydrogen atoms. In Acharya et al. (2023), it was shown that if an additional radio background is present with a significantly steep spectral index, the hydrogen atoms are heated and the spin temperatures rises. As the magnitude of \(\delta T_{\rm b}\) is proportional to \(T_{\rm R}/T_{\rm s}\), this soft photon heating (SPH) dampens the expected signal relative to the case where SPH is neglected. Brandenberger et al. (2019) considered the impact of superconducting strings on \(\delta T_{\rm b}\), but neglected this SPH. 
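Equations (44) and (45) can be combined into a short routine. The sketch below uses illustrative placeholder values for the couplings, optical depth and temperatures, and assumes \(T_{\alpha}\approx T_{\rm m}\), purely to show how raising \(T_{\rm m}\) through soft photon heating lifts \(T_{\rm s}\) and damps the absorption depth when an excess radio background raises \(T_{\rm R}\).

```python
# Schematic evaluation of Eqs. (44)-(45); all numbers are illustrative placeholders,
# not the couplings used in the actual 21-cm modelling.
import math

def spin_temperature(T_R, T_m, T_alpha, x_R, x_c, x_alpha):
    """Eq. (45): weighted inverse-temperature mean of radio, kinetic and Ly-a colour temperatures."""
    return (x_R + x_c + x_alpha) / (x_R / T_R + x_c / T_m + x_alpha / T_alpha)

def delta_T_b(z, tau_21, T_s, T_R):
    """Eq. (44): 21-cm differential brightness temperature (same units as T_s and T_R)."""
    return (1.0 - math.exp(-tau_21)) / (1.0 + z) * (T_s - T_R)

z, tau_21 = 18.0, 0.02
T_R = 60.0                           # CMB plus a hypothetical string-induced radio excess [K]
x_R, x_c, x_alpha = 1.0, 0.1, 10.0   # placeholder radiative, collisional, Wouthuysen-Field couplings

for T_m in (10.0, 40.0):             # colder gas vs gas warmed by soft photon heating
    T_s = spin_temperature(T_R, T_m, T_alpha=T_m, x_R=x_R, x_c=x_c, x_alpha=x_alpha)
    print(T_m, T_s, 1e3 * delta_T_b(z, tau_21, T_s, T_R), "mK")
```

In this toy example the warmer gas yields a markedly shallower absorption feature, which is the qualitative behaviour referred to above.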
In addition, the estimates for the enhancement of the 21cm signal did not account for the radiative transfer effects and reduction of photons by free-free absorption. As a result, much of the string parameter space was ruled out by taking the EDGES result as a strict upper bound. Here, we find that by including soft photon heating, no string models can be ruled out by taking the EDGES upper limit alone. Figure 18 shows how the inclusion of SPH prevents a particular cosmic string model from being constrained by the EDGES measurement. We also note that Hernandez (2014, 2021) has studied the effects of cosmic string wakes on the brightness temperature at 21-cm and found that a network can amplify the signal. In the case of superconducting cosmic strings, the amplitude reduction from soft photon heating surpasses the potential amplification from the effect of the wakes, meaning that our results do not change. However, this effect is important for a network of non-superconducting strings. Figure 16: The free electron fraction for two different treatments of ionizing photons. For the green curve, CosmoTherm directly computes the number of ionizing photons produced by the strings, removing them as they liberate electrons (see Bolliet et al., 2021, for details). This is computationally expensive, and can systematically underestimate the number of ionizations that take place. To produce our constraints we follow the approximate treatment described by Chen & Kamionkowski (2004). Figure 15: A comparison of the free electron fraction with and without the direct production of ionizing photons from the string loops. When no ionizing photons are included, all energy is injected in the form of heat. This energy injection reduces the recombination rate and increases the collisional ionization efficiency, leading to the mild deviation from the case with no strings. When the direct production of ionizing photons is included, the \(X_{\rm e}\) history is strongly modified, which leads to much stronger constraints. ### Reionization treatment We give a brief description of our reionization modelling in this section. We refer the readers to Acharya et al. (2022) for a more detailed discussion. The evolution of the ionization fraction due to hydrogen and helium is given by, \[\frac{\mathrm{d}x_{\mathrm{HI}}}{\mathrm{d}t} =\xi_{\mathrm{ion}}(z)\,\frac{\mathrm{d}f_{\mathrm{coll}}}{\mathrm{ d}t}-\alpha_{A}C\,x_{\mathrm{HII}}\,n_{\mathrm{e}} \tag{46a}\] \[\frac{\mathrm{d}x_{\mathrm{HII}}}{\mathrm{d}t} =\xi_{\mathrm{ion}}(z)\,\frac{\mathrm{d}f_{\mathrm{coll}}}{\mathrm{ d}t}-\alpha_{A}C\,x_{\mathrm{HII}}\,n_{\mathrm{e}}, \tag{46b}\] where \(x_{\mathrm{HII}}=\frac{n_{\mathrm{HI}}}{n_{\mathrm{H}}},x_{\mathrm{HII}}=\frac {n_{\mathrm{HII}}}{n_{\mathrm{H}}}\) are the hydrogen and helium ionized fractions respectively, \(\xi_{\mathrm{ion}}\) is the ionizing efficiency parameter, \(f_{\mathrm{coll}}\) is the matter collapse fraction, \(\alpha_{A}\) is the case-A recombination coefficient, \(C\equiv\langle n_{\mathrm{e}}^{2}\rangle/\langle n_{\mathrm{e}}\rangle^{2}\) is the clumping factor which is a function of gas density, and \(n_{\mathrm{e}}\) is the total electron number density. We use the fitting function of Shull et al. (2012) for the clumping factor. The physics of this modelling can be explained as follows. Once we have sufficiently massive dark matter halos which can form galaxies, the photons emitted by these galaxies will ionize their environment. 
Such massive halos are rare at higher redshifts (an effect captured by the collapse fraction), leading to a few isolated ionization bubbles. As more structure forms at lower redshifts, the number of photon sources increases and reionization proceeds rapidly. The ionizing efficiency parameter is given by \[\xi_{\mathrm{ion}}=A_{\mathrm{He}}/\epsilon_{\mathrm{res}}N_{\mathrm{ion}}, \tag{47}\] where \(A_{\mathrm{He}}\) is a correction factor due to the presence of helium, \(N_{\mathrm{ion}}\) is the number of ionizing photons per stellar baryon, \(f_{\mathrm{esc}}\) is the fraction of ionizing photons escaping the host halo and \(f_{\star}\) is the star formation efficiency. Since \(\xi_{\mathrm{ion}}\) is a degenerate combination of parameters, there are multiple ways to get the same reionization history using different parameter choices. For our fiducial model, we use the following combination of parameters (\(N_{\mathrm{ion}}\), \(f_{\star}\), \(f_{\mathrm{esc}}\), \(f_{\alpha}\), \(f_{X}\)) = (4000, 0.1, 0.1, 1.0, 1.0). We include the heating of electrons due to energetic X-ray photons through the expression given by Furlanetto (2006), \[\frac{2}{3}\frac{\epsilon_{X}}{k_{\mathrm{B}}n_{\mathrm{H}}H(z)}=10^{3}\, \mathrm{K}_{X}\left[\frac{f_{\star}}{0.1}\right]\,\left[\frac{f_{X,h}}{0.2} \right]\,\left[\frac{\mathrm{d}f_{\mathrm{coll}}/\mathrm{d}z}{0.01}\right]\, \left[\frac{1+z}{10}\right], \tag{48}\] where \(f_{X,h}\simeq(1+2x_{\mathrm{e}})/3\) is the fraction of X-ray energy contributing to the heating (Chen & Kamionkowski, 2004), and \(f_{X}\) is a scaling factor. ## 7 Constraints To explore the \(G\mu\)-\(I\) parameter space, we run a finely spaced grid of models to generate our numerical data, which directly produces the likelihood values. We then obtain constraints from these CosmoTherm outputs using observational data from _COBE/FIRAS_(Fixsen et al., 1996), CMB anisotropies (Planck Collaboration et al., 2020), the radio synchrotron background (RSB) (Fixsen et al., 2011; Dowell & Taylor, 2018), the EDGES experiment (Bowman et al., 2018), and the optical depth to reionization as measured by the Planck Figure 17: Left: A comparison of the final spectra that we would observe today from our benchmark string parameters. Vertical lines indicate to frequencies bands probed by CMB and radio experiments. Higher currents tend to build up larger low frequency backgrounds, while parameters near the \(I=I_{\star}\) contour deposit more raw energy/entropy into the CMB, and are therefore more easily constrained by _COBE/FIRAS_. Right: Ionization histories for the parameters. Departures from the standard history are typically stronger for larger currents, until for a given set of parameters, \(\omega_{\mathrm{k}}\leq 13.6\) eV. In that case, ionizing photons are produced with a greatly reduced efficiency, as indicated in Eq. (41). Further details on this can be found in Fig. 20. Figure 18: The brightness temperature for a benchmark string model with and without the inclusion of soft photon heating (SPH). As was recently discussed in Acharya et al. (2023), the presence of a steep radio background increases the spin temperature of the gas. This leads to a strongly subdued \(\delta T_{\mathrm{b}}\) relative to the estimation one would obtain by neglecting this heating. The EDGES datapoint is plotted in green. Collaboration et al. (2020). In addition, we also forecast constraints from \(\mu\), as well as non-\(\mu\), non-\(\gamma\) type distortions from a _PIXIE_-type experiment (Kogut et al., 2011). 
In order to make contact with these observations, we analyze the output produced by CosmoTherm using a rudimentary likelihood analysis. This typically involves comparing the output produced with strings, to that without, as a test of the null hypothesis. An exception to this is when comparing against measurements of the radio synchrotron background, in which we compare to a best-fit power law of the datapoints. In this section we describe in detail how we obtain the constraint curves presented in Fig. 21 ### CMB spectral distortions The CMB spectrum can be well approximated by a Planck (black-body) spectrum at a temperature \(T_{0}=2.7255\,\)K, with upper limits on the distortions of the order \(\Delta I/I\lesssim 10^{-5}-10^{-4}\) at frequencies \(\nu\simeq 60-600\,\)GHz from _COBE/FIRAS_(Fixsen et al., 1996). As we have seen above, electromagnetic energy injection into the baryon-photon plasma heats the electrons which in turn boosts the CMB photons creating distortions to the CMB spectrum. Additionally, direct photon injection (entropy) can create unique spectral patterns which strengthen constraints in some regions of parameter space. The current \(2\sigma\) upper limit for the amplitude of the \(\mu\) and \(\gamma\) parameters is \(|\mu|\lesssim 9\times 10^{-5}\) and \(|y|\lesssim 1.5\times 10^{-5}\)(Fixsen et al., 1996), which translates into a constraint on the energy release as \(\Delta\rho_{\gamma}/\rho_{\gamma}\lesssim 6\times 10^{-5}\), where \(\rho_{\gamma}\) is the CMB energy density today. Using CosmoTherm, we can go beyond these simple \(\mu\) and \(\gamma\) parameters by comparing the full shape of the string-induced spectra to the residuals of the _COBE/FIRAS_ measurement. This allows a more precise determination of the validity of any particular model by utilising all of the available data, therefore provides stronger and more robust constraints when compared with previous analysis such as the work by Tashiro et al. (2012), or Miyamoto & Nakayama (2013). However, it assumes that the marginalization over galactic foregrounds does not further alter the constraints. In addition, we automatically deproject any contributions that are degenerate with a simple shift in the CMB temperature. This is achieved using a simple scalar product of the signal vector on the _COBE/FIRAS_ bands weighted by the inverse covariance matrix. The _COBE/FIRAS_ experiment measured the monopole of the background temperature to exquisite precision. After subtracting the best fit blackbody from this measurement, one is left with a series of residuals that are mostly consistent with 0 at the \(1\sigma\) level. The lowest order monopole distortion expected in standard cosmology is a \(y\)-type that comes from reionization at the level of \(\Delta I/I\simeq 10^{-7}-10^{-6}\)(Hill et al., 2015). This is far below the sensitivity of _COBE/FIRAS_, and so we can be confident that any distortion which exceeds the residuals comes from the cosmic strings and would therefore be constrained. To compute the likelihood, we assume that the _COBE/FIRAS_ datapoints are uncorrelated, and perform two \(\chi^{2}\) evaluations using the residuals. The first using the output spectrum from CosmoTherm with a cosmic string source term (\(\chi^{2}_{\rm{CS}}\)), and the second without the source (\(\chi^{2}_{0}\)). Finally, we compute \(\Delta\chi^{2}=\chi^{2}_{\rm{CS}}-\chi^{2}_{0}\) and require \(\Delta\chi^{2}\leq 2\) for our \(2\sigma\) constraint curves. 
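A minimal version of this \(\Delta\chi^{2}\) test, with hypothetical residual and model arrays standing in for the actual _COBE/FIRAS_ data vectors and CosmoTherm outputs, is sketched below.

```python
# Sketch of the Delta chi^2 test described above; the arrays are placeholders for the
# COBE/FIRAS residuals/errors and the CosmoTherm output spectra evaluated on the FIRAS bands.
import numpy as np

def chi2(residuals, model, sigma):
    """Uncorrelated chi^2 of (residuals - model) given the 1-sigma error per band."""
    return np.sum(((residuals - model) / sigma) ** 2)

firas_resid = np.zeros(43)           # hypothetical residuals after best-fit blackbody removal
firas_sigma = np.full(43, 30.0)      # hypothetical 1-sigma errors per band
model_no_strings = np.zeros(43)      # placeholder CosmoTherm output without a string source
model_strings = 10.0 * np.ones(43)   # placeholder CosmoTherm output with a string source

delta_chi2 = (chi2(firas_resid, model_strings, firas_sigma)
              - chi2(firas_resid, model_no_strings, firas_sigma))
allowed = delta_chi2 <= 2.0          # 2-sigma acceptance criterion used for the constraint curves
print(delta_chi2, allowed)
```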
The sensitivity of a _PIXIE_-type instrument is high enough that it would see the reionization \(y\)-distortion at more than \(100\sigma\)(Abitbol et al., 2017). In principle, this means that in order to claim a \(y\)-distortion signature of cosmic strings from _PIXIE_, we would need to subtract off the reionization signal at very high precision. The uncertainties in the model of reionization implemented in CosmoTherm make this a tricky procedure to pull off successfully. Therefore, we also choose to deproject the \(y\)-distortion from the forecasted _PIXIE_ residuals, and search for distortions of the \(\mu\) type, and the non-\(\mu\), non-\(y\) type that are much cleaner. We perform our likelihood analysis in the same way as for _COBE/FIRAS_, but with the deprojected data. For the forecasting, we assume _PIXIE_ has null residuals with an effective (foreground-marginalized) sensitivity of \(\Delta I=5\,\)Jy/Sr, which roughly reproduces a \(1\sigma\) sensitivity to \(\mu\simeq 1.4\times 10^{-8}\). We show the results of this analysis in Fig. 19, and compare with the analytic approximations computed in Section 4. It is important to note that the analytic approximations consider only an integration from \(3\times 10^{5}\leq z\leq z_{\rm{th}}\), as it is impossible to analytically follow the entropy injection constraints after this point. In contrast, the numerical result considers the full evolution in the CMB distortion band down to redshift \(z=0\). We therefore expect some natural increase in the contours between the analytic and numerical results. In general, the approximate constraints computed from the negative \(\mu\)-distortion due to entropy release are rather consistent with the numerical result. It is clear that the analysis of the _COBE/FIRAS_ residuals provides us with significantly increased constraining power. ### CMB anisotropies The addition of energetic electromagnetic particles will ionize and heat the background electrons during and after recombination, modifying the standard recombination history. As a result, temperature anisotropies are damped while polarization anisotropies are boosted (Adams et al., 1998; Chen & Kamionkowski, 2004; Padmanabhan & Finkbeiner, 2005; Galli et al., 2009), and these effects have been measured to great precision by CMB anisotropy experiments (Komatsu et al., 2011; Planck Collaboration et al., 2020). There have been several studies in past years which have improved our knowledge of recombination physics (Zeldovich et al., 1968; Peebles, 1968; Seager et al., 2000; Sunyaev & Chluba, 2009; Chluba & Thomas, 2011; Ali-Haimoud & Hirata, 2011) and one can now compute the recombination history accurately using publicly available codes such as CosmoRec (Chluba & Thomas, 2011). To avoid the time-consuming computations that individual samples of the CMB likelihood would entail, in this work, we use a direct projection method developed in Hart & Chluba (2020). This is a principal component analysis (PCA) method following the works developed in Farhang et al. (2012, 2013). For our case, with energy injections from string decay, we compute the changes to the standard recombination history of the universe, \(\xi(z)=\Delta x_{\rm{e}}/x_{\rm{e}}\), using the recombination module in CosmoTherm. We then compute the first three principal component coefficients by projecting \(\xi(z)\) onto the eigenmodes, \(E_{I}(z)\), with the integral \[\mu_{I}=\int\xi(x)E_{I}(z)\,{\rm d}z. 
\tag{49}\] We use the covariance matrix of the \(\mu_{I}\) obtained in Hart & Chluba (2020) and compute the likelihood of the model assuming Gaussian statistics. We do not include modifications to reionization in this setup, therefore, the eigenmodes \(E_{I}(z)\) are sensitive to changes in the ionization history only at high redshifts \(100\lesssim z\lesssim 4\times 10^{3}\)(Hart & Chluba, 2020). We treat the modification to reionization history separately which is described below. We show the \(2\sigma\) constraints from CMB anisotropies in Fig. 20. Of the datasets we consider, variations in the electron recombination history are the most wide-sweeping and stringent. As mentioned above, the photons produced from the strings heat the electrons (increasing collisional ionization rates and quenching recombinations), as well as directly ionize hydrogen and helium. Of these two effects, the production of photons with \(\omega\geq 13.6\) eV are the most potent (see Fig. 15). The spectrum of injected photons follows a broken power law given by Eq. (41), which falls off very rapidly for \(\omega\geq\omega_{\rm k}\). Therefore, a given parameter set with \(\omega_{\rm k}\leq 13.6\) eV will be much less efficient at ionizing the background. However, as discussed in Sec. 6.4, soft photons can be efficiently absorbed for \(x\leq 10^{-5}\) (Fig. 6), a process that heats the electrons and in turn ionize the neutral hydrogen. Since the total energy of photons within \(x\leq 10^{-5}\) is similar for certain choices of parameters, we see the almost horizontal feature in \(x_{\rm e}\) constraint at high currents. We show contours of constant \(\omega_{\rm k}\) in the figure, noting that constraints are greatly reduced for sufficiently low values of the knee frequency. ### Radio synchrotron background (RSB) data We use the RSB data from ARCADE experiment (Fixsen et al., 2011) and Dowell & Taylor (2018). The ARCADE-2 experiment measured a RSB between 3-90 GHz. They reanalyzed earlier data at lower frequencies and compiled it in their Table 4. With this, they found a best fit power law with spectral index 2.6 and temperature \(T\simeq 24\) K at 310 MHz. In Dowell & Taylor (2018), the authors redid this analysis using independent data points around \(\simeq 40-80\) MHz and found the best fit slope to be consistent with ARCADE but with slightly higher normalization with \(\simeq 30\)K at 310 MHz. In this paper, we use the ARCADE data points within 3-90 GHz and the independent data points of Dowell & Taylor (2018) within the 40-80 MHz band to compute the likelihood. In the analysis of these datasets, the contribution from resolved extra-galactic sources were not taken into account. To isolate the contribution to the radio background from string decay, we add an irreducible extra-galactic background to our solution obtained from CosmoTherm. The fitting function to the minimal extra-galactic background (MEG) is given by (Gervasi et al., 2008a), \[T_{\rm bg}(\nu)\simeq 0.23\,{\rm K}\left(\frac{\nu}{\rm GHz}\right)^{-2.7}. \tag{50}\] In contrast to the spectral distortion constraints, we compare the radio background from a network of strings not to the CosmoTherm output with no source term, but to a best-fit power law to the RSB data. With the inclusion of the MEG, this power law is given by \[T_{\rm RSB}(\nu)\simeq 1.230\,{\rm K}\left(\frac{\nu}{\rm GHz}\right)^{-2.555}. \tag{51}\] With this as our null hypothesis, we once again perform a \(\chi^{2}\) analysis against the string induced RSB as described in Sect. 
7.1 to obtain our constraints. This allows us to treat the RSB as a strict upper bound on the amount of radio emission which can be produced by the string network. We note, however, that for the RSB limits presented below, we do not add a penalty if the total background is not reproduced by the sum of the MEG and our distortion outputs. The excluded region is illustrated in Fig. 21.

Figure 19: Comparison of the \(2\sigma\) constraints found from _COBE/FIRAS_, as well as the forecast for a _PIXIE_-type instrument, to the analytic predictions seen in Fig. 11. A numerical treatment shows that _COBE/FIRAS_ is capable of constraining much more than simple analytic treatments would suggest. With current specifications, _PIXIE_ would observe a strong \(y\)-type distortion from reionization. To obtain a conservative estimate, we do not consider the sensitivity of _PIXIE_ to a primordial \(y\)-distortion, and instead focus on limits from \(\mu\) and non-\(\mu/y\) type signatures.

Figure 20: \(2\sigma\) limits on cosmic string parameters from the induced changes to the free electron fraction \(x_{\rm e}\). The coloured lines indicate when \(\omega_{\rm k}=13.6\) eV. To the left of the lines, the spectrum of ionizing photons produced by the loop network decays as \(\omega^{-5/3}\), while to the right it falls off much faster as \(\omega^{-17/6}\). Consequently, the constraints are greatly relaxed in the latter region.

### Global \(21\)-cm measurements

We use the claimed detection by the EDGES collaboration (Bowman et al., 2018) as a figure of merit to constrain the energy injection process from cosmic strings. EDGES has claimed a detection of a 21-cm absorption feature with \(\delta T_{\rm b}\simeq-500\) mK originating from \(z\approx 18\) and a \(1\sigma\) error of 200 mK. Recently, SARAS, an independent experiment, could not reproduce this result (Singh et al., 2022). Therefore, our discussion of constraints from global 21-cm measurements is broadly qualitative. To constrain our energy injection cases, we demand that \(-500\) mK \(\lesssim\delta T_{\rm b}\lesssim 0\) at \(z=18\). For \(\delta T_{\rm b}\leq-500\) mK, we use a Gaussian likelihood with an error of 200 mK to quantify the tension with this data. We also penalize cases with \(\delta T_{\rm b}>0\) at \(z=18\) by using a Gaussian likelihood with an error of 84 mK, which is the value of \(\delta T_{\rm b}\) at \(z=18\) for our fiducial 21-cm model without any energy injection. We find that from the EDGES measurement alone, none of the models exhibit a \(2\sigma\) tension with the data. This is why the left panel of Fig. 21 does not show a constraint curve. We comment that the soft photon heating effect described in Acharya et al. (2023) indeed eliminates any regions in tension with the EDGES measurement. Interestingly, we do find regions of parameter space where \(\delta T_{\rm b}\geq 0\), implying that the signal may be in emission in the presence of a string network. This is also a direct consequence of the soft photon heating effect. While we do make the choice to penalize these models, the affected regions of parameter space have already been ruled out at more than \(2\sigma\) by other datasets.

### Optical depth constraints

We use the optical depth measurement of Planck Collaboration et al. (2020) with \(\tau=0.0544\pm 0.0073\) to constrain our energy injection cases.
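In practice this enters our likelihood as a simple Gaussian penalty on the string-induced shift of \(\tau\) relative to the fiducial reionization model, as described in the next paragraph. The following is only a minimal sketch of that penalty: the function name is a placeholder, the fiducial value \(\tau=0.078\) is the one quoted below, and the actual CosmoTherm implementation may differ in detail.

```python
SIGMA_TAU = 0.0073      # Planck measurement uncertainty on the optical depth
TAU_FIDUCIAL = 0.078    # optical depth of the fiducial reionization model

def chi2_tau(tau_with_strings):
    """Gaussian penalty on the shift of the optical depth caused by string energy injection.

    Sketch only: the difference with respect to the fiducial model is compared
    against the Planck error; details of the real implementation may differ.
    """
    return ((tau_with_strings - TAU_FIDUCIAL) / SIGMA_TAU) ** 2

# Example: a model that raises tau from 0.078 to 0.085 incurs chi^2 ~ 0.9,
# i.e. it is not excluded by the optical depth measurement on its own.
print(chi2_tau(0.085))
```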
The significantly lower value of measured \(\tau\) has sparked great interest and has resulted in a shift of our understanding of the reionization epoch (Kulkarni et al., 2019). However, these works use detailed hydrodynamical simulations and include several physical effects such as a non-homogeneous ionizing photon background which are difficult to capture in a simple analytic setup. In this work, we assume that energy injections modify the reionization history, or \(\tau\), only perturbatively. Since our fiducial reionization model gives a \(\tau=0.078\),7 we compute the difference of the optical depth obtained with cosmic strings included, and then add a simple Gaussian penalty to constrain the model. We checked that tuning the reionization model parameters to more closely reproduce the measured \(\tau\) value, does not alter the constraints much. In Fig. 21, we show the optical depth constraints which becomes dominant at high currents, i.e., \(I\gtrsim 10^{7}\) GeV. As we discussed above, at these high currents, the heating due to soft photons is an important process that changes the ionization history of the universe appreciably. Footnote 7: For this we assume the starting redshift of reionization be \(z=30\), though it does not change significantly with small change to this starting redshift. ### Summary of constraints In the left panel of Fig. 21 we show the \(2\sigma\) constraints obtained through our likelihood analysis of the _COBE/FIRAS_, CMB anisotropy, radio synchrotron background, EDGES, and optical depth measurements. It is clear that of the datasets we analyzed, limits coming from CMB anisotropies are by far the most stringent. Our updated limits using a full spectral analysis of the _COBE/FIRAS_ data are superseded by more recent observational results. It is important to realize, however, that the _COBE/FIRAS_ data has been available since the mid 90s. If it had been used to its full extent at the time, these constraints would have been relevant to the parameter space of superconducting cosmic string models for many years. Reionization and RSB constraints cover a region of high currents, which is perhaps unsurprising as strong photon emitters in the late-time universe are easier to detect. Interestingly, regions on the boundary of the RSB data may offer a viable solution to this observed radio excess, as we discuss in a forthcoming publication. The right panel of Fig. 21 presents a joint likelihood analysis of the datasets analyzed by CosmoTherm, alongside our simple forecast for a _PIXIE_-type instrument. As described above, we make a conservative choice when forecasting by considering the _PIXIE_ sensitivity only to \(\mu\), and non-\(\mu\), non-\(y\) type distortions to avoid having to perform a careful subtraction of the \(y\)-distortion induced by reionization. With proper foreground subtraction and removal of this \(y\)-distortion, we expect a marginal improvement in the _PIXIE_ sensitivity to this cosmic string scenario. Importantly, _PIXIE_ would be capable of constraining an important region of parameter space that is currently not covered. Finally, in Fig. 22 we present a more complete illustration of the open regions of parameter space for superconducting cosmic string models. In addition to our work, we add constraint curves from Miyamoto & Nakayama (2013) from pulsar timing array data, big-bang nucleosynthesis, and radio transients that could be observed by the Parkes array. 
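For orientation, the joint exclusion shown in the right panel of Fig. 21 amounts to summing the independent \(\chi^{2}\) contributions of the individual datasets. The sketch below only illustrates this bookkeeping: the dataset names and numbers are illustrative, and the threshold \(\Delta\chi^{2}=6.18\) (a \(2\sigma\) region for two jointly varied parameters, e.g. \(G\mu\) and the current \(I\)) is our assumption about the contouring convention rather than a statement from the analysis itself.

```python
def excluded_at_2sigma(chi2_per_dataset, delta_chi2=6.18):
    """Combine independent datasets by summing their chi^2 contributions.

    Sketch only: delta_chi2 = 6.18 corresponds to a 2-sigma region for two
    jointly estimated parameters; the exact convention used to draw the
    published contours may differ.
    """
    return sum(chi2_per_dataset.values()) > delta_chi2

# Illustrative example: a point in parameter space mildly disfavoured by the
# CMB anisotropies and the optical depth, but allowed by the other probes.
chi2 = {"FIRAS": 1.2, "CMB_anisotropies": 4.5, "RSB": 0.3, "EDGES": 0.0, "tau": 0.9}
print(excluded_at_2sigma(chi2))  # True: the combination crosses the threshold
```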
As a word of caution we mention that they utilize a slightly different set of input parameters (\(\alpha=0.1\) and \(\Gamma_{\rm g}=50\)) when computing their results, and so their curves here should be taken as a rough estimate rather than a robust boundary. They also consider a third form of energy release for the string network, through so-called plasma dissipation. The analysis of the plasma dissipation efficiency depends on many uncertain parameters, such as the velocity of any given loop, the local plasma viscosity and more. We choose not to model these effects in our work, but note that in Miyamoto & Nakayama (2013), that deviations from our results seem to appear at very low values of the string tension (\(G\mu\simeq 10^{-18}\)). ## 8 Discussion and conclusions In this work, we have investigated the spectral signatures produced by a network of superconducting strings in place at early times (\(z_{\rm form}\gg 10^{7}\)). We have made improvements to the approximate analytic understanding of the spectral distortions produced by this network, and strengthened these results by performing full numerical solutions to the thermalization problem using CosmoTherm. Analytically, we have refined the previous estimates on primordial distortion signatures made by Tashiro et al. (2012a) and Miyamoto & Nakayama (2013) by including a non-trivial contribution to the \(\mu\)-distortion coming from strong entropy injections. This negative \(\mu\) contribution can be seen by comparing Fig. 5, which neglects the entropy injection, to Fig. 11. Additionally, in Sect. 5 we also develop a more sophisticated analytic formalism for describing the instantaneous spectrum of photons produced by such a string network, given in Eq. (37) and Eq. (38). These can readily be applied to other physical scenarios to produce more robust analytic estimates. We then developed a simple module in CosmoTherm to handle the injection and processing of the cosmic string source function. The analytic approximations were compared against the full numerical solution in Fig. 19, where we find the analytics to be a conservative underestimate of the full constraining power of spectral distortions. The extra constraining power achieved by CosmoTherm is the result of being able to perform a full spectral analysis of the _COBE/FIRAS_ data (and the _PIXIE_ forecast). Previous estimates relied on constraining the specific distortion parameters, \(\mu\) and \(y\). With the full spectral data from CosmoTherm, we are able to go beyond this by comparing the string induced spectrum to the residuals of _COBE/FIRAS_, yielding constraints on non-\(\mu\) and non-\(y\) type distortions. This full shape spectral analysis will generically increase the constraining power of spectral distortions to models of exotic energy/entropy injection, when compared against the simple analytic estimates. With the numerical implementation, we also gain access to precise spectral information at virtually any redshift (\(z\lesssim 10^{7}\)), which allows us to easily and efficiently derive constraints from other datasets. In the left panel of Fig. 21, we show \(2\sigma\) constraints from _COBE/FIRAS_(Fixsen et al., 1996), CMB anisotropies (Planck Collaboration et al., 2020), the radio synchrotron background (Fixsen et al., 2011; Dowell and Taylor, 2018), and the optical depth to reionization as measured by the Planck Collaboration et al. (2020). 
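The extra constraining power of the full shape analysis described above can be illustrated schematically: instead of compressing the data into the two amplitudes \(\mu\) and \(y\), the string-induced spectrum is compared bin by bin with the measured (or forecasted) residuals. In the sketch below all array contents are placeholders, the function name is ours, and a realistic analysis would additionally marginalize over a temperature shift and foreground nuisance parameters, which we omit.

```python
import numpy as np

def chi2_spectrum(residual_jy_sr, sigma_jy_sr, model_jy_sr):
    """Full-shape chi^2 of a model distortion against measured spectral residuals.

    residual_jy_sr: COBE/FIRAS (or forecasted PIXIE) residuals after removing
    the best-fit blackbody; sigma_jy_sr: their uncertainties; model_jy_sr: the
    string-induced distortion evaluated at the same frequencies. Sketch only:
    nuisance parameters are not marginalized here.
    """
    r = (np.asarray(residual_jy_sr) - np.asarray(model_jy_sr)) / np.asarray(sigma_jy_sr)
    return float(np.sum(r ** 2))

# Illustrative numbers (not actual data): a PIXIE-like forecast with null
# residuals and a 5 Jy/sr effective sensitivity per channel, tested against a
# stand-in distortion template of a few Jy/sr amplitude.
nu = np.linspace(30.0, 600.0, 40)                   # GHz
data = np.zeros_like(nu)                            # forecast: null residuals
sigma = np.full_like(nu, 5.0)                       # Jy/sr
model = 3.0 * np.sin(np.pi * np.log10(nu / 30.0))   # hypothetical template
print(chi2_spectrum(data, sigma, model))
```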
Additionally, we utilize the EDGES (Bowman et al., 2018) datapoint as a strict lower limit on \(\delta T_{\rm b}\) at \(z\simeq 18\) when combining all of our constraints. This analysis also presents an update and a more robust treatment compared to other analytic estimates presented by Miyamoto & Nakayama (2013) and Tashiro et al. (2012b).

Figure 21: Left: A breakdown of the individual (\(2\sigma\)) constraints analyzed by CosmoTherm. While changes to the ionization history are dominant for the majority of the parameter space, late time effects such as an excess radio background and changes to the optical depth to reionization can become important for high currents. We also utilize the observation from the EDGES (Bowman et al., 2018) experiment as a strict upper bound on the brightness temperature at cosmic dawn, though we find no constraints on strings at the \(2\sigma\) level from that. Right: A combination of the constraints, as well as a conservative forecast for a _PIXIE_-like instrument. Such an instrument would probe a new wedge of parameter space by searching for a negative \(\mu\)-distortion sourced by a strong entropy injection from the strings at early times.

Figure 22: A summary plot of the constraints analyzed in this work, as well as limits obtained by Miyamoto and Nakayama (2013) from pulsar timing array measurements, big-bang nucleosynthesis, and transient radio bursts, which we did not directly consider in our work. The work of Miyamoto and Nakayama (2013) uses a slightly different string loop model, so their constraints should be viewed as approximate boundaries when compared with ours.

In Acharya et al. (2023) it was recently discovered that the presence of a sufficiently steep soft photon background (spectral index \(\gtrsim 2.5\) at \(\nu\lesssim 1\) GHz) can cause significant heating of gas in the late universe, leading to an increase in the spin temperature \(T_{\rm s}\). In turn, this dampens the amplitude of the brightness temperature at cosmic dawn (\(\delta T_{\rm b}\)), relaxing the constraints derived from EDGES. Brandenberger et al. (2019b) derived strong limits on the high current region of the string parameter space using an analytic approximation which neglected this new effect and overestimated the photon flux due to the omission of radiative transfer effects. We find that by including this effect, constraints driven by EDGES disappear. A summary of our constraints, as well as other datasets that we did not include, can be seen in Fig. 22. As with most models of BSM physics, there exists some degree of theoretical uncertainty that can be difficult to quantify. The form of the string loop density distribution is well established from Nambu-Goto simulations for non-superconducting strings (Blanco-Pillado et al., 2014), but to our knowledge, large-scale simulations with superconducting strings have not been performed (though recent progress has been made in Rybak et al. 2023). Another simplifying assumption is that all loops carry the same, time-independent current from their formation until their eventual decay. Current generation and dissipation on these loops will ultimately depend on the local environment in which they propagate, and modelling of this is beyond the scope of our work. We also note that Miyamoto & Nakayama (2013) include an additional channel for the string decay through plasma dissipation. This is a frictional effect which again depends on the dynamics of the local environment, which we do not model.
Based upon the work of these authors, it appears that this effect may become important for very low string tensions (\(G\mu\lesssim 10^{-18}\)), and so we advise caution in the interpretation of our constraints at that level. In addition to these BSM-related uncertainties, we have treated the injection of energy and photons in an approximate manner (see Sect. 6.2). Similarly, significant uncertainties exist in the modeling of reionization and the 21-cm signal, although the latter does not drive any constraint here. Finally, our treatment of the CMB anisotropy likelihood (see Sect. 7.2) had the goal of quickly exploring the range of models without a significant computational burden. Similarly, marginalization over spectral distortion foregrounds will have to be more carefully considered, in particular when assessing the constraints from future CMB spectrometers. We leave these improvements to future work, anticipating that some of the details may change while the broad conclusions remain unaltered. To conclude, superconducting cosmic strings offer an interesting and well-motivated model that can probe particle physics from the top down. By hunting for their signatures in different cosmological and astrophysical datasets, we learn more about the phase transitions that may (or may not) have taken place in the very early universe. CosmoTherm is a powerful and flexible tool capable of uncovering the many spectral nuances of not just cosmic strings, but virtually any scenario that injects energy or entropy into the background. Here, we have focused on the derivation of constraints for cosmic strings, but we plan to apply this toolbox to a wider array of physics beyond the standard model in the future.

## Acknowledgements

We would like to acknowledge the initial work of Jien Dhandha in setting up the 21 cm and reionization modules in CosmoTherm. We would also like to thank Richard Battye and Wenzer Qin for helpful comments and discussions on the draft. This work was supported by the ERC Consolidator Grant _CMBSPEC_ (No. 725456). JC was furthermore supported by the Royal Society as a Royal Society University Research Fellow at the University of Manchester, UK (No. URF/R/191023). BC would also like to acknowledge support from an NSERC-PDF.

## Data availability

The data underlying this article are available within the article and can further be made available on request.
2303.04064
Tangent, cotangent, normal and conormal bundles are almost never instanton bundles
In this very short note we give an elementary characteristic free proof of the result claimed in the title (see Theorem 1.2 for a more precise formulation), generalizing a recent result proved for Ulrich bundles over the complex field by V. Benedetti, P. Montero, Y. Prieto Montañez, S. Troncoso. Moreover, we also give a similar result about the twists of the cotangent bundle and make some comments about the possibility to obtain an analogous result for twists of the tangent bundle.
Gianfranco Casnati
2023-03-07T17:21:58Z
http://arxiv.org/abs/2303.04064v2
# Tangent, cotangent, normal and conormal bundles are almost never instanton bundles ###### Abstract. In this very short note we give an elementary characteristic free proof of the result claimed in the title (see Theorem 1.2 for a more precise formulation), generalizing a recent result proved in [6] for Ulrich bundles over the complex field. Moreover, we also give a similar result about the twists of the cotangent bundle and make some comments about the possibility to obtain an analogous result for twists of the tangent bundle. Key words and phrases:Ulrich bundle, Instanton bundle 2020 Mathematics Subject Classification: Primary: 14J60. Secondary: 14D21, 14F06 The author is a member of GNSAGA group of INdAM ## 1. Introduction and Notation In this paper a projective variety \(X\) is a closed, integral subscheme of some projective space over an algebraically closed field \(\mathbf{k}\) of characteristic \(p\). In [1] the following definition has been introduced. **Definition 1.1**.: Let \(X\) be a projective variety of dimension \(n\geq 1\) endowed with an ample and globally generated line bundle \(\mathcal{O}_{X}(h)\). A non-zero coherent sheaf \(\mathcal{E}\) on \(X\) is called (ordinary) \(h\)-instanton sheaf with quantum number \(k\in\mathbb{Z}\) if the following properties hold: * \(h^{0}\big{(}\mathcal{E}(-h)\big{)}=h^{n}\big{(}\mathcal{E}(-nh)\big{)}=0\); * \(h^{i}\big{(}\mathcal{E}(-(i+1)h)\big{)}=h^{n-i}\big{(}\mathcal{E}(-(n-i)h) \big{)}=0\) if \(1\leq i\leq n-2\); * \(h^{1}\big{(}\mathcal{E}(-h)\big{)}=h^{n-1}\big{(}\mathcal{E}(-nh)\big{)}=k\). If \(k=0\), then \(\mathcal{E}\) is called \(h\)-Ulrich sheaf. The existence of an instanton sheaf with fixed quantum number \(k\) on \(X\) is not obvious. E.g. the case \(k=0\), i.e. the case of Ulrich sheaves, has been object of deep study in the last two decades and the problem of their existence is still wide open: see [5] and the references therein for more details about this case. The interest in dealing with instanton and Ulrich sheaves is also motivated by the fact that their existence on a fixed variety \(X\) is often related to interesting geometric properties. E.g. in [2] it is shown that when \(X\subseteq\mathbb{P}^{n+1}\) is a hypersurface and \(\mathcal{O}_{X}(h)\cong\mathcal{O}_{X}\otimes\mathcal{O}_{\mathbb{P}^{n+1}}(1)\), the existence of a locally Cohen-Macaulay \(h\)-instanton sheaf is equivalent to the existence of a representation of a power of the form defining \(X\) as the determinant of a suitable morphism of vector bundles of the same rank on \(\mathbb{P}^{n+1}\) with a prescribed cohomology table, called Steiner bundles. Thus, it is perhaps reasonable to ask whether one of the bundles which are naturally associated to a smooth variety \(X\) are instanton bundles or not and, in the affirmative case, which one is also Ulrich. E.g. we can deal with the cotangent bundle, i.e. the sheaf of differentials \(\Omega_{X}\), and the tangent bundle, i.e. its dual \(\mathcal{T}_{X}\): \(\Omega_{X}\) and \(\mathcal{T}_{X}\) have rank \(n:=\dim(X)\). Moreover, if \(X\subseteq\mathbb{P}^{N}\) and \(\mathcal{I}_{X}\subseteq\mathcal{O}_{\mathbb{P}^{N}}\) is its sheaf of ideals, we can also consider two further sheaves, namely the conormal bundle, i.e. \(\mathcal{C}_{X}:=\mathcal{I}_{X}/\mathcal{I}_{X}^{2}\), and the normal bundle, i.e. its dual \(\mathcal{N}_{X}\): \(\mathcal{C}_{X}\) and \(\mathcal{N}_{X}\) have rank \(N-n\). As a preliminary example we consider the bundle \(\mathcal{N}_{X}\). 
There are exact sequences \[0\longrightarrow\mathcal{T}_{X}\longrightarrow\mathcal{O}_{X}\otimes\mathcal{T}_{ \mathbb{P}^{N}}\longrightarrow\mathcal{N}_{X}\longrightarrow 0, \tag{1.1}\] \[0\longrightarrow\mathcal{O}_{\mathbb{P}^{N}}\longrightarrow\mathcal{O}_{ \mathbb{P}^{N}}(1)^{\oplus N+1}\longrightarrow\mathcal{T}_{\mathbb{P}^{N}} \longrightarrow 0. \tag{1.2}\] The restriction to \(X\) of (1.2) combined with (1.1) yields that \(\mathcal{N}_{X}(-h)\) is certainly globally generated. In particular \(h^{0}(\mathcal{N}_{X}(-h))\neq 0\): we deduce that \(\mathcal{N}_{X}\) is never an \(h\)-instanton bundle, hence it is never \(h\)-Ulrich as well. In this short note we prove the following result with a very easy and direct characteristic free proof. **Theorem 1.2**.: _Let \(X\subseteq\mathbb{P}^{N}\) be a smooth projective variety of dimension \(n\geq 1\) and \(\mathcal{O}_{X}(h):=\mathcal{O}_{X}\otimes\mathcal{O}_{\mathbb{P}^{N}}(1)\)._ _Then the following assertions hold._ 1. \(\mathcal{T}_{X}\) _is an_ \(h\)_-instanton bundle if and only if either_ \(X\cong\mathbb{P}^{1}\) _and_ \(\mathcal{O}_{X}(h)\cong\mathcal{O}_{\mathbb{P}^{1}}(3)\) _or_ \(X\cong\mathbb{P}^{2}\) _and_ \(\mathcal{O}_{X}(h)\cong\mathcal{O}_{\mathbb{P}^{2}}(2)\)_: in these cases_ \(\mathcal{T}_{X}\) _is_ \(h\)_-Ulrich._ 2. \(\Omega_{X}\)_,_ \(\mathcal{N}_{X}\) _and_ \(\mathcal{C}_{X}\) _are never_ \(h\)_-instanton bundles._ As an immediate by-product we obtain the characterization of smooth projective varieties \(X\subseteq\mathbb{P}^{N}\) such that \(\mathcal{T}_{X}\) is Ulrich. Such a characterization has been proved for the first time with a deep, interesting and long proof when \(\mathbf{k}=\mathbb{C}\) in [6, Main Theorem]. **Corollary 1.3**.: _Let \(X\subseteq\mathbb{P}^{N}\) be a smooth projective variety of dimension \(n\geq 1\) and \(\mathcal{O}_{X}(h):=\mathcal{O}_{X}\otimes\mathcal{O}_{\mathbb{P}^{N}}(1)\)._ _Then \(\mathcal{T}_{X}\) is an \(h\)-Ulrich bundle if and only if either \(X\cong\mathbb{P}^{1}\) and \(\mathcal{O}_{X}(h)\cong\mathcal{O}_{\mathbb{P}^{1}}(3)\) or \(X\cong\mathbb{P}^{2}\) and \(\mathcal{O}_{X}(h)\cong\mathcal{O}_{\mathbb{P}^{2}}(2)\)._ The next step is to ask whether a twist of the aforementioned bundles is an instanton. E.g. if \(\mathbf{k}=\mathbb{C}\), then \(\mathcal{N}_{X}(-h)\) is an \(h\)-Ulrich bundle, i.e. an \(h\)-instanton with \(k=0\), when \(X\) is a standard linear determinantal variety (see [16, Theorem 3.6]). The complete classification of varieties such that \(\mathcal{N}_{X}(ah)\) is \(h\)-Ulrich for some \(a\in\mathbb{Z}\) can be found in [17, Theorem 1]). Recently some very partial results in this direction have been proved for \(\mathcal{N}_{X}(ah)\) without restrictions on \(p\) and \(k\): see [3]. Similarly, when \(\mathbf{k}=\mathbb{C}\) and \(k=0\), the behaviour of the twists of the cotangent sheaf has been described in [17, Section 4]. We deal with the case \(p\geq 0\) and \(k\geq 0\) in Section 4, proving the following result. **Theorem 1.4**.: _Let \(X\subseteq\mathbb{P}^{N}\) be a smooth projective variety of dimension \(n\geq 1\) and \(\mathcal{O}_{X}(h):=\mathcal{O}_{X}\otimes\mathcal{O}_{\mathbb{P}^{N}}(1)\)._ _Then \(\Omega_{X}(ah)\) is an \(h\)-instanton bundle if and only if \(a=2\), \(X\cong\mathbb{P}^{1}\) and \(\mathcal{O}_{X}(h)\cong\mathcal{O}_{\mathbb{P}^{1}}(1)\)._ The following corollary is an easy consequence of the above theorem, because \(h^{1}(\Omega_{\mathbb{P}^{1}}(1))=h^{1}(\mathcal{O}_{\mathbb{P}^{1}}(-1))=0\). 
When \(\mathbf{k}=\mathbb{C}\) it is [17, Proposition 4.1 (i)]. **Corollary 1.5**.: _Let \(X\subseteq\mathbb{P}^{N}\) be a smooth projective variety of dimension \(n\geq 1\) and \(\mathcal{O}_{X}(h):=\mathcal{O}_{X}\otimes\mathcal{O}_{\mathbb{P}^{N}}(1)\)._ _Then \(\Omega_{X}(ah)\) is an \(h\)-Ulrich bundle if and only if \(a=2\), \(X\cong\mathbb{P}^{1}\) and \(\mathcal{O}_{X}(h)\cong\mathcal{O}_{\mathbb{P}^{1}}(1)\)._ In Section 5 we list results and examples showing that the problem of determining whether \(\mathcal{T}_{X}(ah)\) is an \(h\)-instanton might be more difficult, even when \(p=0\): see the recent paper [18]. In particular, we are not able to prove a general result analogous to Theorem 1.4 above. ## 2. Notation and some helpful results Throughout we work over an algebraically closed field \(\mathbf{k}\) of arbitrary characteristic \(p\geq 0\): restrictions on the base field are explicitly indicated when they are assumed. The projective space of dimension \(N\) over \(\mathbf{k}\) is denoted by \(\mathbb{P}^{N}\): \(\mathcal{O}_{\mathbb{P}^{N}}(1)\) is the hyperplane line bundle. The structure sheaf of a scheme \(X\) is denoted by \(\mathcal{O}_{X}\). Let \(X\) be a smooth projective variety: we set \(\omega_{X}:=\det(\Omega_{X})\) and we denote by \(K_{X}\) any divisor such that \(\omega_{X}\cong\mathcal{O}_{X}(K_{X})\). We recall that \(\Omega_{X}\) and \(\mathcal{T}_{X}\) have rank \(n:=\dim(X)\), while \(\mathcal{C}_{X}\) and \(\mathcal{N}_{X}\) have rank \(N-n\) if \(X\subseteq\mathbb{P}^{N}\). For further notation and all the other necessary results not explicitly mentioned in the paper, we tacitly refer to [10] unless otherwise stated. In order to prove Theorems 1.4 and 1.2, we will make use of some results holding in arbitrary characteristic and concerning the Fujita conjecture on adjoint linear systems (see [9]). To this purpose we recall some definitions and results. Let \(X\) be a smooth variety. A curve in \(X\) is a closed subscheme of pure dimension \(1\). A line bundle \(\mathcal{O}_{X}(D)\) on \(X\) is nef if \(D\Gamma\geq 0\) for each irreducible curve \(\Gamma\subseteq X\). Notice that the nefness of \(\mathcal{O}_{X}(D)\) only depends on its class in the Neron-Severi group \(\operatorname{NS}(X)\). A scroll \(X\subseteq\mathbb{P}^{N}\) of dimension \(n\geq 2\) on a smooth curve \(B\) is a smooth projective variety endowed with a morphism \(\pi\colon X\to B\) whose fibres are isomorphic to \(\mathbb{P}^{n-1}\). In this case, there is a rank \(n\) vector bundle \(\mathcal{G}\) such that \(X\cong\mathbb{P}:=\mathbb{P}(\mathcal{G})\). Let \(\xi\in\operatorname{Pic}(X)\) be its antitautological class: we have \(\operatorname{Pic}(X)\cong\mathbb{Z}\mathcal{O}_{X}(\xi)\oplus\pi^{*} \operatorname{Pic}(B)\). All the fibres of \(\pi\) are algebraically equivalent and we denote by \(f\in\operatorname{NS}(X)\) their class. By the Chern equation \(\xi^{n}=\deg(\mathfrak{g})\) where \(\mathcal{O}_{B}(\mathfrak{g})=\det(\mathcal{G})\): by abuse of notation as in [10] we also write \[\omega_{X}\cong\mathcal{O}_{X}(-n\xi+(\mathfrak{g}+K_{B})f), \tag{2.1}\] where we set \(\mathcal{O}_{\mathbb{P}}(\mathfrak{a}f):=\pi^{*}\mathcal{O}_{B}(\mathfrak{a})\) for each divisor \(\mathfrak{a}\) on \(B\). Up to twisting \(\mathcal{G}\) by a suitable line bundle, we can assume that \(\mathcal{O}_{X}(h):=\mathcal{O}_{X}\otimes\mathcal{O}_{\mathbb{P}^{N}}(1) \cong\mathcal{O}_{\mathbb{P}}(\xi)\) is very ample. 
Thus \[\deg(\mathfrak{g})=\xi^{n}=h^{n}\geq 2, \tag{2.2}\] because \(X\not\cong\mathbb{P}^{n}\). **Theorem 2.1**.: _Let \(X\) be a smooth projective variety of dimension \(n\geq 1\) endowed with an ample line bundle \(\mathcal{O}_{X}(h)\)._ _Then either \(\omega_{X}((n-1)h)\) is nef or one of the following assertions hold._ 1. \(X\cong\mathbb{P}^{2}\) _and_ \(\mathcal{O}_{X}(h)\cong\mathcal{O}_{\mathbb{P}^{2}}(2)\)_._ 2. \(X\cong\mathbb{P}^{n}\) _and_ \(\mathcal{O}_{X}(h)\cong\mathcal{O}_{\mathbb{P}^{n}}(1)\)_._ 3. \(X\subseteq\mathbb{P}^{n+1}\) _is a smooth quadric hypersurface and_ \(\mathcal{O}_{X}(h)=\mathcal{O}_{X}\otimes\mathcal{O}_{\mathbb{P}^{n+1}}(1)\)_._ 4. \(n\geq 2\) _and_ \(X\subseteq\mathbb{P}^{N}\) _is a scroll on a smooth curve_ \(B\)_._ Proof.: See [14, Theorem 1] for the case \(n\geq 3\) and when \(n\leq 2\) the comments therein about the validity of the proofs in [12, 9] in any characteristic. The following corollaries are immediate by-products of the above theorem. **Corollary 2.2**.: _Let \(X\) be a smooth projective variety of dimension \(n\geq 1\) endowed with an ample line bundle \(\mathcal{O}_{X}(h)\)._ _Then either \(\omega_{X}(nh)\) is nef or \(X\cong\mathbb{P}^{n}\) and \(\mathcal{O}_{X}(h)\cong\mathcal{O}_{\mathbb{P}^{n}}(1)\)._ Proof.: The set of varieties \(X\) such that \(\omega_{X}(nh)\) is not nef is contained in the one of varieties such that \(\omega_{X}((n-1)h)\). Thus it suffices to check that \(\mathbb{P}^{n}\) endowed with \(\mathcal{O}_{X}(h):=\mathcal{O}_{\mathbb{P}^{n}}(1)\) is the only variety \(X\) listed in Theorem 2.1 such that \(\omega_{X}(nh)\) is nef. When \(Q\subseteq\mathbb{P}^{n+1}\) is a smooth quadric hypersurface, then \(\omega_{X}\cong\mathcal{O}_{X}(-nh)\), hence \(\omega_{X}(nh)\cong\mathcal{O}_{X}\) is trivially nef. When \(X\) is a scroll on a curve, we use the notation introduced above. Thanks to (2.1) we obtain \(\omega_{X}(nh)\cong\mathcal{O}_{X}((\mathfrak{g}+K_{B})f)\) in \(\mathrm{NS}(X)\). Thus, it suffices to show \[(nh+K_{X})\Gamma=(\deg(\mathfrak{g})+2p_{a}(B)-2)f\Gamma\geq 0\] for each irreducible curve \(\Gamma\subseteq X\). If \(\Gamma\) is contained in a fibre of \(\pi\), then \(f\Gamma=0\), because the general fibre does not intersect \(\Gamma\). If \(\Gamma\) is not contained in a fibre of \(\pi\), then \(\pi_{|\Gamma}\) is a finite map of degree \(f\Gamma\geq 1\). The equality (2.2) implies \(\deg(\mathfrak{g})+2p_{a}(B)-2\geq 0\), hence we deduce that \(\omega_{X}(nh)\) is nef. **Corollary 2.3**.: _Let \(X\) be a smooth projective variety of dimension \(n\geq 1\) endowed with an ample line bundle \(\mathcal{O}_{X}(h)\)._ _Then \(\omega_{X}((n+1)h)\) is nef._ Proof.: We use the same argument of the proof of Corollary 2.1. Assume now that \(\mathcal{E}\) is an \(h\)-instanton sheaf on a smooth projective variety \(X\) of dimension \(n\) endowed with an ample and globally generated line bundle \(\mathcal{O}_{X}(h)\): assume also that either \(\mathcal{O}_{X}(h)\) is very ample or \(p=0\). Thus the following strict restriction holds \[c_{1}(\mathcal{E})h^{n-1}=\frac{\mathrm{rk}(\mathcal{E})}{2}((n+1)h+K_{X})h^{n -1}, \tag{2.3}\] see [1, Theorem 1.6]: here \(h^{n}\) and \(K_{X}h^{n-1}\) denote the degrees of the line bundles \(\mathcal{O}_{X}(h)\) and \(\omega_{X}\) when \(n=1\). For further notation and all the other results used in the paper we tacitly refer to [10], unless otherwise stated. ## 3. Proof of Theorem 1.2 As pointed out in the introduction, the normal bundle is never an instanton bundle. 
We start by listing some easy examples of smooth varieties whose tangent bundle is or is not an instanton bundle: in [6] the same computations are used solely for dealing with the case \(k=0\). **Example 3.1**.: Let \(n=1\). If \(X\cong\mathbb{P}^{1}\) and \(\mathcal{O}_{X}(h)\cong\mathcal{O}_{\mathbb{P}^{1}}(d)\) with \(d\leq 3\), then \[h^{0}\big{(}\mathcal{T}_{X}(-h)\big{)}=h^{0}\big{(}\mathcal{O}_{\mathbb{P}^{1} }(2-d)\big{)}=3-d.\] In all the other cases \[h^{1}\big{(}\mathcal{T}_{X}(-h)\big{)}=h^{0}\big{(}\omega_{X}^{2}(h)\big{)} \geq 3p_{a}(X)-3+\deg(X)\geq 1.\] Thus \(\mathcal{T}_{X}\) is an \(h\)-instanton sheaf, if and only if \(X=\mathbb{P}^{1}\) and \(\mathcal{O}_{X}(h)=\mathcal{O}_{\mathbb{P}^{1}}(3)\). It is immediate to check that \(\mathcal{T}_{X}\) is the unique rank one \(h\)-Ulrich sheaf on \(X\). **Example 3.2**.: If \(X\cong\mathbb{P}^{2}\) and \(\mathcal{O}_{X}(h)\cong\mathcal{O}_{\mathbb{P}^{2}}(2)\), then the Bott's formulas imply \[h^{i}\big{(}\mathcal{T}_{X}(-h)\big{)}=h^{i}\big{(}\Omega_{\mathbb{P}^{2}}(1) \big{)}=0,\qquad h^{j}\big{(}\mathcal{T}_{X}(-2h)\big{)}=h^{j}\big{(}\Omega_ {\mathbb{P}^{2}}(-1)\big{)}=0,\] for \(i\leq 1\leq j\). Thus \(\mathcal{T}_{X}\) is an \(h\)-instanton sheaf. Notice that \(\mathcal{T}_{X}\) is the unique rank two \(h\)-Ulrich sheaf on \(\mathbb{P}^{2}\): see [7, Theorem 5.2]. We are ready to prove Theorem 1.2. Proof of Theorem 1.2.: As pointed out in the introduction, if \(X\subseteq\mathbb{P}^{N}\) and \(\mathcal{O}_{X}(h):=\mathcal{O}_{X}\otimes\mathcal{O}_{\mathbb{P}^{N}}(1)\), then \(\mathcal{N}_{X}\) is never an \(h\)-instanton bundle, hence it is not \(h\)-Ulrich as well. (2.3) becomes \[(((N-n)(n+1)+2(N+1))h+(N-n+2)K_{X})h^{n-1}=0. \tag{3.1}\] On the one hand, \(N>n\) yields \[(N-n)(n+1)+2(N+1)=(N-n+2)(n+1)+\lambda\] where \(\lambda=2(N-n)>0\). Thus (3.1) becomes \[(N-n+2)((n+1)h+K_{X})h^{n-1}=-\lambda h^{n}<0,\] thanks to the ampleness of \(\mathcal{O}_{X}(h)\). On the other hand \(((n+1)h+K_{X})h^{n-1}\geq 0\), because \(\omega_{X}((n+1)h)\) is nef by Corollary 2.3. The contradiction yields that \(\mathcal{C}_{X}\) is not an \(h\)-instanton bundle. The assertion on \(\Omega_{X}\) is a particular case of Theorem 1.4. Anyhow, we can also prove the assertion with the same argument used in the previous case. Indeed, in this case it leads to \[(n-2)((n+1)h+K_{X})h^{n-1}=-2(n+1)h^{n}<0,\] again contradicting Corollary 2.3 as in the previous case when \(n\geq 2\). If \(n=1\) the condition \(h^{1}(\Omega_{X}(-h))=0\), leads to \(h^{0}(\mathcal{O}_{X}(h))=0\) by duality, again a contradiction. We deduce that \(\Omega_{X}\) is not an \(h\)-instanton sheaf. We now focus our attention on \(\mathcal{T}_{X}\) in what follows. The case \(n=1\) is completely described in Example 3.1, hence we will assume \(n\geq 2\) from now on. The equality (2.3) for \(\mathcal{T}_{X}\) becomes \[(n(n+1)h+(n+2)K_{X})h^{n-1}=0. \tag{3.2}\] If \(\lambda:=n(n+1)-(n+2)(n-1)>0\), then we have the obvious equality \[(n(n+1)h+(n+2)K_{X})h^{n-1}=(n+2)((n-1)h+K_{X})h^{n-1}+\lambda h^{n}\] If \(\omega_{X}((n-1)h)\) is nef, then we can argue as for \(\mathcal{C}_{X}\), because \(\mathcal{O}_{X}(h)\) is ample. Let us examine the cases listed in Theorem 2.1 when \(\omega_{X}((n-1)h)\) is not nef. In the case (1) of Theorem 2.1 the sheaf \(\mathcal{T}_{X}\) is an \(h\)-instanton bundle thanks to Example 3.2. In the case (2) of Theorem 2.1, we deduce that (3.2) becomes \(-2=0\), while in case (3) we get \(-n=0\): thus \(\mathcal{T}_{X}\) is not an \(h\)-instanton sheaf. 
Consider the case (4) of Theorem 2.1. Thus \(n\geq 2\) and \(X\subseteq\mathbb{P}^{N}\) is a scroll on a smooth curve \(B\). Thanks to (2.1) and (2.2), then (3.2) becomes \[\deg(\mathfrak{g})=-(2+n)(p_{a}(B)-1).\] It follows that \(p_{a}(B)=0\) necessarily, because the left-hand side is positive and the right-hand one is non-positive if \(p_{a}(B)\geq 1\). Thus \(B\cong\mathbb{P}^{1}\) and \(\deg(\mathfrak{g})=n+2\), hence \(K_{\mathbb{P}}=-n\xi+nf\). If \(\mathcal{T}_{\mathbb{P}\mid\mathbb{P}^{1}}\) is the relative tangent sheaf of the morphism \(p\colon\mathbb{P}\to\mathbb{P}^{1}\), we have the exact sequence \[0\longrightarrow\mathcal{T}_{\mathbb{P}\mid\mathbb{P}^{1}}\longrightarrow \mathcal{T}_{\mathbb{P}}\longrightarrow\mathcal{O}_{\mathbb{P}}(2f)\longrightarrow 0\] because since \(\pi\) is smooth. Its cohomology tensored by \(\mathcal{O}_{\mathbb{P}}(-n\xi)\) and the Serre duality return \[h^{n}(\mathcal{T}_{\mathbb{P}}(-n\xi))\geq h^{n}(\mathcal{O}_{\mathbb{P}}(-n \xi+2f))=h^{0}(\mathcal{O}_{\mathbb{P}}((n-2)f))=n-1\geq 1.\] Thus \(\mathcal{T}_{\mathbb{P}}\) is not an \(h\)-instanton sheaf. **Remark 3.3**.: In order to prove Theorem 1.2 we only used that the bundles \(\mathcal{E}\) we are interested in actually satisfy the following properties: * \(h^{0}\big{(}\mathcal{E}(-h)\big{)}\neq 0\) (used for \(\mathcal{N}_{X}\)); * \(h^{n}\big{(}\mathcal{E}(-nh)\big{)}=0\) (used for curves and occasionally for \(\mathcal{T}_{X}\)); * \(\mathcal{E}\) satisfies (2.3) (used for \(\mathcal{C}_{X}\), \(\Omega_{X}\) and \(\mathcal{T}_{X}\)) **Remark 3.4**.: If the characteristic of \(\mathbf{k}\) is zero, then (2.3) holds only assuming that \(\mathcal{O}_{X}(h)\) is ample and globally generated, hence the same is true for the assertions about \(\Omega_{X}\) and \(\mathcal{T}_{X}\) in Theorem 1.2. **Remark 3.5**.: If \(X\subseteq\mathbb{P}^{N}\), then the same argument used in the proof of Theorem 1.2 easily implies that \(\mathcal{O}_{X}\otimes\Omega_{\mathbb{P}^{N}}\) and \(\mathcal{O}_{X}\otimes\mathcal{T}_{\mathbb{P}^{N}}\) are never instanton bundles with respect to \(\mathcal{O}_{X}(h):=\mathcal{O}_{X}\otimes\mathcal{O}_{\mathbb{P}^{N}}(1)\). ## 4. Proof of Theorem 1.4. We already checked in Theorem 1.2 that \(\Omega_{X}\) is never an \(h\)-instanton bundle with a very short proof. In this section we deal with the twists of the cotangent bundle, giving the proof of Theorem 1.4 stated in the introduction. We start with the following example analyzing the case \(n=1\): in [17] essentially the same computations are used solely for dealing with the case \(k=0\). **Example 4.1**.: Assume \(n=1\). On the one hand, if \(\Omega_{X}(ah)\) is an \(h\)-instanton, then \[h^{0}(\Omega_{X}((a-1)h))=h^{1}(\Omega_{X}((a-1)h))=0:\] in particular \[h^{0}(\mathcal{O}_{X}((1-a)h))=h^{1}(\Omega_{X}((a-1)h))=0,\] hence \(a\geq 2\), because \(\mathcal{O}_{X}(h)\) is globally generated. On the other hand, the Riemann-Roch theorem on \(X\) implies \[h^{0}(\Omega_{X}((a-1)h))-h^{1}(\Omega_{X}((a-1)h))=p_{a}(X)-1+(a-1)\deg(X).\] Since \(a\geq 2\), it follows that \(p_{a}(X)=0\) necessarily and, consequently, \(a=2\) and \(\deg(X)=1\), i.e. \(X\cong\mathbb{P}^{1}\) and \(\mathcal{O}_{X}(h)\cong\mathcal{O}_{\mathbb{P}^{1}}(1)\). Conversely, it is immediate to check that \(\Omega_{\mathbb{P}^{1}}(2)\cong\mathcal{O}_{\mathbb{P}^{1}}\) is the unique rank one instanton (and Ulrich) sheaf on \(\mathbb{P}^{1}\) with respect to \(\mathcal{O}_{\mathbb{P}^{1}}(1)\). We now prove Theorem 1.4 stated in the introduction. 
Proof of Theorem 1.4.: If \(\Omega_{X}(ah)\) is an \(h\)-instanton bundle, then (2.3) implies \[(n^{2}+(1-2a)n)h^{n}+(n-2)K_{X}h^{n-1}=0. \tag{4.1}\] If \(n=1\), then \(a=2\), \(X\cong\mathbb{P}^{1}\) and \(\mathcal{O}_{X}(h)\cong\mathcal{O}_{\mathbb{P}^{1}}(1)\) by Example 4.1. If \(n=2\), then (4.1) has no integral solutions, hence \(\Omega_{X}(ah)\) is not an \(h\)-instanton. Thus, the proof is complete also when \(n=2\). Assume \(n\geq 3\). We have \[n^{2}+(1-2a)n=n(n-2)+n(3-2a).\] Let \(a\leq 1\). On the one hand, \(\lambda=n(3-2a)>0\), hence (4.1) becomes \[(n-2)(nh^{n}+K_{X}h^{n-1})=-\lambda h^{n}<0,\] due to the ampleness of \(\mathcal{O}_{X}(h)\). On the other hand the left-hand side of the equality above is strictly positive by Corollary 2.2. From the contradiction we deduce \(a\geq 2\). Notice that the exterior product of the dual of (1.2) tensored by \(\mathcal{O}_{\mathbb{P}^{N}}(a)\) is \[0\longrightarrow(\wedge^{2}\Omega_{\mathbb{P}^{N}})(a)\longrightarrow \mathcal{O}_{\mathbb{P}^{N}}(a-2)^{\oplus\binom{N+1}{2}}\longrightarrow\Omega _{\mathbb{P}^{N}}(a)\longrightarrow 0.\] Its restriction to \(X\) combined with the surjective morphism \(\mathcal{O}_{X}\otimes\Omega_{\mathbb{P}^{N}}(a)\twoheadrightarrow\Omega_{X}(ah)\) induced by the dual of (1.1) implies that \(\Omega_{X}(ah)\) is globally generated for \(a\geq 2\). Since we must have \(h^{0}(\Omega_{X}((a-1)h))=0\) by definition, it follows that \(a\leq 2\) necessarily. Thus, if \(n\geq 3\) and \(\Omega_{X}(ah)\) is an \(h\)-instanton, then \(a=2\) necessarily. By definition \(\Omega_{X}(2h)\) is not an \(h\)-instanton bundle when \(n\geq 3\) if \[h^{1}(\Omega_{X})\geq 1. \tag{4.2}\] If \(\Omega_{X}(2h)\) is an \(h\)-instanton bundle, then \(h^{0}(\Omega_{X}(h))=0\) by definition. Thus, tensoring (1.2) by \(\Omega_{X}\) we obtain an injective map \[\varrho\colon\operatorname{Hom}_{X}(\mathcal{T}_{X},\mathcal{O}_{X}\otimes \mathcal{T}_{\mathbb{P}^{N}})\cong H^{0}(\Omega_{X}\otimes\mathcal{T}_{ \mathbb{P}^{N}})\longrightarrow H^{1}(\Omega_{X}).\] Thus (1.1) implies \(\varrho\neq 0\), which yields (4.2). Thus, \(\Omega_{X}(2h)\) is not an \(h\)-instanton bundle when \(n\geq 3\). **Remark 4.2**.: If \(\mathbf{k}=\mathbb{C}\), then (4.2) certainly holds because the Lefschetz \((1,1)\)-theorem implies the existence of an injective morphism \(\operatorname{NS}(X)\to H^{1}(\Omega_{X})\), hence there would be no need of further computations in this case. When \(p\neq 0\) the above morphism still exists, but it could be not injective, hence we cannot argue (4.2) in the same way. ## 5. On the tangent bundle of subcanonical varieties In this section we collect some partial results and examples showing that the problem of determining whether \(\mathcal{T}_{X}(ah)\) is an \(h\)-instanton bundle might be highly non-trivial (hence, perhaps, quite intriguing). The following result is an immediate consequence of Theorem 1.2. **Proposition 5.1**.: _Let \(X\subseteq\mathbb{P}^{N}\) be a smooth projective variety of dimension \(n\geq 1\) and \(\mathcal{O}_{X}(h):=\mathcal{O}_{X}\otimes\mathcal{O}_{\mathbb{P}^{N}}(1)\). 
Assume that \(\omega_{X}\cong\mathcal{O}_{X}(\alpha h)\) for some \(\alpha\in\mathbb{Z}\)._ _Then \(\mathcal{T}_{X}(ah)\) is an \(h\)-instanton bundle if and only if \(a=\alpha+n-1\), \(X\cong\mathbb{P}^{1}\) and \(\mathcal{O}_{X}(h)\cong\mathcal{O}_{\mathbb{P}^{1}}(1)\);_ Proof.: Since \(\omega_{X}\cong\mathcal{O}_{X}(\alpha h)\), it follows from the Serre duality that \(\mathcal{T}_{X}(ah)\) is an \(h\)-instanton bundle if and only if the same is true for \(\Omega_{X}((\alpha-a-n-1)h)\) (see [1, Section 6]). Thus the statement follows easily from Theorem 1.4. In view of the above proposition, it is perhaps natural to deal with the pluricanonical and antipluricanonical varieties \(X\), i.e. such that \(\omega_{X}^{\beta}\cong\mathcal{O}_{X}(h)\) for some \(\beta\in\mathbb{Z}\). Trivially \(\beta\neq 0\) and the case \(\beta=\pm 1\) is covered by Proposition 5.1, hence we assume \(\beta\not\in\{\ 0,\pm 1\ \}\) in the following statement. **Proposition 5.2**.: _Let \(X\) be a smooth projective variety of dimension \(n\geq 1\) endowed with an ample and globally generated line bundle \(\mathcal{O}_{X}(h)\). Assume that \(\omega_{X}^{\beta}\cong\mathcal{O}_{X}(h)\) for some \(\beta\in\mathbb{Z}\setminus\{\ 0,\pm 1\ \}\)._ _If \(\mathcal{T}_{X}(ah)\) is an \(h\)-instanton bundle, then \(n=2\), \(1\leq a\leq 2\) and \(1\leq K_{X}^{2}\leq 5\chi(\mathcal{O}_{X})\). In this case the quantum number of \(\mathcal{T}_{X}(ah)\) is \(10\chi(\mathcal{O}_{X})-2K_{X}^{2}\)._ Proof.: Assume that \(\mathcal{T}_{X}(ah)\) is an \(h\)-instanton bundle. The hypothesis on \(\omega_{X}\) and (2.3) yield \(n(2a-n-1)\beta=n+2\), because \(0\neq h^{n}=\beta^{n}K_{X}^{n}\). In particular \(2a\neq n+1\) because \(n\geq 1\). The above equality has no integral solution if \(n=1\), hence we will assume \(n\geq 2\) from now on: thus \(1<(n+2)/n\leq 2\). Since \[\beta=\frac{n+2}{n(2a-n-1)}\in\mathbb{Z},\] it follows that necessarily \((n+2)/n=2\) and \(2a-n-1\in\{\ \pm 1,\pm 2\ \}\). We deduce that \(n=2\), hence \(\beta=\pm 2\) (recall that \(\beta\neq\pm 1\)): consequently \(2a=3\pm 1\), whence \(1\leq a\leq 2\). If \(\mathcal{T}_{X}(ah)\) is an \(h\)-instanton, then \(h^{0}(\mathcal{T}((a-1)h))=h^{2}(\mathcal{T}((a-1)h))=0\). We have \[c_{1}(\mathcal{T}_{X}((a-1)h))=2(a-1)h-K_{X},\] \[c_{2}(\mathcal{T}_{X}((a-1)h))=12\chi(\mathcal{O}_{X})-K_{X}^{2} -(a-1)hK_{X}+(a-1)^{2}h^{2},\] hence the Riemann-Roch theorem returns \[k=-h^{1}(\mathcal{T}_{X}((a-1)h))=-\chi(\mathcal{T}_{X}((a-1)h))=10\chi(\mathcal{O }_{X})-K_{X}^{2}-((a-1)h-K_{X})^{2}.\] Thus for \(1\leq a\leq 2\) we obtain \(k=10\chi(\mathcal{O}_{X})-2K_{X}^{2}\geq 0\) whence \(K_{X}^{2}\leq 5\chi(\mathcal{O}_{X})\). On the other hand \(\beta^{2}K_{X}^{2}=h^{2}\geq 1\), whence \(K_{X}^{2}\geq 1\). For simplicity, in the following examples, we assume that \(p=0\). In these examples, we inspect the surfaces in Proposition 5.2 in more detail, also showing that \(q(X)=0\) necessarily. Notice that the existence on such surfaces of rank two \(h\)-instanton bundles \(\mathcal{E}\) with \(c_{1}(\mathcal{E})=3h+K_{X}=(2\beta+1)K_{X}\) and arbitrary quantum number follows from [1, Example 6.11], because \(h^{1}(\omega_{X}^{\beta})=0\) thanks to the Kodaira vanishing theorem, because \(\beta\in\mathbb{Z}\setminus\{\ 0,\pm 1\ \}\). **Example 5.3**.: Let \(a=1\), hence \(\beta=-2\). In this case \(\mathcal{O}_{X}(h)\cong\omega_{X}^{-2}\): in particular \(\omega_{X}^{-1}\) is ample, hence \(X\) is a Del Pezzo surface. 
Every Del Pezzo surface is either the blow up of \(\mathbb{P}^{2}\) at \(0\leq r\leq 8\) general points or it is isomorphic to \(\mathbb{P}^{1}\times\mathbb{P}^{1}\). In the former case \(K_{X}^{2}=9-r\) is called degree of \(X\) and \(\omega_{X}^{-1}\) is globally generated if \(r\leq 7\) and very ample if \(r\leq 6\). In the latter case \(\omega_{X}^{-1}\) is very ample and \(K_{X}^{2}=8\). The bundle \(\mathcal{T}_{X}(h)\) is an \(h\)-instanton if and only if \(h^{0}(\mathcal{T}_{X})=0\), because it is orientable, i.e. \(c_{1}(\mathcal{T}_{X}(h))=3h+K_{X}=-5K_{X}\) (see [1, Corollary 6.9]): moreover, in this case, its quantum number is \(h^{1}(\mathcal{T}_{X})=h^{1}(\mathcal{T}_{X}(-h))\). We recall that \(h^{0}(\mathcal{T}_{X})\) is the dimension of the tangent space to \(\operatorname{Aut}(X)\) at the identity because \(p=0\) (see [15, Exercise I.2.16.4]): it follows that \(\mathcal{T}_{X}(h)\) is an \(h\)-instanton if and only if \(\operatorname{Aut}(X)\) is finite. If \(2\leq K_{X}^{2}\leq 5\), then \(\operatorname{Aut}(X)\) is finite (see [8, Corollary 8.2.33]. If \(K_{X}^{2}\geq 6\), then either \(X\) is \(\mathbb{P}^{2}\) blown up at \(0\leq r\leq 3\) general points, hence \(\operatorname{Aut}(X)\) has positive dimension because it contains as subgroup the group of projectivities of \(\mathbb{P}^{2}\) fixing the blown up points, or \(X\cong\mathbb{P}^{1}\times\mathbb{P}^{1}\), hence it contains \(\operatorname{PGL}_{2}\times\operatorname{PGL}_{2}\) as subgroup. We conclude that \(\mathcal{T}_{X}(h)\) is an \(h\)-instanton if and only if \(2\leq K_{X}^{2}\leq 5\) and it is \(h\)-Ulrich if and only if \(K_{X}^{2}=5\). This result is a very particular case of a more general result proved in [18]. **Example 5.4**.: Let \(a=2\), hence \(\beta=2\). In this case \(\mathcal{O}_{X}(h)\cong\omega_{X}^{2}\): in particular \(X\) is a surface of general type and it is minimal because \(\omega_{X}\) is ample. Moreover, \(\mathcal{T}_{X}(2h)\) is an \(h\)-instanton if and only if \(h^{0}(\mathcal{T}(h))=h^{0}(\mathcal{T}(2K_{X}))=0\). If \(p_{g}(X)\geq 1\), we then obtain \[0=h^{0}(\mathcal{T}_{X}(2K_{X}))=h^{0}(\Omega_{X}^{1}(K_{X}))\geq h^{0}(\Omega_ {X}^{1})=h^{1}(\mathcal{O}_{X})=q(X),\] because \(\mathcal{T}_{X}(K_{X})\cong\Omega_{X}^{1}\). If \(p_{g}(X)=0\), then \(q(X)=0\), because \(\chi(\mathcal{O}_{X})\geq 1\) (see [4, Theorem VII.1.1 (ii)]). Thus \(X\) is embedded by \(\mathcal{O}_{X}(h)\) as a surface of degree \(d:=h^{2}=4K_{X}^{2}\) inside \(\mathbb{P}^{N}\) where \(N=h^{0}(\omega_{X}^{2})-1=K_{X}^{2}+p_{g}(X)\) (see [4, Corollary VII.5.4]). If \(N=K_{X}^{2}+p_{g}(X)=3\), then \(\omega_{X}\cong\mathcal{O}_{X}((d-4)h)\) thanks to the adjunction formula in \(\mathbb{P}^{3}\). If \(N=K_{X}^{2}+p_{g}(X)=4\), then the double point formula (see [10, Example A.4.1.3]) implies \[d^{2}-13d+12\chi(\mathcal{O}_{X})=0.\] because \(d=4K_{X}^{2}\). Since \(\chi(\mathcal{O}_{X})=1+p_{g}(X)\geq 1\), it follows that the only possible cases for \(K_{X}^{2}\) being a positive integer such that \(d=4K_{X}^{2}\) is a solution of the equation above are either \(p_{g}(X)=0\) and \(K_{X}^{2}=3\) or \(p_{g}(X)=2\) and \(K_{X}^{2}=1\). In both cases \(K_{X}^{2}+p_{g}(X)=3\), contradicting the hypothesis \(N=4\), hence \(N=K_{X}^{2}+p_{g}(X)\geq 5\). 
Moreover, the classification of surfaces of degree up to \(8\) in \(\mathbb{P}^{N}\) (see [11, 13]: see also [19]) implies that the unique surface \(X\) with \(\kappa(X)=2\) and \(K_{X}^{2}\leq 2\) is a quadro-quartic complete intersection in \(\mathbb{P}^{4}\). In this case \(\omega_{X}\cong\mathcal{O}_{X}(h)\), hence \(K_{X}^{2}\geq 3\) necessarily. We do not know if a surface \(X\) such that \(h^{0}(\mathcal{T}_{X}(h))=0\) actually exists, but if it does, \(X\) is a minimal surface of general type such that \[\mathcal{O}_{X}(h)\cong\omega_{X}^{2},\qquad q(X)=0,\qquad\max\{\;3,6-\chi( \mathcal{O}_{X})\;\}\leq K_{X}^{2}\leq 5\chi(\mathcal{O}_{X}),\] and \(\mathcal{T}_{X}(2h)\) is \(h\)-Ulrich if and only if \(K_{X}^{2}=5\chi(\mathcal{O}_{X})\). In particular, either \(\mathcal{T}_{X}(2h)\) is \(h\)-Ulrich or \(h^{0}(\mathcal{T}(h))=h^{0}(\mathcal{T}(2K_{X}))\neq 0\) when \(p_{g}(X)=0\). **Remark 5.5**.: We could also look for varieties \(X\) with \(\omega_{X}^{\beta}\cong\mathcal{O}_{X}(\alpha h)\) and such that \(\mathcal{T}_{X}(ah)\) is an \(h\)-instanton bundle, for suitable \(\alpha,\beta,a\in\mathbb{Z}\). E.g. the Veronese surface satisfies the above hypothesis with \(\alpha=3\), \(\beta=-2\), \(a=0\) (see Example 3.2 below or [6]): notice that in this case \(\mathcal{O}_{X}(h)\cong\mathcal{O}_{\mathbb{P}^{2}}(2)\) and \(\mathcal{T}_{X}\) is actually \(h\)-Ulrich. We refer the interested reader to [18] for further results, examples and details.
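For completeness, we spell out the elementary arithmetic behind the two cases singled out in Example 5.4; this is only a verification of the quoted step, using \(d=4K_{X}^{2}\) and \(\chi(\mathcal{O}_{X})=1+p_{g}(X)\) (recall \(q(X)=0\)). The double point relation \(d^{2}-13d+12\chi(\mathcal{O}_{X})=0\) becomes \[4K_{X}^{4}-13K_{X}^{2}+3(1+p_{g}(X))=0,\qquad\text{i.e.}\qquad K_{X}^{2}=\frac{13\pm\sqrt{169-48(1+p_{g}(X))}}{8}.\] The discriminant is non-negative only for \(p_{g}(X)\leq 2\), and it is a perfect square with \(K_{X}^{2}\) a positive integer only for \(p_{g}(X)=0\) (giving \(K_{X}^{2}=3\), the other root being \(1/4\)) and for \(p_{g}(X)=2\) (giving \(K_{X}^{2}=1\), the other root being \(9/4\)), as claimed there.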
2308.09632
VALERIE22 -- A photorealistic, richly metadata annotated dataset of urban environments
The VALERIE tool pipeline is a synthetic data generator developed with the goal to contribute to the understanding of domain-specific factors that influence perception performance of DNNs (deep neural networks). This work was carried out under the German research project KI Absicherung in order to develop a methodology for the validation of DNNs in the context of pedestrian detection in urban environments for automated driving. The VALERIE22 dataset was generated with the VALERIE procedural tools pipeline providing a photorealistic sensor simulation rendered from automatically synthesized scenes. The dataset provides a uniquely rich set of metadata, allowing extraction of specific scene and semantic features (like pixel-accurate occlusion rates, positions in the scene and distance + angle to the camera). This enables a multitude of possible tests on the data and we hope to stimulate research on understanding performance of DNNs. Based on performance metric a comparison with several other publicly available datasets is provided, demonstrating that VALERIE22 is one of best performing synthetic datasets currently available in the open domain.
Oliver Grau, Korbinian Hagn
2023-08-18T15:44:45Z
http://arxiv.org/abs/2308.09632v1
# VALERIE22 - A photorealistic, richly metadata annotated dataset of urban environments ###### Abstract The VALERIE tool pipeline is a synthetic data generator [12] developed with the goal to contribute to the understanding of domain-specific factors that influence perception performance of DNNs (deep neural networks). This work was carried out under the German research project KI Absicherung in order to develop a methodology for the validation of DNNs in the context of pedestrian detection in urban environments for automated driving. The VALERIE22 dataset was generated with the VALERIE procedural tools pipeline providing a photorealistic sensor simulation rendered from automatically synthesized scenes. The dataset provides a uniquely rich set of metadata, allowing extraction of specific scene and semantic features (like pixel-accurate occlusion rates, positions in the scene and distance + angle to the camera). This enables a multitude of possible tests on the data and we hope to stimulate research on understanding performance of DNNs. Based on performance metric a comparison with several other publicly available datasets is provided, demonstrating that VALERIE22 is one of best performing synthetic datasets currently available in the open domain. 1 Footnote 1: Available here: [https://huggingface.co/datasets/Intel/VALERIE22](https://huggingface.co/datasets/Intel/VALERIE22) ## 1 Introduction Recently, great progress has been made in applying machine learning techniques to deep neural networks to solve perceptional problems. Automated vehicles (AV) are a recent focus as an important application of perception from cameras and other sensors, such as LIDAR and Radar [31]. Although the current main effort is on developing the hardware and software to implement the functionality of AVs, it will be equally important to demonstrate that this technology is safe. The German collaborative research project _KI Absicherung_[1] was a cross industry and academia effort to develop a methodology for the validation of DNNs in the context of pedestrian detection in urban environments for automated driving. Specifically, one important goal of that project was to make the safety aspects of ML-based perception functions predicable. As one important research stream of this project synthetic data generation was used as a base, as this allows full control over domain-specific scene parameters and the ability to generate parameter variations of these. Further, additional metadata annotations were specified and automated computation of these were added to the synthesis pipeline. The VALERIE tools pipeline was developed as a research tool to improve quality of data synthesis and to get an understanding of factors that determine the domain gap between synthetic and real datasets. For that a powerful synthesis pipeline has been developed, which allows the fully automated creation of complex urban scenes. In this paper we only summarize some of the functionalities of the VALERIE synthesis pipeline and focus on a description of the (meta-)data formats of the _VALERIE22_ dataset that was generated with the tool chain. More details on the synthesis tools can be found in [12]. Additionally, we present evaluation results to assess the quality of our synthetic data compared to other synthetic datasets in the autonomous driving domain. ### Related work In [12] we suggest a computational data synthesis approach for deep validation of perception functions based on parameterized synthetic data generation. 
We introduce a multi-stage strategy to sample the input domain and to reduce the required vast amount of computational effort. This concept is an extension and generalization of our previous work on parameterization of the scene parameters of concrete scenarios. We extended this parameterization by a probabilistic scene generator to widen the coverage of scenario spaces and a more realistic sensor simulation. These approaches were used to generate the scenes and data in the _VALERIE22_ dataset. Techniques to capture and render models of the real world have been matured significantly over the last decades. Computer generated imagery (CGI) is increasingly popular for training and validation of deep neural networks (DNNs) as synthetic data can avoid privacy issues found with recordings of members of the public and can automatically produce ground truth data at higher quality and reliability than costly manually labeled data. Moreover, simulations allow synthesis of rare scene constellations helping validation of products targeting safety critical applications, specifically automated driving. Because of the progress in visual and multi-sensor synthesis, now building systems for validation of these complex systems in the data center becomes not only feasible but also offers more possibilities for the integration of intelligent techniques in the engineering process of complex applications. The use of synthesized data for development and validation is an accepted technique and has been also suggested for computer vision applications (e.g. [2]). Several methodologies for verification and validation of AVs have been developed [16, 7, 17] and commercial options exist.2 These tools were originally designed for virtual testing of automotive functions, like braking systems and then extended to provide simulation and management tools for virtual test drives in virtual environments. They provide real-time capable models for vehicles, roads, drivers, and traffic which are then being used to generate test (sensor) data as well as APIs for users to integrate the virtual simulation into their own validation systems. Footnote 2: For example Carmaker from IPG or PreScan from TASS International. Recently, specifically in the domain of driving scenarios, game engines have been adapted [22, 29]. Another virtual simulator system, which gained popularity in the research community is CARLA [9], also based on a commercial game engine (Unreal4 [10]). Although game engines provide a good starting point to simulate environments, they usually only offer a closed rendering set-up with many trade-offs balancing between real-time constraints and a subjectively good visual appearance to human observers. Specifically, the lighting computation in this rendering pipelines is limited and does not produce physically correct imagery. Instead, game engines only deliver fixed rendering quality typically with 8bit per RGB color channel and only basic shadow computation. In contrast, physical-based rendering techniques have been applied to the generation of data for training and validation, like in the _Synscapes_ dataset [28]. For our experimental work we use the physical-based open source Blender Cycles renderer3 in high dynamic range (HDR) resolution. Footnote 3: [https://www.blender.org/](https://www.blender.org/) The effect of sensor and lens effects on perception performance has only been limited studied. In [3, 19] the authors are modeling camera effects to improve synthetic data for the task of bounding box detection. 
Metrics and parameter estimation of the effects from real camera images are suggested by [18] and [4]. A sensor model including sensor noise, lens blur, and chromatic aberration was developed based on real data sets [13] and integrated into our validation framework. Looking at virtual scene content, most recent simulation systems for validation of complete AD system include simulation and testing of the ego-motion of a virtual vehicle and its behavior. The used test content or scenarios are therefore aiming to simulate environments with a large extension and are virtually driving a high number of test miles (or km) in the virtual world provided [7, 27, 20]. This might be a good strategy to validate full AD stacks, one problem for validation of perception systems is the limited coverage of data testing critical and performance limiting factors. A more suitable approach is to use probabilistic grammar systems [28, 8] to generate 3D scenarios which include a catalog of different object classes and places them relative to each other to cover the complexity of the input domain. The _VALERIE22_ dataset demonstrates the effectiveness of our probabilistic grammar system together with our previous scene parameter variation [25] with a novel multi-stage strategy. This approach allows to systematically test conditions and relevant parameters for validation of the perceptional function under consideration in a structured way. The remainder of this contribution is structured as the following: The next section will give an outline of our synthesis approach and a description of the generated metadata. In section 3 we will give a comparison of _VA-LERIE22_ with a number of publicly available real and synthetic datasets. ## 2 Valerie data synthesis pipeline VALERIE is composed of several modules, as depicted in fig. 1. The validation flow control is in principle designed to run automated validation strategies in a data center, with the help of the 'SCALA' orchestration module based on slurm4. A description of the concept of these modules is outside the scope of this paper, see [12] for more details. The aim in here is to only give an overview over some of the modules in the data synthesis part, so that the reader is able to understand the features of the dataset and how to identify objects in the rendered frames. Footnote 4: [https://slurm.schedmd.com/documentation.html](https://slurm.schedmd.com/documentation.html) ### Computation of synthetic data Synthetic data is generated with graphics methods. Specifically for color (RGB) images, there are many software systems available, both commercially and as open source. For the generation of the dataset described in this paper Blender was used as a base to import, edit, and rendering of 3D content. The generation of highly varied synthetic data involves the following steps: 1. A 3D scene model with a city model is generated using a terrain/street generator. Parameters like width of a street and pavement, type of segment (e.g. tall houses, sub-urban residential, green/park, place, etc.) and materials for roads, sidewalks, segments are generated based on a scene description. Alongside this process the semantic information about the types and geometry of the segments is passed as input to the next step. 2. A placement step is inserting 3D assets, like cars, vegetation, road elements and pedestrians into the scene. This placement is inserting objects based on a density declaration (per segment) and a list of assets for this type of segment (e.g. 
road, sidewalk, etc.). The result is a complete scene. Fig. 3 shows examples of scenes with a variation of person densities. 3. (optionally) a set of scene parameters can be varied before each rendering pass. This includes position of objects, cameras and time-of-the-day (to vary the sun position) and many more. The dataset contains a multitude of additional metadata. For example all objects in the scenes are tagged with an identifier (see next section) and semantic and scene information, like position in the scene and distance + angle to the camera is documented in form of json files. This enables a multitude of possibilities to analyze the data and we hope to stimulate research on understanding performance of DNNs with our dataset. ### Assets and object instances The assets5 in the asset database (left side in fig. 1) have a unique identifier in form of a UUID (Universally Unique IDentifier). This identifier is used in the scene description either explicitly (for static objects) or in selection lists used by the probabilistic scene generator. Footnote 5: An asset here means a 3D model or 2D texture. The asset id6 is also used to identify objects in the rendered frames. The dataset contains metadata files (json format) with a list of objects and their asset ids. Objects are also identified with a specific UUID. This is depicted in fig. 2. In the appendix, section on _Metadata_ an example json file is listed. The "entities" key, in this example "91" is an integer and corresponds to the instance label (see below) of the instance ground truth. With the help of the scene metadata files and the unique UUIDs of the assets it is possible to identify assets in the rendered scene. This can be used for statistical purposes or to retrieve more information from the asset database (not included in the dataset). Footnote 6: id — is identifier for brevity. The scene composition and also the used assets in _VA-LERIE22_ are European, e.g. the traffic signs and road markings are German. The types of houses are also mainly European style. ### Ground truth and metadata The _VALERIE22_ dataset provides a very rich set of metadata annotations and ground truth: * pixel-aligned class groups (semantic label image) * pixel-aligned object instances (label image) Figure 1: Overview of VALERIE pipeline flow. * object 2D bounding box * object 3D bounding box * object position and orientation, angle and distance to camera * object occlusion (only for person class) * scene parameters, specifically time-of-the-day and sun (illumination) * camera parameter, including pose in scene The labels for object classes will be mapped to a convention used in annotation formats and follows the Cityscapes convention [6] for training and evaluation of the perception function. The 2D image of a scene is computed along with the ground truth extracted from the modeling software rendering engine. ### Sensor Simulation We implemented a sensor model to simulate real sensor behavior. The module works on HDR images in linear RGB space and floating point resolution as provided by the Blender Cycles renderer. We simulate a camera error model by applying **sensor noise**, as added Gaussian Noise (mean=0, variance: free parameter) and a automatic, histogram-based exposure control (linear tone-mapping), followed by non-linear **Gamma correction**. Further, we simulate the following lens artifacts **chromatic aberration**, and **blur**. Fig. 
4 shows a comparison of the standard tone-mapped 8bit RGB output of Blender (left) with our sensor simulation (right). The parameters were adapted to approximate the camera characteristics of Cityscapes images. The images not only look more realistic to the human eye, they also further close the domain gap between the synthetic and real data (for details see [13]). #### 2.4.1 Sampling of variable parameters Variations in the dataset were created by linear stepping through a parameter interval or by random sampling of these. Examples are time-of-the-day to control the sun settings or position and orientation of the camera. The parameters used in variation runs are documented in a json file with the actual parameter variations. However, the sun and camera parameters are also documented in the 'per-frame-analysis' file. Figure 4: Realistic sensor effect simulation, (left) standard Blender tone-mapped output, (right) the sensor simulation output. Figure 3: Variation of the density of pedestrians in the street and on the sidewalk, from low (top) to high (bottom). Figure 2: Object identifiers allow tracking of object instances through the rendered frames and metadata. ## 3 Evaluation To evaluate the quality of our dataset we conducted several experiments using the semantic segmentation task. We train a DeeplabV3+ model on our synthetic data and compare its segmentation performance with models trained on several other synthetic datasets. The performance of these models is then evaluated on five different real-world automotive segmentation datasets. Use cases of our metadata include improved training and identification of impairing factors (for more details see [14, 15]). Next, we investigated the segmentation performance on the person class of the _CityPersons_ dataset when training the model on subsets of our dataset. We additionally evaluated the person class performance with models trained on subsets of the _SynPeDS_ dataset [24] provided by the KI Absicherung project7. Finally, we investigated how the performance of the models differs with the number of unique person assets used to create the datasets and their subsets. Footnote 7: Currently a publication of the SynPeDS dataset is under preparation, see [https://www.ki-absicherung-projekt.de/](https://www.ki-absicherung-projekt.de/) Lastly, we investigated how the number of training images influences the segmentation performance. Again we trained on subsets of our dataset and the _SynPeDS_ dataset and evaluated the segmentation performance on all classes with the _DeeplabV3+_ segmentation model. ### Computation and evaluation of perceptional functions State-of-the-art perception functions consist of a multitude of different approaches considering the wide range of different tasks. For the experiments presented in this section, we consider the task of semantic segmentation. In this task, the perception function segments an input image into different objects by assigning a semantic label to each of the input image pixels. One of the main advantages of semantic segmentation is the visual representation of the task, which can be easily understood and analyzed for flaws by a human. In this work, we considered the DeeplabV3+ model which originated from [5] and utilizes a ResNet101 backbone. We compare our dataset to three different synthetic datasets. The first dataset is the synthetic dataset _SynPeDS_[24], consisting of urban street scenes inspired by the preceding two real-world datasets. 
The second dataset is the _GTAV_ dataset [22], created by sampling data from the 3D game of the same name. Last, the _Synscapes_ dataset [28] which is intended to synthetically re-create charateristics of the _Cityscapes_ dataset is considered. To compare our dataset we train segmentation models on each of these datasets and evaluate the segmentation performance on five real-world datasets. The first dataset is the _Cityscapes_ dataset [6], a collection of European urban street scenes in the daytime with good to medium weather conditions. The second dataset is the A2D2 by [11], similar to the _Cityscapes_ dataset it is a collection of German urban street scenes and additionally it has sequences from driving on a freeway. The third dataset is the _BDD100K_ dataset [30] a diverse dataset recorded in North-America at diverse weather conditions. Next, the _India Driving Dataset_ dataset [26], which was recorded in India and contains entirely different street scenes compared to the European or American datasets. Last, the _Mapillary Vistas_ dataset [21], a world wide dataset with emphasis on northern America. All of these datasets are labeled on a subset of 11 classes which are alike in these datasets to provide comparability between the results of the different trained and evaluated models. To measure the performance of the task of semantic segmentation the mean Intersection over Union (mIoU) from the COCO semantic segmentation benchmark task is used [23]. The mIoU is denoted as the intersections between predicted semantic label classes and their corresponding ground truth divided by the union of the same, averaged over all classes. We showed in our previous work how to use the extensive metadata accompanied to our dataset to detect data biases in person detectors due to the underlying training data used to train the bounding box detectors [15]. Another work investigated the usage of the metadata to calculate visual impairing factors, i.e., factors that lead to detrimental detection performance of a person detector such as increased occlusion or decreased contrast. Re-training a person detector with a focus on harder to detect samples, according to these factors, improves the overall detection performance [14]. #### 3.1.1 Cross domain evaluation To demonstrate the quality of our synthetic dataset we conducted several cross-domain performance experiments with other real-world automotive and synthetic datasets. This cross-domain performance analysis is also commonly referred to as generalization distance. We trained a DeeplabV3+ model on our _VALERIE22_ dataset, as well as for the _SynPeDS_, the _GTAV_ and the _Synscapes_ dataset. Next, we evaluated the segmentation performance on real-world datasets _A2D2_, _BDD100K_, _Cityscapes_, _IDD_ and _Mapillary Vistas_. As the real-world and synthetic datasets do not have exactly the same semantic annotation format, the segmentation models were trained on a subset of 11 labels per dataset to ensure consistency of classes across. The labels are defined as follows: Road and sidewalk incorporate the road-markings and the curb respectively. Further, the building, sky, car and truck classes are used, which are consistent across these datasets. Pole, traffic light and traffic sign classes are mapped from similar sub-classes in the used datasets, e.g., utility pole in _Mapillary Vistas_. The vegetation class consists of the _Cityscapes_ sub-classes terrain, i.e., plants covering the ground, and the original vegetation class, i.e., trees and bushes. 
Last, the person class is defined as all humans in the dataset, e.g., pedestrians and riders. The mIoU cross-domain generalization performance results over all 11 classes are depicted in Figure 5. Our _VALERIE22_ dataset performs best on three datasets (BDD100K, Cityscapes, IDD) and just marginally worse than the _SynPeDS_ trained model on A2D2. Compared to the mainly North-American based _Mapillary Vistas_ dataset, our dataset shows a significant domain shift; still, the cross-domain evaluation of _VALERIE22_ is significantly better than _Synscapes_ and close to _GTAV_. Most notably, our dataset outperforms the _SynPeDS_ dataset on the _Cityscapes_ dataset. This comes as a surprise as the _SynPeDS_ dataset was created to synthetically resemble the _Cityscapes_ dataset. #### 3.1.2 Number of Assets We conducted experiments to understand the influence of the diversity of the training data. Therefore, cross-domain performance is evaluated by comparing the number of unique training assets and the resulting cross-domain segmentation performance. When comparing automotive real-world and synthetic images it becomes obvious that most images and scenes in real-world data are unique, whereas in synthetic images the scenes are often composed of repetitive content, i.e., a limited amount of unique assets, which are continuously rearranged. In synthetic datasets the 3D assets, i.e., the 3D meshes and textures of objects in a scene, are expensive to create at a high fidelity and should therefore be used as much as possible. Training a pedestrian detector on a dataset that consists of too few unique person assets will lead to a strongly biased detector which is able to detect solely the few trained person assets, but will fail to generalize to other persons. Overfitting will therefore occur if the training data is of low diversity and the model will fail to generalize, but it is not obvious how much diversity is actually needed to generalize well. To understand the required diversity we investigated the semantic segmentation performance on the _person_ class of a DeeplabV3+ model trained with different subsets of the _VALERIE22_ and the _SynPeDS_ datasets. The subsets, i.e., sequences, of our dataset are described in the Appendix, whereas the subsets of the _SynPeDS_ dataset, i.e., tranches, are described in [24]. To track the number of unique person assets per subset in our dataset we just have to count the occurrences of unique asset IDs in the scene metadata files of a sequence. Each subset of both datasets represents a stage in the process of its development and therefore these dataset subsets consist of an increasing number of pedestrian assets the further the development progressed. The trained models are cross-validated on the _Cityscapes_ validation dataset to investigate the cross-domain generalization performance. Figure 6 shows the resulting number of unique person assets in the dataset subsets compared to the cross-domain person class performance measured as mIoU on the _Cityscapes_ dataset. The _VALERIE22_ subsets with higher unique person counts clearly outperform the _SynPeDS_ subsets in cross-domain performance. While a low number of unique assets will lead to overfitting on these assets, a higher number clearly benefits the generalization capabilities of the model. 
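As a sketch of how such a count can be derived from the per-scene metadata described in Section 2.2, the snippet below walks a sequence directory and collects the asset UUIDs of person instances. The directory layout, file-name pattern and the "entities"/"asset_id"/"class" field names are assumptions for illustration; the actual keys follow the metadata example listed in the appendix.

```python
import json
from pathlib import Path

def count_unique_person_assets(sequence_dir, person_class="person"):
    """Count distinct person asset UUIDs referenced by the scene metadata of one sequence.

    Assumes one JSON metadata file per frame whose "entities" mapping associates an
    integer instance label with a record holding the asset UUID and a class name
    (these field names are illustrative, not guaranteed to match the release exactly).
    """
    person_assets = set()
    for meta_file in Path(sequence_dir).glob("**/*scene*.json"):
        with open(meta_file) as f:
            meta = json.load(f)
        for instance_label, entity in meta.get("entities", {}).items():
            if entity.get("class") == person_class:
                person_assets.add(entity.get("asset_id"))
    return len(person_assets)

# Example (hypothetical path): number of unique pedestrian models used in a sequence.
# print(count_unique_person_assets("VALERIE22/sequence_0001"))
```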
Figure 5: Cross-domain segmentation performance of synthetic datasets _VALERIE22_, _SynPeDS_, _GTAV_ and _Synscapes_ evaluated on real world datasets _A2D2_, _BDD100K_, _Cityscapes_, _IDD_ and _Mapillary Vistas_. Figure 6: Unique person assets per _SynPeDS_ (blue) tranche or _VALERIE22_ (red) sequence and person class generalization performance on the _Cityscapes_ dataset. Both the _VALERIE22_ trained models and the _SynPeDS_ trained model benefit from an increasing number of person assets in terms of cross-domain performance. The model trained on our full _VALERIE22_ dataset is just \(<1\%\) worse in performance than the baseline _Cityscapes_ trained model. The results clearly indicate that the more diverse a dataset is with regard to person assets, the better the generalization capabilities of a segmentation model on this class. #### 3.1.3 Number of Training Images Training with a diversified dataset shows a significant improvement in cross-domain performance. This also raises the question of how performance differs if we have a huge number of training images with lower asset diversity compared to a smaller count of images but with a higher number of assets. A very low number of images should obviously lead to overfitting, but training with a huge dataset with only marginal differences between images could lead to overfitting as well. From our previous experiment we found that the person asset diversity in the overall _VALERIE22_ dataset is higher compared to the _SynPeDS_ dataset and this leads to a better segmentation performance. However, the number of training images is vastly different between these datasets. To understand the influence of the number of training images we compared the cross-domain performance on all 11 classes on the _Cityscapes_ dataset, again trained on subsets of the _VALERIE22_ and _SynPeDS_ datasets. Figure 7 shows the generalization results with the respective cumulative frame counts that were used to train each segmentation model. While no model reaches the baseline performance of \(82.34\%\), the cross-domain performance of models trained on sequences of our _VALERIE22_ dataset reaches higher mIoU values with far fewer image frames than the _SynPeDS_ dataset. As previously shown, the diversity in the _VALERIE22_ dataset continuously improved, which is evident by the increasing cross-domain performance, whereas the performance of the _SynPeDS_ model even decreased for tranche 4. In tranche 4 a significant pedestrian object distribution bias was introduced into the dataset, as was found in [12]. In [12] we additionally showed how to utilize the exact positioning metadata of the person assets in the images to identify the pedestrian distributions and understand if data biases were introduced. Overall, it is clearly visible in this result that only increasing the frame count by reiterating the same assets in the scenes is not a viable strategy to increase the cross-domain generalization performance. ## 4 Summary This paper describes the _VALERIE22_ dataset. The dataset and its underlying scene models are generated fully automatically with a parametric scene generation and rendering pipeline. The results of a cross-evaluation with real and other synthetic datasets demonstrate the performance of this approach. On European datasets, VALERIE22 performs best (or on par) compared with the synthetic _SynPeDS_, _GTAV_ and _Synscapes_ datasets. 
_VALERIE22_ comes with a rich set of metadata annotations making it a valuable asset for research on understanding performance and domain aspects of DNNs. ## Acknowledgement The work presented in this paper was partially funded by the BMWK project KI Absicherung.
2303.13518
Three ways to improve feature alignment for open vocabulary detection
The core problem in zero-shot open vocabulary detection is how to align visual and text features, so that the detector performs well on unseen classes. Previous approaches train the feature pyramid and detection head from scratch, which breaks the vision-text feature alignment established during pretraining, and struggles to prevent the language model from forgetting unseen classes. We propose three methods to alleviate these issues. Firstly, a simple scheme is used to augment the text embeddings which prevents overfitting to a small number of classes seen during training, while simultaneously saving memory and computation. Secondly, the feature pyramid network and the detection head are modified to include trainable gated shortcuts, which encourages vision-text feature alignment and guarantees it at the start of detection training. Finally, a self-training approach is used to leverage a larger corpus of image-text pairs thus improving detection performance on classes with no human annotated bounding boxes. Our three methods are evaluated on the zero-shot version of the LVIS benchmark, each of them showing clear and significant benefits. Our final network achieves the new state-of-the-art on the mAP-all metric and demonstrates competitive performance for mAP-rare, as well as superior transfer to COCO and Objects365.
Relja Arandjelović, Alex Andonian, Arthur Mensch, Olivier J. Hénaff, Jean-Baptiste Alayrac, Andrew Zisserman
2023-03-23T17:59:53Z
http://arxiv.org/abs/2303.13518v1
# Three ways to improve feature alignment for open vocabulary detection ###### Abstract The core problem in zero-shot open vocabulary detection is how to align visual and text features, so that the detector performs well on unseen classes. Previous approaches train the feature pyramid and detection head from scratch, which breaks the vision-text feature alignment established during pretraining, and struggles to prevent the language model from forgetting unseen classes. We propose three methods to alleviate these issues. Firstly, a simple scheme is used to augment the text embeddings which prevents overfitting to a small number of classes seen during training, while simultaneously saving memory and computation. Secondly, the feature pyramid network and the detection head are modified to include trainable gated shortcuts, which encourages vision-text feature alignment and guarantees it at the start of detection training. Finally, a self-training approach is used to leverage a larger corpus of image-text pairs thus improving detection performance on classes with no human annotated bounding boxes. Our three methods are evaluated on the zero-shot version of the LVIS benchmark, each of them showing clear and significant benefits. Our final network achieves the new state-of-the-art on the mAP-all metric and demonstrates competitive performance for mAP-rare, as well as superior transfer to COCO and Objects365. ## 1 Introduction Traditional closed vocabulary detection is limited to a fixed set of predetermined classes, and does not satisfactorily address user needs - imagine Google where you are only able to search for a predefined list of terms. Adding support for more terms requires large and costly annotation efforts, which is simply not scalable. Our objective in this paper is zero-shot open vocabulary detection, where the task is to detect any object the user queries for, in a form of a textual query (_e.g_. "Gargoyle"; Figure 1), even if it has not been seen during training. The common approach to building an open vocabulary detector is to borrow heavily from the design of standard closed vocabulary detectors (_i.e_. detectors capable of detecting only a fixed set of predetermined classes), and simply modify the bounding box classification procedure. Instead of producing the logits for the fixed set of classes via a fully connected layer, the score for the textual query is obtained via a scalar product between its embedding, produced by a language model, and the image region embedding, produced by the detector head. The zero-shot capability strongly relies on a good alignment between visual and textual representations - the only way queries not seen during training can be detected successfully is if the vision-text alignment holds even beyond the seen classes. In this work, we explicitly consider feature alignment and devise three ways of improving it: (i) Many works [24, 42, 46] choose to freeze the pre-trained language model, while others, observe this yielding bad performance [26, 31, 43] and choose to train it, but with a small learning rate to prevent "catastrophic forgetting". We also find that a frozen language model alone yields poor performance (Section 3.1.1), but propose instead to use the frozen language model together with a simple and efficient data augmentation approach, which provides superior results to both alternatives while speeding up training and decreasing accelerator memory consumption. 
(ii) A typical detector pretrains the vision backbone and language model on image-text datasets to obtain aligned image and text embeddings [14, 16, 31, 46], but also inserts many modules (feature pyramid network [28], detection heads [13, 28, 38]) that are trained from scratch. The added modules break the vision-text alignment established during pretraining, and we propose to side-step this issue by modifying their architecture. Explicitly, we add shortcuts and trainable gating layers which ensure the features are aligned at the start of detector training, and promote alignment throughout the training. (iii) Feature alignment that can be achieved from relatively scarce detection training data is sparse and limited. The alignment can be improved by making use of readily available large-scale image-text data through a self-training approach [25, 37, 41, 48]. We examine self-training via pseudo-labelling in detail and observe it is crucial to use batch-negatives. Our final approach based on all three improvements achieves the best mAP\({}_{\text{all}}\) on the challenging LVIS\({}_{\text{-R}}\) benchmark, beating the next method by more than 9% points, while achieving very competitive zero-shot results and superior transfer to COCO and Objects365. Figure 1: **Zero-shot open vocabulary detection. The detector is able to answer the queries “Gargoyle” and “Eiffel tower” despite never seeing human-annotated bounding boxes for them.** ### Related work **Zero-shot open vocabulary detection.** Zero-shot (ZS) in the context of object detection refers to never seeing even a single annotated bounding box of the class of interest during training [31]; note that this definition allows for the existence of the object in the training set images as long as no annotations are associated with it, and it permits weak supervision: an image-text dataset where the object is mentioned in the text can be used as long as no bounding boxes are provided. There is a large overlap between the ZS and open vocabulary (OV) approaches, so, confusingly, the terms are often used interchangeably, which we avoid here. Bansal _et al._[4] introduce ZS+OV detection where the classification layer of a closed vocabulary detector is replaced with the text embeddings of the class names, an approach taken by many subsequent works [11, 14, 16, 24, 31, 42, 46], including this one. Some works [16, 24, 42] take the OV classification closer to the backbone features by directly extracting them from object proposals with ROI-Align [20], and optionally distill a strong OV classifier into the detector [16]. To improve ZS performance, Detic [46] and PromptDet [14] forego the OV aspect - knowing the names of the classes of interest (in evaluation: test classes) already during training enables them to obtain high-quality weak labels, and thus improve the detection performance for those classes. **Self-training** is often used in the weakly- and semi-supervised settings to improve the low-shot performance of a detector [33, 34, 37, 48], by first training a detector, followed by using it to pseudo-label additional images, which are in turn used to train a better detector. This has been adapted by Detic [46] to the ZS scenario, whose authors argue for using region proposals rather than the detector outputs to perform the pseudo-labelling. Motivated by self-supervision and contrastive learning [1, 2, 8, 19, 32, 39], we show that using batch-negatives is crucial for obtaining good performance in self-training as well. 
**Pretraining-preserving init.** The seminal ResNet paper [21] showed the importance of shortcuts for signal propagation during training, while SkipInit [9] introduced a learnt gating that further encourages identity functions. We take most inspiration from the trainable gating of Figure 2: **A standard approach to open vocabulary detection and pretraining. A standard single-stage detector adapted to open vocabulary detection, as explained in Section 2, makes use of a language model, vision backbone, feature pyramid network (FPN), and detector heads. The vision backbone and language model are typically pretrained in a contrastive manner, while the FPN and the detector heads are initialized from scratch.** Flamingo [1] where the vision-language model is initialized such that the visual branch is ignored, thus preserving the language model pretraining. FIBER [10] uses the Flamingo-style gating to initialize a joint vision-text encoder with pretrained dual encoders. We instead aim to preserve alignment between visual and language features obtained during pretraining but broken due to the injected detection-specific modules. ## 2 Baseline detector and experimental setup In this section, we describe the baseline open vocabulary detector (Figure 2), that we build and improve upon in Section 3. We also specify the main benchmark with some implementation details, while the full details are available in Appendix A. **Open vocabulary detector.** We follow the design of the single-stage FCOS detector [38], illustrated in Figure 2. It starts by processing the image with a vision backbone, features from different blocks are then passed to the feature pyramid network (FPN [28]), followed by the application of detection heads (parameters shared across levels); we use the T-Head [13] but also experiment with the classic FCOS head [38]. Each head produces dense detections associated with three quantities: bounding box coordinates, quality, and classification features. In line with other ZS/OV approaches [4, 16, 42, 46], the classification features are dotted with the embedding of the query text, obtained via a language model (LM), producing the classification logits for the given query. The final scores for all dense detections are computed by multiplying the classification probabilities with the quality scores. Non-maximum suppression [12] is then applied to produce the final detections. Training follows the standard FCOS method and its improvements, _i.e._ the dense predictions are assigned to a ground truth box or deemed as a negative through ATSS [44], and the classification, bounding box prediction, and quality prediction branches use the focal [29], gloU [35] and IoU prediction [40] losses, respectively. Free form textual queries are naturally supported, while it is still possible to detect a desired object class as the query text for that class (hereafter also referred to as the "class embedding") can be produced by populating the default template ("A photo of a {_object_}") with the class name. **Zero-shot benchmark.** We use the LVIS v1.0 [18] object detection benchmark adapted for zero-shot evaluation; we call this setup LVIS\({}_{\text{.R}}\). Following standard practice [16, 24, 46], the _rare_ class annotations are removed from the training set, keeping only the _frequent_ and _common_ annotations (often called LVIS-base). 
Evaluation is then performed on all classes, reporting the box mAP for all classes (mAP\({}_{\text{all}}\)) and the mAP on rare classes (mAP\({}_{\text{rare}}\)), with the emphasis on mAP\({}_{\text{rare}}\) as this measures the zero-shot performance, _rare_ classes playing the role of the _unseen_ classes. As is best practice [17], we run all experiments with three different random seeds and report the mean and standard deviation. **Implementation details.** The vision backbone and the language model are pretrained contrastively on the ALIGN [22] and LTIP datasets [1] as in [1], while the FPN and the detector head are initialized from scratch. We follow a standard training procedure for LVIS\({}_{\text{.R}}\) and tune the hyper-parameters to maximize the baseline performance; full details are listed in Appendix A. With the NFNet-F0 [6] backbone we achieve an mAP\({}_{\text{all}}\) of 32.1 \(\pm\) 0.31, and mAP\({}_{\text{rare}}\) of 18.9 \(\pm\) 1.13. This is a strong baseline, as for example the baseline used in a recent work [46] achieves 30.0 \(\pm\) 0.4 and 16.3 \(\pm\) 0.7, respectively. ## 3 Three paths to alignment In this section we describe three complementary methods for improving vision-text alignment, starting from efficient text augmentation which alleviates overfitting and facilitates large scale training (Section 3.1), followed by an architectural modification that preserves and promotes the alignment (Section 3.2), and ending with an approach for self-training which further improves the detection performance on unseen classes (Section 3.3). ### Efficient text augmentation When training a zero-shot detector, a difficult choice has to be made whether to train or to freeze the language model (LM). Many works, such as OVD [42], Detic [46] and F-VLM [24], follow the natural intuition to freeze it - the language model learnt a comprehensive textual representations during pretraining, and fine-tuning for detection on a small number of classes could make it forget about the unseen classes [31]. However, freezing it also comes with down-sides - the vision model is "forced" into the language-model "mould" making it less able to adapt to the task change from pretraining which only involved global image understanding. Multiple works, such as OWL [31], GLIP [26], GLIPv2 [43], do train the language model as well, but typically use a smaller learning rate in order to prevent "catastrophic forgetting", _e.g._ OWL [31] sets it to \(1/100\) of the vision model learning rate and notes poor performance when the language model is frozen. In fact, experimentally we find that the main issue behind the poor detection performance of a system with a frozen language model is overfitting of the visual representations to the small fixed set of textual embeddings corresponding to the training classes. Augmenting of the class embeddings during training can be used as an effective way of alleviating these issues, and we consider two alternatives: (i) _Freeze + Dropout_: despite freezing the LM, enable the dropout (commonly present in Transformer-based LMs during training). (ii) _Variants_: precompute 64 variants of the class embeddings by using (i) and randomly sample a variant for each training sample. Freezing the LM makes the training faster and simultaneously saves memory due to not having to perform backpropagation or keep the optimizer state (_e.g._ for popular stateful optimizers such as SGD with momentum or Adam [23]). 
The _Variants_ approach makes it possible to completely remove the LM during training as precomputed embeddings can be used, thus making training even faster and providing further memory savings. This can be essential as detection training requires high-resolution images which for some large vision models makes it hard to fit even a batch size of 1 into the accelerator memory - it is exactly the case for our self-training NFNet-F6 experiments (Section 3.3) which are not possible without the _Variants_ method. #### 3.1.1 Results All experiments are performed on the LVIS\({}_{\text{R}}\) benchmark (_c.f._ Section 2). The effect of different approaches to training or freezing the language model are shown in Table 1. **Training** the LM with the same learning rate as the vision model is unstable and results in poor performance. Using the OWL [31] strategy of training the LM with a very small learning rate yields good performance on all classes. However, when compared with our augmentation approaches, it becomes clear that it is underperforming on unseen classes, as even this small learning rate causes some forgetting, albeit arguably not "catastrophic". **Freezing** the LM underperforms and is unstable, testifying to overfitting; this is equivalent to the _1 Variant_ scenario which performs equally badly. However, simply using dropout while keeping the network frozen performs very well - achieving the best mAP on the unseen classes. Furthermore, the _64 Variants_ approach, where the variants of the class embeddings are precomputed and sampled during training, performs equally well while enabling us to remove the LM inference from training. This in turn achieves a 9% reduction in memory use and a speedup of 53% vs _Freeze_ + _Dropout_, and 33% memory savings and a 2\(\times\) speedup vs the LM-training approaches. **Templates.** An alternative to the _Variants_ approach is to use many different text templates (_e.g._ "A close-up photo of the {_object_}" or "A low resolution photo of the {_object_}") to compute the class embeddings and randomly sample them for each training sample [31]. The 8 templates are formed by combining the "7 best" CLIP templates [32] and the default one ("A photo of a {_object_}"), while the 80 templates are the CLIP 80 templates (the default is already included). The use of multiple templates during training has a similar effect to _64 Variants_ in that it trains stably and outperforms the LM-training approaches. However, for good performance it requires inference with multiple templates [31] (_i.e._ class probabilities are averaged across the different templates) which increases complexity and inference memory requirements, while still being beaten by our simple _Variants_ approach. We hypothesise this is because the templates have been designed for ImageNet classification and contain obscure concepts such as "An origami {_object_}". It is not easy to design many good templates, so our approach to simply compute _64 Variants_ of the class embedding by using the natural default template and enabling dropout is more effective. 
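A minimal sketch of the _Variants_ scheme follows: class embeddings are precomputed with the frozen language model kept in training mode so that dropout stays active, and one variant per class is sampled at each training step. The `text_encoder` call, the embedding normalization and the logit scale are assumptions standing in for the actual pretrained LM interface; only the default template and the dot-product classification follow the description in the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def precompute_variants(text_encoder, class_names, num_variants=64,
                        template="A photo of a {}"):
    """Embed each class name num_variants times with dropout active; the LM stays frozen."""
    text_encoder.train()  # enables dropout; no gradients are taken, so weights are untouched
    variants = []
    for _ in range(num_variants):
        prompts = [template.format(name) for name in class_names]
        emb = text_encoder(prompts)            # (num_classes, dim); assumed interface
        variants.append(F.normalize(emb, dim=-1))
    return torch.stack(variants)               # (num_variants, num_classes, dim)

def sample_class_embeddings(variants):
    """Pick one random variant per class for the current training step."""
    v, c, _ = variants.shape
    idx = torch.randint(v, (c,))
    return variants[idx, torch.arange(c)]      # (num_classes, dim)

def classification_logits(region_features, class_embeddings, scale=20.0):
    """Open-vocabulary classification: dot product of region features and text embeddings.

    The scale factor is an illustrative temperature, not a value taken from the paper.
    """
    region_features = F.normalize(region_features, dim=-1)
    return scale * region_features @ class_embeddings.t()   # (num_regions, num_classes)
```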
### Alignment preserving architecture As outlined in Section 1 and shown in Figure 2, a typical setup that we also follow is to: (1) pretrain a vision-language model, (2) construct the open-vocabulary detector by re-using the vision and text backbones and adding detection-specific layers (feature pyramid network (FPN) and detector heads), (3) initialize the backbones from (1) while initializing the rest (FPN, heads) from scratch, and (4) train all or subsets of parameters _e.g._ freezing the LM backbone. The disconnect between steps (1) and (3) stands out - the vision and language backbones were trained together to produce aligned representations of their respective modalities, and we sever that alignment by introducing many layers in-between that are trained from scratch. The detector training then spends a long time seeking to realign the features, and it is very likely that during this initial chaos some of the pre-trained alignment is forever lost. Here we introduce small \begin{table} \begin{tabular}{l c c c c} \hline \hline LM train or freeze & mAP\({}_{\text{all}}\) & mAP\({}_{\text{race}}\) & speed & mem. \\ \hline Train w/ lr-ratio 1 & 12.0 \(\pm\) 15.9 & 5.2 \(\pm\) 7.0 & 1.3 & 14.1G \\ Train w/ lr-ratio 0.01 & 33.2 \(\pm\) 8.0 & 16.5 \(\pm\) 095 & 1.3 & 14.1G \\ Freeze & 24.4 \(\pm\) 13.2 & 13.0 \(\pm\) 8.51 & 1.9 & 10.4G \\ Freeze + Dropout & 31.8 \(\pm\) 0.20 & **18.7 \(\pm\) 1.39** & 1.7 & 10.4G \\ 8 Templates, infer. 1 & 31.3 \(\pm\) 0.31 & 16.4 \(\pm\) 1.86 & **2.6** & **9.4G** \\ 8 Templates, infer. 8 & 31.5 \(\pm\) 0.10 & 17.1 \(\pm\) 1.28 & **2.6** & **9.4G** \\ 80 Templates, infer. 1 & 31.6 \(\pm\) 0.25 & 17.4 \(\pm\) 0.38 & **2.6** & **9.6G** \\ 80 Templates, infer. 8 & 31.9 \(\pm\) 0.06 & 18.1 \(\pm\) 0.06 & **2.6** & **9.6G** \\ 1 Variant & 16.3 \(\pm\) 13.0 & 6.9 \(\pm\) 7.99 & **2.6** & **9.4G** \\ 64 Variants & 32.1 \(\pm\) 0.31 & **18.9 \(\pm\) 1.53 & **2.6** & **9.5G** \\ \hline \hline \end{tabular} \end{table} Table 1: **To train or to freeze the language model (LVIS\({}_{\text{R}}\) benchmark).** Speed is measured as the number of gradient steps per second, while ‘mem.’ denotes the peak accelerator memory usage. Methods where at least 1 out of the 3 training runs has failed are in red. Our _Freeze_ + _Dropout_ and _64 Variants_ approaches perform best on the unseen classes, while speeding up training and requiring less memory. architectural changes to the detector-specific layers which serve to maintain the alignment of vision-text features at the start, and promote it throughout the detector training. The architectural modifications are shown in Figure 3 and consist of strategically adding shortcut connections and trainable gating layers [1]. A trainable gating layer, with inputs \(x\) and \(y\) and a trainable scalar parameter \(\alpha\), produces the output \(o=x(1-\tan\alpha)+y\tan\alpha\), where \(\alpha\) is initialized to 0 meaning \(o=x\) at the start of training. The shortcuts and gates are added such that at the start of training, the features from the end of the vision backbone are "forwarded" through the FPN and detector heads all the way to the final classification features. In other words, at the start of training, the detector head classification features at all levels of the pyramid are equal to the backbone features. Recall that the vision and text backbones have been pretrained for alignment. 
This means that due to the specific gated-shortcut architecture and initialization, the detection head classification features (now equal to the backbone features) are already aligned with the text embeddings at the beginning of detector training. Thus, the training is improved as it starts from a good initial guess for the object classification and only needs to learn to improve the classification and bounding box prediction, rather than spend effort in re-discovering the vision-text alignment. **Aligned architecture design.** Here we explain in more detail the recipe for converting an architecture to its gated-shortcut version. As explained above, the overall aim is to forward the final backbone features (as they are pretrained to be aligned with the text embeddings) to the end of the detection head. So one only needs to follow the "flow" of the final backbone features and apply the following operations: (i) if they are mixed with another signal, add a gate that zeroes-out the second signal at the start of training, (ii) if an alignment preserving operation is performed (_e.g._ upsampling) do nothing, (ii) if an alignment damaging transformation is performed (_e.g._ a convolution), make a shortcut connection and add a gate such that the output equals the shortcut at the start of training. These principles and the resulting architecture are illustrated in Figure 3, where the FPN is augmented with the shortcuts and gates, while a single shortcut+gate combina Figure 3: **Alignment preserving architecture (APA).** The standard single-stage object detector architecture [Backbone \(\rightarrow\) Feature pyramid network (FPN) \(\rightarrow\) Detector heads] is augmented with shortcuts and trainable gating layers (��⃝�), which at init propagate the green input and block the red input. The output is computed as \(x(1-\tan\alpha)+y\tan\alpha\), and \(\alpha=0\) at init. The green arrows show the propagation of the last backbone features at init all the way to the final detector head classification features. Light blue and light yellow parallelograms represent the backbone and FPN feature maps, respectively; circles with \(\uparrow\)2 and \(\downarrow\)2 are non-trainable up- and down-sampling, squares are trainable modules (_e.g._ convolutions), and ❝�) are the trainable gates; convolution blocks show the kernel size and potential striding (’s2’: stride 2). The standard architecture (_i.e._ without the shortcuts and gates) is shown in the supplementary material. tion is used around the entire detector head. This makes it easy to apply the design to different detector heads (_e.g._ FCOS [38] vs T-Head [13]) which contain potentially more complex operations. #### 3.2.1 Results Table 2 shows the results of our the alignment preserving architectures (APA). **Alignment preserving vs vanilla architecture.** Coupled with the NFNet-F0 vision backbone and the FCOS [38] detector head, our design improves mAPall and mAPare by +2.3% and +3.7%, respectively. Similarly, for the better performing T-Head [13] APA achieves +1.7% and +2%, respectively. It is impressive that the improvements transfer to the larger NFNet-F6 network which already exhibits an excellent mAPall of 41.6%, which is further boosted by APA by +1.9% to reach 43.5%. The largest improvement can be observed for mAPare where the alignment preserving architecture tops the strong baseline by +6.5% and yields 27.6%. 
**In the FPN, Head or everywhere?** Table 2 shows it is more important to apply APA onto the detection head than the FPN - we speculate that this is because the detection head is much deeper and therefore without APA it takes longer to learn to re-learn the feature alignment in the detector head than in the FPN. However, applying APA onto both simultaneously clearly dominates, confirming our intuition that maintaining alignment from the very start of training is important. ### Self-training Text augmentation (Section 3.1) and the alignment preserving architecture (Section 3.2) bring significant gains in zero-shot performance due to the improved feature alignment. However, it is still ambitious to ask for the detector to extrapolate to completely unseen classes. In this section, we investigate how to use self-training via pseudo-labelling to further improve the feature alignment beyond the seen classes. We propose a simple three-stage approach. First, a good open vocabulary detector is trained using the previous two improvements (Sections 3.1 and 3.2), called _2Ways_. The detector is then used to pseudo-label an additional dataset that contains only images-text pairs scraped from the internet, _i.e._ it contains weak image-level information (the text), without any human supervision nor finer-grained annotations such as classes, bounding boxes or segmentations. The detector uses the text embedding of the entire caption as the object query, and we simply use the single highest scoring box per image if it passes a confidence threshold of 0.25. Finally, a new, stronger, open vocabulary detector (_3Ways_) is trained by combining the strongly supervised data (LVIS.R) and the pseudo-labelled dataset, and treating the pseudo-labels as ground truth. It is worth elaborating on the exact details of the final training stage. Recall from Section 2 that for training with the true ground truth annotations, we follow the standard training procedure; _i.e._, certain detector head classification features are assigned to be positives for particular classes (in the open vocabulary case, its text embedding) based on their pyramid level and location in the feature map [44]. The same features are negatives for other classes, and all remaining features are negatives for all classes. For example, if an image has a _dog_ in it and no _cat_, we have: (i) some features depending on scale and location are positives for _dog_, (ii) features that are not positives for _dog_ are negatives for _dog_, (iii) all features are negatives for _cat_. Training then proceeds with the standard per-class binary focal loss [29]. We propose an analogous mechanism when training with the pseudo-labels. The single pseudo-bounding box per image is deemed to correspond to the entire caption, and other captions in the batch are used as negatives. Therefore, for the \(i\)-th image in the batch, we have: (i) some features are deemed positive for the \(i\)-th caption again following [44], (ii) features that are not positive are negatives for the \(i\)-th caption, (iii) all features are negatives for the \(j\)-th caption where \(i\neq j\); we call this the use of "batch-negatives". The same binary focal loss is used for training. As will be shown in Section 3.3.1 and is commonly observed in the self-supervised literature [2, 8, 19, 39], batch-negatives are crucial to obtain good performance. 
**Relation to other methods.** While multiple works have used self-training with pseudo-labelling to boost the detector performance, none follow the above approach. Pseudo-labelling is popular in the weakly-supervised (classes in the image are specified but not their location) or semi-supervised low-shot works [33, 34, 37, 48] with closed vo \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multicolumn{2}{c}{Architecture} & \multicolumn{2}{c}{APA} & \multicolumn{2}{c}{LVIS.R performance} \\ \hline Backb. & Head & FPN & Head & mAPall & mAPare \\ \hline NF-F0 & FCOS & & & 30.4 \(\pm\) 0.21 & 16.1 \(\pm\) 1.66 \\ NF-F0 & FCOS & ✓ & ✓ & 32.7 \(\pm\) 0.55 & 19.8 \(\pm\) 0.34 \\ \hline NF-F0 & T-Head & & & 32.1 \(\pm\) 0.51 & 18.9 \(\pm\) 1.13 \\ NF-F0 & T-Head & ✓ & & 32.4 \(\pm\) 0.44 & 18.3 \(\pm\) 1.58 \\ NF-F0 & T-Head & & ✓ & 33.3 \(\pm\) 0.55 & 19.6 \(\pm\) 0.49 \\ NF-F0 & T-Head & ✓ & ✓ & **33.8 \(\pm\) 0.55** & **20.9 \(\pm\) 0.34** \\ \hline NF-F6 & T-Head & & & 41.6 \(\pm\) 0.71 & 21.1 \(\pm\) 0.40 \\ NF-F6 & T-Head & ✓ & ✓ & **43.5 \(\pm\) 0.22** & **27.6 \(\pm\) 0.80** \\ \hline \hline \end{tabular} \end{table} Table 2: **Alignment preserving architecture (APA).** All networks were trained with the _64 Variants_ approach (Section 3.1). The added shortcuts and trainable gating layers consistently improve the detection performance for both the backbone and detection head architectures. cabulary detectors. This means that pseudo-labelling is easier as all classes are seen during training, but also that the self-training follows exactly the same setup as the initial training, where the negatives are other classes and there is no need or way to use batch-negatives. Our _3Ways_ uses the pseudo-detections from the _2Ways_ detector conditioned on the image caption, while Detic [46] computes the pseudo-detection independently of the caption by taking the largest bounding box proposal. Detic adopts batch-negatives but does so on the image-level rather than the bounding box-level; a more detailed discussion is available in Appendix C. GLIPv2 [43] also uses batch-negatives, but does not consider the zero-shot scenario. While we use the entire caption at once to produce pseudo-detections, GLIPv2 extracts noun phrases and pseudo-labels them individually. This could provide better quality pseudo-labels, but comes with its downsides as well, in that it depends on the quality of the text parser and requires additional book-keeping and special handling of repeated noun phrases in the batch. #### 3.3.1 Results **Implementation details.** We start from the strong _2Ways_ detector from Section 3.2 and verify that longer training on LVIS.R does not improve results further. Conceptual Captions 12M [7] (CC12M), an image-caption dataset gathered automatically from the internet containing 12M images, is used for all self-training experiments. The self-training starts from the _2Ways_ detector checkpoint and continues training, where each training step simultaneously optimizes the losses on a batch of LVIS images with true ground-truth and a batch of pseudo-labelled images from CC12M. In line with [46], we find we can reduce the resolution of the CC12M images (for LVIS.R we use \(800\times 1024\), while for CC12M \(400\times 512\) is sufficient) thus fitting a larger number of images in the batch and allowing for more batch-negatives. 
**Performance.** Table 3 shows the self-training results - it is clear that our self-training, _3Ways_, significantly improves both metrics and on both backbones, providing an especially large boost for the unseen classes. **Comparison to Detic [46].** We do not simply copy the numbers from [46] as this wouldn't be a fair comparison - we use different visual backbones, detector type, the self-training dataset, _etc_. Furthermore, Detic [46] focuses on the zero-shot and not open-vocabulary aspect (_e.g._ the full approach specifically searches for the LVIS-rare classes in the captions and uses this as the pseudo-label). Therefore, we reimplement the open-vocabulary version of Detic\({}^{\dagger}\), using our _2Ways_ detector (see Appendix C). Detic\({}^{\dagger}\) performs well, giving improvements over _2Ways_. However, our self-training approach also significantly beats Detic\({}^{\dagger}\). We also compare to another approach proposed by [46] where the pseudo-detection is simply taken to be the image bounding box. In fact, _Image bbox_ beats Detic\({}^{\dagger}\) slightly, but _3Ways_ is still superior. **Importance of batch-negatives.** As an ablation, we also train versions of the _Image bbox_ and our _3Ways_ approaches where batch-negatives are not used. The results show a clear large benefit of using batch-negatives - without them there is barely any gain from self-training as the task is too easy. ## 4 Results and discussion Comparison on an equal footing with the state-of-the-art is hard because most works use different visual backbones, pretraining, detector architecture, training procedure, augmentations, _etc_. Sections 3.1.1, 3.2.1 and 3.3.1 demonstrate the performance of each of our methods individually through fair comparisons where all these aspects are identical, while here we resort to the standard practice of reporting system-level performance (Table 4). We list best performing methods that are truly zero-shot and open vocabulary, and are trained following the LVIS benchmark rules (_i.e._ the only detection annotations used are the LVIS training set with the rare classes removed). For example, this criterion disqualifies PromptDet [14] and the best performing versions of Detic [46] (they actively use the list of LVIS classes to pseudo-label additional images, _i.e._ not open vocabulary), FIBER [10] and some OWL [31] experiments (train on many more detection annotations), GLIP [26] and GLIPv2 [43] (rare classes are not removed during training so not zero-shot, and more training data is used), _etc_. Our final open vocabulary detector, _3Ways_, achieves the highest mAPall and competitive mAPare. On mAPall it sets the state-of-the-art by a large margin - the largest network (NFNet-F6 with 440M parameters) achieves 44.6% \begin{table} \begin{tabular}{l c c c} \hline \hline Self-training method & Backb. & mAPall & mAPare \\ \hline 2Ways (no self-training) & NF-F0 & 33.8 \(\pm\) 0.15 & 20.9 \(\pm\) 0.34 \\ Image bbox w/o batch-negs & NF-F0 & 33.9 \(\pm\) 0.30 & 20.9 \(\pm\) 1.06 \\ Image bbox & NF-F0 & 35.1 \(\pm\) 0.26 & 24.2 \(\pm\) 1.37 \\ Detic [46] open-voc. 
\({}^{\dagger}\) & NF-F0 & 34.8 \(\pm\) 0.32 & 23.4 \(\pm\) 1.49 \\ 3Ways w/o batch-negs & NF-F0 & 34.2 \(\pm\) 0.15 & 20.7 \(\pm\) 0.35 \\ 3Ways & NF-F0 & **35.7 \(\pm\) 0.20** & **25.6 \(\pm\) 1.12** \\ \hline 2Ways (no self-training) & NF-F6 & 43.5 \(\pm\) 0.12 & 27.6 \(\pm\) 0.80 \\ 3Ways & NF-F6 & **44.6 \(\pm\) 0.31** & **30.1 \(\pm\) 1.30** \\ \hline \hline \end{tabular} \end{table} Table 3: **Self-training (LVIS.R benchmark).** CC12M is pseudo-labelled with the _2Ways_ detector (Sections 3.1 and 3.2). Detic\({}^{\dagger}\) is our reimplementation of Detic [46], Image bbox uses the entire image as the pseudo-detection while 3Ways uses the 2Ways’s best prediction; Section 3.3 explains all methods. Self-training helps, and batch-negatives are important. (and 43.5% without self-training) while the best second is at 35.3% (OWL [31]'s VIT-H/14 with 630M parameters), making for an impressive improvement of 9.3% points (8.2% without self-training). Even our smaller network (NFNet-F0 with 71M parameters) with self-training beats the previously best reported performance. On the unseen classes, mAP\({}_{\text{rate}}\), we compare favourably to the latest approaches. Only the concurrent F-VLM [24] method performs better, achieving 32.8% for the largest R50x64 model, while our equally large NFNet-F6 is a close second at 30.1%. It should be noted that, when trained on full LVIS, it has been observed that mAP\({}_{\text{rate}}\) inherently has high variance as these are the long tail categories, and an absolute difference of 1% might not be significant [17]; for example, our best performing run achieves 31.7%. The shared third place with a significantly lower mAP\({}_{\text{rate}}\) value of 25.6% is achieved by OWL with VIT-L/14 (310M parameters) and our much smaller NFNet-F0 model (71M parameters). The good performance is partially due to the use of the strong vision backbone. However, simply using it out of the box (_0Ways_) fails (Table 4, _c.f._ Section 3.1.1) and our methods are required to unlock its power. **Transfer** capabilities are tested by evaluating the LVIS\({}_{\text{-R}}\) trained networks on COCO [27] and Objects365-v1 [36]. Table 5 shows the networks achieve impressive performance: on COCO even our smallest model without self-training beats all previous approaches, while on Objects365 (estimated by [24] to have only 63% overlap with LVIS\({}_{\text{-R}}\) training classes) _3Ways_ improves upon the previous best mAP by 5.1% points. Qualitative results are provided in Appendix D. ## 5 Conclusions We introduced three methods for improving alignment between visual and text features, which in turn boosts zero-shot detection performance. They reduce overfitting and forgetting of concepts learnt during pretraining, improve training speed while decreasing accelerator memory requirements, and make use of large image-text datasets without costly detection annotations. We achieve superior mAP\({}_{\text{all}}\) on the challenging LVIS\({}_{\text{-R}}\) benchmark, and transfer to COCO and Objects365, while obtaining mAP\({}_{\text{rate}}\) competitive with concurrent work. Further research directions include investigating how to even more efficiently make use of plentiful image-text data with improved pseudo-labelling, losses, or combinations with self-supervised learning. 
\begin{table} \begin{tabular}{l l r r r r r r} \hline \hline Method & Backbone & \#Params & Self-training & mAP\({}_{\text{all}}\) & mAP\({}_{\text{rare}}\) & mAP\({}_{\text{comm}}\) & mAP\({}_{\text{freq}}\) \\ \hline DetPro [11] & R50 & 26M & ✓ & 30.4 & 17.4 & & \\ DetPro [11] & R50 & 26M & & 28.4 & 20.8 & 27.8 & 32.4 \\ RegionCLIP [45] & R50x4 & 87M & ✓ & 32.1 & 22.0 & 32.1 & 36.9 \\ OWL [31] & VIT-L/14 & 303M & & 34.7 & 25.6 & & \\ OWL [31] & VIT-H/14 & 627M & & 35.3 & 23.3 & & \\ F-VLM [24] & R50x4 & 87M & & 28.5 & 26.3 & & \\ F-VLM [24] & R50x64 & 420M & & 34.9 & **32.8** & & \\ 0Ways [this work] & NFNet-F0 & 71M & & 16.3 \(\pm\) 13.0 & 6.9 \(\pm\) 7.89 & 13.2 \(\pm\) 11.3 & 23.7 \(\pm\) 11.7 \\ 1Ways [this work] & NFNet-F0 & 71M & & 32.1 \(\pm\) 0.31 & 18.9 \(\pm\) 11.3 & 29.5 \(\pm\) 0.15 & 40.9 \(\pm\) 0.08 \\ 2Ways [this work] & NFNet-F0 & 71M & & 33.8 \(\pm\) 0.15 & 20.9 \(\pm\) 0.34 & 32.4 \(\pm\) 0.20 & 41.0 \(\pm\) 0.05 \\ 3Ways [this work] & NFNet-F0 & 71M & ✓ & 35.7 \(\pm\) 0.20 & 25.6 \(\pm\) 11.2 & 34.2 \(\pm\) 0.05 & 41.8 \(\pm\) 0.02 \\ 0Ways [this work] & NFNet-F6 & 440M & & 0.8 \(\pm\) 0.19 & 0.4 \(\pm\) 0.12 & 0.7 \(\pm\) 0.16 & 1.0 \(\pm\) 0.24 \\ 1Ways [this work] & NFNet-F6 & 440M & & 41.6 \(\pm\) 0.17 & 21.1 \(\pm\) 0.40 & 42.9 \(\pm\) 0.19 & 49.2 \(\pm\) 0.09 \\ 2Ways [this work] & NFNet-F6 & 440M & & 43.5 \(\pm\) 0.12 & 27.6 \(\pm\) 0.08 & 44.9 \(\pm\) 0.10 & 48.8 \(\pm\) 0.01 \\ 3Ways [this work] & NFNet-F6 & 440M & ✓ & **44.6**\(\pm\) 0.38 & 30.1 \(\pm\) 1.35 & **46.0**\(\pm\) **0.47** & **49.3**\(\pm\) **0.08** \\ \hline \hline \end{tabular} \end{table} Table 4: **State-of-the-art for zero-shot open vocabulary detection on LVIS\({}_{\text{-R}}\). \({}^{(m)}\)** denotes that Detic [46] only reports the mask mAPs, box mAP should not be much higher. _0Ways_, _1Ways_, _2Ways_ and _3Ways_ refer to the baseline detector with the frozen LM and no text augmentation, and cumulative application of our three methods from Sections 3.1, 3.2 and 3.3, respectively. _3Ways_ performs well, yielding the best mAP\({}_{\text{all}}\) by a large margin and achieving a favourable mAP\({}_{\text{rare}}\). \begin{table} \begin{tabular}{l l r r r} \hline \hline Method & Backbone & \#Params & COCO & O365 \\ \hline ViLD [16] & R50 & 26M & 36.6 & 11.8 \\ DetPro [11] & R50 & 26M & 34.9 & 12.1 \\ F-VLM [24] & R50x4 & 87M & 36.0 & 14.2 \\ F-VLM [24] & R50x64 & 420M & 39.8 & 17.7 \\ 2Ways [this work] & NF-F0 & 71M & 40.6 & 14.6 \\ 3Ways [this work] & NF-F0 & 71M & 41.5 & 16.4 \\ 2Ways [this work] & NF-F6 & 440M & 46.5 & 20.3 \\ 3Ways [this work] & NF-F6 & 440M & **46.9** & **22.8** \\ \hline \hline \end{tabular} \end{table} Table 5: **Transfer.** The LVIS\({}_{\text{-R}}\) trained networks are evaluated on COCO [27] and Objects365-v1 [36] without any additional training. **Acknowledgments.** We thank Evan Shelhamer for fruitful discussions and Iain Barr for help with the codebase.
2302.13157
Improving Energy Management of Hybrid Electric Vehicles by Considering Battery Electric-Thermal Model
This article proposes an offline Energy Management System (EMS) for Parallel Hybrid Electric Vehicles (PHEVs). Dividing the torque between the Electric Motor (EM) and the Internal Combustion Engine (ICE) requires a suitable EMS. Batteries are vital to HEVs and significantly impact overall vehicle cost and performance. High temperature and high battery State of Charge (SOC) are the main factors that accelerate battery aging. SOC is the most critical state variable in EMS and was usually considered the only dynamic variable in previous studies. For simplicity, the battery temperature was often assumed to be constant, and the effect of EMS on temperature change was neglected. In this paper, we first apply Dynamic Programming (DP) to a PHEV without considering battery temperature variations. Then, the battery model is improved by modeling the cooling system to take into account temperature variations and show how neglecting the thermal dynamics of the battery in EMS is impractical. Finally, by integrating battery temperature as a state variable in the optimization problem, a new EMS is proposed to control battery temperature and SOC variation. Simulation results of the tested vehicle show that the proposed method controls battery charge and temperature. The proposed EMS method prevents uncontrolled fluctuations in battery temperature and reduces its deterioration rate.
Arash Mousaei
2023-02-25T20:55:17Z
http://arxiv.org/abs/2302.13157v1
Improving Energy Management of Hybrid Electric Vehicles by Considering Battery Electric-Thermal Model ###### Abstract This article proposes an offline Energy Management System (EMS) for Parallel Hybrid Electric Vehicles (PHEVs). Dividing the torque between the Electric Motor (EM) and the Internal Combustion Engine (ICE) requires a suitable EMS. Batteries are vital to HEVs and significantly impact overall vehicle cost and performance. High temperature and high battery State of Charge (SOC) are the main factors that accelerate battery aging. SOC is the most critical state variable in EMS and was usually considered the only dynamic variable in previous studies. For simplicity, the battery temperature was often assumed to be constant, and the effect of EMS on temperature change was neglected. In this paper, we first apply Dynamic Programming (DP) to a PHEV without considering battery temperature variations. Then, the battery model is improved by modeling the cooling system to take into account temperature variations and show how neglecting the thermal dynamics of the battery in EMS is impractical. Finally, by integrating battery temperature as a state variable in the optimization problem, a new EMS is proposed to control battery temperature and SOC variation. Simulation results of the tested vehicle show that the proposed method controls battery charge and temperature. The proposed EMS method prevents uncontrolled fluctuations in battery temperature and reduces its deterioration rate. Energy Management System, Dynamic Programming, Parallel Hybrid Electric Vehicles, Battery Temperature, Corrosion ## I Introduction Two-thirds of the petroleum used worldwide is consumed by cars, and about half of this amount is related to passenger cars. Pollution caused by fuel consumption and dependence on external sources has motivated much research and advances in replacing conventional powertrains based on Internal Combustion Engines (ICE) with renewable and clean energy sources. On the other hand, pure Electric Vehicles (EVs), due to the high cost and low capacity of their batteries, do not meet general needs except for special applications. As a result, one of the primary motivations for the production of Hybrid Electric Vehicles (HEVs) has been to combine the high power of Conventional Vehicles (CVs) with the low emissions of EVs [1]. HEVs have two paths to provide their driving force: the ICE, and the Electric Machine (EM) supplied by the battery. The most crucial issue for control engineers in these vehicles is the EMS. The proper performance of HEVs, regardless of the type of structure and the characteristics of the various components, is highly dependent on the EMS. The EMS control algorithm in HEVs specifies how to divide the demanded power or torque between the electric and thermal units of the vehicle so that a driving cycle is followed with minimal fuel consumption and pollutant emissions [1,2]. In the last two decades, various methods have been presented for the EMS of these vehicles [1-7]. EMS control strategies divide into two categories: rule-based and optimal-control-based. Although rule-based methods are more straightforward, they do not provide the optimal response; optimal-control-based methods are used to obtain it. The standard classification of methods based on optimal control divides them into offline and online forms. 
The common assumption of all offline methods is that all road and driving conditions are known in advance. This assumption is necessary to reach the optimal answer. Although the offline methods are not directly applicable to the online control of vehicles, their response can be used as a benchmark of the best achievable performance, to compare online techniques, and to define reference trajectories for online operation. Online methods usually minimize an instantaneous cost function. Therefore, all online methods are suboptimal, and they are judged by how close they come to the globally optimal results obtained by offline methods. Online methods that only use instantaneous information are far from the optimal answer. The methods that use pattern recognition and process previously stored information depend highly on the conditions and the driving cycle for which they are designed, and do not give a suitable answer for other driving cycles [1-8]. The essential methods based on optimal control are Dynamic Programming (DP) for offline energy management [3,9] and Pontryagin's Minimum Principle (PMP) for online energy management [4,9]. In the standard EMS, the cost function is the amount of fuel consumption, and the only dynamic variable is the State of Charge (SOC) of the battery. Therefore, parts of the vehicle, like the battery, are modeled using lookup tables and static maps. Due to the size and cost of the batteries, especially in plug-in HEVs, and the vulnerability of the battery to high temperatures, careful management of its continuous charging and discharging and optimal battery use are necessary and vital [5]. Many studies have been done to optimize the use of batteries in HEVs. [8] and [10,11] have studied the modeling of battery temperature changes. A group of articles has modeled battery temperature changes without including them in the EMS problem. Another group of articles also considered the State of Health (SOH) of the battery in the optimization problem but assumed the battery temperature to be constant. In [12-14], battery life and its degree of SOH are modeled. Unsuitable working conditions reduce battery life and cause increased internal resistance and a decrease in capacity, and [12] and [14] have modeled this behavior. In [7], the battery life is considered in the EMS problem, but the battery temperature is assumed to be constant. The design of different systems for the thermal management of batteries has been studied in [15] and [16]. References [17] and [18] have presented methods to estimate battery SOC, and in [19] the optimal range of battery temperature has been investigated. In [20], the battery's lifetime is modeled as depending only on its current; a more accurate model must consider the other parameters affecting the battery's performance for a general conclusion. In [21], the SOH was treated as a state variable instead of being evaluated in the cost function. Because the ranges of the SOH and SOC variables are very different, two controllers were used to follow the reference, but the optimality of the response was not guaranteed. Considering that battery temperature is a critical and essential factor in corrosion, a penalty on battery temperature was considered in [22]. However, battery temperature is the only factor regarded in this reference, while other elements also reduce battery lifetime, even at mild temperatures. In [23], optimal energy management was presented with a cost function that includes fuel consumption and battery health. This article used a model for corrosion dependent on battery temperature and the current passing through it. 
The limitation of this article was that the vehicle's battery was modeled statically, and the battery's temperature was assumed to be constant. Reference [24] has shown that the battery lifetime of vehicles in temperate areas (on average between -3 \({}^{\circ}\)C and 32 \({}^{\circ}\)C) is 73 to 94% longer than in tropical areas (above 32 \({}^{\circ}\)C). Reference [25] presented a method to estimate battery lifetime and investigated the effect of driving style on the battery in addition to temperature and current. The reviewed references show the harmful effects of increasing the battery temperature on the battery and on the vehicle's performance; still, this effect has not been incorporated into an EMS. Also, the references that have addressed the issue of EMS have considered the battery temperature as constant or ignored the dependence of the parameters on the battery temperature. In this article, an offline EMS using DP is first implemented for an HEV whose model is presented in [1]. In this model of the HEV, a cooling system that uses air as a heat transfer fluid has been used. The battery model is improved by modeling this cooling system, including the battery's electric and thermal dynamics. With this, battery temperature changes are modeled as a dynamic variable, and then battery temperature is added to the fuel consumption optimization problem as a controlled state variable. This way, the proposed energy management system has two state equations: SOC and battery temperature. It will be shown that battery temperature and SOC remain under control during the energy management process and do not leave the desired range. Therefore, the innovations of this article are: 1. Improving the vehicle model by modeling the battery cooling system and considering battery temperature changes for an HEV. 2. Offline energy management by the DP method with two state equations, SOC and battery temperature, in the optimization problem based on the improved model. ## II Dynamic Model of HEV Unlike conventional and electric vehicles, HEVs have at least two sources of power for propulsion (Figure 1): an ICE or a fuel cell as a fuel converter, and an Electric Machine (EM) with a battery, supercapacitor, or flywheel as an energy storage source. For energy management, the longitudinal dynamics of the vehicle are used to model the chassis, relating the vehicle speed to the traction force according to equation (1); Figure 2 shows the forces entering this relation. \[\mathrm{m}\frac{\mathrm{d}V(t)}{\mathrm{d}t}=F_{t}(t)-F_{\mathrm{res}}(t) \tag{1}\] Here \(F_{t}\) is the traction force and \(F_{\mathrm{res}}\) lumps the resistive (aerodynamic, rolling, and grade) forces. ## III Model of Battery The battery of HEVs is formed by connecting several identical cells in series. Therefore, one cell is usually modeled, and the battery's output voltage will be the cells' total voltage. In the discussion of energy management, the electrical equivalent circuit of Figure (3) is usually used to model the battery. This equivalent circuit includes the internal resistance of the battery (\(\mathrm{R_{b}}\)) and an ideal voltage source called \(\mathrm{V_{oc}}\) as the battery's open-circuit voltage. For simplicity, the battery temperature is assumed to be constant, and the internal resistance and open-circuit voltage then depend only on the SOC. The SOC of the battery is calculated according to equation (2). 
We have: \[SOC(t)=\frac{S(t)}{S_{0}} \tag{2}\] As mentioned in the introduction, to control the battery's temperature, increase its lifetime, and improve the vehicle's performance, it is necessary to consider the changes in the temperature of the battery in the EMS. For this purpose, the model proposed in [12] and [14] is used for the battery. The battery model consists of an electrical part and a thermal part. The electrical model calculates the battery voltage and SOC, and the thermal model predicts the battery temperature. The electrical sub-system of the battery model is shown in Figure (3). Fig. 1: The structure of the Parallel HEV Fig. 2: Forces acting on the vehicle Fig. 3: Equivalent circuit of the battery For this model, the SOC changes can be calculated as follows: \[\frac{dSOC(t)}{dt}=-\frac{I_{b}(t)}{S_{0}} \tag{3}\] \[V_{0}(t)=V_{oc}(t)-R_{b}(t)I_{b}(t) \tag{4}\] In the above relationships, S(t) is the battery's current charge, S\({}_{0}\) is its total capacity, and I\({}_{b}\) is the current of the battery, which is obtained from equation (5). \[\mathrm{I_{b}}=\frac{V_{oc}-\sqrt{V_{oc}^{2}-4R_{b}P_{m}}}{2R_{b}} \tag{5}\] To examine the thermal model of the battery, its cooling system is shown in Figure (4). This system uses air as the heat transfer fluid; it is less complex than a liquid-based system. Air cooling works well for parallel HEVs, but a liquid cooling system is preferred for series HEVs. Figure (4) shows the temperature of the air entering the channels; as the air passes through the channels, the battery is cooled and the air temperature at the channel exits reaches \(\theta_{\text{out}}\). In the thermal model of the battery, based on the law of conservation of energy, the temperature changes of the battery cell follow equation (6), where \(\theta\) is the battery temperature, Q\({}_{g}\) is the rate of heat generation, and Q\({}_{d}\) is the rate of heat removal by the channels of the cooling system. Also, m\({}_{C}\) and C\({}_{\text{P,C}}\) are the battery cell's mass and specific heat capacity, respectively. \[\mathrm{m_{C}C_{P,C}\frac{d\theta}{dt}}=\mathrm{Q_{g}}-\mathrm{Q_{d}} \tag{6}\] In [28], the rate of heat generation is given by equation (7); the term containing \(\frac{\partial V_{oc}}{\partial\theta}\) can be ignored compared to the other terms [11]. \[\mathrm{Q_{g}}=\mathrm{I_{b}}\left(V_{oc}-V_{0}-\theta\frac{\partial V_{oc}}{\partial\theta}\right) \tag{7}\] The heat removed by the cooling system, according to equation (8), includes the heat removed from channels 1 and 2. \[\mathrm{Q_{d}}=\mathrm{Q_{ku,1}}+\mathrm{Q_{ku,2}} \tag{8}\] When the air enters the channels with a temperature lower than the temperature of the battery, the heat of the battery is removed through surface convection by the channels. The rate of heat removal from each channel, \(\mathrm{Q_{ku,j}}\), is calculated by equation (9); according to this equation, the heat exchange rate in each channel is a function of the temperature at the channel's output and the temperature of the battery cell. According to the above contents and relationships (1) to (21), the vehicle's final model combines dynamic equations and maps. 
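For concreteness, the coupled electro-thermal update implied by equations (3), (5), and (6) can be sketched as a single explicit-Euler step. This is a minimal illustration: the numerical values (open-circuit voltage, internal resistance, capacity, cell mass, heat capacity, heat-transfer coefficient) are placeholders rather than the studied vehicle's parameters, the dependence of \(V_{oc}\) and \(R_{b}\) on SOC and temperature is dropped, and the two cooling channels are lumped into one convective term.

```python
import numpy as np

def battery_step(soc, theta, p_batt, dt,
                 v_oc=300.0, r_b=0.1, s0=6.5 * 3600.0,
                 m_c=25.0, c_p=800.0, h_conv=15.0, theta_air=20.0):
    """One explicit-Euler update of the battery electro-thermal model.

    p_batt [W] is the power requested from the battery (it must satisfy
    v_oc**2 >= 4*r_b*p_batt), s0 [As] is the charge capacity, and the heat
    generated is approximated by the ohmic term R_b*I_b**2, i.e. the
    entropic term containing dV_oc/dtheta is neglected as in the text.
    """
    # Battery current from the requested power, equation (5)
    i_b = (v_oc - np.sqrt(v_oc**2 - 4.0 * r_b * p_batt)) / (2.0 * r_b)
    soc_next = soc - i_b / s0 * dt                             # equation (3)
    q_gen = r_b * i_b**2                                       # ohmic heating
    q_cool = h_conv * (theta - theta_air)                      # lumped cooling channels
    theta_next = theta + (q_gen - q_cool) / (m_c * c_p) * dt   # equation (6)
    return soc_next, theta_next, i_b
```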
The calculation process in this model is as follows: first, the traction force (F\({}_{\rm t}\)) and the required wheel torque (T\({}_{\rm w}\)) needed to reach the driving-cycle speed (v) are obtained. Then, according to the control variable u, the torques of the EM and the ICE are calculated. In the next step, given the speed and torque of the machines, their powers are obtained with the help of their efficiency maps. The battery current is then obtained from the electric power demand. Finally, the changes in the state of charge and the temperature of the battery are calculated as state variables with the last two relationships of this model. In the standard energy management method used in most previous references, the only dynamic state variable of the system is the SOC of the battery, and a static model is used for the rest of the vehicle components. Using reference tables for the battery model and ignoring the thermal dynamics and temperature changes of the battery will cause damage to the battery and reduce its lifetime. Since the battery of HEVs is the most sensitive and expensive component, its performance dramatically impacts the vehicle's performance, and it is necessary to keep the SOC and the temperature within a specific range for optimal vehicle operation. In this article, to control the temperature of the battery in addition to its SOC during energy management, taking into account the electric-thermal dynamics of the battery, the energy management problem is defined with two state variables, SOC and \(\theta\), through relations (22) to (27). \[x(t)=[SOC(t)\quad\theta(t)]^{T} \tag{22}\] \[\dot{x}(t)=\begin{bmatrix}-\dfrac{I_{b}}{S_{0}}\\ \dfrac{R_{b}I_{b}^{2}}{m_{C}C_{P,C}}\end{bmatrix} \tag{23}\] \[SOC_{low}\leq SOC(t)\leq SOC_{high} \tag{24}\] \[SOC(T_{f})\in[SOC_{N,min},SOC_{N,max}] \tag{25}\] \[\theta_{low}\leq\theta(t)\leq\theta_{high} \tag{26}\] \[\theta(T_{f})\in[\theta_{N,min},\theta_{N,max}] \tag{27}\] Methods such as Linear Programming (LP), Dynamic Programming (DP), and evolutionary algorithms are used to find the optimal answer to the energy management problem of HEVs. DP is the most widely used and the best performing among the implemented methods because it guarantees the globally optimal solution based on Bellman's optimality principle [33, 34]. As will be shown below, the optimal energy management problem (equations (22) to (27)) is fully compatible with the standard DP problem form (equations (28) to (33)). DP is a very suitable method for finding the optimal control and state trajectories in problems of the form (28) to (33), in which k = 0, 1,..., N. In relations (28) to (33), \(g_{N}(x_{N})\) is a penalty that limits the final value of the state variables, and \(g_{k}(x_{k},u_{k})\) is the stage cost: the cost of applying \(u_{k}\) at the moment k to a system in state \(x_{k}\). \[\min_{u_{k}\in U_{k}}\left\{g_{N}(x_{N})+\sum_{k=0}^{N-1}g_{k}(x_{k},u_{k})\right\} \tag{28}\] \[x_{k+1}=f_{k}(x_{k},u_{k}) \tag{29}\] \[x_{0}=x^{0} \tag{30}\] \[x_{N}\in T\subset\mathbb{R}^{n} \tag{31}\] \[x_{k}\in X_{k}\subset\mathbb{R}^{n} \tag{32}\] \[u_{k}\in U_{k}\subset\mathbb{R}^{m} \tag{33}\] Since DP is a numerical method, the continuous control problem (28) to (33) must be discretized. 
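A minimal illustration of this discretization is shown below; the grid bounds and resolutions are arbitrary values chosen for illustration, not the settings used in the paper.

```python
import numpy as np

# Assumed discretization of the two-state control problem (illustrative values only)
soc_grid   = np.linspace(0.45, 0.65, 41)    # SOC_low .. SOC_high
theta_grid = np.linspace(20.0, 40.0, 41)    # theta_low .. theta_high [degC]
u_grid     = np.linspace(-1.0, 1.0, 21)     # torque-split ratio u = T_m / T_w
dt         = 1.0                            # time resolution [s]
n_steps    = 660                            # number of drive-cycle samples (assumed)
```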
The method's accuracy depends on the meshing accuracy, i.e. the resolution of the time and state spaces. Another critical issue is the value of the cost function for infeasible states. Allocating an infinite transfer cost is the first method proposed for such points. As mentioned, interpolation is used to evaluate the cost function at points lying between the nodes of the state-space grid. Interpolating at points adjacent to infeasible states then causes those points to also be interpreted as infeasible, which is the fundamental problem of assigning an infinite value. Some methods were proposed to solve this problem; among them, it is possible to mention assigning a very large value instead of an infinite value [36]. The DP algorithm calculates the optimal cost function \(J_{k}(x^{i})\) as follows, starting from the end of the path (k=N) and performing recursive calculations until k=0 for all points of the discrete time-state space: 1. _Determining the initial value of the cost function at k=N._ \[J_{N}(x^{i})=\begin{cases}g_{N}(x^{i})&x^{i}\in T\\ \infty&\text{else}\end{cases} \tag{34}\] The transfer cost function is zero at the last moment because there is no next state. In problems where the final value of the state variable is bounded, \(J_{N}(x^{i})\) is equal to the penalty for the deviation of the final value. According to equations (25) and (27), the final values of the SOC and battery temperature variables are limited in the energy management problem. This limitation is determined by the set T in (31). At this stage, final values of SOC and \(\theta\) outside the limits defined in the problem are assigned the value \(\infty\) (infinite), so the paths leading to these values will not be the optimal path and the answer to the problem. 2. _Calculation of the optimal cost function \(J_{k}(x^{i})\) at all points of the discrete space for k=N-1 to k=0 and for \(x^{i}\in X_{k}\)._ \[J_{k}(x^{i})=\min_{u_{k}\in U_{k}}\left\{g_{k}(x^{i},u_{k})+J_{k+1}\big(f_{k}(x^{i},u_{k})\big)\right\} \tag{35}\] In the intermediate steps, an optimal path is found for each state-space point until the path's end and stored for use in the following steps. Each sub-problem is solved once and its answer is saved, which prevents the repetition of sub-problems when their answers are needed again. The cost function \(J_{k}(x^{i})\) is the cost of moving from the point \(x^{i}\) at the moment k to the end of the optimal path, and is formed by two terms, \(g_{k}(x^{i},u_{k})\) and \(J_{k+1}(f_{k}(x^{i},u_{k}))\). The first term is the cost of the transfer from the point \(x^{i}\) at the moment k to the point \(f_{k}(x^{i},u_{k})\) at the moment k+1, and the second term is the cost of the optimal path from this new point to the end of the path, which was calculated and stored in the previous step. By continuing this algorithm until k=0, the optimal control signal is obtained at every moment. Finally, the optimal control sequence \(\pi=\{\mu_{0},\mu_{1},...,\mu_{N-1}\}\) is obtained. ## V Simulation Results The vehicle under investigation is the DaimlerChrysler parallel hybrid electric vehicle from the Mercedes A-Class series, presented in [1]. The values of the general parameters of the vehicle are given in Table I, and the specifications of the vehicle's battery are shown in Table II. 
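Before turning to the results, the two-step procedure above can be summarised as a backward sweep over the discretized state space. The sketch below is schematic Python under assumed interfaces: `stage_cost`, `step`, and `terminal_penalty` stand in for the fuel-consumption map, the discretized state equation (23), and the terminal penalty of equation (34); a large finite penalty replaces infinity and a nearest-node lookup replaces interpolation, as discussed above. None of this is the authors' code.

```python
import numpy as np

def nearest_index(grid, value):
    """Index of the grid node closest to `value`."""
    return int(np.abs(grid - value).argmin())

def backward_dp(soc_grid, theta_grid, u_grid, n_steps,
                stage_cost, step, terminal_penalty, big=1e9):
    """Backward DP sweep over a discretized (SOC, theta) state space.

    stage_cost(k, soc, theta, u) -> stage cost g_k of applying u at stage k
    step(k, soc, theta, u)       -> next (soc, theta), or None if infeasible
    terminal_penalty(soc, theta) -> terminal cost g_N of equation (34)
    """
    # Terminal cost over the grid, equation (34)
    J = np.array([[terminal_penalty(s, t) for t in theta_grid] for s in soc_grid])
    policies = []
    for k in range(n_steps - 1, -1, -1):          # recursion (35), k = N-1 .. 0
        J_new = np.full_like(J, big)
        u_best = np.zeros_like(J)
        for i, s in enumerate(soc_grid):
            for j, t in enumerate(theta_grid):
                for u in u_grid:
                    nxt = step(k, s, t, u)
                    if nxt is None:               # infeasible transition
                        continue
                    cost = stage_cost(k, s, t, u) + J[
                        nearest_index(soc_grid, nxt[0]),
                        nearest_index(theta_grid, nxt[1])]
                    if cost < J_new[i, j]:
                        J_new[i, j], u_best[i, j] = cost, u
        J = J_new
        policies.insert(0, u_best)                # optimal control map at stage k
    return J, policies
```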
The driving cycle used in the simulation is the JN-1015 (Japan), with the specifications listed in Table III. The speed-time diagram of JN-1015 is shown in Figure 5. First, the thermal dynamics of the battery are ignored, so temperature changes are not considered in the relationships of the optimization problem. The only state variable is the SOC, and the values of its parameters are given in Table IV. Fuel consumption in this method is equal to 4.3 liters per 100 kilometers. Figure 6 shows the SOC changes. As mentioned, in an HEV that cannot be recharged from the grid, the final charge level should be close to its initial value. It can be seen in Figure 6 that the SOC approaches the initial value at the end of the path. Figure 7 shows how the demanded torque is divided between the EM and the ICE by the control variable u(t)=T\({}_{\text{m}}\)(t)/T\({}_{\text{w}}\)(t). Because the power of the EM is lower than that of the ICE, high torques are inevitably provided by the ICE. For amounts of torque that both the EM and the ICE can provide, the priority is with the EM, provided that the battery charge level does not fall below the minimum allowed charge. Negative torques are also used by the EM (in generator mode) to charge the battery. As mentioned, in the optimization problem in this section the battery's temperature was considered constant. Now, using the parameters of the battery model and the resulting internal resistance and battery current vectors, we show the effect of the EMS on the battery temperature. In Figure 8, it can be seen that the battery's temperature has increased to about 57.2 \({}^{\circ}\)C because it was neglected in the optimization, despite the presence of the cooling system. Fig. 5: Driving cycle JN-1015 (Japan) Fig. 6: SOC for energy management without battery temperature control Fig. 7: Torque split between EM and ICE In the following, the results of the proposed method are shown. In this method, the improved battery model described in Section III is used in the optimization problem. Battery temperature is considered the second state variable next to SOC in the dynamic programming. The SOC parameter values are the same as in Table IV, and the battery temperature limits are set in Table V. Adding upper and lower limits on the battery's temperature to the optimization problem has resulted in less battery usage: if providing the demanded torque electrically would push the battery temperature toward its upper limit, that usage is prevented. This increases the use of the ICE and, as a result, fuel consumption; fuel consumption in this mode reaches 5.1 liters per 100 kilometers. Figures 9 and 10 show the proposed method's SOC and battery temperature changes. It can be seen that, in addition to the SOC, the battery temperature also remains within the specified range, and its final value reaches 25.45 \({}^{\circ}\)C. ## VI Conclusion This article proposed a method for offline energy management of Hybrid Electric Vehicles based on the optimal use of the battery. The problem of optimal energy management of these vehicles is a non-linear problem with various performance constraints arising from the modeling and the system variables. Dynamic Programming, a numerical solution method, is suitable for finding the optimal answer to such a problem offline. This method was applied to the vehicle under study with the assumption of constant battery temperature. 
By modeling battery temperature changes, it was shown that the assumption of constant battery temperature is impractical, and the effect of energy management on battery temperature cannot be ignored. According to the results, when temperature changes are not controlled in the energy management strategy, the SOC decreased from its initial value of 55% to 49.4%, a change of 10.19%, and then returned to 55%. Also, in this case, the battery's temperature rose from 20 \({}^{\circ}\)C to 57.2 \({}^{\circ}\)C, a sharp change of 186%, which causes many disadvantages, including corrosion and reduced battery life. In the proposed method, with battery temperature changes under control, the SOC increased from 55% to 56.6%, a change of 2.73%. Also, in the proposed strategy, the battery's temperature increased by 27.25%, from 20 \({}^{\circ}\)C to 25.45 \({}^{\circ}\)C. According to this article's results, the SOC variation improved by 12.92% with the proposed method. Also, compared to conventional energy management methods, the proposed method reduced the battery temperature variation by a factor of 6.82 and increased battery life.
2301.05265
A Mixed Stirring Mechanism for Debris Discs with Giant and Dwarf Planetary Perturbations
Debris discs consist of belts of bodies ranging in size from dust grains to planetesimals; these belts are visible markers of planetary systems around other stars that can reveal the influence of extrasolar planets through their shape and structure. Two key stirring mechanisms -- self-stirring by planetesimals and secular perturbation by an external giant planet -- have been identified to explain the dynamics of planetesimal belts; their relative importance has been studied independently, but are yet to be considered in combination. In this work we perform a suite of 286 N-body simulations exploring the evolution of debris discs over 1~Gyr, combining the gravitational perturbations of both dwarf planets embedded in the discs, and an interior giant planet. Our systems were somewhat modeled after the architecture of the outer Solar system: a Solar mass star, a single massive giant planet at 30~au ($M_{\rm GP} =$ 10 to 316~$\mathrm{M}_{\oplus}$), and a debris disc formed by 100 massive dwarf planets and 1000 massless particles ($M_{\rm DD} =$ 3.16 to 31.6~$\mathrm{M}_{\oplus}$). We present the evolution of both the disc and the giant planet after 1~Gyr. The time evolution of the average eccentricity and inclination of the disc is strongly dependent on the giant planet mass as well as on the remaining disc mass. We also found that efficient stirring is achieved even with small disc masses. In general, we find that a mixed mechanism is more efficient in the stirring of cold debris discs than either mechanism acting in isolation.
Marco A. Muñoz-Gutiérrez, Jonathan P. Marshall, Antonio Peimbert
2023-01-12T19:23:29Z
http://arxiv.org/abs/2301.05265v1
# A Mixed Stirring Mechanism for Debris Discs with Giant and Dwarf Planetary Perturbations ###### Abstract Debris discs consist of belts of bodies ranging in size from dust grains to planetesimals; these belts are visible markers of planetary systems around other stars that can reveal the influence of extrasolar planets through their shape and structure. Two key stirring mechanisms -- self-stirring by planetesimals and secular perturbation by an external giant planet -- have been identified to explain the dynamics of planetesimal belts; their relative importance has been studied independently, but are yet to be considered in combination. In this work we perform a suite of 286 N-body simulations exploring the evolution of debris discs over 1 Gyr, combining the gravitational perturbations of both dwarf planets embedded in the discs, and an interior giant planet. Our systems were somewhat modeled after the architecture of the outer Solar system: a Solar mass star, a single massive giant planet at 30 au (\(M_{\rm{GP}}=10\) to 316 M\({}_{\oplus}\)), and a debris disc formed by 100 massive dwarf planets and 1 000 massless particles (\(M_{\rm{DD}}=3.16\) to 31.6 M\({}_{\oplus}\)). We present the evolution of both the disc and the giant planet after 1 Gyr. The time evolution of the average eccentricity and inclination of the disc is strongly dependent on the giant planet mass as well as on the remaining disc mass. We also found that efficient stirring is achieved even with small disc masses. In general, we find that a mixed mechanism is more efficient in the stirring of cold debris discs than either mechanism acting in isolation. keywords: circumstellar matter - planetary systems - planet-disc interactions - dynamical evolution and stability ## 1 Introduction Debris discs are massive structures observed around 20 to 30 percent of main sequence stars (for recent reviews see, e.g., Wyatt, 2018; Hughes et al., 2018); their presence is signaled by the presence of excess emission in thermal emission at infrared to millimetre wavelengths (e.g., Eiroa et al., 2013; Thureau et al., 2014; Holland et al., 2017; Sibthorpe et al., 2018) and/or scattered light at optical or near-infrared wavelengths (either total intensity or polarization, e.g., Schneider et al., 2014; Esposito et al., 2020), coming from circumstellar micrometre- to centimetre-sized dust grains. The dust contents of debris discs are not just remnants of the original, massive, dust- and gas-rich protoplanetary discs of material from which planets are born (Wyatt et al., 2015; Andrews et al., 2018). Although some amount of (sub-)micron-sized dust can remain after the initial protoplanetary disc dissipates, the smallest dust grains are lost on timescales much shorter than the age of the host star due to photoevaporation and accretion processes (Burns et al., 1979; Krivov, 2010). Therefore, the dust observed in debris discs is thought to be second-generation dust, produced in disruptive collisions between larger leftover planetesimals, which were originally formed from dust (and ices) in the protoplanetary discs. Collisions between these bodies produce detectable amounts of dust throughout the lifetime of the host star and beyond (e.g., Matthews et al., 2014; Farihi, 2016). However, to be able to produce that dust, planetesimals must be abundant enough to have frequent collisions, as well as have relative velocities high enough for collisions to be destructive, or at least erosive (Dohnanyi, 1969; Kenyon and Bromley, 2001). 
The formation of planetesimals starts with dust growth in protoplanetary discs, which is encouraged by vertical settling of larger grains to the disc mid-plane and radial trapping at pressure bumps, especially around ice lines increasing the mass surface density to a level where the gas-to-dust ratio approaches unity (Blum and Wurm, 2008; Drazkowska and Alibert, 2017). Growth beyond millimetre- to centimetre-sized particles is inhibited by collisions due to the 'bouncing barrier' (Brauer et al., 2008; Birnstiel et al., 2010; Zsom et al., 2010). The rapid loss of these large grains or pebbles due to inward radial drift is an inhibiting factor in current theories of planet formation. A mechanism referred to as the'streaming instability' has been proposed as a means to bypass the 'bouncing barrier' and precipitate planetesimals directly from pebbles in the proto-planetary disc (Youdin and Goodman, 2005; Youdin and Johansen, 2007; Johansen et al., 2007; Bai and Stone, 2010, 2010). The size distribution of these bodies is consistent with the range observed in the Solar System's Kuiper Belt, wherein Pluto and its cohort could represent the high mass tail of this planetesimal formation process (Johansen et al., 2015; Simon et al., 2016). The initial orbits of planetesimals formed in the protoplanetary phase are expected to be nearly circular and confined to the disc midplane, therefore some additional stirring mechanism is required to dynamically excite the planetesimal belts left after the gas dispersal. Structures observed in proto-planetary discs, such as rings, spiral arms, etc., are uncorrelated with ice lines/density enhancements induced by disc temperature structure (Long et al., 2018; van der Marel et al., 2019). Rings in protoplanetary discs could therefore be the result of the action of protoplanets trapping material and sculpting the disc (e.g. Dong et al., 2018; Huang et al., 2018; Zhang et al., 2018).Low mass companions have been identified embedded within several such discs (Fedele et al., 2018; Keppler et al., 2019; Ubeira-Gabellini et al., 2020; Teague et al., 2021). Once the eccentricity-damping effect of the protoplanetary gas disc has been removed, the ongoing stirring by either planets or planetesimals on the debris disc will excite the belt leading to enhanced collision rates. Inheritance of structure from proto-planetary discs to debris discs is uncertain (Najita et al., 2022), but planetesimal belt locations in cold debris discs (exoKuiper belts) appear consistent with formation at CO ice line (Mataf et al., 2018; Marshall et al., 2021). However, the widths of rings in proto-planetary discs are much narrower than debris disc's planetesimal belts (Miller et al., 2021). The majority of broad debris belts observed by ALMA with sufficient spatial resolution exhibit sub-structures consistent with the presence of a perturbing planetary companion (Marino, 2021). Analysis of spatially resolved observations of debris discs have been used to infer the stirring mechanism(s) in play for a number of young systems based on stirring arguments from the size of the disc and the stellar age (e.g. Moor et al., 2015; Vican et al., 2016) and interpretation of their architectures, revealing disc-planet interactions in a variety of ways, including the detection of gaps in broad belts (e.g. Marino et al., 2017, 2018; MacGregor et al., 2019), scattered haloes of mm dust grains (MacGregor et al., 2018; Geiler et al., 2019), and the eccentric architectures of narrow belts (Kennedy, 2020). 
Most recently, Pearce et al. (2022) examined a large ensemble of debris discs, both spatially resolved and unresolved, inferring the required mass of a perturber, under the assumption that the sculpting is produced by a single planet or multiple planets, as well as if being the result of self-stirring by massive planetesimals within the disc. These two aforementioned main mechanisms have been suggested in the past to account for the planetesimal excitation levels, i.e. 1) the self-stirring mechanism (e.g. Kenyon and Bromley, 2008; Krivov and Booth, 2018), in which large planetesimals are able to trigger a collisional cascade once they acquire a certain size threshold, and 2) the secular perturbations from giant planetary companions, interior or exterior to the discs (e.g., Wyatt et al., 1999; Mustill and Wyatt, 2009). The latter has been favored recently due to the very large masses of debris discs required to explain their excitation levels by the self-stirring mechanism (Krivov and Wyatt, 2021; Pearce et al., 2022). However, the effects of a simultaneous stirring by external planets together with internal planetesimals has never been studied in detail. Besides, the existing self-stirring models do not properly account for the top-end of the size distribution (e.g., Pluto-sized dwarf planets), frequently relying in models comprised of equal mass (not so large) bodies stirring the disc. In previous works (Munoz-Gutierrez et al., 2015, 2017, 2018), we studied the long-term evolution of generic cold debris discs of different masses, under the perturbations of an interior Neptune-like giant planet, as well as of dozens of dwarf planet-sized massive perturbers (DPs, hereafter) embedded in the discs. In Munoz-Gutierrez et al. (2017), we demonstrated the existence of a stabilizing effect produced by a giant planet over the disruptive perturbations of massive DPs; we also demonstrated (Munoz-Gutierrez et al., 2018) the existence of a constant resupplying of the giant's MMRs with new objects, a mechanism acting on secular time-scales due to the radial migration of disc particles produced by the DPs' scattering effects. In this work, we expand the exploration of the mass parameter space of our mixed stirring scenario for more massive debris discs, comparable to those which have been observed in extrasolar planetary systems. We account for both the perturbations produced by an interior giant planet, as well as 100 massive DPs embedded in a disc of 1 000 massless particles. The simulation setup for our grid of disc-planet systems, along with a brief summary of the dynamical modeling approach, is given in Section 2. In Section 3 we characterize the outcome of our simulations, through analysis of the evolution of the survival fraction, average eccentricities, and inclinations of the bodies comprising the discs, as well as the orbital perturbations exerted on the giant planet; we provide our interpretation of the results and how they relate to other works addressing either planetary or self-stirring of a debris disc in isolation in Section 4. Finally, in Section 5, we summarize our findings and present our conclusions. ## 2 Methods and simulations We aim to test the efficiency for producing stirring over debris disc particles, of models which combine the perturbations coming from a giant planet, located interior to an initially wide and cold debris disc, as well as dwarf planets embedded within the disc. 
We call this a mixed stirring scenario, since it combines some of the elements applied so far in debris discs stirring models, i.e. secular perturbations from giant planets and self-stirring. ### Model disc generation Our systems are formed by a Solar mass central star, as well as a Neptune-analog "giant" planet (GP, hereafter) located at 30 au and starting with zero eccentricity and inclination. The debris disc is formed by 1 000 test particles and 100 massive DPs; the disc is 30 au wide and its inner edge is set to be 10 Hill radii beyond the GP location. We assume the mass of the debris disc to be given by the sum of the individual masses of the 100 DPs. We study a grid of models where the GP mass explores values from 10 to 316 M\({}_{\oplus}\) (i.e. from sub-Neptune to one Jupiter masses), in logarithmic steps of 0.15 (11 values). The mass of the debris discs covers a range from 3.16 to 31.6 M\({}_{\oplus}\) in logarithmic steps of 0.04 (26 values). Within each disc, the masses of the individual DPs are drawn randomly to try to reproduce the 100 most massive particles of a mass distribution \(n(m)\propto m^{-2.8}\), consistent with the distribution of large bodies in the Kuiper Belt (Fraser and Kavelaars, 2009). The individual mass of the most massive DP in the lightest disc is below 0.105 M\({}_{\oplus}\), while in the most massive disc it is 1.05 M\({}_{\oplus}\). Those values correspond to ratios with the less massive giant planet of 0.01 and 0.1, respectively. Such large planetesimal masses are not unexpected according to recent theories of planetesimal formation (e.g. the streaming instability; Youdin and Goodman, 2005; Morbidelli et al., 2009; Nesvorny et al., 2019), and are consistent with recent measurements of planetesimal masses inferred from the spatially resolved scale heights of \(\beta\) Pic and AU Mic (Matra et al., 2019; Daley et al., 2019). The range in debris disc masses was chosen to keep a realistic representation of the individual objects in the discs while remaining computationally feasible, i.e. a larger range in debris discs masses would imply that individual DPs would be very massive (comparable to the GP mass) to account for more massive discs, or we would require to proportionally increment the number of DPs in our simulations, making them too computationally expensive. If lower limits on the DP masses are preferred, the largest of these DPs should be interpreted as the sum of many smaller bodies, a product of the limitation of our computational power. The distributions of semi-major axes, eccentricities, and inclinations of the DPs and test particles were randomly generated based on a single seed. The initial inclinations of the DPs and test particles were randomly drawn between 0 and 5\({}^{\circ}\), whilst the initial eccentricities were constrained to be \(\leq\)0.05, i.e. we used similar values to the ones found for the cold classical Kuiper Belt (Gulbis et al., 2010). Visual inspection of the output for 20 seed values was carried out and an initial simulation setup was selected based on the uniformity of the distribution in _a-e_ and _a-i_ parameter space for both the DPs and test particles 1. Footnote 1: The initial orbital distribution of the DPs and test particles in the discs, for the random seed used in this work, can be found online at Figshare. ### N-body simulations We used the hybrid symplectic integrator from the mercury package (Chambers, 1999), to explore the long-term evolution of a grid of 286 debris disc models. 
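As an illustration of the disc set-up described above, the sketch below draws dwarf-planet masses from the \(n(m)\propto m^{-2.8}\) law and assigns near-circular, near-planar orbital elements between the inner and outer disc edges. The sampling bounds, the rescaling of the drawn masses to the requested disc mass, and the uniform element distributions are our own simplifying assumptions, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(seed=42)           # a single seed, as in the paper

def hill_radius(a_p, m_p, m_star=1.0):
    """Hill radius of the giant planet (masses in solar masses, a in au)."""
    return a_p * (m_p / (3.0 * m_star)) ** (1.0 / 3.0)

def make_disc(m_gp_earth, m_disc_earth, n_dp=100, n_tp=1000):
    """Initial masses and orbital elements for DPs and test particles."""
    m_earth = 3.0035e-6                        # Earth mass in solar masses
    a_gp = 30.0
    a_in = a_gp + 10.0 * hill_radius(a_gp, m_gp_earth * m_earth)
    a_out = a_in + 30.0                        # 30 au wide disc

    # Inverse-transform sampling of n(m) ~ m**-2.8 between assumed bounds
    p, lo, hi = -2.8, 1e-3, 1.0
    u = rng.uniform(size=n_dp)
    m = (lo**(p + 1) + u * (hi**(p + 1) - lo**(p + 1))) ** (1.0 / (p + 1))
    dp_masses = m / m.sum() * m_disc_earth * m_earth   # rescale to disc mass

    def elements(n):
        return dict(a=rng.uniform(a_in, a_out, n),     # semi-major axis [au]
                    e=rng.uniform(0.0, 0.05, n),       # eccentricity
                    inc=rng.uniform(0.0, 5.0, n))      # inclination [deg]

    return dp_masses, elements(n_dp), elements(n_tp)
```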
An initial time-step of 400 days is used in all cases, as well as an accuracy parameter for the Bulirsch-Stoer integrator of \(10^{-10}\). We produced orbital outputs every 10 Myr, over a total integration time of 1 Gyr. Particles are removed from the simulation if their semi-major axes grow larger than 10 000 au, decrease below 1 au, or if they collide with the GP or the DPs. In most cases, several DPs are also ejected from the simulations by the same mechanisms due to their mutual interactions. ## 3 Results Over sufficiently long periods of time (\(\sim\)100 Myr), the gravitational perturbations from DP-sized objects, acting on initially cold debris discs particles, induce a considerable vertical and radial heating (Munoz-Gutierrez et al., 2015), which results in a progressive increment of the disc's mean eccentricities and inclinations. A GP in a non-circular, non-planar orbit, will induce secular perturbations on an external debris disc, forcing a component on the particles' eccentricity and inclination vectors (e.g. Murray and Dermott, 1999; Mustill and Wyatt, 2009; Gladman and Volk, 2021). Though initially circular and planar, the orbit of the GPs in our simulations quickly evolves, as we will show, due to their interactions with the massive disc members (DPs), which makes the former phenomenon relevant. Moreover, under the right circumstances, i.e. if massive enough (\(\geq\)100 M\({}_{\oplus}\)), an interior GP can also act to stabilize the orbits of massless particles within debris discs, acting against the perturbations produced by DPs (Munoz-Gutierrez et al., 2017). In the following subsections, we will show separately the evolution of the populations of massless particles, DPs, the debris discs as a whole, and finally the GPs within these systems. ### Evolution of Massless Particles in the Discs We aim to quantify the long-term impact that the combination of perturbers, namely an interior GP plus 100 massive embedded DPs, have on the dynamical stirring of the initially cold debris disc particles. We produced coloured grids showing the survival fraction of particles in the discs, as well as the amount of dynamical excitation, characterised by their mean eccentricities and mean inclinations; at this first stage, we characterise this excitation as a function of the total _initial_ disc mass, as well as the GP mass. In fig. 1 we show the evolution of the survival fraction, the average Figure 1: Animated figures for the survival fraction of test particles in the discs (top), as well as their mean eccentricity (middle) and inclination (bottom). Each colored circle in the grids shows the corresponding value at each time step output from the simulations (i.e. every 10 Myr) according to the color bar presented to the right of each grid. The points in the grid are arranged as a function of the mass of the GP in the model as well as of the initial mass of the debris disc, as accounted by the total mass of 100 massive DPs. The still frames in each panel show the final states of the simulations after 1 Gyr (An animated version of this figure can be found online at Figshare.). eccentricities, and the average inclinations of the massless particles on our array of simulations. In the top panel of fig. 1 we show the evolution of the survival fraction up to 1 Gyr. In the animated figure, each snapshot corresponds to a 10 Myr time step. 
The color of each circle represents the surviving fraction of massless particles within that disc, while its location on the grid corresponds to the initial mass of the disc (i.e. the sum of the masses of our DPs) and the mass of the GP in that planetary system. The ejection efficiency is correlated to both the GP mass and the (initial) disc mass. Those systems with the highest GP and disc masses are the most quickly depleted. Within the first 30 to 100 Myr, the systems with the most massive GPs (\(M_{\rm GP}>100{\rm M}_{\oplus}\)) have already lost \(\geq 80\%\) of their initial particles. Over the next hundreds of Myr, with a smaller number of total particles as well as a smaller number of total perturbers, the ejection rate slows down. Overall the most efficient ejection continues to occur in systems with simultaneously the most massive discs and the most massive GPs. At the end of the simulations, the higher ejection efficiency occurs for initial discs masses \(\gtrsim\)10 M\({}_{\oplus}\), with the highest ejection efficiency occurring when the GP mass is \(\sim 100\) M\({}_{\oplus}\) and the disc mass is \(\gtrsim 20\) M\({}_{\oplus}\). Many systems exhibit the ejection of a substantial fraction of the test particles in our simulations. The average ejection rate is 64.5% across all simulations in the grid, with an ejection rate of up to 96.3% for the most extreme case. The orbital characteristics (eccentricity, inclination) of particles in the discs were calculated by averaging the elements of surviving particles at each time step. In the animated version of Figure 1, we show their evolution in 10 Myr time steps illustrating the change in the remaining particles, their eccentricity, and inclination over 1 Gyr. The color of each circle represents the mean values of the eccentricity and inclination for each model at the timestep in question. From the evolution seen in the middle and bottom panel animations of fig. 1, we find that the disc response is monotonic for the lower GP masses (\(M_{\rm GP}\leq 30\)\(M_{\oplus}\)). We find increasing excitation for decreasing GP mass, increasing disc mass, and longer integration times. The evolution of the disc excitation with time is more clearly visible in eccentricity than inclination. The middle panel of animated Figure 1 shows that after a few tens of Myr of evolution, a more efficient stirring has been produced for the middle rows of the grid, i.e. for \(M_{\rm GP}\) in the range \(\sim\)30 to \(\sim\)110 M\({}_{\oplus}\). At this time, the stirring grows in proportion to the debris disc mass, while for a given debris disc mass, the stirring increases with GP mass, reaches a maximum around 70 to 100 M\({}_{\oplus}\), and decreases for larger GP masses. This behaviour does not resemble the quadratic behaviour presented in Munoz-Gutierrez et al. (2017), however that study was for discs 2 to 4 orders of magnitude lighter than what we are studying here. Over time, during the first 400 Myr, we see less massive GPs becoming progressively more efficient at exciting test particles; while the more massive GPs models stop evolving. After 300 Myr the sweet spot for efficient stirring becomes less evident, in part due to the ejection rate of the most excited particles from these systems; after 600 Myr even the models with the least massive GP have stopped evolving. 
By the end of the simulations, the largest mean eccentricity occurs in the lower right corner of the grid, where the disc masses are comparable to, or even greater than, the GPs masses in these systems. In the bottom panel of animated fig. 1 we observe a slower and more linear trend for the evolution of the mean inclination; up to 100 Myr, the increment in mean inclination is small and its value remains almost homogeneous across the grid. With time a small tendency of larger excitation with larger disc masses and smaller GP masses starts to develop; after 200 Myr the lower right corner of the grid, where \(M_{\rm GP}\leq M_{\rm DD}\), starts to show clear signs of a stronger stirring. By the end of the simulations, the final stirring is shown to be a function of both GP mass and debris disc mass, with the greater stirring observed in systems with lower GP masses and larger disc masses. When the mass of the disc is comparable to that of the GP, the planet-disc interactions are warranted to be complex. The angular momentum that can be transferred from the GP to the DPs is large enough to produce a significant migration of the GP due to the ejection of massive objects. Also, the reference plane (or "invariable plane") within such a massive debris disc is not well defined, as the GP orbit no longer plays such an important role in determining the total angular momentum of the system. These conditions are satisfied for models in the lower right corner of our grid; in that region, particles are excited but they are not efficiently ejected, so the system effectively heats up and there is no way of cooling it down. Complementary to the animated grids in fig. 1, we also present the time evolution of each model as a curve on the three panels of animated fig. 2. There we can see the evolution of each model across the animation, with the survival fraction on the top panel, mean eccentricity in the middle panel, and mean inclination in the bottom panel; the last images (as well as the still frames) highlight the average of all the models with the same GP mass. In the top panel of animated fig. 2, we see the decline in particle numbers as a function of time. As expected, the more massive planets are more efficient at ejecting test particles from the system. For a given planet mass, the ejection is more efficient with a more massive disc. In the middle panel of animated fig. 2 we present the eccentricity evolution for each of our 286 models; as in fig. 1, we are presenting the evolution of the mean eccentricity of all particles remaining in the simulations. For models with GP masses less than \(\lesssim 80\) M\({}_{\oplus}\) we can see that the eccentricities keep increasing over the whole duration of most of the simulations; all models slow down with time, but for models with GP masses between \(\sim 30\) M\({}_{\oplus}\) and \(\sim 80\) M\({}_{\oplus}\) there seem to be two phases: first, a fast increase and then they reach a plateau with very little increase in eccentricity thereafter; the change between these two phases occurs sooner and at a lower average eccentricity for the more massive GPs, and will likely occur even at GP masses less than 30 M\({}_{\oplus}\), but it probably requires more than 1 Gyr for the same to happen, while for 80 M\({}_{\oplus}\) it only requires approximately 100 Myr. 
For the most massive GPs (\(\gtrsim 100\) M\({}_{\oplus}\)) a third phase appears, after the fast increase, and before the plateau, a moderately fast decrease occurs due to the rapid ejection of the most eccentric objects; again the evolution is faster for more massive GPs, this new phase seems to be most pronounced for our 223 M\({}_{\oplus}\) models, but perhaps with smaller time steps it might be even more important for the 316 M\({}_{\oplus}\) GP. Finally, models with GPs more massive than 220 M\({}_{\oplus}\) seem to reach saturation, perhaps even a small decline, near the end of the simulations. Regarding the effect of the disc mass on the overall eccentricity, we find that, for a given time and GP mass, larger disc masses produce larger mean eccentricities. We present the inclination evolution in the bottom panel of animated fig. 2; as for eccentricity, we are presenting the evolution of the mean inclination of all particles remaining in the simulations. Here we show that the evolution of the inclinations is much slower than for the eccentricities, in fact, the inclination for all models continues to rise until the end of the simulations. As with eccentricities, simulations with more massive discs tend to evolve faster and have larger mean inclinations. In general, for very large GP masses, both eccentricity and inclination show a mostly smooth evolution. This is related to the dominance of the GP mass on the overall dynamics, as well as to the number of particles quickly ejected from the system. This shows a dependence in GP mass on the degree of stirring of the disc. Very massive GPs become less efficient with time at heating the discs, and in fact, those discs cool off at later times, whereas less massive GPs continually stir their discs throughout the timescale of the simulations. This effect can be explained through the ejection efficiency of the GPs at different masses. High-mass GPs (top rows) quickly excite and eject disc particles and DPs that stray into regions of strong interaction with the GP, leaving a depleted but dynamically cold system in their wake. In contrast, low-mass GPs do very little to stir the discs, but also very little to suppress stirring by the DPs or to eject particles excited by DPs, leaving a well-populated but dynamically hot system. ### Evolution of DPs in the Discs We find that the evolution of massive DPs in the discs follows a similar trend to that of massless particles, but their self-stirring is slightly less efficient, as shown in fig. 3 (cf. fig. 2). There we present animations showing the evolution of the survival fraction (top panel), mean eccentricities (middle panel), and mean inclinations (bottom panel) for surviving DPs in the simulations as a function of time, in the same scheme as for the test particles in the previous sub-section. In the top panel of fig. 3 we see the surviving fraction of DPs in each model system as a function of time. Again, consistent with the analogous plot for test particles in fig. 2, we see that the DPs are more efficiently removed from the system with a more massive GP and a more massive disc. In the middle panel of fig. 3 we can see trends in the behaviour of the DPs can be delineated for models with different GP masses, following the same general behaviour as for the test particles. The models with the lowest GP masses, below 15 M\({}_{\oplus}\), exhibit a rising mean eccentricity for the DPs up until the end point of our simulations. 
Models with GPs above that, but below 60 M\({}_{\oplus}\), again reach a plateau, and have a slow increase, but this time they have an obvious maximum before having a slow decrease in eccentricity at some point between 400 Myr and 1 Gyr. The time at which the highest value occurs, and its value are both dependent on the GP mass; more massive GPs have their maxima at earlier times and with lower mean eccentricity values. This is again a result of the increasing strength of interaction for DPs that more closely approach the GP. Furthermore, we see that overall the mean eccentricity of the DPs is lower than that of test particles. For models with GPs \(>\) 60 M\({}_{\oplus}\) we again observe a third phase of evolution, in the first a rapid increase in mean eccentricity occurs, quickly reaching a peak within the first 200 Myr which is faster for more massive GPs; after this follows a decline, also faster the more massive the GP; finally, after the decline, a slow increment begins again until reaching an approximate steady state by the end of the simulations. The maximum values of the average mean eccentricity for the models remain below \(\simeq\)0.35 for DPs (cf. 0.55 for test particles, which continue growing for the lowest mass GP models), with an apparent saturation limit at this value independently of GP mass. We can also see that the behaviour of the lines in fig. 3 is noisier than in the case of test particles (fig. 2), this is because the DP population is 10 times less numerous than the particles. We would expect this to also be true for any real disc since the number of DPs containing a substantial amount of the disc mass will always be a minority compared to the total population (starting with the largest bodies, which are the most dynamically relevant). For any given GP mass there is a trend of larger eccentricities for Figure 2: Evolution of the survival fraction (top), mean eccentricity (middle), and inclination (bottom) of test particles in the discs. The different colors of the lines in the three panels indicate the mass of the GP in the models. The initial debris disc mass in the models is represented by the thickness of each line, with thicker lines corresponding to more massive discs (pale lines in the still frames, all but the last frame in the animated figures). The thickest lines in the still frames (and those of the last animated frames) correspond to the average of all disc masses for any given GP mass (An animated version of this figure can be found online at Figshare). larger disc masses. However, there is an overall dispersion for the evolution of each suite of simulations, and some individual simulations fall outside of the global trend e.g. the most massive discs for the systems with 28 M\({}_{\oplus}\) and 112 M\({}_{\oplus}\) GPs lie well above the other systems in their respective suites. These "outliers" may be attributed to stochastic events involving DP interactions or ejections influencing the overall evolution of that system. The evolution of the mean inclination for DPs in our models is shown in the bottom panel of fig. 3. Again, we observe a similar behaviour to the one described above for the test particles, finding lower inclinations for larger GP masses, and also that the final inclination values are consistently lower. In this case, almost all the systems show a monotonic rise in mean inclination over the duration of the simulations with no turnover. 
Only the most massive GP systems (\(>220\) M\({}_{\oplus}\)) seem to reach a peak in their respective mean inclination within the duration of the simulations. Systems with lower mass GPs, M\({}_{\rm GP}<30\) M\({}_{\oplus}\), are not yet slowing down at the end of the simulations. We also find that, for a given GP mass, more massive discs will produce larger mean inclinations. Overall, the greatest inclination values lie below 30\({}^{\circ}\) for the DPs, regardless of GP mass, and take longer to undergo the same relative degree of excitation, as compared to the test particles in the same systems that can reach values close to 45\({}^{\circ}\). ### Evolution of the Discs as a Whole To better understand the evolution of discs as complete systems, containing both massive and massless particles, as well as the relationship between the two, we begin by comparing the final values of the mean orbital parameters and survival fractions of test particles and DPs. In fig. 4 we show the final distribution of mean eccentricities (left panel), mean inclinations (middle panel), and survival fractions (right panel), of both populations, for all 286 systems; the different colors indicate the mass of the GP in that system, while the size of the dots represents the initial mass of the corresponding debris disc. In the left and middle panels of fig. 4 we see that for both mean eccentricity and inclination, the distribution of final values remains above the identity line (indicated by the solid black line) except for one outlier case in eccentricity which corresponds to one of the models with the most massive GP. We can see that the final conditions for all models closely follow a straight line. A comparison of the corresponding panels in figs. 2 and 3, shows that massless particles are more easily disturbed than DPs (as seen in fig. 4); it can also be seen that the evolution of the eccentricity is much less mass dependent than that of the inclination. We applied a linear fit in both cases (dashed black lines in the left and middle panels of fig. 4) to quantify how efficient the stirring of test particles is when compared to that of massive DPs. The best fit for the models in the eccentricity panel is given by \(\left<e_{particles}\right>=1.366\left<e_{DPs}\right>+0.004\) and for the final inclination \(\left<i_{particles}\right>=1.916\left<i_{DPs}\right>-2.602^{\circ}\); these fits show that the stirring of test particles is more efficient than that of DPs by factors of 1.366 for eccentricity and 1.916 for inclination. Both of these fits lie close to the gray star representing the initial conditions of all the distributions. As in figs. 2 and 3, fig. 4 shows that the more massive discs (larger dots) are more efficient at stirring their particles than less massive ones (smaller dots) but that more massive GPs have a stabilising effect on the discs after a quick removal of the initially unstable minor bodies (both DPs and test particles); this comes about because massive GPs will tend to eject particles that pass close to them, Figure 3: Same as fig. 2 but for the evolution of the survival fraction (top), mean eccentricity (middle), and inclination (bottom) of DPs in the discs. The color of the lines in both panels indicates the mass of the GP in the models, while line thickness represents the mass of the disc, with thicker lines corresponding to more massive discs (pale lines in the still frame, all but the last frame in the animated figure). 
The thickest lines in the still frame (and those of the last animated frame), correspond to the average of all disc masses for any given GP mass (An animated version of this figure can be found online at Figshare). whereas lighter GPs will perturb their orbits without ejecting them from the system. The final distribution of survival fractions (right panel of fig. 4) remains below the identity line, illustrating the greater difficulty for a planet in ejecting massive objects (DPs) than massless ones (test particles). A strong dependence on GP mass is observed in the final survival rate for both populations of minor bodies, demonstrating the efficiency of ejection. We find the relationship between the surviving fractions (\(SF_{particles}\) and \(SF_{DPs}\)) is best represented by a 3rd order polynomial of the form: \[SF_{particles} = 0.932~{}SF_{DPs}{}^{3}-0.703~{}SF_{DPs}{}^{2}\] \[+~{}0.746~{}SF_{DPs}-0.031.\] As with eccentricity and inclination, extrapolation of this trend towards the less perturbed discs leads to the gray star representing the initial conditions; for survival fractions, there is also an obvious extrapolation to more violent systems and we find that our trend leads toward the (0,0) point where all particles would be ejected. The scatter of simulations around this trend line is generally more pronounced for the systems with higher GP masses (in the survival of both test particles and DPs). This is to be expected as it is interactions with the GP in each system that will dominate the removal of smaller bodies (either by collision or ejection). We find that the number of test particles removed by collisions remains approximately constant over the simulation grid, comprising about 2% of the particles over the duration of each model run. By contrast, the number of ejection events is strongly correlated with the GP mass, with removals initially about 5%, and swiftly becoming greater by an order of magnitude or more with up to 95% during a model run. As the GP mass decreases so too does the ejection efficiency, and they will only dynamically heat their companion discs rather than deplete them. This leads to a lower dispersion in the survival of DPs, but a comparable scatter in test particle ejection. The most massive GPs exhibit the tightest correlation with the observed trend. In these simulations, the GP rapidly stirs and depletes the disc (cf. fig. 2) and if any minor body subsequently migrates into the perturbation region of the GP it is swiftly removed. The most massive discs in the simulations for a given GP mass tend to lie below the trend line identified by section 3.3. This indicates segregation by disc mass within the distribution of surviving minor bodies, where the more massive (initial) discs are more depleted in both test particles and DPs for a given GP mass. This is the natural consequence of greater dynamical stirring by more massive individual DPs within the more massive disc for a given system, leading to particles (and DPs) passing into close interaction with the GPs. This tendency weakens and breaks down as the GP mass decreases, representing the decreasing capacity of the GP to deplete mass from the disc. Most of the analysis of sections 3.1 and 3.2 is focused on the point of view of the models, this is: we are classifying each model according to its initial conditions. However, this is not directly applicable to observations. 
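The fitted relations quoted above — linear for the final mean eccentricities and inclinations, and a third-order polynomial for the survival fractions — can be reproduced with an ordinary least-squares polynomial fit. The sketch below is illustrative only: the file and variable names are placeholders standing in for however the per-system final means are stored, and are not outputs of the actual simulation pipeline.

```python
import numpy as np

# Placeholder arrays: one entry per simulated system (286 in total), holding the
# final mean eccentricity, mean inclination (deg), and survival fraction for the
# dwarf planets (dp) and the test particles (tp).
e_dp, e_tp = np.loadtxt("final_mean_e.txt", unpack=True)
i_dp, i_tp = np.loadtxt("final_mean_i.txt", unpack=True)
sf_dp, sf_tp = np.loadtxt("final_survival_fraction.txt", unpack=True)

# Linear fits <e_particles> = a <e_DPs> + b, and likewise for the inclination.
a_e, b_e = np.polyfit(e_dp, e_tp, deg=1)
a_i, b_i = np.polyfit(i_dp, i_tp, deg=1)

# Third-order polynomial relating the survival fractions of the two populations.
c3, c2, c1, c0 = np.polyfit(sf_dp, sf_tp, deg=3)

print(f"<e_p> = {a_e:.3f} <e_DP> + {b_e:.3f}")
print(f"<i_p> = {a_i:.3f} <i_DP> + {b_i:.3f} deg")
print(f"SF_p  = {c3:.3f} SF_DP^3 + {c2:.3f} SF_DP^2 + {c1:.3f} SF_DP + {c0:.3f}")
```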
From an observational point of view, it is more interesting to characterise a model according to its current parameters, and while the GP mass will not change, the disc mass will change with the ejection of DPs. Therefore, similar to the animated fig. 2, in the animated version of fig. 5 we show the time evolution of our 286 models by plotting the survival fractions, mean eccentricities, and mean inclinations of surviving particles at each time step, as a function of the evolving mass of the disc, instead of its initial mass. In the top panel of fig. 5 we show the survival fraction of test particles, where each snapshot corresponds to a 10 Myr evolution. The color of each circle indicates the particle survival fraction present in each disc at that time, with darker colours representing a lower survival fraction. We can see that the evolution of the survival rate is fastest for the most massive GP and (initial) disc mass combinations, with more than 50% depletion of those discs occurring within the first few tens of Myr; in the same time frame, barely any ejections have occurred amongst the lower mass systems. By 100 Myr, systems with GP masses greater than 60 M\({}_{\oplus}\) have experienced substantial ejection, losing up to half the particles (but not necessarily half their mass), whereas systems below that have yet to experience any substantial ejections. At the 500 Myr point, the most massive systems have lost up to 90% of their initial particles and only the least massive GP/disc systems are untouched by ejections. Beyond this time up to 1 Gyr the overall picture remains constant and the systems' evolution is more gradual. If we focus on a fixed small area of the grid, instead of following Figure 4: Comparison of the final mean orbital parameters and survival fractions of DPs _vs_ test particles. The left panel shows the distribution of mean eccentricities, the middle panel for mean inclinations, and the right panel for survival fractions. The colors indicate the mass of the GP in the system, while the size of the dot represents the initial mass of the debris disc. The identity is indicated by the solid black line, while best fits are indicated by dashed black lines. Linear fits were done for both eccentricity and inclination, while a third-order polynomial was fitted to the survival fraction. The gray star in each panel represents the initial conditions of our systems. the evolution of individual coloured circles, the behaviour of the survival fraction in that region becomes even more extreme, e.g., for a disc mass of \(\sim 7\) M\({}_{\oplus}\) the difference in survival fraction goes from \(\approx 90\%\) (at low GP masses) to \(\approx 10\%\) (at high GM masses) during the 1 Gyr simulation. In the same sense, one should look at the animated middle panel of fig. 5, following the eccentricity evolution, as we looked at the animated top panel, i.e. we should focus on an area and not let our eyes drift away from it; by looking at a column centered at around \(\sim 10\) M\({}_{\oplus}\), we see that the eccentricity slowly rises with time; after the first few time steps the most eccentric models were those with a GP mass of \(\sim 200\) M\({}_{\oplus}\), but with time this maximum went all the way down to 10 M\({}_{\oplus}\) (although this took the best part of the 1 Gyr of our simulations). 
Another thing to note is that, while many individual dots reach saturation within our simulations, by looking at a fixed area we see that it keeps on evolving, mostly because simulations with more massive initial discs keep passing through our observation area (akin to the difference between Eulerian vs. Lagrangian evolution). By the end of our simulation, we find that there is a triangular region, in the lower right of the plot, that is mostly saturated with mean eccentricities \(\sim 0.6\). This is quite extreme for discs that were initially dynamically cold with \(\left<e_{0}\right>=0.025\), a \(\sim\)25 fold increase. Finally, for the bottom panel of the animated fig. 5, following the inclination evolution, we observe that the evolution of inclination is slower than for eccentricity. After \(\sim 100\) Myr the inclination is mostly homogeneous with only the most massive disc models showing signs of a significant stirring. During the next hundreds of Myr, a differentiation in the level of stirring becomes evident for individual columns, which seem to have uniform colors evolving in time, i.e., the excitation level for the inclination is more clearly dependent on the remaining debris disc mass than on the initial disc mass or the GP mass. At the end of the simulations, the maximum stirring has occurred for the most massive debris discs and the least massive GPs, however, since the ejection fraction grows with GP mass, as time passes, what would be an equally excited component in our most massive GP models has already been depleted. A skewed initial grid (rather than the rectangular one we considered here), with more massive debris discs for the more massive GP systems, might fill in some of this depleted parameter space. However, as the disc evolution timescale decreases with increasing disc mass, the observed regions of parameter space that are vacated in our simulations are necessarily void given the duration of the simulations. In this sense the structure we observe in our grid at 1 Gyr is not fixed; longer integration would necessarily drive all the systems to lower disc masses, leading to a more pronounced "gap" in the top right of these plots. This diagram, therefore, provides some constraints on the evolutionary pathway undertaken by observed debris discs with the constraints of the stellar age and inferred disc mass. ### Evolution of GPs Besides the evolution of the debris disc systems as a whole, the GPs in our models experience modifications to their initial orbital parameters; this is due to the interactions between the GP and massive DPs, which results in the interchange of angular momentum that leads to an overall increase in their eccentricity and ultimately to ejections of some DPs. Although small in most cases, the orbital perturbations experienced by some of the GPs in our models can be significant; specifically: large inward orbital drifts, of up to 10 au, are observed in systems with the less massive GPs and the most massive debris discs, i.e. in those systems with the largest mass ratio, as given by \(M_{DD}/M_{GP}\). In fig. 6 we only show the final distribution, in logarithmic values, for the eccentricities (left panel), inclinations (middle panel), and semimajor axis changes (right panel) of the GPs in our 286 models, as a function of the logarithm of the mass ratio of the system. In log-log space, those three distributions can be well described by linear fits. 
At the end of the simulations we found that most of our GPs would be considered to have remained in cold orbits (only 3, out of 286, have \(e>0.1\), while only 5 have \(i>5^{\circ}\)); however, about 30% of the GPs in our simulations have lost a significant fraction of their angular momentum, having a noticeable decrease in their semimajor axis by the 1 Gyr mark, \(a<0.9a_{0}\). At any point during the simulations, the three distributions (\(e\), \(i\), and \(\left|\Delta a\right|/a_{0}\)) can be well described by linear fits that slowly evolve with time, with both the absolute value as well as the mass fraction dependence slowly increasing. By fitting all simulations at 100 Myr Figure 5: Animated figures illustrating the surviving fraction of test particles (top), mean eccentricity (middle) and mean inclination (bottom) as a function of the evolving disc mass vs. GP mass. The time step is in increments of 10 Myr (An animated version of this figure can be found online at Figshare). intervals, and subsequently fitting a time dependence to the linear fits we obtain: \[\log_{10}(e_{GP}) =0.3383\left(\frac{T}{\mathrm{Myr}}\right)^{0.1005}\log_{10}(M_{DD}/ M_{GP})\] \[\quad+0.2061\log_{10}\left(\frac{T}{\mathrm{Myr}}\right)-2.0871, \tag{2}\] \[\log_{10}(i_{GP}) =0.6454\left(\frac{T}{\mathrm{Myr}}\right)^{0.0751}\log_{10}(M_{DD }/M_{GP})\] \[\quad+0.3265\log_{10}\left(\frac{T}{\mathrm{Myr}}\right)-0.9025,\] (3) \[\log_{10}(\left|\Delta\alpha\right|/a_{0}) =0.3138\left(\frac{T}{\mathrm{Myr}}\right)^{0.1281}\log_{10}(M_{DD }/M_{GP})\] \[\quad+0.4582\log_{10}\left(\frac{T}{\mathrm{Myr}}\right)-2.0646, \tag{4}\] for eccentricity, inclination, and semimajor axis change, respectively. ## 4 Discussion In this work, we did not focus on the sculpting process of the edges of the discs, nor on the disc shapes; this is why we choose 10 Hill radii as the inner edge of the discs and not 5 Hill radii as been done elsewhere (e.g. Pearce & Wyatt, 2014). We further assumed a GP in a circular and planar orbit to minimise the impact of the planet on the disc. We instead focused on the stirring process produced as a result of the interaction of the massive DPs embedded in the debris discs, but such a process was somewhat dominated by the presence of the GP. We focused on determining the stirring levels as functions of both the GP and debris disc masses (assuming the shapes and disc edges are imprinted on the debris discs by the giant planetary companion). Our model spans GP masses between 10 M\({}_{\oplus}\) (approximately 60% the mass of Neptune) and 316 M\({}_{\oplus}\) (approximately the mass of Jupiter); Pearce et al. (2022) estimate that Neptune to Saturn-mass planets are the minimum needed to stir most of their 178 modeled discs (though some needing Jupiter mass planets, assuming maximum eccentricities of 0.3). Similarly, the range of disc masses in this analysis, 3.16 to 31.6 M\({}_{\oplus}\), are consistent with expectations based on both observations and theoretical considerations (Mulders et al., 2021; Krivov & Wyatt, 2021). Several other studies predict larger masses (\(>\) 100 M\({}_{\oplus}\)) in order for debris discs to be self-stirred (e.g. Krivov & Booth, 2018; Krivov & Wyatt, 2021). Nonetheless, in this work, we found that small masses in debris discs can result in large stirring values, up to a 25-fold increase in the mass extreme cases. 
Thus an efficient stirring is possible for small disc masses (\(<\) 10 M\({}_{\oplus}\)), if ever perhaps containing larger than expected perturbers, as some of the DPs present in the most massive discs we considered have assigned masses close to 1 M\({}_{\oplus}\). Our massive objects are initially thought to be real 'dwarf planets' (DPs), as long as we adopt the definition of DP as an object that has not cleared its neighborhood from debris (yet). We could expect massive debris discs (much more massive than our Kuiper belt) to contain more massive objects, though this is not necessarily true, depending on planetary formation mechanisms, disc mass density, etc. Recent studies on dust formation and excitation place limits to the most massive objects present in massive debris discs to be around 5 times the mass of Pluto. Based on spatially resolved observations of the vertical scale heights of the debris discs around AU Mic and \(\beta\) Pic, the most massive bodies present in those discs could be up to 9\(\times 10^{-5}\) and 0.4 M\({}_{\oplus}\), respectively (Daley et al., 2019; Matra et al., 2019). However, more massive objects might be present in debris discs (without leaving a piece of observational evidence, such as bumps or gaps), if we assume the mass range in planetesimals scales linearly with the overall mass of the disc. Limiting the mass of the DPs to be similar to the mass of Pluto would diminish the stirring effect in both \(e\) and \(i\). According to shorter duration simulations (\(t=50\) Myr) with 41, 100, and 250 DPs (and 410, 1 000, and 2 500 test particles) we ran as consistency checks, this correction should be approximately a factor of 1.5, and definitely less than a factor of 2. Notably, by using Mercury, the problem becomes computationally intractable when considering more than a few hundred massive DPs; new tools are required to expand the grid with a greater number of DPs and particles, such as GPU-based simulations. We leave this question open for future work. The main stage for excitation evolution in our simulations occurs in timescales of the order of a few and up to 100 Myr; while the time required for our systems to acquire their final configurations, i.e. reach their saturation levels, is of the order of 150 Myr to more Figure 6: Final orbital values for the GPs as a function of the mass ratio. The left panel shows the distribution of final eccentricities, the middle panel for final inclinations, and the right panel for final semimajor axes changes. As in fig. 4, the colors indicate the mass of the GP in the system, while the size of the dot indicates the initial mass of the debris disc. Linear fits in these Log-Log planes are indicated by the dashed lines (see text for details). than 1 Gyr scales. These timescales are similar to the ages of host stars for many observed debris disc systems; thus we would expect that many of the observed systems with similar physical parameters as those covered in this work, would already be settled in their final configuration, currently experiencing a quiet steady-state evolution. The timescales derived here are a function of the chosen architecture of the model, adopting a 1 M\({}_{\odot}\) star with a planetary companion at 30 au and a Kuiper belt-like disc beyond that. However, the evolutionary timescales can be easily scaled for different stellar masses or disc semi-major axes, as the dynamics should be self-similar, provided physical collisions are a negligible cause of removal of bodies. 
For a different central star mass, all the masses should scale proportional to the new stellar mass, and the timescales should be modified as the inverse of the square root of the mass; for a different GP orbital radius, the masses should not be modified, and the timescales should be scaled as the orbital radius to the 3/2 power. For the Vega system, with a stellar mass \(\sim\)2 M\({}_{\odot}\) and a planetesimal belt around 100 au (Matra et al., 2020; Marshall et al., 2022), the equivalent timescale would be nearly three times longer than the evolution of the models considered here (depending on the exact location of the GP used in Vega). A debris disc is the result of a collisional cascade within a planetesimal belt triggered by dynamical excitation, either intrinsically by the largest planetesimals within the belt (Krivov and Booth, 2018) or extrinsically by an external perturber (e.g. Mustill and Wyatt, 2009). The range of relative velocities among planetesimals, required to trigger the onset of the collisional cascade, is typically estimated as 100 to 300 m/s (e.g. Kenyon and Bromley, 2001). On the other hand, such relative velocities can be estimated from the average orbital parameters of the dust-producing small objects in the discs, as \(V_{rel}=V_{K}\sqrt{1.25e^{2}+I^{2}}\), where \(V_{K}\) is the Keplerian velocity at the distance \(a\) from the star (Lissauer and Stewart, 1993; Wyatt and Dent, 2002). Krivov and Booth (2018) argued, though, that the average inclinations are not terribly important when determining the relative velocities among planetesimals, since eccentricities grow much faster than inclinations in debris disc models. Thus, one can simply estimate the relative velocities from the root mean square eccentricity of the planetesimals as \(V_{rel}=V_{K}\sqrt{\left\langle e^{2}\right\rangle}\). In any case, an estimation of such relative velocities in all of our models shows that values close to 1 km/s are quickly reached, in less than 10 Myr, regardless of the initial debris disc mass or the GP mass of the model. Indeed, velocities of collisions within the belt modeled here range from \(\sim\)250 m/s to \(\sim\)2000 m/s, placing them safely on the side of a collisional cascade capable of producing dust. A population of planetesimals on eccentric orbits within a debris disc would produce a halo of millimetre dust grains. Such structures have been identified in ALMA observations of several systems, including HR 8799 (Geiler et al., 2019), HD 32297 and HD 61005 (MacGregor et al., 2018). The typical eccentricity of dust grains within debris discs inferred from their spatially resolved belts lies in the range 0.1 to 0.3 (based on 11 discs, see figure 9 of Marino, 2021); this level of eccentricity is consistent with the mean eccentricity induced by the dwarf planets in this set of simulations. ## 5 Summary and Conclusions In this work, we performed a suite of 286 numerical simulations to explore the stirring effects that a combination of giant and dwarf planetary perturbations would have on the long-term evolution of initially cold debris disc models. Our systems are formed by a solar mass star, a giant planet initially located at 30 au in a circular and planar orbit, and 100 massive dwarf planets embedded in a disc described by 1000 test particles. The orbital distribution of the discs was drawn randomly for small values of eccentricity (between 0 and 0.05) and inclination (between 0\({}^{\circ}\) and 5\({}^{\circ}\)). 
We initially located the inner edge of our discs at 10 Hill radii from the GP, with a total width of 30 au. Our 1 Gyr long simulations take into account the perturbations from the GP and the DPs over test particles and among themselves. The evolution timescale for the eccentricity and inclination depends mostly on GP mass, where the simulations with more massive GPs evolved faster than those with less massive GPs. On the other hand, the limit to the heating depends on both GP mass and disc mass, with large disc masses and small GP masses being able to heat the disc more than simulations with light discs and/or heavy GPs. Part of the reason why massive GPs are less efficient at heating the disc is their tendency of ejecting "warm" particles before they can get extreme values of either eccentricity or inclination. In all models the mean inclination rises quickly (or at least relatively quickly) before slowing down, only the most massive GPs seem to be able to level off before the 1 Gyr mark. The eccentricity evolves faster with many of the simulations reaching a plateau before the end of the simulation. Very massive GPs heat their discs very quickly which then slowly cool down by ejecting the more excited particles. The effect on the eccentricity is larger than for the inclination. Massless particles, which in real systems could be considered as the less massive members of the discs, such as cometary nucleii, are more mobile than massive objects (DPs), therefore they become 'hotter', i.e. more eccentric, more inclined, and are easier to be ejected (they have a poorer survival rate). Nonetheless, DPs reach significant stirring levels as well and have only slightly better chances of survival than test particles. The values of both eccentricity and inclination for test particles at a given time have a better correlation with the remaining mass of the debris discs than with the GP mass or the initial debris disc masses; this is particularly evident for the inclination. GPs themselves are perturbed by their interactions with massive DPs, the most significant perturbations occur when the mass of the disc is comparable to the mass of the GP. In such cases, a significant inward migration of the GP takes place, of up to \(\sim\) 10 au, leaving a stirred disc that is not able to cool off by ejecting "warm" particles, with a far away GP closer to its star. The masses in debris discs explored in this work, and specifically their evolving remaining masses, are indeed very small when compared to those expected to be able to stir the disc by the self-stirring scenarios (Krivov and Wyatt, 2021; Krivov and Booth, 2018), but here we highlight the fact that even with such small masses, which involve a small number of massive perturbers (100 DPs initially), and perhaps more importantly, not-so-massive objects, are capable of increasing in an important percent, while acting together with the GP, the eccentricities and inclinations of debris disc particles. This result is similar to the enhancement of cometary production in the Kuiper belt found by Munoz-Gutierrez et al. (2019) and could have additional implications for the production of exocomets in extrasolar planetary systems. Taking everything into account, we have found that a combination of perturbers, consisting of embedded dwarf and external giant planetary masses, is in general more efficient in the stirring of cold debris discs than one or the other mechanism acting independently. 
## Data Availability The data underlying this article are available in the article and in its online supplementary material. The animations, supplementary data, and analysis scripts are provided for public access on Figshare. ## Acknowledgements The authors thank the referee, Alex Mustill, for his constructive and helpful comments. JPM acknowledges research support by the Ministry of Science and Technology of Taiwan under grants MOST107-2119-M-001-031-MY3 and MOST109-2112-M-001-036-MY3, and Academia Sinica under grant AS-IA-106-M03. _Software:_ This work has made use of the symplectic integrator package mercury (Chambers, 1999), and the Python modules Matplotlib (Hunter, 2007) and NumPy (Harris et al., 2020).
2310.14429
Text generation for dataset augmentation in security classification tasks
Security classifiers, designed to detect malicious content in computer systems and communications, can underperform when provided with insufficient training data. In the security domain, it is often easy to find samples of the negative (benign) class, and challenging to find enough samples of the positive (malicious) class to train an effective classifier. This study evaluates the application of natural language text generators to fill this data gap in multiple security-related text classification tasks. We describe a variety of previously-unexamined language-model fine-tuning approaches for this purpose and consider in particular the impact of disproportionate class-imbalances in the training set. Across our evaluation using three state-of-the-art classifiers designed for offensive language detection, review fraud detection, and SMS spam detection, we find that models trained with GPT-3 data augmentation strategies outperform both models trained without augmentation and models trained using basic data augmentation strategies already in common usage. In particular, we find substantial benefits for GPT-3 data augmentation strategies in situations with severe limitations on known positive-class samples.
Alexander P. Welsh, Matthew Edwards
2023-10-22T22:25:14Z
http://arxiv.org/abs/2310.14429v1
# Text Generation for Dataset Augmentation in Security Classification Tasks ###### Abstract Security classifiers, designed to detect malicious content in computer systems and communications, can underperform when provided with insufficient training data. In the security domain, it is often easy to find samples of the negative (benign) class, and challenging to find enough samples of the positive (malicious) class to train an effective classifier. This study evaluates the application of natural language text generators to fill this data gap in multiple security-related text classification tasks. We describe a variety of previously-unexamined language-model fine-tuning approaches for this purpose and consider in particular the impact of disproportionate class-imbalances in the training set. Across our evaluation using three state-of-the-art classifiers designed for offensive language detection, review fraud detection, and SMS spam detection, we find that models trained with GPT-3 data augmentation strategies outperform both models trained without augmentation and models trained using basic data augmentation strategies already in common usage. In particular, we find substantial benefits for GPT-3 data augmentation strategies in situations with severe limitations on known positive-class samples. _Keywords_- Classification, Fraud, Machine Learning, LLMs, NLP, Text Generation ## 1 Introduction Detecting malicious activity has been a task tackled by machine learning classifiers since the 1980s [8]. Classifiers are first trained with datasets of both positive (malicious) and negative (benign) samples, and then evaluated in settings that should reflect their intended deployment scenario [5]. It is well known that classifiers will perform better with larger datasets, as long as the data is not of lesser quality [30, 14]. This can sometimes be a problem when working in the security domain; malicious content like fraud, malware delivery, and offensive comments are generally a small minority of all data, and cases are often under-reported [31, 17, 34]. This means that while negative cases are often plentiful, a lack of positive cases can be a limitation. Dataset augmentation is a technique used to artificially expand the size of a dataset by creating new samples based on the data available [13]. Existing techniques for augmenting text data include swapping words for synonyms or translating a sentence to another language and back again [19]. These techniques are intended to produce small differences without changing the fundamental nature of the data. These methods have substantial limitations; lacking a complex understanding of the meaning and structure of the data they are augmenting, they often either alter the original samples very little or produce heavily-distorting mutations that may prevent them from being considered a true example of the intended class. This paper posits that security classifiers can be improved by creating new positive training samples with a text-generating large language model, and especially in cases where true positive data is limited. Modern language models are good at writing text that is logical and coherent. They can often be tuned with a small number of samples on a given topic. This allows the generator to match the desired style, and focus on a certain subject. In a situation where data is limited, the few true samples that are available could be used to automatically create more in a similar style. 
Our aim for this technique is that the generated samples will be of high enough quality to measurably improve the performance--on unseen data--of security classifiers that use them as training data. This, in turn, would help produce more accurate detectors for malicious content. In particular, this technique could greatly support security classification tasks in situations where there is restricted access to samples, such as small enterprises attempting to build security into their systems or software, or law enforcement agencies investigating rare or underreported crime. While other work has discussed this technique more generally in natural language processing (NLP) tasks [26, 19, 39], prior research has not considered elements key to the context of security-focused tasks. Data-limited scenarios have been examined, but not imbalanced and adversarial datasets, and the fine-tuning process required has not been rigorously explored. There has also been significant progress in text generation techniques since prior investigations took place, presenting as-yet unevaluated opportunities. To assess the viability of our approach, we apply a large language model to augment datasets for three different detection tasks. In each case we first replicate current best-performing classifiers from the literature. We then present experiments demonstrating the impact of limiting the availability of positive-class data, and show how augmenting the training dataset with a modern large language model can repair this impact. Our overall contributions include: 1. We evaluate modern text generation as a method of dataset augmentation for malicious content classification tasks. We find that, with a minor exception, models trained with such data augmentation outperform both models lacking augmentation and models using more widely-used basic data augmentation strategies. 2. We provide what we believe is the first investigation into fine-tuning text generators in the context of _disproportionately_ data-limited datasets, including the effect of different levels of data reduction. We find that modern text generators can be especially effective for data augmentation in cases where positive-class samples are least available. 3. We assess different methods of fine-tuning language models for text generation as a method of dataset augmentation. We find mixed results pointing to possible tradeoffs between strategies depending on classifier selection and the quantity of data available for fine-tuning. The remainder of this paper proceeds as follows. In Section 2 we provide a brief introduction to text generation and natural language classification within security, and highlight closely related work within the field. In Section 3 we outline our experimental process for data limitation, fine-tuning and data augmentation. Section 4 presents the results of applying this process to three security classification tasks. We discuss our observations and the implications and limitations of our method in Section 5, before concluding with our key takeaways for future work. ## 2 Background ### Dataset Augmentation in NLP A lack of data presents a major problem for security classification tasks. Building effective classifiers often requires large amounts of data which can only be collected after an incident has taken place. This means that before these filtration systems become fully functional, the public is repeatedly put at risk. 
Researchers have long been looking at how to lower the number of real-world samples needed before it is possible to identify patterns. It is widely known that classifiers with a learning component will perform better with larger datasets, given the data is not of lesser quality [30, 14]. This is because it allows a classifier to better generalise to a given class. Limited data can cause overfitting. Dataset augmentation techniques are strategies to artificially expand the size of a dataset, by creating new samples based on the ones available [13]. For example, a data scientist using a set of pictures may rotate, crop, or hue shift images by a small offset and add these new samples to their set. These samples are just edited versions of existing data, but can provide additional information to the model. When done correctly, these techniques help to reduce overfitting and improve performance [32] by providing more generalised data. Dataset augmentation for textual data is arguably more challenging due to the complex grammar of human languages. Small edits to punctuation, word choice, or structure can completely change the meaning of a sentence. Even so, simple techniques such as the random removal or reordering of words can measurably improve performance [35]. #### 2.1.1 Synonym Replacement Synonym replacement is a computationally complex strategy but has a simple concept. Words are extracted at random from a sample and replaced with either synonyms or closely related words. For example, _cat_ could become the synonym _feline_ or the closely related word _dog_. In many cases, the result is a new sample with the same meaning but slightly different word choices. This however is not a perfect strategy as some swaps can change a sample entirely. A _two-dimensional plane_ could be very different to a _two-dimensional aircraft_. #### 2.1.2 Word Insertion Word insertion is a technique that can be performed in several ways. With a random approach, words from a dictionary could be placed at any point in a sample. It is however common to use more advanced techniques. These fall into two major categories. One is to use word embeddings, the other is to use a language model. Word embeddings are representations of words, usually taking the form of a vector. These are assigned in such a way that similar words are closer to each other in the vector space are more related to each other. These can be used to intelligently insert more relevant words at points throughout a sample. Language models can also be used for this purpose. Given a sample, 'blank' words can be inserted at random points, and the language model can be used to predict what the words should be. Given that language models are often trained by blanking out words from real sentences, they can perform well at this task. ### Text Generation for Dataset Augmentation The first use of text generation as a form of dataset augmentation was by Wu et al. [36]. They introduced CBERT, a language model based on BERT [12] designed specifically for dataset augmentation, which worked by deterministically blanking out words in true samples and filling in the gaps with the language model. Fully generating samples as augmentation data was first presented by Anaby-Tavor et al. [3]. Their methodology is to fine-tune a pre-trained GPT-2 model with a base dataset. This is then used to generate a set of samples \(10\times\) greater than the original dataset. 
They take advantage of the fact that the classifier they are using reports confidence levels as well as a class label. Any sample with a confidence level below a certain threshold is dropped, and the remainder are added to the training set. They were able to show improvements of 2% to 58% depending on the classifier and classification task. A criticism of this work is that while they compared the performance of their generator-augmented datasets to basic techniques, they did not extend the filtration step to other methods. This may have unfairly biased their results. Kumar et al. [19] present a more general method. They do not include a filtration step, instead directly using the language model output. They also extend testing to three language models, each constructed in a different style. They opted to work on three different tasks, namely sentiment classification [33], intent classification [10], and question topic classification [20]. GPT-2, the largest model tested, did not generally perform as well as the other two models. It was able to produce "very coherent" text; however, the class was not translated well into the samples it generated. One notable result of this investigation was unusually high effectiveness when working in low-data scenarios. Quteineh et al. [26] further investigate the use of generator augmentation while working with extremely small datasets of only 5 or 10 samples per class. They present methods for highly effective use of this data, emphasising efficiency. These results are again verified on varying tasks, and the authors note that the methods described should apply to any domain or language, given a suitable language model. It is worth noting, however, that they do make use of manual labelling. With the introduction of GPT-3 [7] in 2020, researchers were given access to a much more powerful model. Yoo et al. [37] showed impressive improvements with this new class of generator. They use a different labelling methodology with continuous 'soft' class labels, rather than discrete 'hard' classes. This helps with the transfer of knowledge throughout the models by providing what is effectively a measure of confidence with each sample. This made their implementation more in line with the style of Anaby-Tavor et al. [3], rather than Kumar et al. [19]. Initially, we believed our research may be the first to apply a modern language model, such as GPT-3, for dataset augmentation with discrete class labels. This topic has been recently investigated by Sahu et al. [28]. However, they focus on different aspects of the problem, such as sampling parameters, and apply the technique in the field of intent classification, with generally high numbers of classes (7, 64, 77, and 150 for each of their 4 chosen datasets respectively). We investigate how this idea can be applied to the specific domain of data-limited security classification. This area deals with similar language to many standard NLP classification tasks. Security classification can often be reduced to other fields such as a combination of sentiment and intent classification. These tasks have been well researched in recent years [25, 6, 4, 18, 27, 1]. However, data-limitation in particular has only been explored in a proportionate setting, with a lack of research exploring heavily imbalanced scenarios where one class is common and others are rare - a scenario that is commonly the case in the classification of malicious behaviour. 
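As a concrete illustration of the synonym-replacement scheme described in Section 2.1.1 (the style of basic augmentation used later as the bda1 baseline), the sketch below swaps randomly chosen words for WordNet synonyms via NLTK. It assumes the WordNet corpus has been downloaded (`nltk.download('wordnet')`) and is a minimal example, not the exact implementation evaluated in Section 4.

```python
import random
from nltk.corpus import wordnet

def synonym_replace(text, n_swaps=2):
    """Return a copy of `text` with up to `n_swaps` words replaced by WordNet synonyms."""
    words = text.split()
    positions = list(range(len(words)))
    random.shuffle(positions)
    swapped = 0
    for idx in positions:
        if swapped >= n_swaps:
            break
        # Gather alternative surface forms for this word, excluding the word itself.
        lemmas = {lemma.name().replace("_", " ")
                  for synset in wordnet.synsets(words[idx])
                  for lemma in synset.lemmas()}
        lemmas.discard(words[idx])
        if lemmas:
            words[idx] = random.choice(sorted(lemmas))
            swapped += 1
    return " ".join(words)

# Example: "cat" may become "feline", or a more distant relative drawn from another synset.
print(synonym_replace("the cat sat on the mat"))
```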
## 3 Method ### Task & Dataset Selection We selected three different text-based security and online harm tasks to evaluate our dataset augmentation approach. In each case, we first replicated recent classifiers from the literature before measuring the effect of data loss and the viability of augmentation solutions. The implementations we selected were: offensive language detection by Dai et al. [11], deceptive review detection by Salunkhe [29], and SMS spam detection, by Chandra & Khatri [9]. #### 3.1.1 Task 1: Offensive Language Detection Given the scale of modern social media, automated classifiers are necessary tools for moderation interventions in platforms attempting to prohibit insults and textual abuse. Dai et al. [11] propose a classifier based on BERT, a pre-trained language model developed by researchers at Google [12]. We include this classifier as it already contained a language model. BERT is a medium-size model with hundreds of millions of parameters, so there is potential for an improvement in performance from augmenting the model's training data using a larger model. On the other hand, the fact that there is already information from a language model being taken into account may mean that there is less to gain from using another. The researchers make use of the Offensive Language Identification Dataset (OLID) [38], which is a dataset of 14,200 annotated Twitter posts. There are 9160 (approximately 65%) negative samples, with the rest being offensive and sorted into one of four categories based on their target. Every sample is of English text. Some minor prepossessing has taken place, such as URLs being shortened to the term "URL" and references to specific users have been replaced with the term "@USER". An open-source implementation of their model has been made available1. Footnote 1: [https://github.com/wenliangdai/multi-task-offensive-language-detection](https://github.com/wenliangdai/multi-task-offensive-language-detection), accessed 24/04/2020 We focus on the BERT-based classifier from Dai et al. [11] which distinguishes between the two primary classes. This classifier has multiple different models. The most advanced of these uses subclass data to inform its primary class decision. For simplicity, we used a reduced model with this feature omitted. Even so, when generating new data we made sure to maintain correct proportions for every class, not just positive/negative. #### 3.1.2 Task 2: Deceptive Opinion Detection Reviews can be a key tool for consumers to evaluate the quality of products and services. This makes them a prime target for fraud. Opinion spam, or fake reviews, can be used to deceive consumers and sway them towards or away from a purchase. Salunkhe [29] proposes an attention-based bidirectional LSTM to classify these reviews as either truthful or deceptive. This model was chosen as it represents a complex deep network topology which reflects the architecture of many modern classifiers. The data used is a balanced set of hotel reviews from a range of sources. The truthful samples come from multiple review websites, while the negative samples are from Mechanical Turk2. The samples are described by Ott et al. (2011) [23] and Ott et al. (2013) [22]. There are a total of 1600 reviews, with 400 for each combination of opinion polarity and truthfulness. Salunkhe's open-source implementation3 contains not only their final classifier but also a selection of basic ML models, providing an additional dimension of comparison for this task. 
We primarily focus on the advanced classifier which performs best on this task, but also report results for augmentation under the various 'basic' classifiers. The advanced classifier is a combined CNN LSTM which makes use of both doc2vec and TF-IDF, as well as multiple preprocessing stages. The classifier itself has three convolutional layers with dropout and max-pooling between, plus one bidirectional LSTM layer and a final dense layer. Footnote 2: [https://www.mturk.com/](https://www.mturk.com/) Footnote 3: [https://github.com/ashishsalunkhe/DeepSpamReview-Detection-of-Fake-Reviews-on-Online-Review-Platforms-using-DeepLearning-Architectures](https://github.com/ashishsalunkhe/DeepSpamReview-Detection-of-Fake-Reviews-on-Online-Review-Platforms-using-DeepLearning-Architectures), accessed 24/04/2022 #### 3.1.3 Task 3: SMS Spam Detection SMS spam messages often attempt to coerce a target into performing some desired action, such as leading them into an advance fee fraud scam [15] or opening a link to malware. Manual review of these messages raises significant privacy concerns, creating a desirable application area for an automated classifier. Almeida et al. [2] assembled a dataset of such messages to enable machine-learning classification. The set contains 5574 messages, 747 (around 13%) of those being spam. Chandra & Khatri [9] propose a straightforward LSTM approach for this task. Their model contains an embedding layer which helps with learning the relationships between words. This model is similar to that by Salunkhe [29] but does not have any convolutional layers, making it a pure RNN, rather than a combined CNN-RNN like the opinion spam classifier. The model also makes use of pre-trained GloVe [24] vectors for word embeddings. In our reimplementation we use 100-dimensional vectors from the glove.6B dataset, trained with 6 billion tokens4. This was chosen as it presents an opportunity to compare how small differences may influence the effectiveness of the technique to be assessed. An open-source implementation of the model is available online5. Footnote 4: [https://nlp.stanford.edu/projects/glove/](https://nlp.stanford.edu/projects/glove/) Footnote 5: [https://github.com/Awesome12-arch/Detecting-the-Spam-messages-using-Keras-in-Python](https://github.com/Awesome12-arch/Detecting-the-Spam-messages-using-Keras-in-Python), accessed 24/03/2022 ### Generative Model We used GPT-3 [7] with the 'Curie' model for text generation. This model takes an arbitrary string as input and attempts to predict what should be written after it. Strings are encoded as lists of tokens, each approximately 4 characters in English. We experimented with supplying a single token prompt for every completion, similar to the approach used by Kumar et al. [19]. For example, in Task 1 the prompts used for negative samples were "NOT" and for untargeted offensive samples were "UNT". Qualitative inspection of samples showed poor performance, and we found that plain natural language produced better results. The previous examples became "A regular tweet -\(>\)" and "An untargeted offensive tweet -\(>\)" respectively. The other classes (targeting an individual, group or other) were updated in the same way. The addition of a consistent end sequence "-\(>\)" reduced failures from the model trying to write a longer prompt, rather than write a sample after the prompt. 
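A minimal sketch of this class-conditioned prompting is given below. The first two prompt strings are those quoted above; the remaining ones are illustrative guesses following the same pattern rather than the exact strings used. The `complete()` argument is a hypothetical stand-in for whichever GPT-3 completion interface is being called (no specific API signature is implied), and the per-class counts would be chosen to preserve the original class proportions.

```python
import random

# Natural-language prompts per class, each ending with the "->" separator.
# Only the first two strings are taken from the text above; the rest are assumed
# to follow the same pattern.
PROMPTS = {
    "NOT": "A regular tweet ->",
    "UNT": "An untargeted offensive tweet ->",
    "IND": "An offensive tweet targeting an individual ->",
    "GRP": "An offensive tweet targeting a group ->",
    "OTH": "An offensive tweet targeting another entity ->",
}

def generate_samples(complete, class_counts, max_tokens=60):
    """Draw `class_counts[label]` synthetic samples per class from a fine-tuned model.

    `complete(prompt, max_tokens=..., stop=...)` is a hypothetical wrapper around a
    text-completion endpoint; it is expected to return only the generated continuation.
    """
    samples = []
    for label, n in class_counts.items():
        for _ in range(n):
            text = complete(PROMPTS[label], max_tokens=max_tokens, stop="\n").strip()
            if text:  # discard empty completions
                samples.append((text, label))
    random.shuffle(samples)
    return samples
```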
### Experimental Procedure For each of the three tasks, we begin with a standard dataset and first replicate previously published state-of-the-art results on the relevant task. We then artificially introduce a lack of data into the dataset by removing some examples from the training set. Similar truncation methods are common for research in this area [3, 28, 39]. As illustrated in Figure 1, two factors are of interest at this stage. First, the proportion \(x\) of the data which is retained after truncation. We experimented with varying degrees of data removal in each task, with original data retention at 40, 36, 25, 15, 10, 3 and 1 percent. The second factor is whether data is missing proportionately from the positive and negative classes for each task, or whether data is missing only for the positive class. In many security contexts, positive labelled data of threats is more difficult to obtain than negative data. Accordingly, we explore this _disproportionate_ data limitation, in which data is less available in the training set than it would normally appear in the dataset (i.e., on top of any existing class imbalance). Our training sets are then used as input to a finetuning process for GPT-3, and samples drawn from this fine-tuned language model are then used to augment the dataset by replacing the removed data. Figure 2 demonstrates how different fine-tuning approaches are used to create augmented datasets for a given proportion of missing data. As well as fine-tuning GPT-3 on the disproportionate and proportionate cases, we also explore an approach where only positive-class examples are used in fine-tuning (even though true negative examples are available for forming the augmented dataset). For any given proportion of retained data \(x\), we create and evaluate classifiers trained on datasets augmented via these disproportionate, proportionate and positive-only fine-tuning strategies. Figure 1: Truncation and augmentation under proportionate and disproportionate removal. As points of comparison, we also report the baseline or target performance (of the full dataset with no data removed), the performance of the classifier trained on a dataset with data removed but no augmentation, and performance figures when basic data augmentation techniques (synonym replacement using WordNet [16], word insertion [24] and BERT-guided word insertion [21]) are used instead of GPT-3 text generation to augment the datasets. This use of three basic augmentation methods as baselines was chosen to match the methodology of Kumar et al. [19]. Table 1 outlines the shorthand used in our results sections. All model performance is assessed via the F1 score on the held-out test set (which is never seen by either the language model or the classifier). Figure 2: GPT-3 data augmentation strategies, illustrated with retained data proportion \(x=.25\) \begin{table} \begin{tabular}{l p{284.5pt}} \hline \hline disp & An unaugmented training set with \(x\%\) positive, 100\% negative training examples. \\ prop & An unaugmented training set with \(x\%\) of positive and negative samples. \\ bda1 & Basic data augmentation using a synonym replacement scheme. \\ bda2 & Basic data augmentation using random word insertion. \\ bda3 & Basic data augmentation using word insertion guided by BERT. \\ gen1 & GPT-3 data augmentation fine-tuned on a disproportionately cut training set. \\ gen2 & GPT-3 data augmentation fine-tuned on a proportionately cut training set. 
\\ gen3 & GPT-3 data augmentation fine-tuned on only \(x\%\) positive samples. \\ \hline \hline \end{tabular} \end{table} Table 1: Shorthand for the dataset forms used in augmentation experiments, where \(x\) is the proportion of original data retained. ## 4 Results ### Task 1: Offensive Language Identification Figure 3 graphs the performance of the different training strategies over the range of original data retention proportions within OLID. It is immediately obvious that the basic data augmentations we use as a comparison are actually harmful to performance in this task, with training sets augmented using these methods producing _worse_ performance than unaugmented training sets in almost all cases. This may be an example of the poor understanding of the language by these basic techniques hindering the performance of the BERT-based classifier by making inappropriate insertions or replacements - a pitfall avoided by the GPT3 augmentation strategies. The overall performance profile shows the impact of data removal on performance, which is serious at high removal rates. It is worth noting, however, that at 40% retention (i.e. with 60% removed), even the unaugmented datasets attained figures close to the 0.806 F1 baseline performance, showing that Dai et al.'s method is remarkably robust to loss and imbalance. However, performance does decrease as more data is removed, and, as retention dips below 10%, the gap widens between the performance of unaugmented and GPT-3 augmented training sets. The best performer under the most severe removal condition (1% of positive class data remains) is gen1, fine-tuned on a disproportionate training set. Table 2 averages the performance of strategies at each retention rate, relative to the best-performing strategy at that rate. This allows for an overall numeric comparison. The disproportionate augmentation of gen1 proves to be the overall best performing strategy, followed by the other GPT-3 augmentation strategies and then the proportionate unaugmented dataset strategy. ### Task 2: Deceptive Opinion Detection All mean F1 scores are shown in Figure 4. In short, all classifiers that were augmented with GPT-3-generated data outperformed all those that were not. The profile of the GPT-3-augmented strategies is quite different to the others. This is most clearly seen in Table 4, which compares the mean F1 scores of the lowest-performing GPT-3-augmented dataset and highest performing other dataset, at each split percentage. At the lowest of retention of the original positive-class data, the performance gap is initially minor, at about 0.02, but quickly widens to more than 0.11 at 5% retention. The worst generation strategy at this point, gen1 is more than 0.15 F1 improved upon the performance of the unaugmented dataset with disproportionate losses. Performance rapidly accelerates with generator-augmented datasets up to the 10% retention mark. Up until this point, the other datasets show almost no improvement. After this, there is a clear turning point. The performance of the generator strategies begins to plateau as positive data retention levels increase, while the other datasets start to improve. The gap consistently narrows from then on. Across the three basic dataset augmentation methods, there were some clear performance trends. When starting with very few positive samples, namely 1%, 3% or, 5%, the effect on performance was minimal. 
The maximum difference in mean F1 score between the highest and lowest-performing basic augmentation strategies in the entire range was 0.01 F1.

| Split (%) | Lowest F1 score of GPT-3 augmented | Highest F1 score of other approaches | Difference |
| --- | --- | --- | --- |
| 1 | 0.7571 | 0.7298 | 0.0273 |
| 3 | 0.7792 | 0.7336 | 0.0456 |
| 5 | 0.8511 | 0.7376 | 0.1135 |
| 10 | 0.8704 | 0.7567 | 0.1137 |
| 15 | 0.8677 | 0.8055 | 0.0622 |
| 25 | 0.8960 | 0.8446 | 0.0514 |
| 36 | 0.8947 | 0.8744 | 0.0203 |
| 40 | 0.9028 | 0.8833 | 0.0195 |

Table 4: The difference of F1 scores of different dataset types.

Once at least 10% of the positive samples are retained, a new trend emerges. All augmentation methods outperform the disproportionate dataset by a wide margin, until around 40% positive samples. At this point, the augmented sets appear to begin levelling off while the disproportionate set still trends upwards towards the baseline. Out of the three basic augmentation strategies, contextual word insertion performs best. The unguided word insertion and synonym replacement strategies offer comparable performance. Figure 4 also shows how performance varies between the GPT-3 fine-tuning strategies used for augmentation. The first two points indicate performance with only 1% and 3% of the original positive data. Here, the characteristics are different to the rest of the graph. There is a clear hierarchy, with the balanced fine-tune set (gen2) performing best, followed by the positive-only set (gen3), and then the disproportionate set (gen1). A different trend can be observed from 5% onward. There is no clear 'best' strategy, however the disproportionate and positive-only sets are generally superior to the balanced set. In the averaged performance figures in Table 3, we see that gen2 is the overall best performer when considering different retention rates, but by a much closer margin than in Task 1. ### Task 3: SMS Spam The average F1 scores for each dataset at each split percentage are shown in Figure 5. The generator-augmented datasets again show the highest performance of all. Notably, across the 3%, 5% and 10% splits, they hold a clear advantage over the other strategies. This fades as more data becomes available. By 36%, all datasets become equally viable. One interesting aberration in the performance profile is that the top four strategies lose performance when moving from 3% to 5% original positive data. Given that they _all_ lose performance, the effect may be attributed to the data they are given. As the data to be cut is randomly selected each time (but constant across strategies at the same percentage) it may be that the positive data selected for the 5% retention cut happened to provide less useful information than in the 3% cut. However, the weaker strategies still appeared to benefit from the increased availability of original positive samples, perhaps because they suffered more severely from the initial lack of data. Table 5 shows that the overall best performing model was gen2, but all three generator-augmentation strategies are quite tightly competitive. The disproportionate unaugmented and basic word insertion strategies performed on par with each other, but worst of all when positive data was limited to \(<10\%\). At \(10\%\) they match the effectiveness of the proportionate dataset, then exceed it at \(25\%\). Their performance curve is similar to that of the other two basic augmentation datasets.
The synonym replacement and contextual insertion datasets follow the same profile but with improved performance. The improvement over the disproportionate unaugmented strategy can be expected, however the distance to the basic word insertion strategy is more interesting. The gap may reflect the fact that the basic word insertion augmentation is performed using the same GloVe [24] database as is used in the classifier itself. The model is therefore not gaining any additional information that it does not already have access to. #### 4.2.2 Disproportionality The unaugmented proportionate strategy shows unique behaviour; it has much higher variation and a shallower performance curve than other non-generator-augmented strategies. This contradicts the patterns seen previously. An explanation for this may simply be that this model's performance is more strongly tied to the proportionality of the training data it receives. At low percentages, this proportionality supports the model's performance while others do poorly. As the bias of the disproportionate dataset decreases, and it is given more positive samples, its performance consistently trends upwards. The strategies using basic dataset augmentation techniques also have proportionate datasets, however they have similar performance curves to the disproportionate dataset. This implies that the basic dataset augmentation techniques used are hindering performance with low split percentages. The samples are being duplicated and manipulated too many times for the size of the dataset. They may therefore see an improvement if basic augmentation strategies were only used to increase the sample count by a smaller number e.g. from \(3\%\) to \(12\%\) rather than only to \(100\%\). Figure 5: Augmentation strategy performance for SMS spam detection. ## 5 Discussion ### Generated Data Performance Results from all three tasks have shown evidence that modern text generators using large language models can improve security classifier performance when used for dataset augmentation. However, generated data is not a perfect substitute for true data, and the magnitude of the improvement possible may depend on the task, the classifier being used, and the quantity of positive samples available for fine-tuning the language model. Strategies using generator augmentation would seem to be most helpful when only a small number of true positive samples are available relative to the expected prevalence in the testing set or deployment scenario. The results from Task 1 indicated that provided at least 25% of the original positive data was retained, there was negligible improvement from using generated data. Tasks 2 and 3 contradicted this, however, finding that generator-augmented strategies more consistently outperformed unaugmented strategies across different data retention levels. It can therefore be inferred that the structure of either the dataset or classifier made the technique less effective. The data in Task 2 is mostly incomparable to that of Task 1, as it is of a much longer format, and presents varied sentiment in both positive and negative classes. The dataset in Task 3 is a closer match, given the short sample length and conversational nature of the text. We suspect that the difference in performance for Task 1 stems from the structure of the classifier, rather than the dataset. The classifiers in Tasks 2 and 3 were similar, containing LSTM components. The classifier in Task 1 by contrast was based on the BERT [12] language model. 
We would suggest that using a language model within the architecture of the classifier itself produced a degree of generalisability that reduced the effectiveness of--or need for--dataset augmentation using another language model. Across all three tasks, the generator-augmented datasets showed the highest relative performance when limited to 3-10% of the original positive data. In absolute terms, this refers to between 274-916 offensive Tweets (Task 1), 24-80 deceptive reviews (Task 2) and 22-75 unwanted SMS messages (Task 3). Our experiments thus far do not provide a concrete answer regarding the quantity of data required for successful application of the technique, but these ranges could be considered guides for similar tasks where collection of positive samples may be expensive or difficult. That there is an optimal range in which this approach is most effective has an intuitive explanation. Given too few samples, the language model will have too little information to sufficiently ground its generation of class examples. Conversely, on tasks with bountiful positive samples and a classifier already leveraging a language model, the improvements granted by this technique may be small to negligible. Our results are similar to those found in the work of Kumar et al. [19] and Anaby-Tavor et al. [3] in that they show a clear improvement, particularly in more strongly data-limited scenarios. They differ however in the magnitude of the improvement. The prior work shows large gains of up to 40% in some cases. This is likely reflective of the baseline performance of the classifiers themselves. The target F1 score of the classifier across all three of our tasks rarely dropped below 0.70 (excluding the basic models in Task 2). For comparison, the unaugmented performance (measured as mean accuracy) of the classifiers in [19] and [3] is usually between 0.4 and 0.6. They therefore have much more potential for increase. The tasks chosen in those papers are purposeful benchmarks, specifically designed to be challenging in order to show differences between models. By contrast, the classifiers we replicate are representative of the state of the art in each field. ### Language Model Fine-Tuning We do not see a strongly conclusive result regarding which fine-tuning approach is most desirable. However, there appeared to be a few trends in how the fine-tuning strategy used for the language model would influence the effectiveness of the samples generated. First, with low quantities of true data, the best approach appears to be to use a proportionate or positive-only dataset. It was shown in both Tasks 2 and 3 that fine-tuning with heavily disproportionate datasets would decrease performance. This behaviour is not however ubiquitous. These methods were also seen to converge when supplied with more original data. Second, results from Task 2 appear to indicate that the proportionate fine-tuning dataset resulted in poorer generated data than the other generator strategies at higher retention levels (above 5% retention). This outcome was not reflected in the results from Tasks 1 and 3. We are not certain if this result reflects an inherent structural difference between the tasks, or merely speaks to the highly-similar performance of all three generator strategies on Task 2. ### Practicality #### 5.3.1 Cost Analysis Aside from the cost of running the classifiers themselves, the material costs associated with this project came from two sources: language model fine-tuning, and language model sampling (generation). 
Unfortunately, generation without fine-tuning cannot be advised. A small number of tests were run trialling this method and performance became worse than random guessing when including the data. The price of fine-tuning is dependent on three things: the number of tokens, the base engine, and the number of training epochs. The price per token is half that of the base engine cost i.e. $0.003 per 1000 tokens for Curie, $0.030 per 1000 tokens for DaVinci. The number of training epochs is up to the user. OpenAI recommends 4 as standard, so this was used for all instances for this study. All billing was calculated as a function of the action type and the number of tokens involved in the request. Text included in the prompt is always included in the cost calculation at the same rate as generated tokens. The total cost to fine-tune and generate data was relatively low, averaging approximately $2 to $3 for each dataset in this study. These exact prices and the associated rules will likely change in the future - at the beginning of this project, fine-tuning a model was free, with costs only for generation. It can generally be concluded that cost should not provide a high barrier for use of generator-based data augmentation in most instances. The clearest benefits have been seen when fine-tuning with \(<10\%\) of the original data. At this point, costs are low and can be less than $0.10 per dataset. #### 5.3.2 Large Language Models as Classifiers Task 1 indicated that using a language model as part of a classifier might offer similar benefits to using this technique. This raises the question: why not just use a more powerful model like GPT-3 in the classifier? At first glance, this suggestion makes some sense. It is quite possible that the measured increase in performance was at root due to the much larger size and greater power of the GPT-3 model in comparison to BERT. A similar, or even larger, improvement in performance may be seen by instead converting the classifier to make use of GPT-3. The downsides of such an approach stem from size of the GPT-3 model. With 175 billion parameters, it takes considerable hardware to even run the model. Depending on the implementation, BERT has approximately 1000\(\times\) fewer parameters. Even so, the Task 1 classifier using it took more than 10 times longer to train than the others. Considering the rapidly-increasing scale of new language models, it would be a more efficient use of resources to purchase a small amount of fine-tuning and generation from an externally-hosted large language model. The outputs could then be used to augment a training set for a more lightweight classifier, indirectly passing on some of the language model's understanding of the dataset. A second pragmatic reason for taking this approach is that it does not require the same level of access to the language model. Many large language models are commercially available through an API, but do not offer source code level support for being used within a classifier. Easier access to the model's assistance would be of considerable benefit to researchers and practitioners working with limited resources. #### 4.1.1 Misuse Considerations An element of concern for dataset augmentation in security domains--and especially high-fidelity sample generation as in the models we discuss--is that any generator of malicious content could also be used as a tool by malicious actors, in an effort to magnify their impact. 
Our generator for Task 1, for example, would be capable of cheaply producing large volumes of abusive messages, of a nature human annotators would struggle to identify as automatically generated. More dangerously yet, consider a generator tuned to augment a dataset of social engineering plays or mass-market fraud - compelling hooks cheaply available at any time to cybercriminals with no skill in the target domain or even language. For the moment, this consideration rests with OpenAI, who control access to the model's API, and monitor accounts for misuse of the service6. However, the increasing interest in and availability of sophisticated text generation capabilities should motivate urgent work to design defensive classifiers and other solutions capable of protecting internet users from such risks. Footnote 6: We preemptively explained our own usage of the service to OpenAI to forestall any such concerns. #### 4.1.2 When to use Generator Augmentation As a general guide, augmentation using a text-generating language model will offer the largest performance improvement when sample counts are extremely limited, and the classifier itself does not contain a language model. All three of our evaluation tasks saw the performance gap decrease as true samples were made available. This result in disproportionate limitations mirrors the results obtained by Kumar et al. [19], Quteineh et al. [26], and Anaby-Tavor et al. [3] in proportionately limited datasets. It also appears that when very few samples are available, fine-tuning may be more effective if conducted with a balanced dataset. The length of the sample did not appear to have an appreciable impact on performance. Task 2 had sample lengths up to 4200 characters (750 words) and had comparable performance to Task 3, in which text samples sometimes had as few as 5 characters apiece. Extremely long samples may cause configuration issues if they begin to exceed the limit of what is allowed by the language model when fine-tuning. Conclusion This study has built on the work of Quteineh et al. [26] and Kumar et al. [19] to further examine how text-generator dataset augmentation can be applied to the security domain. Tasks were selected to represent different areas of security classification. For each task, an open-source classification model from a recently published paper representing the state-of-the-art was identified and replicated. An array of tests then explored the value of text-generator dataset augmentation in different configurations. We find that for most classifiers, this form of data augmentation is effective, with classifiers trained with generated data on average outperforming others across our evaluation scenarios in three tasks. An overarching objective has been to evaluate this method as a solution to the common problem of disproportionality of availability in labelled security data. The effects of different rates of positive-class data limitation have been explored through a series of experiments across three different classification tasks. We find that text generation can be especially effective for data augmentation in cases where positive-class samples are very scarce, a positive result for domains where collecting such examples may be expensive or difficult. We also investigate which of three fine-tuning approaches is most effective for generation. This is an area that has not been explored in other data augmentation research. 
We find mixed results that tentatively suggest that using a proportionate training set for fine-tuning purposes may be more reliable. Future work will attempt to clarify these last results, and further probe the set of factors which should guide the use of text generation in security classification tasks.
2303.14511
Improving robustness of jet tagging algorithms with adversarial training: exploring the loss surface
In the field of high-energy physics, deep learning algorithms continue to gain in relevance and provide performance improvements over traditional methods, for example when identifying rare signals or finding complex patterns. From an analyst's perspective, obtaining highest possible performance is desirable, but recently, some attention has been shifted towards studying robustness of models to investigate how well these perform under slight distortions of input features. Especially for tasks that involve many (low-level) inputs, the application of deep neural networks brings new challenges. In the context of jet flavor tagging, adversarial attacks are used to probe a typical classifier's vulnerability and can be understood as a model for systematic uncertainties. A corresponding defense strategy, adversarial training, improves robustness, while maintaining high performance. Investigating the loss surface corresponding to the inputs and models in question reveals geometric interpretations of robustness, taking correlations into account.
Annika Stein
2023-03-25T16:23:27Z
http://arxiv.org/abs/2303.14511v1
Improving robustness of jet tagging algorithms with adversarial training: exploring the loss surface ###### Abstract In the field of high-energy physics, deep learning algorithms continue to gain in relevance and provide performance improvements over traditional methods, for example when identifying rare signals or finding complex patterns. From an analyst's perspective, obtaining highest possible performance is desirable, but recently, some attention has been shifted towards studying robustness of models to investigate how well these perform under slight distortions of input features. Especially for tasks that involve many (low-level) inputs, the application of deep neural networks brings new challenges. In the context of jet flavor tagging, adversarial attacks are used to probe a typical classifier's vulnerability and can be understood as a model for systematic uncertainties. A corresponding defense strategy, adversarial training, improves robustness, while maintaining high performance. Investigating the loss surface corresponding to the inputs and models in question reveals geometric interpretations of robustness, taking correlations into account. ## 1 Introduction With powerful machine learning (especially deep learning) algorithms, new physics analyses have been enabled and established ones report improved results over previous iterations that utilized only cut-based strategies, shallow networks or techniques like BDTs [1]. For object identification, which serves as a crucial ingredient to various analyses carried out at experiments at the CERN Large Hadron Collider, it is therefore of prime interest to provide highly-performant algorithms, where many features enter complex architectures to capture as much information as possible, including correlations between observables. Deep Neural Networks are suited to perform the aforementioned difficult tasks like jet flavour identification, and many low-level features related to the jet constituents enter state-of-the-art taggers [2, 3, 4, 5, 6]. With high performance however comes high reliability on the modeling of the involved input features, especially since supervised machine learning techniques utilize labeled simulated samples [7]. These likely do not capture all detector effects and can be fairly different for non-identical MC generators, when comparing steps like parton showering and hadronization [8]. Calibration has therefore always been a necessary step towards improving agreement between the domain on which such algorithms have been trained (simulation), and measured data [9, 10]. Even after applying a high level of scrutiny and utilizing a set of independent control regions, a certain level of disagreement may remain after calibration, becoming increasingly relevant for analyses where derived scale factors factorize for final states with high (b-/c-tagged) jet multiplicities. ### Related work Reliability on low-level features is a common property in several areas of high-energy physics, and so might be the susceptibility towards slightly distorted features which can lead to drastically reduced performance, better known under the term of adversarial attacks yielding adversarial samples [7; 11; 12; 13]. The fundamental principle explored in Ref. [13] is that such inherent vulnerability can be turned into robustness, when carefully defending against adversarial attacks via adversarial training. There it has been shown that improving robustness can be achieved without loss of performance [13]. 
The technique which has been used extensively is the Fast Gradient Sign Method (FGSM), a first order attack [12; 13]. After the proof-of-principle had been introduced for a simple multilayer-perceptron architecture [13], the CMS Collaboration has presented a successful application of adversarial training for the succeeding generation of tagging algorithms, extending the strategy to convolutional (followed by recurrent and dense) layers [14]. One core finding of Ref. [14] is the relation between adversarial robustness and agreement between data and simulation, being explicitly prominent for light-flavoured jets. The literature also mentions complementary perspectives where adversarial training does not capture uncertainties [15; 16]. The sentiment that theory-induced or generator-dependent modelings can hardly be handled by adversarial methods is evident [16], however we intend to focus on those mismodelings which can be mitigated by systematic regularization, which may play the main role in promising control regions shown in Ref. [14]. Another limitation to consider when applying adversarial training against FGSM attacks is that the directions into which inputs are shifted are somewhat predictable, and moreover, that such attacks always treat all features independently with the respective sign of the gradient, thus eliminating the full correlation between features and leaving only discrete choices [13]. When aiming for robust algorithms that offer not only high performance, but also generalization capabilities, the flatness of the underlying loss surface is scrutinized as a proxy for the aforementioned desired qualities of the model. Several approaches utilize the geometric properties of the loss [17] as a function of the model parameters (weights and bias terms), but a study with respect to the input distributions has yet to be carried out. #### 1.1.1 Adversarial attacks versus systematic uncertainties While several restrictions have been imposed to keep the artificial shifts of inputs somewhat realistic with respect to typically observed mismodelings, such adversarial methods are reliant on the network's properties. This marks an unphysical scenario, as neither nature nor simulation of processes could have any knowledge of the machine learning algorithms involved in tagging the jets in an event. Thus, judging a network's capability to resist adversarial attacks might be biased towards the defense strategy which explicitly mitigates the impact of specific attacks. It is unrealistic that any mismodeling in simulation would shift inputs exclusively in the worst case direction pointing to the steepest increase of the loss function [7; 13]. For physics analysis, it is not of primary relevance to utilize algorithms which are robust against adversarial attacks per se, but rather algorithms which allow generalization from simulation to data and offer robustness towards systematic uncertainties. Therefore, the two trainings studied in Ref. [13] (nominal and adversarial) are compared not only with nominal and adversarial inputs, but also when being exposed to systematically distorted inputs which point either in upwards or downwards direction [13]. In both cases, up- or downwards variation, adversarial training performs better on distorted inputs than nominal training on the same distorted samples [13]. Similar conclusions can be drawn when exchanging the systematic variations with random smearing / Gaussian noise [13].
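For concreteness, the FGSM construction referred to above can be written in a few lines. The sketch below assumes a differentiable PyTorch tagger `model`, a cross-entropy objective and an illustrative step size `epsilon`; it is a generic first-order attack, not the exact implementation of Refs. [13; 14].

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.01):
    """Shift every input feature by +/- epsilon in the direction that
    locally increases the classification loss (first-order attack)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # sign() discards the gradient magnitudes, so all features are moved
    # by the same fixed amount and inter-feature correlations are lost.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Adversarial training then mixes such samples into each batch, e.g.
# loss = 0.5 * (F.cross_entropy(model(x), y)
#               + F.cross_entropy(model(fgsm_attack(model, x, y)), y))
```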
In this paper we intend to augment the findings by investigating the underlying loss function in the input feature space to propose a modified training strategy which can improve the algorithm's resilience. ## 2 Properties of loss manifolds for a jet tagging algorithm trained on nominal or adversarial samples The assumption of different geometry of loss manifolds has been motivated by observations made when looking at the impact of adversarial attacks split by flavour, where adversarial training behaves somewhat symmetrically, but adversarial attacks performed for nominal training push inputs preferably into specific directions to invert expected physics [13]. While the illustrations presented in Ref. [13] give a hint of how the loss surfaces of different training strategies could look, it has been an open question to perform realistic scans of such surfaces. First results of such a visualization of geometry with respect to input variations are presented in Fig. 1. The construction is obtained by first selecting a random jet drawn from a sample which has not been used for training or validation. Focusing on two observables (for visualization purposes, using well-understood global jet features), a grid of \(500\times 500\) variations is generated, using a uniform and symmetric binning around the original nominal features. Taking the full distribution of the respective feature into account, the spanned range corresponds to \(\pm 0.5\sigma\), ensured by only working in the input feature space after standardization. While the target remains unchanged, both the nominal and adversarial training are reevaluated on the resulting 250000 samples, and the resulting loss is recalculated. Moving a jet's pseudorapidity without changing transverse momentum or other properties will not affect the respective network prediction error, or loss, for adversarial training. Nominal training on the other hand is not agnostic to changes in any of the two variables shown. While nominal training offers in general a lower network prediction error, adversarial training offers a flatter manifold with a certain level of invariance with respect to distortions of specific features. Figure 2 reveals how for nominal training, adversarial attacks would find a clear direction, while for adversarial training, due to the invariance or symmetry with respect to pseudorapidity, multiple directions are possible to increase the loss. Despite this finding, only one specific direction will be chosen by the attack as a result of the inherent operation of taking the sign of the gradient, although other directions would essentially lead to the same effect. Figure 1: Different geometries of loss manifolds for nominal (bottom) and adversarial (top) training. ## 3 Discussion This observation is a key element to understand why adversarial training may be preferred in settings with potentially distorted inputs (due to experimental effects, precision and resolution limitations) or other systematic differences between the domain on which the identification algorithms have been trained (simulation) and the domain built from actually recorded detector data. This can be interpreted as a sign of regularization induced by adversarial training.
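A minimal sketch of such a two-feature scan is given below, assuming a PyTorch classifier acting on standardized input features; the ±0.5σ range and the 500×500 grid follow the construction described above, while the function and variable names are illustrative.

```python
import numpy as np
import torch
import torch.nn.functional as F

def loss_surface(model, x_nominal, y_true, feat_a, feat_b,
                 half_range=0.5, n_steps=500):
    """Cross-entropy loss on a grid of shifts of two standardized features,
    all other features kept at their nominal values."""
    deltas = np.linspace(-half_range, half_range, n_steps)
    surface = np.zeros((n_steps, n_steps))
    shifts_b = torch.tensor(deltas, dtype=x_nominal.dtype)
    for a, da in enumerate(deltas):
        batch = x_nominal.repeat(n_steps, 1)   # n_steps copies of the jet
        batch[:, feat_a] += da                 # fixed shift of feature a
        batch[:, feat_b] += shifts_b           # scan of feature b
        with torch.no_grad():
            losses = F.cross_entropy(model(batch),
                                     y_true.repeat(n_steps),
                                     reduction="none")
        surface[a] = losses.numpy()
    return deltas, surface

# Running the scan once for the nominally trained model and once for the
# adversarially trained one gives the two manifolds compared in Fig. 1.
```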
### Using properties of the loss manifold during training Having probed the loss manifold on a macro-scale for two features which may not offer highest discriminating power (and which have been reweighted to a common target distribution to ensure bias-free predictions) [13], we propose to explore this technique more systematically and potentially incorporate this into the training itself. Showing the loss surface as a function of two input features is a simplification which allows us to investigate the geometry graphically. To overcome this limitation, the loss manifold needs to be constructed in several more dimensions in the feature space. Then, measuring flatness around the original inputs can be introduced as an independent cross-check during training to probe and improve robustness. We can construct a summary quantity as an additional term in the loss function, for example capturing the maximally observed relative impact on the calculated cross-entropy loss when moving inputs in the allowed \(B_{\frac{\sigma}{2}}\)-ball. This can be weighted by a hyperparameter to control how much focus is given to regularization, compared to plain performance metrics, and training would then follow this modified loss function during backpropagation to update the model parameters. ### Building other attacks which preserve directionality of the gradient of the loss function From the observed loss surfaces it seems sufficient to continue focusing on first order attacks, although taking the sign of the gradient (FGSM) might be too inefficient when actual directions of gradients and relative contributions of features are to be taken into account. Using the \(p\)-norm of gradients where e.g. \(p=2\) instead, the individual input feature's contribution can be maintained quantitatively. The resulting distortion vector can be scaled by the inverse of the aforementioned norm to allow comparisons across different jet samples, while at the same time yielding small disturbances only. This leads to an attack which is not easy to predict, both for the direction of the shift, as well as the magnitude per feature, unlike for FGSM, where only \(\pm\epsilon\) shifts are possible. Figure 2: Possible directions of adversarial attacks for different models. Starting from kinematic quantities which yield small loss, multiple arrows can be found for an FGSM attack imposed for adversarial training, while only one such arrow is constructed for nominal training. Introducing the modified attack instead will include correlations between features, a shortcoming of the FGSM attack typically mentioned in the context of HEP. In an adversarial training against this new attack, we would not need large distortions, and the resulting distorted jet samples will not be easy to detect with validation methods (such as one-dimensional histograms). ## 4 Conclusion In this paper, we presented a study of the loss manifold with respect to input features of a typical jet tagging algorithm, when trained on nominal or adversarial samples. Differences with respect to flatness and thus invariance to small distortions are observed, explaining and confirming previously explored differences in robustness and generalization. With such loss surfaces at hand, we proposed modified training strategies to explicitly use that newly gained knowledge of the network's properties directly during backpropagation.
Putting more focus on regularization and correlations, the proposed methods can bridge the gap between machine learning-theoretical studies and their application for object identification in particle physics, where the physical behaviour of observables shall be maintained. ## Acknowledgments Simulations were performed with computing resources granted by RWTH Aachen University under project rwth1244. This work has received support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, projects SCHM 2796/5 and GRK 2497), and the Bundesministerium fur Bildung und Forschung (BMBF, Project 05H2021).
2309.01621
A fresh perspective on the 3D dynamics of Tycho's supernova remnant: Ejecta asymmetries in the X-ray band
450 years after the explosion of the Type Ia SN1572, the dynamics of the Tycho supernova remnant can give us keys to understand the explosion mechanism and the interaction of the remnant with the interstellar medium. To probe the asymmetries and the evolution of the SNR, we track the ejecta dynamics using new methods applied to the deep X-ray observations available in the Chandra space telescope archive. For the line of sight velocity measurement Vz, we use the Doppler effect focused on the bright Si line in the 1.6-2.1 keV band. Using the component separation tool General Morphological Component Analysis (GMCA), we successfully disentangle the red and blueshifted Si ejecta emission. This allows us to reconstruct a map of the peak energy of the Si line with a total coverage of the SNR at a 2'' resolution and a proxy of the velocity in the line of sight. For the proper motions in the plane of the sky Vxy, we develop a new method, named Poisson Optical Flow, to measure the displacement of 2D features between the observations of 2003 and 2009. The result is a field of 1700 velocity vectors covering the entire SNR. These exhaustive 3D velocity measurements reveal the complex and patchy dynamics of the SNR. At the large-scale, an asymmetry with the North being dominantly blueshifted and the South redshifted is observed. The proper motion vector field Vxy highlights different dynamics between the East and the West parts of the SNR. The eastern velocity field is more disturbed by external inhomogeneities and the South-East ejecta knot. In particular, a slow-down is observed in the North-East which could be due to the interaction with higher densities as seen in other wavelengths. The vector field is also used to backtrace the center of the explosion which is then compared with potential stellar progenitors distances from the latest Gaia DR3, leaving only stars B and E as possible candidates.
Leila Godinaud, Fabio Acero, Anne Decourchelle, Jean Ballet
2023-09-04T14:10:39Z
http://arxiv.org/abs/2309.01621v2
# A fresh perspective on the 3-D dynamics of Tycho's supernova remnant: ejecta asymmetries in X-rays ###### Abstract Context:450 years after the explosion of the Type Ia SN 1572, the dynamics of the Tycho Supernova Remnant (Tycho's SNR) can give us keys to understand the explosion mechanism and the interaction of the remnant with the interstellar medium. Aims:To probe the asymmetries and the evolution of the SNR, we track the ejecta dynamics using new methods applied to the deep X-ray observations available in the _Chandra_ space telescope archive. Methods:For the line of sight velocity measurement (\(V_{\rm s}\)), we use the Doppler effect focused on the bright Si line in the 1.6-2.1 keV band. Using the component separation tool General Morphological Component Analysis (GMCA), we successfully disentangle the red and blueshifted Si ejecta emission. This allows us to reconstruct a map of the peak energy of the silicon line with a total coverage of the SNR at a 2" resolution. We then obtain a proxy of the integrated velocity in the line of sight. For the proper motions in the plane of the sky (\(V_{\rm xy}\)), we develop a new method, named Poisson Optical Flow, to measure the displacement of two-dimensional features between the observations of 2003 and 2009. The result is a field of around 1700 velocity vectors covering the entire SNR. Results:These exhaustive three-dimensional velocity measurements reveal the complex dynamics of Tycho's SNR. Our study sheds light on a patchy velocity \(V_{s}\) map where most regions are dominated by the foreground or the background part of the shell. At the large-scale, an asymmetry with the North being dominantly blueshifted and the South redshifted is observed. The proper motion vector field \(V_{\rm xy}\) highlights different dynamics between the East and the West parts of the SNR. The eastern velocity field is more disturbed by external inhomogeneities and the South-East ejecta knot. In particular, a slow-down is observed in the North-East which could be due to the interaction with higher densities as seen in other wavelengths. The vector field is also used to backtrace the center of the explosion which is then compared with potential stellar progenitors in the area. Latest _Gaia_ DR3 parallax measurements exclude most stellar candidates based on their distances, leaving only stars B and E as possible candidates, at respective distances of \(2.53^{+0.23}_{-0.20}\) kpc and \(3.52^{+2.0}_{-1.0}\) kpc, consistent with the expected distance range of the SNR of 2.5-4 kpc. Conclusions: ## 1 Introduction The 450th anniversary of Tycho's Nova Stella (SN 1572) is the occasion for revisiting the deep X-ray archival observations of the _Chandra_ telescope with recent advanced analysis techniques. A type Ia supernova explosion is at the origin of this <<new star observed in November 1572 by Tycho Brahe. Earlier observations were also recorded by Korean and Chinese astronomers (Green & Stephenson, 2003). The event is thought to be a "normal" type Ia based on analysis of the X-ray emitting ejecta (Badenes et al., 2006), which has been confirmed by the spectroscopy of the observed light echoes of the explosion (Krause et al., 2008). However, the understanding of type Ia supernovae is still subject to debate. Two scenarios are possible: the single-degenerate model (a white dwarf accretes matter from a non-degenerate companion) and the double degenerate model (the explosion is due to two white dwarfs). 
Centuries after the explosion, these explosion scenarios will influence the type Ia supernova remnant (SNR) and its dynamics (Ferrand et al., 2019). This can be probed by the ejecta X-ray emission in young ejecta dominated SNRs. Contrary to the core collapse SNRs, remnants of thermonuclear supernovae show a more spherical expansion (Lopez et al., 2011) as observed in Tycho's SNR. However, some asymmetries can be highlighted by studying the dynamics in detail. Their origin can be imate or acquired: either due to an initial anisotropy in the supernova, or related to interactions between the expansion and inhomogeneities in the ambient interstellar medium. Simulations show that an initial asymmetric explosion will leave an imprint in the SNR hundreds of years after (Ferrand et al., 2019, 2022). Some high-velocity components seen in the echo light of this SNR could be explained by an aspherical supernova (Krause et al., 2008). The origin of the fast iron and silicon knot in the South-East (SE) is as well interpreted as ejecta bullets formed during the explosion (Yamaguchi et al., 2017). Sato et al. (2019) also show that clumpiness in the early remnant best explains the current morphology of Tycho's SNR. However, the environment of Tycho's SNR is known to be inhomogeneous. Williams et al. (2013) find a density gradient based on radio observation. Zhou et al. (2016) observed in addition a potential molecular cloud in the northwest, also highlighted by Arias et al. (2019). To probe these possibilities, studies have been carried out in X-rays to follow the SNR evolution across multiple epochs. The velocity of the forward shock was first studied by measuring the shifts of synchrotron filaments (Katsuda et al., 2010; Williams et al., 2016; Tanaka et al., 2021) following the method of Katsuda et al. (2008). This protocol was then applied to the ejecta (Williams et al., 2017; Millard et al., 2022) to measure the projected velocity in the plane of the sky. The first direct measurement of the projected velocity in the line of sight was realized by Sato & Hughes (2017), using Doppler effect. Then Williams et al. (2017) and Millard et al. (2022) combined these two methods to obtain three-dimensional velocity vectors of around 80 ejecta blobs (combining the two studies). Based on these dynamics measurements, an East-West asymmetry is observed in the forward shock velocities (Williams et al., 2016), which can be explained by a density gradient (Williams et al., 2013). However, no such asymmetry is seen for the ejecta dynamics in the plane of the sky, except for the fast moving knot in the South-East (Yamaguchi et al., 2017). In the line of sight, Millard et al. (2022) using the high resolution grating spectrometer on fifty bright ejecta knots/blobs highlight a North-South asymmetry, where the northern ejecta is more blue-shifted than the southern regions. In the case of gratings, only bright blobs can be studied and the number of zones is limited, not enough to do a statistical study or to consider three-dimensional reconstruction (x, y, z) of the SNR's expansion. In previous studies, the three-dimensional nature (x, y, energy) of the X-ray data is also not used to its full potential as in most cases the spectral and spatial information are used separately. New analysis methods can be developed to exploit this wealth of information. For example, Principal Component Analysis has been used by Warren et al. (2005) to find interesting regions to study. Iwasaki et al. 
(2019) used unsupervised deep learning to propose a more sophisticated decomposition of the supernova remnant Tycho. In this article, we will use a tool named GMCA for General Morphological Component Analysis (Bobin et al., 2015; Picoquent et al., 2019). The general idea is to do a blind source separation on an X-ray data cube and retrieve components with common spectral signatures and provide as an output the spectrum and associated image of each component. It has been used to study the Cassiopeia A SNR in Picoquent et al. (2021) to highlight some redshift/blueshift asymmetries of individual emission lines, and in the SNR N103B to reveal a double-ring structure in the ejecta component (Yamaguchi et al., 2021). The objectives of the current paper are to provide a velocity vector field of the ejecta to study the three-dimensional dynamics of the entire Tycho's SNR. With this aim, we propose new methods to study the three-dimensional ejecta expansion. We will analyze separately the velocity in the line of sight, \(V_{\rm z}\), and the velocity in the plane of the sky \(V_{\rm xy}\). First, we will present the data from the _Chandra_ telescope and the new tools in Section 2 and 3. We obtain a complete map of the peak energy for the silicon line and so of the redshift in the line of sight (see Section 4) and around 1700 proper motions in the plane of the sky (see Section 5). This gives precise information on the dynamics asymmetries, an evaluation of the center of the explosion to search for a potential progenitor, and clues toward three-dimensional reconstruction as discussed in Section 6. In this paper, we will suppose that the distance of Tycho's SNR is 3.5 kpc. A complete review of the distance is given by Hayato et al. (2010), we will use this value to be consistent with the results of Williams et al. (2017). For the center of the explosion used as a reference to measure a radius in the plane of the sky, we will use the value that we find (see Section 5.2 ) R.A. 00\({}^{\prime\prime}\)25\({}^{\prime\prime}\)20\(\aas@@fstack{\prime\prime}\)79 and Dec. 64\({}^{\circ}\)08\({}^{\prime\prime}\)09\(\aas@@fstack{\prime\prime}\)04. We will also use the following conventions : the velocity in the plane of the sky is called proper motion (hereafter \(V_{\rm xy}\)). In the line of sight, the velocity measured with the Doppler effect is named \(V_{\rm z}\), which is positive away from us. ## 2 Observations and data reduction Tycho supernova remnant has been observed multiple times by the _Chandra_ X-ray telescope, in particular in 2009 with a deep observation of 734 ks with nine observations in a month. We will also use the observation from 2003 with around 145 ks exposure time. All these observations are summarized in Table 1. In our analysis, the new methods and their inputs are different for the \(V_{\rm xy}\) and \(V_{\rm z}\) velocities. We must therefore adapt the binning of our data cube (RA, DEC, E) according to the problem. * For the \(V_{\rm z}\) velocities, we use the component separation method GMCA. This algorithm needs a data cube as input and high statistics. We focus on the deep 2009 data set and stacked all the observations of the year. This method allows to study the Doppler effect on the silicon line and deduce the velocity. So we will use the native energy binning of 14.6 eV and a spatial binning of 2". It is four times the native spatial binning in order to obtain a high number of counts in all voxels. 
* For the proper motion \(V_{\rm xy}\) we measure very small shifts between two images from 2003 and 2009. Here the data cubes are stacked across energy between 0.5 keV and 7 keV to obtain images. We use the native spatial binning of _Chandra_ (0\(\aas@@fstack{\prime\prime}\)5) to have more details. Despite the good absolute astrometry of _Chandra_, an image registration of each observation with respect to a reference observation allows for a more accurate astrometry. We note that we could not use the astrometric corrections from Tanaka et al. (2021) as the data currently in the archive have been reprocessed in late 2020 and are not the same as used in their study. The current reprocessions (_repro5_) comes with an new calibration which provides an improved astrometry1. The procedure is to detect point sources with wavdetect, compute transformation matrices by crossmatching common sources via wcs_match, and update the event and aspect solution files via wcs_update (see 2 for \begin{table} \begin{tabular}{c c c} \hline \hline ObsID & Date (YYYY/MM/DD) & Exposure time (ks) \\ \hline 3837 & 2003/04/29 & 145.6 \\ \hline 10093 & 2009/04/13 & 118.4 \\ 10094 & 2009/04/18 & 90.0 \\ 10095 & 2009/04/23 & 173.4 \\ 10096 & 2009/04/27 & 105.07 \\ 10097 & 2009/04/11 & 107.4 \\ 10902 & 2009/04/15 & 39.5 \\ 10903 & 2009/04/17 & 23.9 \\ 10904 & 2009/04/13 & 34.7 \\ 10906 & 2009/05/03 & 41.1 \\ \hline \end{tabular} \end{table} Table 1: _Chandra_ observations used in this study more details). All observations have been aligned to a reference observation (ObsID 10095, the deepest observation). Depending on the observation, between 4 and 11 common point sources can be used for the alignment. The maximum offset correction is of the order 0\(\aas@@fstack{\prime\prime}\)25 and the average correction of 0\(\aas@@fstack{\prime\prime}\)12. We obtain smaller offset corrections compared to Tanaka et al. (2021), likely due to the improved astrometry provided by _repro5_. ## 3 Data analysis methods In this Section, we will present and describe the innovative tools that we use. For the line of sight, we will use the General Morphological Component Analysis (GMCA) method (Bobin et al., 2015) to decompose our data cube into the red and blueshifted ejecta components, which will then be used to estimate \(V_{x}\). To measure the proper motion \(V_{\rm sys}\), we developed a new tool named Poisson Optical Flow (POF) to track the displacement of 2-D features across observations. ### General Morphological Component Analysis tool The data coming from a spectro-imaging telescope such as _Chandra_ have a four-dimensional nature (x, y, E, t) : we will use the two spatial dimensions and the energy dimension. To exploit simultaneously the spatial and spectral information, and to extract overlapping physical components we use the GMCA method. This tool decomposes a cube X into a linear combination of spectra A\({}_{\rm i}\) and associated images S\({}_{\rm i}\) by resolving the inverse problem : \[\mathrm{X}=\sum_{k=1}^{n}\mathrm{A}_{k}\ \mathrm{S}_{k}+\mathrm{N} \tag{1}\] The parameter \(N\) is the noise that is dealt with by the algorithm and \(n\) is the number of components chosen by the user. To choose the best number of GMCA components, we tried various values but we were quickly limited by the intrinsic statistics of the data. If too many components are requested, the image output becomes very noisy, with unrealistic discontinuities in the spectra. 
To find an optimal number of components, we can use the Akaike Information Criterion (see Appendix B of Picoquent et al., 2019). This parameter corresponds to the negative log-likelihood with a penalty for an increasing number of degrees of freedom. The minimum of this criterion gives the best number of components. We find that three components is the best to decompose our cube centered on the silicon line. As in a Principal Component Analysis (PCA), we can see the outputs of GMCA spectra as vectors of a basis to reconstruct the spectrum in all pixels. The weight associated to each vector in a given pixel is the value of this pixel in the associated GMCA image. To disentangle the components, the algorithm optimizes the spatial and spectral differences between the components jointly in the wavelet domain. This method is a blind source separation algorithm, with no prior spectral information and so no bias because of a prior. There is nevertheless an option of spectral initialization. The user can constrain the spectra of one or more components and only the normalization of these spectra will be adapted to solve the inverse problem. So the shape of the spectra must be optimized by the user before. This option can be useful to retrieve a component hidden because of smaller statistics, or to clean other components for leakage. Figure 1: GMCA's outputs for data cube of Tycho's SNR in the Si band (1.6 - 2.1 keV). _Top :_ In black the observed spectrum for the whole SNR compared to the three spectra found by GMCA. _Bottom :_ The images associated to the three spectral components found by GMCA. The exposure map was only corrected in the output images, not in the GMCA inputs. Based on the morphology and spectra of the outputs, we interpret the decomposition as follows: the first component corresponds to the continuum, mostly the synchrotron emission. The two others are the thermal emission of the ejecta, component 2 being redshifted, and component 3 being blueshifted. Figure 1 shows the GMCA results for the stacked data cube of 2009 in the 1.6-2.1 keV energy band, corresponding to the silicon line. In this analysis, three components were set for the decomposition. The first is initialized to capture the underlying continuum, and the second and third are the ejecta that we will study. To initialise the continuum component we use a power-law spectrum with the same parameters as Williams et al. (2017) : a photon index of 2.6 and an absorbing column density of 6 x \(10^{21}\) cm\({}^{-2}\). So the inputs here are the number of components, three, the data cube to analyze, and the initialization for the power-law spectrum. The outputs shown in Figure 1 can be interpreted as physical emissions despite the blind aspect of the separation method. The first component corresponds to the fixed power-law component with the goal to capture the underlying synchrotron emission map. We can see in the image of component one that the algorithm successfully retrieves the synchrotron map characterized by filamentary structures despite being buried under the thermal emission from the Si-dominated ejecta. Some leakage from the thermal emission of the ejecta is possible, specifically in the North-West where the thermal emission is particularly bright. Here the initialization is necessary because the power-law component is too faint to be detected in a pure blind mode in this restricted energy range. To our knowledge, this is Tycho's first synchrotron map in the 1-2 keV band clean of thermal emission. While this is beyond the scope of this paper, investigating the synchrotron filament structures at different energies could be useful to characterize the magnetic field properties as done in Picoquenot et al. (2023), in particular for the synchrotron stripes in the West of the SNR.
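As a schematic illustration of the mixing model of Eq. (1), the spectrum predicted in any pixel can be rebuilt from the GMCA outputs as a weighted sum of the component spectra. The snippet below is a minimal numpy sketch with assumed array shapes; it only reproduces the linear model, not the GMCA algorithm itself, which additionally enforces sparsity of the solution in the wavelet domain.

```python
import numpy as np

n_comp, n_E, ny, nx = 3, 35, 256, 256        # assumed, illustrative dimensions
A = np.random.rand(n_comp, n_E)              # GMCA spectra  A_k(E)
S = np.random.rand(n_comp, ny, nx)           # GMCA images   S_k(y, x)

# Reconstructed cube (Eq. 1 without the noise term):
# X[E, y, x] = sum_k A_k(E) * S_k(y, x)
X_model = np.einsum('ke,kyx->eyx', A, S)

# Spectrum predicted in one pixel (i, j): a weighted sum of the basis
# spectra, the weights being the pixel values of the component images.
i, j = 120, 80
spectrum_ij = sum(S[k, i, j] * A[k] for k in range(n_comp))
assert np.allclose(spectrum_ij, X_model[:, i, j])
```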
While this is beyond the scope of this paper, investigating the synchrotron filament structures at different energies could be useful to characterize the magnetic field properties as done in Picoquenot et al. (2023), in particular for the synchrotron stripes in the West of the SNR. The second and third components are associated to the ejecta emission: the spectra correspond to the thermal emission (silicon line and underlying Bremsstrahlung continuum) and the associated images show the clumpy aspect due to Rayleigh-Taylor instabilities. GMCA even succeeds at separating blue and redshifted ejecta as revealed by the shifted spectral lines in the top panel of Figure 1. For the supernova remnants, GMCA can be very efficient because the physical components that we want to separate like synchrotron emission, the various ejecta elements (intermediate elements or iron emission), and the redshifted or blueshifted ejecta have very different spectral and spatial signatures. Nevertheless, one limitation of GMCA is the absence of uncertainties for the outputs. ### Optical flow to measure proper motion Optical flow methods are a part of the computer vision research domain, which means all the methods linked to detection or velocity measurements. In our case, it consists of measuring the spatial evolution of the ejecta in the plane of the sky. The goal is to detect small shifts of a few pixels between two images within a 6-year time interval in our case. For this, we suppose that there is no significant morphological variation of the small features that we will track between years. Note that the angular resolution needs to be comparable between the two epochs for all features (ideally same telescope pointing ). We first tested the library OpenCV 3 which is generally used for daily life images and video analysis, as detecting and measuring the movement of a car. This library has been applied to X-ray observations in Sato et al. (2018) and Tsuchioka et al. (2021). There are two steps: first detect some good features to track and then measure their displacement between two images. We obtained good results but all the algorithms were completely black box and without a special optimization for astrophysics. In particular, there are no uncertainties in the outputs, no handling of the Poisson noise, or the difference in exposure maps. So we decided to develop our tool adapted for Poisson statistics of the X-ray data which we call the Poisson Optical Flow (POF) tool. Footnote 3: [https://opencv.org](https://opencv.org) The goal is to measure the shift of a small feature across epochs. The deep 2009 flux map (corrected by its exposure map) is used as the model. We can also smooth it to decrease the noise and limit fluctuations in the model if the statistics are limited. The second image is the observation which is not modified at any step of the protocol to maintain the Poisson nature of the signal. The general idea is captured by the following Equation 2. \[\mathrm{L}(\Delta x,\Delta y)=\mathrm{cstat}\left(\frac{\mathrm{I}_{\mathrm{ Mod}}(x+\Delta x,y+\Delta y)}{\mathrm{Exp}_{\mathrm{Mod}}(x+\Delta x,y+\Delta y)} \right)\mathrm{Exp}_{\mathrm{Obs}}(x,y),\ \mathrm{I}_{\mathrm{Obs}}(x,y) \tag{2}\] We create a small vignette around the feature at position \((x,y)\) in the observation image \(\mathrm{I}_{\mathrm{Obs}}\) and compare it with the equivalent in the model observation \(\mathrm{I}_{\mathrm{Mod}}\). 
The model observation moves in X and Y axes with shifts \(\Delta x\) and \(\Delta y\), in a zone of exploration. We do a cubic interpolation of the model vignette to do sub-pixel steps (five times smaller than the native pixel). Then, at each position, we evaluate the 2-D likelihood \(\mathrm{L}(\Delta x,\Delta y)\) with the _cstat_ statistical function (Cash, 1979), adapted to the Poisson statistic. We create like this a complete statistical landscape corresponding to all the explored zone around the feature. The minimum of this landscape corresponds to the most likely displacement where the model and observed vignette overlap. Then to precisely measure the shift, we do a local 2-D fit of the statistical landscape (with a 2-D polynomial function of degree 4) only in an area of 2\(\times\)2 native pixels around the local minimum. It corresponds to the distance between the minimum and the initial position with sub-pixel precision. Finally, we obtain the proper motion \(V_{xy}\), by dividing this shift by the baseline of 6 years and supposing a distance of Tycho's SNR of 3.5 kpc. We can also derive the ellipse of uncertainties : it is the cut of the 3-D landscape for a _cstat_ equal to the minimum plus \(\Delta cstat\). For uncertainties at 1 sigma, \(\Delta cstat\) equals 2.3. It is noticeable that Cstat varies a lot at the native pixel scale, much more than 2.3. The interpolations are necessary to obtain the uncertainties. We present in Appendix A some examples of features, statistical landscapes and profiles from our method. A similar idea was used by Sato and Hughes (2017) to measure proper motions in Kepler's SNR. We add the exposure map correction and the interpolation of the statistical landscape to have precise uncertainties. ## 4 Results : Line of sight velocities \(V_{x}\) In principle, we expect the X-ray emission from Tycho's SNR to arise approximately from a shell with half going toward us (blueshifted emission) and half away from us (redshifted emission). What we see in a pixel is the sum of emissions in the line of sight because the SNR X-ray emission is optically thin. In the outputs found by GMCA (see Section 3.1) in Figure 1, two components are associated with ejecta emission. The major difference being the shift of the silicon line in their spectra, these components are interpreted as redshifted and blueshifted emission of Si ejecta. To determine where the Si ejecta emission is predominantly blue or redshifted we produced their ratio map from the GMCA outputs. Then we explored the possibility to derive physical maps from these two components : a map of Si line peak energy (\(E_{\rm p}\)) and a map of \(V_{x}\) velocity. ### Ratio map of the red and blueshifted emission As explained previously, each component is optimized to reproduce best the true spectrum in a pixel as a linear combination of the GMCA spectra. The weight of each spectrum of the GMCA basis is the value in each pixel of the GMCA image. To investigate the dominance of one ejecta component against the other and highlight some asymmetries we compute a ratio of the GMCA images which is defined as : \[\rm{Map}=\frac{S_{\rm{blue}}-S_{\rm{red}}}{S_{\rm{blue}}+S_{\rm{red}}} \tag{3}\] So this ratio map can be read as follows : a pixel with a ratio of \(-1\) is dominated by redshifted emission while a ratio of 1 is a dominantly blueshifted, and zero if both component are equal. The resulting map is shown in Figure 2. 
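For reference, the ratio map of Equation 3 reduces to a single array operation; the minimal sketch below assumes two aligned GMCA ejecta images and masks pixels where both components are negligible. Array names are placeholders.

```python
# Ratio map of the blueshifted and redshifted GMCA images (Eq. 3):
# +1 = fully blueshift-dominated, -1 = fully redshift-dominated.
import numpy as np

def ratio_map(s_blue, s_red, floor=1e-6):
    total = s_blue + s_red
    safe = np.where(total > floor, total, 1.0)        # avoid dividing by ~zero
    return np.where(total > floor, (s_blue - s_red) / safe, np.nan)

rng = np.random.default_rng(0)
blue, red = rng.random((64, 64)), rng.random((64, 64))
r = ratio_map(blue, red)
print(np.nanmin(r), np.nanmax(r))                     # values lie within [-1, 1]
```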
The lack of strong correlation between the red/blue structures and the brightness contours shows that our map is brightness independent. We also observe a clear asymmetry in the map, with the south more redshifted than the north, but it is difficult to go further in the interpretation without a physical meaning for the ratio of components. So we need to construct a physical proxy of the \(V_{\rm z}\) velocity.

Figure 2: Ratio map (see Equation 3) of the red and blueshifted GMCA images. For example, a pixel with a value of 1 is dominated by the blueshifted GMCA image. The main synchrotron filaments are indicated in green, found with a contour detector in the first GMCA component. The dark contours come from the total image of the SNR in the 1.6 - 2.1 keV band smoothed by a 5 arcsec Gaussian kernel.

### From ratio map to velocity map

In this Section we explore how our red and blueshifted maps can be used as a proxy to estimate the mean \(V_{\rm z}\) velocity in each pixel. According to the GMCA definition, the spectrum in a pixel (\(i\), \(j\)), \(\rm{A_{ij,~{}tot}}(E)\), can be written as:

\[\rm{A_{ij,~{}tot}}(E)=\sum_{k}S_{ij,~{}k}~{}A_{k,~{}GMCA}(E) \tag{4}\]

with \(S_{ij,\,k}\) the value of pixel (\(i,j\)) in the image of the \(k\)th component and \(A_{k,\ \rm GMCA}(E)\) its spectrum. As we study the ejecta dynamics, we discard the synchrotron component and consider only the two line components. We can approximate their GMCA spectra as Gaussian functions as in Equation 5.

\[{\rm{A_{k,\ GMCA}}}(E)=\beta_{k}+\alpha_{k}\exp\left(-\left(\frac{E-\bar{E}_{k}}{\sigma_{k}}\right)^{2}\right) \tag{5}\]

If we want to find the peak energy of the silicon line in a pixel (\(i,j\)), that is the energy \(E_{\rm p,\,ij}\) where the line reaches its maximum value, we must find the energy at which the derivative of \(\rm{A_{ij,~{}tot}}(E)\) vanishes. Simplifying the exponential terms then yields an analytic approximation of \(E_{\rm p,\,ij}\) as a combination of the line centroids \(\bar{E}_{k}\) of the two ejecta components, weighted by their amplitudes in each pixel. Several caveats apply to this construction:

* The choice of this Gaussian function may not provide the best fit to the GMCA spectrum. However, it is the easiest way to obtain the peak energy map analytically.
* The most limiting approximation is probably the simplification of the exponential needed to find an easy analytic solution for the map of peak energy. So our map must be seen as a proxy of the integrated peak energy and \(V_{\rm z}\).
* Finally, the transformation from peak energy to \(V_{\rm z}\) raises the question of the energy of reference, which is studied in more depth in Appendix B (the conversion itself is sketched just after this list).
* Only an average velocity weighted by the local flux in the line of sight is reconstructed.
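The conversion from the peak-energy proxy to a line-of-sight velocity is a direct application of the Doppler relation; the short sketch below illustrates it. The reference energy of the Si line used here is an assumed placeholder value, the actual choice of \(E_{\rm ref}\) being discussed in Appendix B.

```python
# Doppler conversion of a peak-energy map into a V_z proxy map.
# Convention: z positive away from the observer, so a redshifted line
# (E_p < E_ref) gives a positive V_z.
import numpy as np

C_KM_S = 299_792.458     # speed of light in km/s
E_REF_KEV = 1.86         # assumed rest-frame Si line reference energy (keV)

def velocity_map(peak_energy_kev, e_ref=E_REF_KEV):
    return C_KM_S * (e_ref - peak_energy_kev) / e_ref

e_p = np.array([[1.85, 1.87],
                [1.86, 1.88]])          # toy 2x2 map of Si peak energies (keV)
print(velocity_map(e_p))                # km/s, positive = moving away from us
```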
In conclusion, we obtain a map of the mean \(V_{\rm z}\) at a 2" pixel level with a total coverage of the SNR for the first time. Overall we find higher \(V_{\rm z}\) in the center than at the edge, as expected for a spherical expansion but with many patchy features dominating in the foreground or in the background. The important features which will be discussed in more detail later is the clear North/South asymmetry. A similar trend was seen by Millard et al. (2022) but with limited sampling and is clearly confirmed here thanks to our full coverage. ### Limitations due to integration in the line of sight As SNRs are optically thin in the X-ray domain, there is a notable difficulty with the map of \(V_{\rm z}\) (Figure 4). The spectrum in a pixel is integrated over the line of sight, so the peak energy and the corresponding velocity are also weighted by the local brightness along the line of sight. In a perfectly spherical remnant with a homogeneous emission and spherical expansion, velocities from the two half-shells would cancel out and no Doppler shift would be measured with our method, only a line broadening would be observed. However, even for a regular type Ia supernova remnant, these assumptions are not valid. In particular, the flux varies at large scale in the SNR. Due to the Rayleigh-Taylor instabilities and the clumpy aspect of the ejecta, bright clumps can also dominate the line of sight. This is why the usual method used in Williams et al. (2017), Sato & Hughes (2017), and Millard et al. (2022), which consists of studying only bright blobs is interesting. By isotropy, we can suppose that the blob is also locally small in the line of sight and that it dominates the emission. In this case, the measures of \(V_{\rm z}\) are located at one point on the line of sight (which is necessary to do 3-D reconstruction). But it is difficult to find good blobs to study in the SNR. There are around a hundred points currently in the accumulation of all the studies using this method, providing a limited coverage of the SNR. Figure 4: Map of the peak energy reconstructed with GMCA components and its equivalence in terms of integrated velocity in the line of sight. The markers are the measurements from Williams et al. (2017) (circles),Sato & Hughes (2017) (triangles), and Millard et al. (2022) (squares). Their colors (blueshifted or redshifted) come from the results of these articles. Our method is complementary: we can quickly obtain a proxy of the integrated \(V_{\rm z}\) velocity with total coverage. We are consistent with the local measurements (see Figure 5) and we highlight some large-scale asymmetries (see Discussion). But there are some limitations with our method. * We have a degeneracy at low values (in grey in Figure 4): it can be due to true slow velocities (which are expected at the edge), or compensation of the two SNR halves. * Our values tend to be underestimated (the slope of the correlation in Figure 5 is larger than one). In general, there is probably not a clear dominance of one side over another. So the velocities are averaged on the line of sight. * We have a problem in the interpretation of these large-scale asymmetries: both asymmetries of velocity and flux can explain them. * It is difficult to localize the position of the emitting region in the line of sight to do a full three-dimensional reconstruction as we will present in Section 6.5. 
## 5 Results : Plane of the sky velocities \(V_{\rm xy}\)

### Proper motion vector field

As explained in the methodological Section 3.2, we use POF, a tool of our own design, to compute around 1700 proper motion vectors in the plane of the sky. The first step of our method is to find small morphological features whose shift between two observations at different epochs will be measured. We initially tested the method only on bright features with a sharp morphology, like ejecta knots. However, it appeared that when extending the method to fainter, less contrasted features, the tool succeeded in measuring a shift with reasonable likelihood profiles, providing relatively small errors in all directions as shown in Appendix A (see Figure A.1, center panel). Our method is not only sensitive to bright knots, but also to more diffuse structures. This is because we are using the full 2-D information in the likelihood and not only a 1-D projection, which necessarily produces some loss of information. Following these tests, we decided to not only follow bright knots but to map the proper motion of the entire remnant by tracking features defined on a regular grid of points. We take only the points inside a mask of ejecta created with the GMCA outputs, excluding most of the synchrotron filaments. The boxes around the good features (called vignettes in the following) are 30 pixels (15") wide and their centers are separated by 20 pixels, so there is some overlap from one box to another. Then we applied POF to the epochs 2003 and 2009. The 2009 observation has a deeper exposure time and is used as the model. The maximal shift that can be measured in the exploration zone is 8 pixels (corresponding to \(\sim\)11000 km s\({}^{-1}\)). We must then deal with the anomalies in our outputs. If an initial feature is located in a zone without enough counts and/or contrast, our tool will not succeed in measuring a shift. Most of the vignettes where the method provides unreliable results are located close to the exposure gaps in the 2003 observation (i.e. on the bad columns and the CCD gaps). So we chose to keep only the measurements with an expansion index \(m\) (where \(m=\frac{V_{\rm xy}\,t_{\rm SNR}}{R_{\rm xy}}\), with \(t_{\rm SNR}\) the age of the SNR) lower than 1.2 (some of them were up to 12). This removes around 8.6 % of the outputs, essentially in areas of the CCD gaps. For the parameters described above, we obtain 1722 velocity vectors, shown in Figure 6, out of 1884 initial features in the grid. The ellipses of uncertainty, different for each vector, are not shown here for readability. The mean value of these 1-sigma uncertainties is 370 km s\({}^{-1}\) (see Figure A.1 for some examples). As expected for a nearly spherical expansion projected on the plane of the sky, the velocity in the plane of the sky is higher at the edge than in the center. The distribution of \(V_{\rm xy}\) has a mean value of around 3610 km s\({}^{-1}\), or 0.217\({}^{\prime\prime}\) yr\({}^{-1}\). This corresponds to a mean expansion index of 0.59. Our values are consistent with previous studies of the ejecta dynamics: Williams et al. (2017) found a mean \(V_{\rm xy}\) velocity of 4430 km s\({}^{-1}\) with a range from 2400 to 6600 km s\({}^{-1}\), and Millard et al. (2022) a mean of 4150 km s\({}^{-1}\) in a range from 1890 to 5950 km s\({}^{-1}\). Our vector field in Figure 6 may seem a bit noisy: the vectors are not perfectly radial in general. There is no spatial regularisation; we probe only the local behavior.
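To make the measurement concrete, the sketch below implements one reading of the POF scan of Section 3.2 (Eq. 2) for a single vignette: the model is shifted on a sub-pixel grid with cubic interpolation, the C-statistic is evaluated at each trial shift, and the minimum of the resulting landscape gives the displacement (the 1-sigma ellipse being the contour at the minimum plus 2.3). Array names, grid steps and the toy data are assumptions, and the local 2-D polynomial refinement of the minimum is omitted; the vector field above results from running such a scan on every vignette of the grid.

```python
# Minimal Poisson Optical Flow scan for one vignette.
import numpy as np
from scipy.ndimage import shift as nd_shift

def cstat(model_counts, observed_counts):
    """C-statistic for Poisson data (Cash 1979), XSPEC-like form."""
    m = np.clip(model_counts, 1e-12, None)
    n = observed_counts
    term = np.where(n > 0, n * np.log(np.where(n > 0, n, 1.0) / m), 0.0)
    return 2.0 * np.sum(m - n + term)

def pof_landscape(model_flux, exp_model, obs_counts, exp_obs,
                  max_shift=8.0, step=0.2):
    """cstat landscape L(dy, dx) over a grid of sub-pixel trial shifts."""
    shifts = np.arange(-max_shift, max_shift + step, step)
    landscape = np.empty((shifts.size, shifts.size))
    flux = model_flux / np.clip(exp_model, 1e-12, None)
    for iy, dy in enumerate(shifts):
        for ix, dx in enumerate(shifts):
            shifted = nd_shift(flux, (dy, dx), order=3, mode="nearest")
            predicted = shifted * exp_obs          # back to expected counts
            landscape[iy, ix] = cstat(predicted, obs_counts)
    return shifts, landscape

# toy check: recover an imposed shift of (1.4, -0.6) pixels under Poisson noise
rng = np.random.default_rng(0)
truth = rng.poisson(5.0, size=(30, 30)).astype(float)
moved = np.clip(nd_shift(truth, (1.4, -0.6), order=3, mode="nearest"), 0, None)
obs = rng.poisson(moved)
ones = np.ones_like(truth)
shifts, L = pof_landscape(truth, ones, obs, ones)
iy, ix = np.unravel_index(np.argmin(L), L.shape)
print("best shift (dy, dx) =", shifts[iy], shifts[ix])
```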
Physically, they can be non-radial because of local turbulence, and because of deviations due to interactions with dense clumps or large interstellar clouds. The distribution of angular deviations between our vectors and their radial equivalents is a Gaussian centered at 3.1\({}^{\circ}\) with a standard deviation of 19.6\({}^{\circ}\).

Figure 5: Comparison of the \(V_{\rm z}\) velocity obtained with our method and with spectral studies using ACIS (Williams et al. (2017) with black markers and Sato & Hughes (2017a) with red markers) and gratings (Millard et al. 2022) (blue markers).

Figure 6: Proper motion vector field between 2003 and 2009 obtained with the tool POF. The size of the vignette for each tracked feature is 15". There are 1722 vectors, colored by the value of their norm. The colorbar is saturated at 7000 km s\({}^{-1}\). The background image is the observation of Tycho's SNR in 2003 without any exposure map correction.

### Center of the explosion from the vector field

With this velocity vector field, we can attempt to find the common origin of these vectors, that is, the center of the explosion. To do this we use the method from Sato & Hughes (2017b). The idea is to suppose a power-law expansion of the radius, \(r\propto t^{m}\), where \(m\) is the expansion index. If \(m\) is low, the ejecta have slowed down; if \(m\) is near 1, the ejecta are in free expansion. Under these assumptions the projected radius \(R_{\rm xy}\) is equal to \(\frac{V_{\rm xy}\,t_{\rm SNR}}{m}\). And so the center of the explosion can be deduced from the position of each vector, its expansion index and the age of the SNR \(t_{\rm SNR}\). However, this origin of the explosion is also needed to calculate \(m\). To solve this we use an iterative protocol (sketched in code below). We initiate it with the center from Williams et al. (2016), and at each step :

* We calculate \(m\) and the angular deviation \(\Delta\theta\) from a pure radial expansion for each vector.
* We create a mask to have a "golden sample" of vectors with \(1.2>m>m_{\rm lim}\) and \(|\Delta\theta|<\Delta\theta_{\rm lim}\), to keep vectors that have not decelerated too much and have had little angular deviation.
* We calculate an origin for each vector of this golden sample, which is in the direction of the vector at a distance of \(R_{\rm xy}=\frac{V_{\rm xy}\,t_{\rm SNR}}{m}\).
* We take the median of these origins as the new center.

We use the distribution of origins at the final step to obtain the uncertainty contours using a Gaussian kernel density estimate. The final value and its error bar are the median and standard deviation of this last distribution, as shown in Figure 7 together with a comparison with previous studies. We use a hundred iterations; in practice the convergence is very fast. As limits for the golden sample we take a maximal deviation from a radial vector of \(\Delta\theta_{\rm lim}=5^{\circ}\) and a minimal expansion index of \(m_{\rm lim}=0.75\). Finally 44 vectors remain, many more than in all the other studies using this method, and we obtain a value of \(\mathrm{R.A.}=00^{h}25^{m}20^{s}.79\,^{+12.3}_{-10.3}\) and \(\mathrm{Dec}=64^{\circ}08^{\prime}09^{\prime\prime}.04\,^{+5.7}_{-5.9}\). Our result is closer to the measurement of Warren et al. (2005), which was based on geometrical considerations. The measurement from Williams et al. (2016) that we use as the starting point is based on the measurement of the forward shock expansion in 17 regions and on a simulation-based relation between the explosion center offset and the geometrical center (Williams et al., 2013).
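A compact sketch of this iterative centre-finding protocol, under the relation \(R_{\rm xy}=V_{\rm xy}\,t_{\rm SNR}/m\), is given below. The per-vector expansion index recomputed from the current centre estimate, the thresholds and the array units represent one reading of the procedure and are not the exact implementation used here.

```python
# Iterative estimate of the explosion centre from a proper-motion vector field.
import numpy as np

def estimate_center(x, y, vx, vy, t_snr_yr, center0,
                    m_lim=0.75, m_max=1.2, dtheta_lim_deg=5.0, n_iter=100):
    """x, y in arcsec; vx, vy in arcsec/yr; returns the (xc, yc) median origin.
    Typically initialised with a previously published centre (center0)."""
    xc, yc = center0
    for _ in range(n_iter):
        rx, ry = x - xc, y - yc
        r = np.hypot(rx, ry)
        v = np.hypot(vx, vy)
        m = v * t_snr_yr / r                       # expansion index per vector
        # angle between the velocity vector and the radial direction
        cos_t = (vx * rx + vy * ry) / np.clip(v * r, 1e-12, None)
        dtheta = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
        good = (m > m_lim) & (m < m_max) & (dtheta < dtheta_lim_deg)
        if good.sum() == 0:
            break
        # origin of each golden-sample vector: step back along its velocity
        back = v[good] * t_snr_yr / m[good]        # = R_xy for that vector
        ox = x[good] - back * vx[good] / v[good]
        oy = y[good] - back * vy[good] / v[good]
        xc, yc = np.median(ox), np.median(oy)      # new centre estimate
    return xc, yc
```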
Our result uses the ejecta as tracer, which is more directly connected to the explosion than the forward shock. The latter is more sensitive to the circumstellar medium and the perturbations due to the expansion. The result from Millard et al. (2022), using the same protocol but with fewer vectors, is also compatible with our result. Given this new estimate of the explosion center, we carried out a search for a potential progenitor using _Gaia_ data release 3 (DR3) which is presented in Section 6.4. ## 6 Discussion Thanks to the two new velocity measurement methods that we have developed, we obtain around 1700 \(V_{\mathrm{xy}}\) vectors and a \(V_{\mathrm{z}}\) map, at a 2" spatial resolution, with a total coverage of the SNR. All our measurements are summarized in a histogram in Figure 8. As explained in Section 4.3, our distribution of \(V_{\mathrm{z}}\) is biased by the integration along the line of sight and the velocities are likely underestimated for high values. At first glance, the velocities on the three axes are in a range between -6000 and 6000 km s\({}^{-1}\) with a symmetric distribution. As expected for a SNR issued from a thermonuclear supernova, Tycho's SNR has a regular shape and dynamics overall. However a detailed inspection reveals a more complex behavior with dynamics asymmetries both at large and small scales. The origin of these behaviors can be innate, which means due to the explosion anisotropy, or acquired because of inhomogeneities in the environment that slow down the expansion. In this second case the question is also raised to know the age of this interaction and the origin of this density inhomogeneities: is it due to the progenitor (circumstellar medium, CSM) or was it pre-existing (interstellar medium, ISM)? In the following Sections 6.1, 6.2 and 6.3, the velocity maps are used to investigate large and small scales dynamics anisotropies in the context of our understanding of the surrounding medium. Then we discuss the use of the ejecta vector field to pinpoint the explosion center and search for stellar progenitors (Section 6.4). Finally we combine all the velocity information into a 3-D representation in Section 6.5. ### Large scale asymmetries in the \(V_{\mathrm{z}}\) map The most obvious large-scale asymmetry is in the line of sight velocity \(V_{\mathrm{z}}\) from Figure 4. In the south, the redshifted emission is dominant and in the North, there is more blueshifted emission. This has been also remarked in Millard et al. (2022) and Sato & Hughes (2017a) but with a much more limited coverage. This is clearly confirmed in our work, with our full coverage of the entire remnant. That means that in the North, the bulk of the material is preferentially moving toward us or that the near side is brighter than the back side. And in the South, it is the opposite. There are two ways to interpret this asymmetry. This can be due to an asymmetry during the explosion. Hundreds of years after, the explosion asymmetry could still be visible in the SNR structure as shown for example in the type Ia simulation of Ferrand et al. (2019). Perhaps the asymmetry that we see is due to the SNR being an oblong shell, elongated at an angle with respect to the line of sight creating this blue/red patterns in the North and in the South respectively. Another possibility is that this asymmetry is acquired due to an interaction with interstellar material which slows down the ejecta. 
If a cloud is behind the SNR in the North and another in front of the SNR in the South, we would have the type of ejecta dynamics that we observe. However, these interactions would increase the brightness of the slower side of the shell. As we measure integrated velocity along the line of sight, this could compensate the higher velocity of the non-interacting side. A ring of circumstellar matter could explain this distribution as is observed for SN 1987a. Nevertheless, there are no clear observations of a cloud in front of the SNR in the South. This hypothesis Figure 8: Normalized histograms of our velocities\(V_{\mathrm{z}}\) and \(V_{\mathrm{y}}\) obtained with the proper motion method and \(V_{\mathrm{z}}\) estimated via the Doppler shift. The x-axis is oriented to the West, the y-axis to the North, and the z-axis is positive away from the observer. Figure 7: Locations of the center of the explosion found by this study (see main text) and by other studies. The contours are our uncertainties at 1, 2 and 3 sigma based on Gaussian kernel density estimation. The star symbols are the potential donors listed by Kerzendorf et al. (2013) together with their current and past position, and proper motion vectors from _Gaia_ DR3. of a slow down due to an interaction raises the question of when such an interaction could have happened. Either the SNR is currently interacting with large scale clouds, or this asymmetry was acquired during the first decades after the explosion in a scenario in which the SNR had evolved in a dense, but small, wind bubble as described in Chiotellis et al. (2013). The second possibility could explain why we do not currently observe a cloud in front of the southern half. Nevertheless, in their one-dimensional simulation, they show that current dynamics of Tycho's SNR will be identical to a case without a wind shell, there will be only an impact on the ionization time and a small variation of the reverse shock's radius. Further work could disentangle these scenarios with a complete mapping of the ejecta plasma parameters (in particular the plasma temperature and ionization timescale) via X-ray spectral analysis. ### Large scale asymmetries on the \(V_{\rm xy}\) vector field A large-scale asymmetry is also known in the plane of the sky. This was noticed in forward shock proper motion measurements from Williams et al. (2016), Katsuda et al. (2010) and Tanaka et al. (2021). In Williams et al. (2013), mid-infrared observations have highlighted a density gradient from East to West which agrees with the forward shock asymmetries. This East/West asymmetry was not observed in the ejecta proper motion vector field (Williams et al., 2017; Millard et al., 2022). At first glance in our vector field in Figure 6, it is difficult to say if the ejecta also show this East/West asymmetry. To study in more detail our proper motion vector field, we represent in Figure 9 profiles of the \(V_{\rm xy}\) velocity as a function of the radius in the plane of the sky \(R_{\rm xy}\) for eight angular sectors. These sectors are based on the morphology of Tycho's SNR as seen in the middle panel and details in its proper motion's dynamic. Sector F between 200 and 300 degrees is used as a ref Figure 9: Profiles of the proper motion \(V_{\rm xy}\) as a function of the radius in the plane of the sky \(R_{\rm xy}\) for eight angular sectors. In the central panel, sectors are overlaid on the 0.5-7 keV _Chandra_ map from the deep 2009 observations. 
The color red/blue is the velocity \(V_{\rm z}\) of our map in Figure 4, in the same position as the POF measurements in Figure 6. We add on the profiles the forward shock velocity measurements from Katsuda et al. (2010) (black circles) and Williams et al. (2016) (black squares) that are located in the associated sector. erence of an expected dynamics without perturbations in other analyses (Chiotellis et al., 2013; Badenes et al., 2006). Sector C matches the known fast iron and silicon rich knots (Yamaguchi et al., 2017). Sectors E and H represent the protrusions, where the ejecta reach the forward shock. The small sector A was selected because of its unexpected dynamics seen in Figure 6. Sector G has higher flux and slower velocities at the edge. Finally sectors B and D probe large scale dynamics in the East and South-East where the forward shock is slower. In this figure, we also add the forward shock measurements from Katsuda et al. (2010) and Williams et al. (2016), based on _Chandra_ data using the same 3.5 kpc distance as in this paper. We can see the forward shock asymmetry with velocities up to 6000 km s\({}^{-1}\) in the West (sector F) and slower velocities of around 4000 km s\({}^{-1}\) or less in the East (sector B). The first observation is that we do not see the same contrast in our ejecta velocities. However when comparing ejecta dynamics with forward shock dynamics, there is a clear pattern where in the West, the forward shock moves, as expected, faster than the ejected material. Due to projection effects there is also a linear relation between \(V_{\rm xy}\) and \(R_{\rm xy}\), with very few deviations for the sector F. In the East, in sector B and D for example, the forward shock is slower than the ejecta and we observe strong variations around the expected linear behaviour. Maps of the ambient medium at the edge of the SNR based on infrared (Williams et al., 2013) and radio observations (Arias et al., 2019; Castelletti et al., 2021) show that the West has indeed no potential clouds which could disturb the spherical expansion. Nevertheless, Chiotellis et al. (2013) argue that an interaction with a small and dense wind bubble during the early expansion phase (less than 100 year) of Tycho's SNR could explain the dynamic and spectral properties in the sector F. The East on the contrary seems to have a complex external structure in the multi-wavelength observations, which explains the velocity difference between the forward shock and the ejecta. This medium could also be the origin of some local anomalies in our vector field which are discussed in the next subsection. Figure 9 also includes the velocity \(V_{x}\) in the color of each marker. It shows that there are no correlations between the behaviour at large scale in the plane of the sky and in the line of sight. or 7 kpc (Ruiz-Lapuente et al. 2019), making it unlikely to be associated with the SNR. However, the latest parallax measurement from Gaia DR3 indicates that star E is much closer. The photo-geometric distance, a useful method for poorly measured parallax, takes into account both parallax and photometric data to constrain the distance using stellar models. Using this method, the errors in the distance to star E narrow to 3.34\({}^{+10}_{-0.7}\) kpc (Bailer-Jones et al. 2021), which is consistent with the SNR distance. Furthermore, a spectroscopic study by Ihara et al. 
(2007) found that star E is the only star in our sample to exhibit an absorption Fe I line at 3720 A (though this detection is disputed by Gonzalez Hernandez et al. 2009). The fact that only the blueshifted side of the absorption feature was detected would indicate that the star is within the SNR sphere. However, confirming whether star E resides inside the SNR or is located in its background is challenging due to uncertainties associated with the template stellar spectra that impact the detection of the redshifted side of the absorption feature (Ihara et al. 2007). For star B, Kerzendorf et al. (2018) and Ruiz-Lapuente et al. (2019) ruled out an association with the SNR based on several arguments. One of them was its distance but the _Gaia_ DR2 parallax used in these papers has evolved from 0.491\(\pm\)0.051 mas (\(\sim\) 2 kpc) to 0.373\(\pm\)0.032 mas in DR3 therefore placing the star slightly further away (2.53\({}^{+0.23}_{-0.20}\) kpc) and in better agreement with the UV-optical luminosity distance estimate of \(d=2.63^{+0.69}_{-0.23}\) kpc from Kerzendorf et al. (2018) using _Hubble_ space telescope data. As the stellar companion in a Type Ia explosion is supposed to be flung out of the system, the remaining donor star after the supernova is expected to have an unusual velocity with respect to surrounding stars. We therefore compared the velocity properties of star B (V\(=54\) km s\({}^{-1}\) at a distance of 2.53 kpc) with the sample of stars in a 30' radius, lying in a distance slice of 2.5-4 kpc, a parallax fractional error better than 20 % (good distance estimate) and a proper motion error better than 0.1 mas yr \({}^{-1}\). This sample resulted in a total of \(\sim\)500 stars. When building a histogram of stellar tangential velocities, estimating the velocity for each star at its geometrical distance, star B is amongst the 25\({}^{\rm th}\) percentile of fastest stars in this sample. In theory this exercise should be carried out using the full 3-D stellar velocity of the sample. However, while the radial velocity of star B has been measured (V\({}_{\rm rad}=51.29\pm 1.8\) km s\({}^{-1}\), Kerzendorf et al. 2018), only \(\sim\)150 out of our 500 stars have Gaia radial velocity measurements. In this biased sample (mostly limited by magnitude), the 3-D velocity of star B (V\(=\)74 km s\({}^{-1}\)) is below the median value (85 km s\({}^{-1}\)) of the sample showing that it has no particular velocity with respect to the neighboring stars. In light of the latest measurements from _Gaia_ DR3, it appears that stars B and E are the only potential donor stars for the SNR, other stars likely being foreground objects. Thus, it may be concluded that either star B is associated with SN 1572 in a single degenerate scenario, wherein most of the Fe inside is highly ionized to account for the absence of an Fe II absorption line in its UV spectrum (Kerzendorf et al. 2018). Alternatively, star E could be the progenitor, but further spectroscopic observations are required to confirm the Fe I absorption feature. Finally, it is possible that there is no discernible stellar progenitor, and SN 1572 resulted from a double degenerate explosion. ### 3-D reconstruction In this study, we have obtained the velocities in the plane of the sky \(V_{\rm x}\) and \(V_{\rm y}\) and the integrated velocity in the line of sight \(V_{\rm z}\). We have also directly the position of the vector in the plane of the sky (\(x\) and \(y\)). 
Two limitations remain to obtain a complete 3-D reconstruction of the SNR and its dynamics. We need the \(V_{\rm z}\) velocity at one point, not an integration over the line of sight, and the position of this point along the line of sight, \(z\). To limit this line-of-sight integration problem, we select only the regions that are dominantly red or blueshifted. To do this we detect local extrema on our map of \(V_{\rm z}\) velocity using the tool peak_local_max 5 of the _Skimage_ library. Choosing points not too near the SNR edge, where \(V_{\rm z}\) is poorly determined, we obtain around 350 redshifted points and 320 blueshifted points, evenly spaced on the SNR. Then we apply the tool POF presented in Section 3.2 to measure the corresponding proper motion of these specific features. Finally we select only points with an expansion index \(m\) less than 1.2 (as in Section 5.1) and an angular deviation from a radial expansion less than 40\({}^{\circ}\), ending up with a collection of nearly 530 points. For this sample the mean of the space velocity \(V_{\rm xyz}\) is 3650 km s\({}^{-1}\) with a standard deviation of 1420 km s\({}^{-1}\). This is in agreement with the values found by Millard et al. (2022), which are in a range of around 1900 - 6000 km s\({}^{-1}\).

Footnote 5: [https://scikit-image.org/docs/stable/auto_examples/segmentation/plot_peak_local_max.html](https://scikit-image.org/docs/stable/auto_examples/segmentation/plot_peak_local_max.html)

To obtain the line-of-sight position \(z\) for each of these points, we must add a hypothesis. If we suppose that the velocity and radius vectors are colinear, there is a simple kinematic relation \(z=\frac{V_{\rm z}}{V_{\rm xy}}\,r_{\rm xy}\) which holds at each point. In Section 5.1, we obtained an estimation of the angular deviation between the radius and the velocity vectors: its distribution is a Gaussian centered around zero with a standard deviation of around 20\({}^{\circ}\). So the position \(z\) that we obtain is only an approximation. Nevertheless, we now have a proxy of the space radius \(r_{\rm xyz}\) for all of our points. 75% of our sample have a space radius larger than 2.2 arcmin. That is between the estimation of the position of the reverse shock from Yamaguchi et al. (2014), 2.6 arcmin, and the one from Millard et al. (2022), 2.0 arcmin. Finally, combining the positions (\(x\), \(y\), \(z\)) and the three-dimensional velocity vectors, we obtain a full reconstruction of the dynamics of Tycho's SNR, presented in Figure 10. Each plot represents the expansion of half a shell viewed along the x or y-axis. Each arrow is color coded with its line-of-sight velocity \(V_{\rm z}\). The lack of vectors for \(z\) around 0 is due to the local extrema search, which selects only high \(V_{\rm z}\) values. In Section 4.2, we also underline that due to calibration uncertainties, the values of \(V_{\rm z}\) with a norm less than 500 km s\({}^{-1}\) are unreliable.
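The colinearity hypothesis used above amounts to a one-line relation; the sketch below applies it, with array names and units chosen for illustration only.

```python
# Place each feature along the line of sight under the colinearity assumption:
# if velocity and radius vectors are parallel, z / r_xy = V_z / V_xy.
import numpy as np

def line_of_sight_position(x, y, vx, vy, vz):
    """x, y: plane-of-sky offsets from the SNR centre; returns z in same units."""
    r_xy = np.hypot(x, y)
    v_xy = np.hypot(vx, vy)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(v_xy > 0, vz * r_xy / v_xy, 0.0)

# toy example: a blob 2 arcmin from the centre, moving 3000 km/s in the plane
# of the sky and 1500 km/s away from us, sits ~1 arcmin behind the centre
print(line_of_sight_position(np.array([2.0]), np.array([0.0]),
                             np.array([3000.0]), np.array([0.0]),
                             np.array([1500.0])))
```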
Broadly speaking, we must be aware that the distribution of our sample is not \begin{table} \begin{tabular}{c|c c c} \hline \hline Star & Mag & Parallax (mas) & d\({}_{\rm geom}\) (kpc) \\ \hline A & 12.41 & 0.825 \(\pm\) 0.035 & 1.20\({}^{+0.06}_{-0.04}\) \\ B & 15.11 & 0.373 \(\pm\) 0.032 & 2.53\({}^{+0.23}_{-0.20}\) \\ C & 18.17 & 3.561 \(\pm\) 0.523 & 0.30\({}^{+0.05}_{-0.04}\) \\ D & 19.36 & 1.256 \(\pm\) 0.282 & 0.93\({}^{+0.27}_{-0.19}\) \\ E & 18.93 & 0.266 \(\pm\) 0.175 & 3.52\({}^{+2.0}_{-1.0}\) \\ G & 17.96 & 0.518 \(\pm\) 0.099 & 1.95\({}^{+0.47}_{-0.32}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Properties of SN 1572 potential donor stars from the _Gaia_ DR3 catalog (Gaia Collaboration et al. 2022). The geometrical distance derived from the parallax is given at the 16, 50, and 84 percentiles from the posterior distribution of the Baeyesian distance estimate from Bailer-Jones et al. (2021). Mag is the G-band mean magnitude. evenly distributed in the emission shell, so we must be cautious with the zones where there is a lack of vectors. The North/South asymmetry we saw in the integrated \(V_{x}\) map (Figure 4) is visible in these three-dimensional views : in the top left, the north half shell is more blueshifted and on the contrary, the south half shell is dominantly redshifted (top right panel). In the same way, in the bottom panels, a bipolar large-scale velocity asymmetry is observed, which agrees with this large-scale asymmetry. To have an interactive representation of this complex dataset, we use the tool _Blender6_ to build a 3-D visualization of our results, which can be found in the platform _Sketchlab_ at this address. Footnote 6: [https://www.blender.org](https://www.blender.org) ## 7 Conclusion The _Chandra_ observations of the Tycho supernova remnant are the perfect dataset to apply new tools to study the SNR ejecta dynamics in great details. In this study, we measure separately the velocity in the line of sight (V\({}_{x}\)) and the proper motion (V\({}_{xy}\)) in the plane of the sky. To estimate V\({}_{x}\), we used the tool GMCA (General Morphological Component Analysis) to decompose our data cube (\(x\), \(y\), \(E\)) and to separate the redshifted from the blueshifted emission. We obtain a map of the mean velocity V\({}_{x}\) with full coverage of the SNR at 2" spatial resolution for the first time. Then, we develop the tool POF (Poisson Optical Flow) to measure the shift of features between epochs with a two-dimensional fit adapted to Poisson noise. The result for Tycho's SNR is a velocity vector field with more than 1700 vectors. These velocity fields with an unprecedented level of detail underline the complex dynamics of Tycho's SNR despite its overall regular shape. Our main findings in this study are : * In the line of sight, the full coverage of the SNR confirms the North/South velocity asymmetry hinted by Millard et al. (2022). This bipolar structure could be due to an asymmetric elongated explosion tilted towards the observer, or to an interaction with some overdensity in front and behind the remnant. * In the plane of the sky, a slow down of the forward shock velocity was previously measured in the East compared to the West associated to a gradient of density. In the ejecta dynamics, we observe that the velocity linearly increases with the radius in the western undisturbed region, with a forward shock faster than the ejecta. 
Whereas in the East, the dynamics are more complex, likely due to the density gradient, and some inner ejecta have higher velocities than the forward shock. * At small scales, we observe in our \(V_{xy}\) vector field an interesting structure in the North-East where the velocity increases followed by a decrease with an increasing radius. Figure 10: Three dimensional vector field of the dynamics of Tycho’s SNR based on our results. _Top right and left :_ View along the y-axis (from above). _Bottom right and left :_ View along the x-axis (from the right). The colors are the velocities \(V_{x}\). The green arrows at left indicate the position of the observer and we add in green some indications of the zones seen by the observer looking at this plot. The lack of vectors for positions in the line of sight \(z\) near zero, is due to a selection bias (see the text). A 3-D visualisation is available at this link. This is unexpected as the velocity profile should increase linearly with radius due to projection effects. The position of this feature matches a potential molecular cloud seen in the radio. This could be interpreted as a complex projected profile of the current deceleration of the ejecta interacting with the cloud or two different components such as a fast knot in the foreground and ejecta slowing down in the background. * Using the V\({}_{xy}\) field of ejecta vectors, we estimate the center of explosion by finding the common origin of these vectors and we revisit the properties of potential stellar progenitors using the _Gaia_ DR3 catalog. Latest parallax measurements place stellar candidate B slightly further away (d\(\sim\)2.5 kpc) than in the DR2 catalog (d\(\sim\)2.0 kpc), at a distance now compatible with the SNR. With improved measurements, star E is also an interesting alternative candidate at a distance of \(\sim\)3.5 kpc and with potential Fe I line absorption due to the SNR ejecta. * Combining V\({}_{xy}\) and V\({}_{x}\) we reconstruct a 3-D vector field with around 450 positions in the SNR and build an interactive vizualization of this complex dataset. The new methods developed in this study benefit from the very good statistics of the data of Tycho's SNR observed by the _Chandra_ telescope. However, they could also be used for other supernova remnants or astrophysical objects. In particular, the GMCA algorithm is a powerful tool to decompose any cube of data with good contrast between the underlying components. The tool POF could be also applied on other objects to study their dynamics if they are observed with sufficient spatial resolution and statistics. ###### Acknowledgements. We thank Gabriel Pratt for his useful comments on the analysis and on the manuscript, and Jerome Bobin for discussions on the GMCA method. We also thank Benjamin Romain, who produced the three dimensional visualisation with _Blender_. The research leading to these results has received funding from the European Union's Horizon 2020 Programme under the AHED2020 project (grant agreement n. 871158). This work was supported by CNES, focused on methodology for X-ray analysis. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (IPAC, [https://www.cosmos.esa.int/web/gaia/dpc/consortium](https://www.cosmos.esa.int/web/gaia/dpc/consortium)). 
Funding for the IPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
2305.07185
MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers
Autoregressive transformers are spectacular models for short sequences but scale poorly to long sequences such as high-resolution images, podcasts, code, or books. We propose Megabyte, a multi-scale decoder architecture that enables end-to-end differentiable modeling of sequences of over one million bytes. Megabyte segments sequences into patches and uses a local submodel within patches and a global model between patches. This enables sub-quadratic self-attention, much larger feedforward layers for the same compute, and improved parallelism during decoding -- unlocking better performance at reduced cost for both training and generation. Extensive experiments show that Megabyte allows byte-level models to perform competitively with subword models on long context language modeling, achieve state-of-the-art density estimation on ImageNet, and model audio from raw files. Together, these results establish the viability of tokenization-free autoregressive sequence modeling at scale.
Lili Yu, Dániel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, Mike Lewis
2023-05-12T00:55:41Z
http://arxiv.org/abs/2305.07185v2
# MegaByte: Predicting Million-byte Sequences with Multiscale Transformers

###### Abstract

Autoregressive transformers are spectacular models for short sequences but scale poorly to long sequences such as high-resolution images, podcasts, code, or books. We propose MegaByte, a multi-scale decoder architecture that enables end-to-end differentiable modeling of sequences of over one million bytes. MegaByte segments sequences into patches and uses a _local_ submodel within patches and a _global_ model between patches. This enables sub-quadratic self-attention, much larger feedforward layers for the same compute, and improved parallelism during decoding--unlocking better performance at reduced cost for both training and generation. Extensive experiments show that MegaByte allows byte-level models to perform competitively with subword models on long context language modeling, achieve state-of-the-art density estimation on ImageNet, and model audio from raw files. Together, these results establish the viability of tokenization-free autoregressive sequence modeling at scale.

## 1 Introduction

Sequences of millions of bytes are ubiquitous; for example, music, image, or video files typically consist of multiple megabytes. However, large transformer decoders (LLMs) typically only use several thousand tokens of context (Brown et al., 2020; Zhang et al., 2022)--both because of the quadratic cost of self-attention but also, more importantly, the cost of large feedforward networks per-position. This severely limits the set of tasks where LLMs can be applied.

We introduce MegaByte, a new approach to modeling long byte sequences. First, byte sequences are segmented into fixed-sized patches, loosely analogous to tokens. Our model then consists of three parts: (1) a _patch embedder_, which simply encodes a patch by losslessly concatenating embeddings of each byte, (2) a _global_ module, a large autoregressive transformer that inputs and outputs patch representations and (3) a _local_ module, a small autoregressive model that predicts bytes within a patch. Crucially, we observe that for many tasks, most byte predictions are relatively easy (for example, completing a word given the first few characters), meaning that large networks per-byte are unnecessary, and a much smaller model can be used for intra-patch modelling.

Figure 1: Overview of MegaByte with patch size \(P=4\). A small _local_ model autoregressively predicts each patch byte-by-byte, using the output of a larger _global_ model to condition on previous patches. Global and Local inputs are padded by \(P\) and \(1\) token respectively to avoid leaking information about future tokens.

The MegaByte architecture gives three major improvements over Transformers for long sequence modelling:

1. **Sub-quadratic self-attention** Most work on long sequence models has focused on mitigating the quadratic cost of self-attention. MegaByte decomposes long sequences into two shorter sequences, and optimal patch sizes reduce the self-attention cost to \(O(N^{\frac{4}{3}})\), which remains tractable for even long sequences.
2. **Per-patch feedforward layers** In GPT3-size models, more than 98% of FLOPS are used in computing position-wise feedforward layers. MegaByte uses large feedforward layers per-patch rather than per-position, enabling much larger and more expressive models for the same cost.
With patch size \(P\), where a baseline transformer would use the same feedforward layer with \(m\) parameters \(P\) times, MegaByte can use a layer with \(mP\) parameters once for the same cost. 3. **Parallelism in Decoding** Transformers must perform all computations serially during generation because the input to each timestep is the output from the previous timestep. By generating representations for patches in parallel, MegaByte allows greater parallelism during generation. For example, a MegaByte model with 1.5B parameters can generate sequences 40% _faster_ than a standard 350M Transformer, whilst also improving perplexity when trained with the same compute. Together, these improvements allow us to train much larger and better-performing models for the same compute budget, scale to very long sequences, and improve generation speed during deployment. MegaByte also provides a strong contrast to existing autoregressive models that typically use some form of tokenization, where sequences of bytes are mapped to larger discrete tokens (Sennrich et al., 2015; Ramesh et al., 2021; Hsu et al., 2021). Tokenization complicates pre-processing, multi-modal modelling, and transfer to new domains, while hiding useful structure from the model. It also means that most state-of-the-art models are not truly end to end. The most widely used approaches to tokenization require language-specific heuristics (Radford et al., 2019) or lose information (Ramesh et al., 2021). Replacing tokenization with efficient and performant byte models would therefore have many advantages. We conduct extensive experiments for both MegaByte and strong baselines. We use a fixed compute and data budget across all models to focus our comparisons solely on the model architecture rather than training resources, which are known to benefit all models. We find that MegaByte allows byte-level models to perform competitively with sub-word models on long context language modeling, achieve state-of-the-art perplexities for density estimation on ImageNet, and allow audio modelling from raw audio files. Together, these results establish the viability of tokenization-free autoregressive sequence modeling at scale. ## 2 MegaByte Transformer ### Overview MegaByte is an autoregressive model for efficiently modeling long input sequences. MegaByte is comprised of 3 components: (1) a _patch embedder_ that inputs a discrete sequence, embeds each element, and chunks it into patches of length \(P\) (2) a large _global_ Transformer that contextualizes patch representations by performing self-attention over previous patches, and (3) a smaller _local_ Transformer that inputs a contextualized patch representation from the global model, and autoregressively predict the _next_ patch. ### Components **Patch Embedder** with patch size of \(P\) maps a byte sequence \(x_{0..T}\) to a sequence of patch embeddings of length \(K=\frac{T}{P}\) and dimension \(P\cdot D_{G}\). First, each byte is embedded with a lookup table \(E^{\text{global-embed}}\in\mathbb{R}^{V\times D_{G}}\) to an embedding of size \(D_{G}\) and positional embeddings are added. \[h_{t}^{\text{embed}}=E^{\text{global-embed}}_{x_{t}}+E^{\text{pos}}_{t}\qquad t \in[0..T] \tag{1}\] Then, byte embeddings are reshaped into a sequence of \(K\) patch embeddings with dimension \(P\cdot D_{G}\). 
To allow autoregressive modelling, the patch sequence is padded to start with a trainable patch-sized padding embedding (\(E^{\text{global-pad}}\in\mathbb{R}^{P\times D_{G}}\)), and the last patch is removed from the input. This sequence is the input to the global model, and is denoted \(h^{\text{global-in}}\in\mathbb{R}^{K\times(P\cdot D_{G})}\). \[h_{k}^{\text{global-in}}=\begin{cases}E^{\text{global-pad}},&\text{if $k=0$},\\ h_{((k-1)\cdot P):(k\cdot P)}^{\text{embed}},&k\in[1..,K),\end{cases} \tag{2}\] **Global Model** is a decoder-only Transformer with dimension \(P\cdot D_{G}\) that operates on a sequence of \(K\) patches. It incorporates a self-attention mechanism and causal masking to capture dependencies between patches. It inputs a sequence of \(K\) patch representations \(h_{0:K}^{\text{global-in}}\), and outputs an updated representation \(h_{0:K}^{\text{global-out}}\) by performing self-attention over previous patches. \[h_{0:K}^{\text{global-out}}=\text{transformer}^{\text{global}}(h_{0:K}^{\text{ global-in}}) \tag{3}\] The output of the final global layer \(h_{0:K}^{\text{global}}\) contains \(K\) patch representations of dimension \(P\cdot D_{G}\). For each of these, we reshape them into sequences of length \(P\) and dimension \(D_{G}\), where position \(p\) uses dimensions \(p\cdot D_{G}\) to \((p+1)\cdot D_{G}\). Each position is then projected to the dimension of the local model with a matrix \(w^{\text{GL}}\in\mathbb{R}^{D_{G}\times D_{L}}\) where \(D_{L}\) is the local model dimension. We then combine these with byte embeddings of size \(D_{L}\) for the tokens in the _next_ patch \(E^{\text{local-embed}}_{x_{(k\cdot P+x_{p-1})}}\). The local byte embeddings is offset by one with a trainable local padding embedding (\(E^{\text{local-pad}}\in\mathbb{R}^{D_{L}}\) to allow autoregressive modelling within a patch. This results in a tensor \(h^{\text{local-in}}\in\mathbb{R}^{K\times P\times D_{L}}\). \[h^{\text{local-in}}_{k,p}=w^{\text{GL}}h^{\text{global-out}}_{k,(p \cdot D_{G}):((p+1)\cdot D_{G})}+E^{\text{local-embed}}_{x_{(k\cdot P+p-1)}} \tag{4}\] **Local Model** is a smaller decoder-only Transformer of dimension \(D_{L}\) that operates on a single patch \(k\) containing \(P\) elements, each of which is the sum of an output from the global model and an embedding of the previous byte in the sequence. \(K\) copies of the local models are run on each patch independently (and in parallel during training), computing a representation \(h^{\text{local-out}}\in\mathbb{R}^{K\times P\cdot D_{L}}\). \[h^{\text{local-out}}_{k,0:P}=\text{transformer}^{\text{local}}(h^{\text{ local-in}}_{k,0:P}) \tag{5}\] Finally, we can compute the probability distribution over the vocabulary at each position. The \(p\)th element of the \(k\)th patch corresponds to element \(t\) of the complete sequence, where \(t=k\cdot P+p\): \[p(x_{t}|x_{0:t})=\text{softmax}(E^{\text{local-embed}}h^{\text{ local-out}}_{k,p})_{x_{t}} \tag{6}\] ### Variations and Extensions We experiment with several extensions of MegaByte. #### 2.3.1 Convolutional Patch Encoder One limitation of chunking sequences into patches is that it is not translation invariant, and byte sequences may receive a different representation depending on their position in the patch. This may mean, for example, that a model has to relearn the meaning of a word at different offsets. 
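As a concrete illustration of Equations 1-4 together with the byte-level Local model described next, a condensed PyTorch sketch of the whole stack is given below. It omits positional embeddings, the trainable local padding embedding, and all training details; the layer sizes and the use of `nn.TransformerEncoder` with causal masks are simplifying assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

V, P, D_G, D_L = 256, 4, 64, 32          # vocab, patch size, global/local dims

def causal_mask(n):
    # boolean attention mask: True marks positions that may not be attended to
    return torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)

class ToyMegaByte(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed_global = nn.Embedding(V, D_G)
        self.embed_local = nn.Embedding(V, D_L)
        self.global_pad = nn.Parameter(torch.zeros(P * D_G))      # E^global-pad
        layer = lambda d: nn.TransformerEncoderLayer(
            d, nhead=4, dim_feedforward=4 * d, batch_first=True)
        self.global_model = nn.TransformerEncoder(layer(P * D_G), num_layers=2)
        self.local_model = nn.TransformerEncoder(layer(D_L), num_layers=1)
        self.proj = nn.Linear(D_G, D_L)                           # w^GL
        self.head = nn.Linear(D_L, V)

    def forward(self, x):                 # x: (B, T) byte ids, T divisible by P
        B, T = x.shape
        K = T // P
        # patch embedder (Eqs. 1-2): concatenate byte embeddings, shift one patch
        h = self.embed_global(x).reshape(B, K, P * D_G)
        h = torch.cat([self.global_pad.expand(B, 1, -1), h[:, :-1]], dim=1)
        # global model over patches (Eq. 3), then reshape/project (Eq. 4)
        g = self.global_model(h, mask=causal_mask(K))
        g = self.proj(g.reshape(B, K, P, D_G))                    # (B, K, P, D_L)
        # local input: global output + embedding of the previous byte
        prev = torch.cat([x[:, :1] * 0, x[:, :-1]], dim=1)        # byte 0 stands in for the pad
        l_in = g + self.embed_local(prev).reshape(B, K, P, D_L)
        out = self.local_model(l_in.reshape(B * K, P, D_L), mask=causal_mask(P))
        return self.head(out).reshape(B, T, V)                    # next-byte logits

logits = ToyMegaByte()(torch.randint(0, V, (2, 16)))
print(logits.shape)                       # torch.Size([2, 16, 256])
```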
To mitigate this issue, we experimented with augmenting the Patch Embedder with causal convolutional layers, which allow translation-invariant contextual representations of the bytes before they are chunked into patches. We use a stack of convolutional layers, with filter sizes of 3, 5 and 7. #### 2.3.2 Cross-patch Attention The Local model uses short sequences for efficiency, and relies on the Global model for long-range information. However, we can increase the context of the Local model with little overhead by allowing it to condition on \(r\) elements from the previous patch. This approach allows the Global model to focus on a longer-range context. Specifically, when computing self-attention in each layer, we concatenate the keys and values with the last \(r\) keys and queries from the previous patch. We use rotary embeddings (Su et al., 2021) to model relative positions between elements in the sequence. This approach is reminiscent of TransformerXL (Dai et al., 2019) but differs by being fully differentiable. #### 2.3.3 Strided Inference We observed empirically that the per-token loss within each patch would increase towards the end of the patch, as the prediction relies more on the weaker Local model. To alleviate this issue, we propose _strided inference_, in which we predict the sequence with two forward passes of the full model, whose inputs are offset by \(p/2\) positions from each other. We then combine the first \(p/2\) positions in each patch for our predictions to predict the complete sequence. Similarly to sliding window techniques (Press et al., 2020), this approach doubles the cost of inference but improves results. ### Motivation Having described the model, we briefly discuss the motivation behind some of the architectural choices. **Why is the local model needed?** Many of the efficiency advantages of the MegaByte design could be realized Figure 2: Summary of MegaByte with vocabulary \(V\), sequence length \(T\), global and local dimensions \(D_{G}\) and \(D_{L}\), and \(K\) patches of size \(P\). Transformer layers use masked self attention to not observe information from future timesteps. with the Global model alone, which would resemble a decoder version of ViT (Dosovitskiy et al., 2020). However, the joint distribution over the patch \(p(x_{t+1},..,x_{t+P}|x_{0..t})\) has an output space of size \(256^{P}\) so direct modeling is only tractable for very small patches. We could instead factor the joint distribution into conditionally independent distributions \(p(x_{t+1}|x_{0..t}).p(x_{t+P}|x_{0..t})\), but this would greatly limit the model's expressive power. For example, it would be unable to express a patch distribution such as 50% _cat_ and 50% _dog_, and would instead have to assign probability mass to strings such as _cag_ and _dot_. Instead, our autoregressive Local model conditions on previous characters within the patch, allowing it to only assign probability to the desired strings. **Increasing Parameters for Fixed Compute** Transformer models have shown consistent improvements with parameter counts (Kaplan et al., 2020). However, the size of models is limited by their increasing computational cost. MegaByte allows larger models for the same cost, both by making self attention sub-quadratic, and by using large feedforward layers across patches rather than individual tokens. **Re-use of Established Components**MegaByte consists of two transformer models interleaved with shifting, reshaping and a linear projection. 
This re-use increases the likelihood that the architecture will inherit the desirable scaling properties of transformers.

## 3 Efficiency Analysis

### Training Efficiency

We analyze the cost of different architectures when scaling both the sequence length and size of the models.

**Attention** The cost of self attention in a transformer architecture for a sequence of length \(T\) has \(O(T^{2})\) complexity. Much work has explored reducing this cost; for example, Sparse Transformers (Child et al., 2019) and Routing Transformers (Roy et al., 2020) show strong results with a complexity of \(O(T^{\frac{3}{2}})\). Numerous linear attention mechanisms have also been proposed (Katharopoulos et al., 2020; Schlag et al., 2021; Choromanski et al., 2020), although we are not aware of competitive results on large scale language modeling tasks. As a function of sequence length \(T\) and patch size \(P\), the Global model has a sequence of length \(\frac{T}{P}\) so uses \(O(\frac{T^{2}}{P^{2}})\) operations, and the Local model runs on \(\frac{T}{P}\) sequences of length \(P\) so uses \(O(\frac{T}{P}P^{2})=O(TP)\) operations. The overall cost of MegaByte is therefore in \(O(\frac{T^{2}}{P^{2}}+TP)\). \(P\) is a hyperparameter that is chosen to create an architecture for sequences of size \(T\). By setting \(P=T^{\frac{1}{3}}\) the complexity is in \(O(T^{\frac{4}{3}})\). Using much shorter patches of \(P=T^{\frac{1}{5}}\) would give a complexity of \(O(T^{\frac{8}{5}})\). The cost is less than the transformer for all non-trivial values of \(P\) such that \(1<P<T\).

**Feedforward Layers** However, attention is not the main cost in large transformers. Instead of increasing the sequence length, transformers are more commonly scaled by increasing the dimension of their latent state \(d\), and the feedforward network cost dominates the model's overall cost (Kaplan et al., 2020). For example, in the GPT3 architecture, the quadratic self-attention computation accounts for only 1.4% of FLOPS. Following the approximation of (Kaplan et al., 2020), a forward pass with a large transformer with \(m\) non-embedding parameters on a sequence of length \(T\) uses roughly \(2mT\) FLOPS. MegaByte contains two transformers: the Global model uses \(m_{g}\) parameters on a sequence of length \(\frac{T}{P}\), and a Local model with \(m_{l}\) parameters that sees \(\frac{T}{P}\) sequences of length \(P\), giving an estimate of \(2T(\frac{m_{g}}{P}+m_{l})\) FLOPS. When \(m_{g}\gg m_{l}\), the FLOPS used by MegaByte is approximately \(\frac{2Tm_{g}}{P}\), allowing a model \(P\) times larger than a transformer with equivalent FLOPS. This analysis holds irrespective of any efficient attention mechanisms used in the transformer.

Figure 3: Computational cost (FLOPS/token) for different model architectures at different scales. MegaByte architectures (here with \(P=8\)) use less FLOPS than equivalently sized Transformers and Linear Transformers (Katharopoulos et al., 2020) across a wide range of model sizes and sequence lengths, allowing larger models to be used for the same computational cost.

**Combined Analysis** To understand efficiency at different sequence lengths and model sizes, we calculate the total FLOPS used by transformers, Linear Transformers and MegaByte. For each operation, we use FLOP estimates from (Kaplan et al., 2020), except for attention in Linear Transformers, which we estimate as \(9D\) FLOPS/token\({}^{1}\), where \(D\) is the model embedding dimension.
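The feedforward comparison above can be reproduced at the back-of-the-envelope level with the \(2mT\) approximation; the model sizes in the sketch below are assumed illustrative values, not the exact configurations of Figure 3.

```python
# Rough FLOPS-per-forward-pass comparison using the 2*m*T approximation.
def transformer_flops(m, T):
    return 2 * m * T

def megabyte_flops(m_g, m_l, T, P):
    return 2 * T * (m_g / P + m_l)

T, P = 1_000_000, 8
m = 350e6                        # baseline transformer parameters (assumed)
m_g, m_l = 2.4e9, 150e6          # illustrative MegaByte global/local sizes
print(f"transformer: {transformer_flops(m, T):.3e} FLOPS")
print(f"megabyte   : {megabyte_flops(m_g, m_l, T, P):.3e} FLOPS")
# with m_g >> m_l, MegaByte affords a global model roughly P times larger
# than a transformer at comparable forward-pass cost
```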
Figure 3 shows that for models of size 660M to 173B and sequence lengths of up to 1M tokens, MegaByte with \(P=8\) uses less FLOPS than either transformers or Linear Transformers. Baseline model architectures are based on GPT3, and Megabyte global/local model sizes are 452M/151M, 5.8B/604M, 170B/3.2B respectively. Footnote 1: This may underestimate the time taken by Linear Transformer decoders, which use a recurrence mechanism that is harder to parallelize on current hardware. ### Generation Efficiency Generating long sequences with transformers is slow, because the input to each timestep is the output from the previous timestep, meaning each layer must be computed for each token serially. As running a layer on a single token typically does not saturate the amount of parallelism available within a GPU, for analysis, we model each layer as a constant cost independently of size. Consider a MegaByte model with \(L_{\text{global}}\) layers in the Global model and \(L_{\text{local}}\) layers in the Local model and patch size \(P\), compared with a Transformer architecture with \(L_{\text{local}}+L_{\text{global}}\) layers. Generating each patch with MegaByte requires a sequence of \(O(L_{\text{global}}+P\cdot L_{\text{local}})\) serial operations, whereas the Transformer requires \(O(P\cdot L_{\text{global}}+P\cdot L_{\text{local}})\) serial operations. When \(L_{\text{global}}\gg L_{\text{local}}\) (i.e. the Global model has many more layers than the Local model), MegaByte can reduce inference costs by a factor close to \(P\). ## 4 Experimental setup ### Controlling for Compute and Data Models show consistent improvements when increasing both data and compute (Kaplan et al., 2020; Hoffmann et al., 2022), meaning that one model can outperform another because of an increased training budget instead of an improved architecture. However, in practice, both compute and data are typically limited. We conduct experiments using a fixed compute and data budget across all models to focus comparisons solely on the model architecture rather than training resources. To achieve this, we adjust model hyperparameters (mainly, number of layers) within each architecture so that the forward pass time taken per byte is matched, and then train all models for the same number of bytes. ### Comparison Systems We compare MegaByte with both a standard decoder-only Transformer and PerceiverAR (Hawthorne et al., 2022). PerceiverAR extends the original transformer with a single cross-attention layer over a much longer context sequence, and is the best performing general purpose autoregressive model we are aware of and achieves state-of-the-art results across several modalities. We implemented both models in the same codebase, and all models share a similar data loader, preprocessing step, and trainer to avoid any artifacts in our compute-controlled experiments. ### Training Procedure All models were trained using the Metaseq2 code base (Zhang et al., 2022). The training used the PyTorch framework (Paszke et al., 2019), with fairscale to improve memory efficiency through fully sharded model and optimizer states (Baines et al., 2021). Mixed precision training was used to improve training efficiency at scale (Micikevicius et al., 2017). More training details and various model parameters can be found in Section A.1 in the Appendix. 
Footnote 2: [https://github.com/facebookresearch/metaseq](https://github.com/facebookresearch/metaseq) To validate our implementation of PerceiverAR, we reproduced their experiments on downsized ImageNet at 64 pixels. By carefully matching hyperparameters, we achieved a bits per byte (bpb) score of 3.53, compared to the reported 3.54 in the original paper. ### Inference Methods Several techniques have been proposed for trading off speed for performance during inference with language models, including sliding windows (Press et al., 2020) and our strided inference (Section 2.3.3). We only use these methods when comparing with prior published work (Tables 3 and 4). ## 5 Language Modeling We evaluated the performance of MegaByte on language modeling on a set of 5 diverse datasets emphasizing long-range dependencies: Project Gutenberg (PG-19), Books, Stories, arXiv, and Code. **Datasets** We experiment on a range of long form text datasets. The PG-19 dataset (Rae et al., 2019) consists of English-language books written before 1919 and is extracted from the Project Gutenberg online library. The Stories dataset (Trinh and Le, 2018) is a subset of CommonCrawl data meant to emulate Winograd schemas. Books (Gao et al., 2020) is another collection of English-language books. The arXiv dataset is a collection of technical publications written in LaTeX from the arXiv online archive. Finally, the Code dataset is a large publicly available dataset of open source code, under Apache, BSD or MIT licenses. More details on dataset sizes and document lengths are shared in Table 1. \begin{table} \begin{tabular}{l r r} \hline \hline Dataset & Total Bytes & Mean document size (bytes) \\ \hline PG-19 & 10.1GB & 411,404 \\ Stories & 21.3GB & 35,265 \\ Books & 79.7GB & 509,526 \\ arXiv & 91.5GB & 58,518 \\ Code & 353.7GB & 7,461 \\ \hline \hline \end{tabular} \end{table} Table 1: Text dataset sizes and mean document lengths. **Controlled Experiments** Table 2 lists bpb on each dataset. Each model is trained for 80 billion bytes, and models are scaled to use the same compute budget. We carefully tune hyperparameters for all architectures to best utilize the available compute budget. MegaByte consistently outperforms both baseline transformers and PerceiverAR across all datasets. We use the same set of parameters on all datasets. In all experiments presented in Table 2, the Transformer has 320M parameters with a context length of 1024, PerceiverAR has 248M parameters with a context size of 8192 and a latent size of 1024, and the MegaByte global/local model sizes are 758M/262M with a context length of 8192 and a patch size of 8. **Scaling Experiment** We scale up our training data on PG-19 (Table 3), and compare MegaByte with byte baselines, as well as converting all results to word-level perplexities to benchmark with state-of-the-art token-based models. We train byte-level Transformer, PerceiverAR and MegaByte models for 400B bytes and the same compute budget, using the same model parameters as in the controlled experiments. We find that MegaByte outperforms other byte-level models by a wide margin at this scale.3 Footnote 3: The only prior byte-level experiments we are aware of are at a smaller scale in Hutchins et al. (2022), who report results equivalent to test perplexities of 46.5 with a version of the Block-Recurrent transformer, and 49.5 with Memorizing Transformers (Wu et al., 2022), compared to 36.4 with our model. We also compare with the best previously reported numbers for sub-word models.
These results may be confounded by differing amounts of compute and tuning used, but show that MegaByte gives results competitive with state-of-the-art models trained on subwords. These results suggest that MegaByte may allow future large language models to be tokenization-free. ## 6 Image Modeling ### Sequence Modeling on ImageNet We test MegaByte on variants of the autoregressive image generation task on ImageNet (Oord et al., 2016), to measure its ability to efficiently use long context. We test on three different resolutions of images, ranging from 64x64 to 640x640 pixels - the latter requiring the effective modeling of sequences with over 1.2M tokens. This generation task becomes increasingly challenging as the image's resolution grows: doing well on this task requires the modeling of local patterns (textures, lines, etc.) and long-range context that provides information about the high level structure of the image. Inspired by recent works in Vision Transformers (Dosovitskiy et al., 2020), we model image data patch by patch (more details can be found in Appendix D.1). ### Comparison with State of the Art We train a large MegaByte model on ImageNet 64x64 with Global and Local models sized 2.7B and 350M parameters, respectively, for 1.4T tokens. We estimate that training this model consumed less than half the GPU hours we would have needed to reproduce the best PerceiverAR model described by (Hawthorne et al., 2022). As shown in Table 4, MegaByte matches the state-of-the-art performance of PerceiverAR whilst using only half the compute. ### Scaling to higher resolutions We compare three transformer variants (vanilla, PerceiverAR, MegaByte) to test scalability to long sequences on increasingly large image resolutions. We use our own implementations of these in the same framework and budget the same amount of GPU hours and data to train each of these model variants. MegaByte is able to handle all sequence lengths with a single forward pass of up to 1.2M tokens. We found neither the standard Transformer nor PerceiverAR could model such long sequences at a reasonable model size, so instead we split images into segments of size 1024 and 12000 respectively. For Megabyte, we set patch size as 12 for Image64 and patch size as 192 for Image256 and Image640 datasets. Model sizes are adjusted to match overall training speeds across models and we do not use any form of sliding window evaluation in this experiment. As seen in Table 5, MegaByte outperforms baselines across all resolutions in this compute-controlled setting. The precise settings used for each of the baseline models such as context length and number of latents are summarized in Table 11. Results show that MegaByte outperforms the other systems at all resolutions, demonstrating an effective model of sequences of over 1M bytes. \begin{table} \begin{tabular}{l|c c c c c} \hline \hline & PG-19 & Stories & Books & arXiv & Code \\ \hline Transformer & 1.057 & 1.064 & 1.097 & 0.816 & 0.575 \\ PerceiverAR & 1.104 & 1.070 & 1.104 & 0.791 & 0.546 \\ MegaByte & **1.000** & **0.978** & **1.007** & **0.678** & **0.411** \\ \hline \hline \end{tabular} \end{table} Table 2: Performance (bits-per-byte) of compute and data controlled MegaByte, PerceiverAR, and Transformer models on various text modalities. ## 7 Audio Modeling Audio has aspects of both the sequential structure of text and the continuous nature of images, so is an interesting application for MegaByte. 
Raw audio is typically stored as a sequence of 16-bit integer values (one per timestep); a softmax layer would need to output 65,536 probabilities per timestep to model all possible values. To address this issue, various techniques have been developed to reduce the memory and computational requirements of the softmax layer. For instance, van den Oord et al. (2016) apply \(\mu\)-law companding transformation and quantizes the input into 256 possible values. Alternatively, van den Oord et al. (2017) model the samples using the discretized mixture of logistics distribution introduced by Salimans et al. (2017). Finally, Kalchbrenner et al. (2018) use a dual softmax technique to produce 8 coarse and 8 fine bits. In our approach, we simplify the audio modeling process by directly reading the bytes (256 possible values) from the audio file and conducting an autoregressive language model on top of that. This greatly streamlines the modeling process, making it easier and more efficient. Our audio modeling approach focuses on 16 kHz, 16-bit audio, which equates to 32k bytes per one-second clip. We use an extensive audio dataset consisting of 2 terabytes (roughly 18,000 hours) of audio. We use a sequence length of 524,288, a patch size of 32, and a batch size of 32 to facilitate model training. By utilizing these settings, we can effectively train our model on large volumes of audio data, helping to improve its accuracy and efficacy. Our model obtains bpb of 3.477, much lower than the results with perceiverAR (3.543) and vanilla transformer model (3.567). More ablation results are presented in Table 7. ## 8 Analysis ### Generation speed We also compare the text generation speed between MegaByte and a transformer. We compare a 350M parameter baseline transformer and a MegaByte model with a 1.3B parameter Global model and a 218M parameter local model, trained on PG19 with equal compute. As shown in Table 6, the MegaByte model achieves much lower perplexity as expected. However, MegaByte also generates a sequence of 8192 tokens 40% _faster_ than transformer, despite having over 4 times the parameters. This speed up is due to the bulk of the parameters being in the Global model, which only needs to be computed once for every 8 tokens, whereas all the parameters in the baseline model are used on every token. \begin{table} \begin{tabular}{l c c c c} \hline \hline & Global & (Local) & bpb & Generation \\ & Size & Size & & Time (s) \\ \hline Transformer & - & 350M & 1.064 & 132 \\ MegaByte & 1.3B & 218M & 0.991 & 93 \\ \hline \hline \end{tabular} \end{table} Table 6: Comparison of bits per byte (bpb) and generation speed of 8192 bytes of transformer model (with context length 1024) and MegaByte with context length 8192 and patch size 8. \begin{table} \begin{tabular}{l c} \hline \hline ImageNet64 & bpb \\ \hline Routing Transformer (Roy et al., 2020) & 3.43 \\ Combiner (Ren et al., 2021) & 3.42 \\ Perceiver AR (Hawthorne et al., 2022) & **3.40** \\ MegaByte & **3.40** \\ \hline \hline \end{tabular} \end{table} Table 4: Bits per byte (bpb) on ImageNet 64x64. MegaByte matches the current state-of-the-art while only using half the amount of GPU hours to train. 
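As a reference point for the audio preprocessing alternatives discussed in Section 7, the \(\mu\)-law companding of van den Oord et al. (2016) can be sketched as below. This is an illustration using the standard \(\mu\)-law formula; MegaByte itself skips this step and models the raw file bytes directly.

```python
import numpy as np

def mu_law_encode(waveform, mu=255):
    # Standard mu-law companding: squash samples in [-1, 1] and quantize them
    # into 256 discrete values, reducing the 65,536-way softmax to 256 classes.
    x = np.clip(np.asarray(waveform, dtype=np.float64), -1.0, 1.0)
    companded = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return np.round((companded + 1.0) / 2.0 * mu).astype(np.uint8)

print(mu_law_encode([-1.0, 0.0, 0.5]))   # approximately [0 128 239]
```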
\begin{table} \begin{tabular}{l c c c c c} \hline \hline & Tokenizer & Vocab Size & Context Length & Validation & Test \\ \hline TransformerXL (Rae et al., 2019a) & SentencePiece & 32k & 512+1024 (subwords) & 45.5 & 36.3 \\ CompressiveTransformer (Rae et al., 2019a) & SentencePiece & 32k & 512+512+2x512 (subwords) & 43.4 & 33.6 \\ PerceiverAR (Hawthorne et al., 2022) & SentencePiece & 32k & 2048 (subwords) & 45.9 & 28.9 \\ BlockRecurrent (Hutchins et al., 2022) & SentencePiece & 32k & 1024+recurrence (subwords) & - & **26.5** \\ \hline Transformer byte-level (ours) & Bytes & 256 & 2048 (bytes) & 81.6 & 69.4 \\ PerceiverAR byte-level (ours) & Bytes & 256 & 8192 (bytes) & 119.1 & 88.8 \\ MegaByte & Bytes & 256 & 8192 (bytes) & **42.8** & 36.4 \\ \hline \hline \end{tabular} \end{table} Table 3: Larger scale experiments on PG19, converting bits-per-byte to word-level perplexities for comparison with prior work. Results below the line are compute-matched. MegaByte outperforms other byte models by a wide margin, and gives results competitive with state-of-the-art models trained on subwords. ### Model Components In Table 7, we analyze the significance of different components in the MegaByte architecture by studying the arXiv, Librilight-L and ImageNet256 datasets. Removing the Local (_w/o local model_) or Global (_w/o global model_) model, we observe a substantial increase in bpb on all datasets, showing that both parts are crucial. The performance of the model without cross-patch attention (_w/o cross-patch attention_) is competitive, indicating that the architecture is robust to this modification. We observe slight improvements on the Librilight-L and ImageNet256 datasets by augmenting the MegaByte model with a CNN encoder (_w/ CNN encoder_). This suggests that the MegaByte architecture can benefit from integrating alternative encoding mechanisms. ### Effective Use of Context Long-context models often struggle to benefit from the full context (Sun et al., 2021). Figure 4 shows that later tokens within each context window consistently have a higher likelihood, indicating that MegaByte can effectively use at least 8k bytes of context on the PG19 dataset. ### Strided Inference We find that, within a single patch, MegaByte performs worse on later tokens on average (see Figure 5). Section 2.3.3 proposes _strided inference_ as a solution, where two forward passes are performed offset by \(\frac{P}{2}\) tokens, and results from the first half of each patch are combined. Table 8 shows performance improvements from strided inference, which are additive with the standard sliding window. ### Hyperparameters MegaByte introduces several additional hyperparameters. We tuned these parameters independently for different modalities and reported performance based on the best setting we found. All experiments in the same group use the same compute.
**Patch Size.** We experimented with various patch sizes on the Image256 dataset and found that there is a wide range of values where MegaByte performs similarly. We found similar robustness against the choice of this hyperparameter across all modalities, although the optimal patch size itself can be different across modalities. Figure 4: Average log probability assigned to the token at different positions within the context length by MegaByte model with 8192 context size and by a vanilla transformer model trained using the same compute (PG19 test set). MegaByte likelihoods rise throughout its context window, demonstrating that it can use tokens from 8k bytes previously to improve its predictions. Figure 5: An illustration of strided inference with patch size 8. Lines below the text represent the patches used in the two rounds of inference, the plot above it represents the average probability assigned to the token at a given position within a patch. By considering only the first half of each patch from the two rounds of inference and combining them (bold lines on top), we achieve a better overall bpb. \begin{table} \begin{tabular}{l r r r} \hline \hline & Arxiv & Audio & ImageNet256 \\ \hline MegaByte & 0.6871 & **3.477** & **3.158** \\ _w/o local model_ & 1.263 & 5.955 & 4.768 \\ _w/o global model_ & 1.373 & 3.659 & 3.181 \\ _w/o cross-patch attention_ & **0.6781** & 3.481 & 3.259 \\ _w/ CNN encoder_ & 0.6871 & **3.475** & **3.155** \\ \hline \hline \end{tabular} \end{table} Table 7: Ablation of MegaByte model components, showing that both Local and Global models are critical to strong performance, but the architecture is robust to other modifications. We report bits-per-byte on text, audio, and image prediction tasks. All models within a column are trained using the same compute and data. The hyperparameters are listed in Table 11. \begin{table} \begin{tabular}{c c c c} \hline \hline Patch Size & Global Size & Local Size & bpb \\ \hline 48 & 125M & 114M (D=768, L=11) & 3.178 \\ 192 & 125M & 125M (D=768, L=12) & 3.158 \\ 768 & 125M & 83M (D=768, L=8) & 3.186 \\ \hline \hline \end{tabular} \end{table} Table 9: Effects of patch size on performance on the Image256 dataset. All versions use the same amount of GPU hours and data. **Local to Global model Size Ratio.** We experimented with different Local/Global model size ratios on the PG19 dataset. By grouping bytes into patches, MegaByte effectively uses \(P\) times fewer tokens in the Global model than in the Local model--enabling us to increase the size of the Global model without increasing the cost. We find that a given compute budget is spent optimally when the Global model has more parameters than the Local model. This trend was consistent across all modalities and various patch sizes. ## 9 Related Work Prior research has explored the possibility of improving the efficiency of Transformers on long sequences, primarily motivated by mitigating the quadratic cost of self-attention. **Efficient Encoder Models** Several related techniques to ours have been developed for transformer encoder architectures but cannot be straightforwardly applied to decoders.
In particular, patchifying operations have previously been used in image _encoder_ models such as ViT (Dosovitskiy et al., 2020), and down- and up-sampling operations have been used for text encoders (Clark et al., 2022), but such methods cannot be naively applied to decoder-only models without leaking information to future bytes in the same patch. MegaByte generalizes these approaches to an efficient decoder model by using an intra-patch transformer to predict each sequence element's likelihood, and offsetting the inputs to the two models to avoid leaking information. Jaegle et al. (2021), which uses self-attention on a shorter latent sequence, and Didolkar et al. (2022), which uses a recurrent model to process chunks of \(k\) input steps, also resemble patchification, but these techniques cannot easily be applied to decoder architectures without leaking information to future timesteps. **Efficient Decoder models** Improving the efficiency of decoder models is more challenging because of the need to make one prediction per timestep, and not leak information to future timesteps. The most popular approaches can be categorized as (1) chunking sequences into smaller blocks, and propagating information from previous blocks with either recurrence (Dai et al., 2019; Hutchins et al., 2022) or cross-attention (Hawthorne et al., 2022), (2) linear alternatives to attention, which typically involve forms of token-level recurrence (Katharopoulos et al., 2020; Schlag et al., 2021) or state space models (Gu et al., 2021; Smith et al., 2022; Ma et al., 2022), or (3) sparse approximations of attention (Kitaev et al., 2020; Beltagy et al., 2020; Child et al., 2019; Wu et al., 2022). However, the performance of dense attention means it is typically still chosen for large scale decoders (Touvron et al., 2023; Chowdhery et al., 2022). MegaByte takes the alternative approach of decomposing the complete sequence into two shorter sequences, giving sub-quadratic attention. We also note that feedforward networks are the dominant cost in large decoders, not self-attention. Our approach to compressing sequences allows much larger models than would be possible when using large feedforward networks at every timestep. **Tokenization** The most common approach to shortening sequence lengths in Transformer decoders is to pre-process the input with a form of tokenization, in which multiple bytes are mapped to a single discrete token from a fixed vocabulary. For text, this can be done losslessly using methods such as BPE (Sennrich et al., 2015) and SentencePiece (Kudo and Richardson, 2018), but these approaches can require language-specific heuristics (Radford et al., 2019), limit out-of-domain performance (Sharami et al., 2023), and can affect prompting and truncated sampling in unpredictable ways.4 Edman et al. (2022) downsample characters using subword information and show promising results in machine translation tasks. The amount of high-frequency information in images and audio means that tokenization cannot be performed losslessly, and instead clustering (Hsu et al., 2021) or discrete auto-encoders (Ramesh et al., 2021) are used to compress the inputs, which lose information and likely limit generative model performance. Our patches are analogous to traditional lossless tokens, and the Local model performs the role of mapping a hidden state to a distribution over possible patches. Footnote 4: For example, whether or not a prompt should end in whitespace depends on details of the underlying subword algorithm used.
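Since patches play the role of lossless tokens, patchification itself is little more than a reshape. A minimal sketch follows; zero-padding of the final patch is our assumption for the illustration, not a detail specified by the paper.

```python
import numpy as np

def patchify(byte_seq, patch_size, pad_value=0):
    # Group a raw byte stream into fixed-size patches -- the lossless analogue
    # of subword tokens that the Global model consumes.
    arr = np.frombuffer(bytes(byte_seq), dtype=np.uint8)
    pad = (-len(arr)) % patch_size
    arr = np.concatenate([arr, np.full(pad, pad_value, dtype=np.uint8)])
    return arr.reshape(-1, patch_size)          # shape: (num_patches, patch_size)

patches = patchify(b"MegaByte operates directly on raw bytes.", patch_size=8)
print(patches.shape)                            # (5, 8)
```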
## 10 Conclusion We introduced MegaByte, a scalable architecture for modeling long sequences. MegaByte outperforms existing byte-level models across a range of tasks and modalities, allowing large models over sequences of more than 1 million tokens. It also gives competitive language modeling results with subword models, which may allow byte-level models to replace tokenization. However, the scale of experiments here is far below those of state-of-the-art language models (Brown et al., 2020), and future work should explore scaling MegaByte to much larger models and datasets. \begin{table} \begin{tabular}{c c c} \hline \hline Global Size & Local Size & bpb \\ \hline 350M (D=1024,L=24) & 290M (D=1024,L=20) & 1.014 \\ 760M (D=1536,L=24) & 262M (D=1024,L=18) & 1.002 \\ 1.3B (D=2048,L=24) & 218M (D=1024,L=15) & 0.991 \\ \hline \hline \end{tabular} \end{table} Table 10: Effects of Local / Global model size on performance on the PG19 dataset. Increasing the capacity of the Global model improves performance. Models are compute and data matched.
2304.01805
Exploration of Lightweight Single Image Denoising with Transformers and Truly Fair Training
As multimedia content often contains noise from intrinsic defects of digital devices, image denoising is an important step for high-level vision recognition tasks. Although several studies have advanced the denoising field by employing Transformers, these networks are too memory-intensive for real-world applications. Additionally, there is a lack of research on lightweight denoising (LWDN) with Transformers. To handle this, this work provides seven comparative baseline Transformers for LWDN, serving as a foundation for future research. We also demonstrate that the regions of randomly cropped patches significantly affect the denoising performance during training. While previous studies have overlooked this aspect, we aim to train our baseline Transformers in a truly fair manner. Furthermore, we conduct empirical analyses of various components to determine the key considerations for constructing LWDN Transformers. Codes are available at https://github.com/rami0205/LWDN.
Haram Choi, Cheolwoong Na, Jinseop Kim, Jihoon Yang
2023-04-04T14:02:42Z
http://arxiv.org/abs/2304.01805v1
# Exploration of Lightweight Single Image Denoising with Transformers and Truly Fair Training ###### Abstract. As multimedia content often contains noise from intrinsic defects of digital devices, image denoising is an important step for high-level vision recognition tasks. Although several studies have advanced the denoising field by employing Transformers, these networks are too memory-intensive for real-world applications. Additionally, there is a lack of research on lightweight denoising (LWDN) with Transformers. To handle this, this work provides seven comparative baseline Transformers for LWDN, serving as a foundation for future research. We also demonstrate that the regions of randomly cropped patches significantly affect the denoising performance during training. While previous studies have overlooked this aspect, we aim to train our baseline Transformers in a truly fair manner. Furthermore, we conduct empirical analyses of various components to determine the key considerations for constructing LWDN Transformers. Codes are available at [https://github.com/rami0205/LWDN](https://github.com/rami0205/LWDN). lightweight image denoising baselines, Transformers, fair training, hierarchical network, channel self-attention, spatial self-attention
**First**, we introduce seven well-designed Transformers into the LWDN field for diverse baselines. Specifically, the four best large DN methods are downsized, such as Uformer (Yi et al., 2017), Restormer (Rastormer et al., 2018), ART (Shi et al., 2019), and CAT (Shi et al., 2019). We adopt three SOTA lightweight SR methods, such as SwinIR-light (Yi et al., 2017), ELAN-light (Shi et al., 2019), and NGswin (Shi et al., 2019). These well-made Transformers, proposed during the last two years, are representative of our field of interest. In terms of human perception, they show comparable results to the large DN models with even fewer parameters, as illustrated in Figure 1. **Second**, we identify some unfairness in existing denoising studies. We point out an issue that opposes conventional wisdom: that numerous trials would almost remove the performance differences resulting from randomness. Instead, since the patches randomly selected during training can substantially change the results (Section 4.3), the direct comparisons in previous papers are inevitably unfair. Consequently, we strictly control the randomness when training all models. The same random patch from a training image is used by all networks at a certain iteration. Additionally, while some studies trained their models with a constant variance for deciding the Gaussian noise level (Yi et al., 2017; Li et al., 2018; Li et al., 2018; Li et al., 2018), others employed a blind (unknown) one (Yi et al., 2017).
Yet, the models learned with constant one is good at restoring a single level of noise but bad at recovering the other noise levels. Thus, we standardize our work by using blind noise level for training all models. **Third**, we empirically analyze the different components of our baselines. Please note that we do not present new methods to enhance the performances. However, our novelty is that we establish the baselines for an under-explored topic, and deliver interpretability and insight, thereby encouraging future research. Starting with a (1)hierarchical network, we characterize it by three aspects: the encoder connection, bottleneck input, and decoder structure. We apply the robust and advanced elements proposed by (Shi et al., 2019) with respect to these aspects to another hierarchical network, and confirm the potential of hierarchical structures to be improved. Next, we discover that the (2)channel self-attention is worse at recovering the noisy images than the spatial self-attention methods, under the parameter constraint (_i.e._, lightweight condition). After that, we show (3)excessive weight sharing may lead to unstable learning due to limited flexibility and representation of the network. At last, we illuminate that the careful (4)design of CNNs is still relevant in the present where self-attention is widely adopted by varying the shared tail module composed of only CNNs. The summarized main contributions are as follows: 1. We provide various comparison groups of lightweight Transformer architectures for color and grayscale Gaussian denoising, which have not been explored until recently. Three lightweight super-resolution and four state-of-the-art large denoising methods are used to establish LWDN Transformer baselines. They can serve as foundation of active future studies (Sections 3.1, 3.2, 4.2). 2. Since many image restoration papers have overlooked the truly same training settings, we aim to implement the authentically fair experiments. All models used in this paper are trained on identically cropped random patches (Sections 3.3, 4.3). 3. Some empirical studies on different components provide interpretability or insight for LWDN field. These practices are expected to facilitate and inspire future works (Section 4.4). ## 2. Related Work **Importance of Baselines.** The models with remarkable improvements take several years to be accumulated so that the research area evolves independently. For example, lightweight super-resolution (SR) had been a separate area, only after several years of monumental baselines proposed (Yi et al., 2017; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018) (2016-2018). Afterwards, many researchers introduced lightweight SR networks (Shi et al., 2019; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018). This phenomenon was also observed in other unrelated fields, such as reinforcement learning (RL). After DQN (Li et al., 2018) introduced a deep learning method in RL, various innovative methods were proposed over a few years (2015-2018) (Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018). Since then, other deep learning approaches have been developed in RL (Li et al., 2018; Li et al., 2018). Meanwhile, well-designed lightweight SR and large DN Transformers have been proposed over the past two years. Our work takes advantages of these techniques to shorten the periods for future LWDN research with Transformers. 
**Image Restoration.** Many Transformer-based approaches improved image restoration (IR) performances, such as image denoising (DN) and super-resolution (SR). SwinIR (Yi et al., 2017) exploited local window self-attention (SA) (Yi et al., 2017) of Swin Transformer (Li et al., 2018). Subsequent studies focused on expanding the receptive field while leveraging the long-range dependencies of SA. Uformer (Li et al., 2018) introduced locally enhanced feed-forward network while keeping a U-Net structure (Shi et al., 2019). Restormer (Rastormer et al., 2018) performed global SA in a channel space instead of spatial dimension. ELAN (Shi et al., 2019) employed shift-convolution (Yi et al., 2017) and multi-scaled local window SA. CAT (Shi et al., 2019) replaced a square window with a rectangular one. ART (Shi et al., 2019) introduced sparse attention by dilated window SA. NGswin (Shi et al., 2019) proposed N-Gram embedding that considers neighboring regions of each window before SA. **Patch-Driven IR.** Our attempt at fair training is related to interpretation studies. They implied that the patches selected for training should be deemed important. As prior work, the authors of (Li et al., 2018) proposed a local attribution map (LAM) to visualize the contribution of each pixel in image recovery. They demonstrated that some areas in a local patch, like edges and textures, significantly affect the restoration performances. Magid et al. (Magid et al., 2019) evaluated the error based on semantic labels from a learned texture-classifier. They distinguished between more complex and simpler textures of low-quality images to restore. The researchers of RCAN-it (Yi et al., 2017) hypothesized that if a network were trained more on the low-quality patches that have a lower PSNR over their high-quality counterparts, the performance could be improved. Although the performances decreased, they found that there were attributes of the random patches that influence the low-level vision tasks. In spite of those evidences, existing IR papers have overlooked the influences of randomly selected patches and compared their works in an unfair manner. ## 3. Methodology ### LWDN Transformer Employing seven state-of-the-art Transformer methods, we establish baselines for lightweight denoising (LWDN). Three models originate from lightweight super-resolution task, including SwinIR-light (Yi et al., 2017), ELAN-light (Shi et al., 2019), and NGswin (Shi et al., 2019). Each architecture remains unchanged, with an exception of the final reconstruction module (See Section 3.2). The other four Transformers come from the large DN task, including Restormer (Rastormer et al., 2018), Uformer (Li et al., 2018), CAT (Shi et al., 2019), and ART (Fan et al., 2017). We reduce the number of Transformer blocks and channels, or change other hyper-parameters. As a result, the total number of learnable parameters in each model is set to around 1M. The details of reductions are in Table 2. We also summarize the attributes of the network components in each model in Table 1. ### Shared Common Components To maintain consistency across models, we apply identical shallow (or head) module, reconstruction (or tail) modules, and loss function to all models. Figure 2 depicts the brief pipeline. The only difference is the Transformer blocks (body). This unity assures to identify the effectiveness of unique algorithms in self-attention and feed-forward networks, which are the key factors of Transformers. 
**Shallow Module.** This module consists of a \(3\times 3\) convolution. It takes a low-quality noisy image \(I_{\text{LQ}}\in\mathbb{R}^{C_{in}\times H\times W}\), extracting the shallow feature \(z_{s}\in\mathbb{R}^{C\times H\times W}\), where \(C_{in}\) is 1 or 3 according to whether grayscale or color input, and H and W indicate the resolution of the input. \(C\) is the embedding dimension (channels) of each network. **Reconstruction Module.** The final reconstruction module \(\mathcal{F}_{\text{{recon}}}\) is composed of two \(3\times 3\) convolutional layers. The first adjusts the channels of feature maps to \(C_{out}\), which is equal to \(C_{in}\). Then the second layer produces the residual output \(I_{res}\), which is added to \(I_{\text{LQ}}\). Finally, we get the reconstructed clean image \(I_{\text{{RC}}}\), as follows: \[I_{res}=\mathcal{F}_{\text{{recon}}}(\mathcal{F}_{\text{{body}}}(z_{s})),\ I_{ \text{{RC}}}=I_{\text{{LO}}}+I_{res}, \tag{1}\] where \(\mathcal{F}_{\text{{body}}}\) represents the Transformer blocks. The tail modules of SwinIR-light, ELAN-light, and NGswin differ from the original ones. An upsampling pixel-shuffle (Fan et al., 2017) layer is removed. In Section 4.4.4, we examine the variants of this module. This is because image restoration tasks still need convolution for aggregating local features despite the robustness of self-attention (Wang et al., 2019). **Loss Function.** We minimize \(L_{1}\) pixel loss for training LWDN baseline networks: \(\mathcal{L}=\|I_{\text{{HQ}}}-I_{\text{{RC}}}\|_{1}\), where \(I_{\text{{HQ}}}\) is a high-quality ground truth image. ### Fair Training In this section, we identify two unfair problems in existing studies, and present our training strategies to resolve each problem. **Foremost,** most recent denoising studies have trained their models on randomly cropped patches from training images (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019), because the resolution of the original image is too high to process with current hardware. However, as opposite to conventional wisdom that numerous trials always lead to almost identical results, we discover that the areas randomly cropped from training data hugely influence the denoising performances. Even if existing studies have striven to compare models fairly, it was unfair at least for denoising task. For example, assume that an image \(I_{\text{{LO}}}\) is used for training the networks at a iteration, as illustrated in Figure 3. While one random seed \(\alpha\) crops a patch that is relatively easy to recover (_e.g._, background sky or ground), another random seed \(\beta\) crops a patch that is challenging to restore (_e.g._, complex pattern or texture) (Fan et al., 2017; Wang et al., 2019). Even when the learned network architecture is the same, a network using random seed \(\beta\) (or \(\alpha\)) shows better performances than \(\alpha\) (or \(\beta\)) (Table 4). We, therefore, struggle to control every randomness that can appear during training. The same random patch from a training image is guaranteed to be chosen through all networks at a certain iteration. The identical data augmentation (see Section 4.1) is also applied at that iteration. We cross-check whether the same patches are really used for training. Figure 4 reveals that the fair training is realized. The isomorphic movement of loss of every network means that identical data points are used for training the different models. 
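To make the shared-patch idea concrete, one way to guarantee that every baseline sees the same crop and augmentation at a given iteration is to derive them deterministically from the iteration index. This is only an illustrative sketch (function and variable names are ours); the implementation described next instead records the sampled values once and replays them, which has the same effect.

```python
import random

def shared_crop_params(image_index, iteration, img_h, img_w, patch_size):
    # Deterministically derive the crop origin and augmentation flags from the
    # (image, iteration) pair so that all networks train on identical patches.
    rng = random.Random(image_index * 1_000_003 + iteration)
    top = rng.randint(0, img_h - patch_size)
    left = rng.randint(0, img_w - patch_size)
    hflip = rng.random() < 0.5
    k_rot90 = rng.choice([0, 1, 2, 3])          # rotation by k * 90 degrees
    return top, left, hflip, k_rot90

print(shared_crop_params(image_index=7, iteration=120, img_h=480, img_w=320, patch_size=128))
```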
In implement, the mini-batch size and the number of GPUs affect the randomly selected patches or augmentation parameters. Some models, such as SwinIR-light and ART-light, require more GPU memory than the others, which result in a smaller batch size or more GPUs. It causes the random patches and augmentation to alter. Therefore, we record the vertical and horizontal start points of cropped areas, as well as the random augmentation parameters (flip and roation), at each iteration while training a model. This information is loaded when training the others. \begin{table} \begin{tabular}{l|l|l|l|l} \hline \hline Model & Depth & Channels & Hidden (FFN) & \#Params \\ \hline Restormer [-1] & [4, 6, 6, 8, 6, 4, 4] \(\rightarrow\) & \(48\to 16\) & \(128\to 32\) & \(26,112\text{K}\to 1,054\text{K}\) \\ & [2, 2, 2, 2, 2, 2, 2, 2, 2] & \(32\to 16\) & \(128\to 32\) & \(50,801\text{K}\to 1,084\text{K}\) \\ \hline \multirow{2}{*}{CAT (Fan et al., 2017)} & [4, 6, 8, 6, 4, 4] \(\rightarrow\) & \(48\to 16\) & \(128\to 32\) & \(25,770\text{K}\to 1,042\text{K}\) \\ & [2, 2, 4, 2, 2, 2, 2, 2] & \(180\to 60\) & \(720\to 120\) & \(16,150\text{K}\to 1,084\text{K}\) \\ \hline \hline \end{tabular} \end{table} Table 2. Reduction of large to lightweight DN. “Depth” indicates the number of Transformer blocks in each layer. “Hidden (FFN)” means the hidden dimension in feed-forward network after self-attention. We keep the number of learnable parameters as around one million. \begin{table} \begin{tabular}{l|l|l|l|l} \hline \hline Method & Hier. & Self-attention (SA) & Feed-forward network & Bottleneck \\ \hline SwinIR-light & X & Plain window (Fan et al., 2017) & Plain (Fan et al., 2017) & - \\ ELAN-light & X & Multi-scale window & Before SA, Shift-conv (Fan et al., 2017) & - \\ NGswin & O & N-Gram neighbor window & Post-layer-norm (Fan et al., 2017) & SCDP \\ Restormer-light & O & Channel space (Fan et al., 2017) & Adding depthwise conv & Transformer \\ Uformer-light & O & Plain window (Fan et al., 2017) & Adding depthwise conv & Transformer \\ CAT-light & O & Rectangle window & Plain (Fan et al., 2017) & Transformer \\ ART-light & X & Sparse and dense window & Plain (Fan et al., 2017) & - \\ \hline \hline \end{tabular} \end{table} Table 1. Summary of the characteristics of our lightweight denoising baseline Transformers. “Hier.” indicates whether each network adopts a hierarchical U-Net [Fan et al., 2017] based architecture or a non-hierarchical structure. Figure 2. Brief pipeline of baselines. The only difference between each model is Transformer block (body). **Next**, the common method to generate random noise is to exploit additive white Gaussian noise (AWGN). This follows an assumption that Gaussian distribution can approximate the distribution of real-world unknown noise (Srivastava et al., 2017). Given a high-quality image \(I_{HQ}\), a low-quality noisy image \(I_{DQ}\) can be produced as follows: \[I_{IQ}=I_{HQ}+\mathcal{S},\mathcal{S}\sim\mathcal{N}(0,\sigma^{2}), \tag{2}\] where \(\mathcal{S}\) denotes a noise term and \(\sigma^{2}\) indicates the variance of Gaussian distribution \(\mathcal{N}\). \(\sigma\) determines noise level, _i.e._, the larger \(\sigma\) adds more noise. While some studies use a constant \(\sigma\) for training each independent model (Golovolov et al., 2012; Golov et al., 2013; Golov et al., 2014; Golov et al., 2015), others utilize a blind \(\sigma\) to construct a single model (Srivastava et al., 2017; Golov et al., 2018; Golov et al., 2019). 
The latter is worse at restoring a specific \(\sigma\) the former chooses. In contrast, the former is bad at recovering noisy images from the other \(\sigma\) values. Because of this difference, it is unfair to compare the former and latter directly. Thus, we get the low-quality noisy images by adding Gaussian noise with blind \(\sigma\) (sampled uniformly between 0 and 50), and train all Transformers following this rule. ## 4. Experiments ### Experimental Setup We implemented all works using PyTorch (Paszl et al., 2017) on 2 NVIDIA GeForce RTX 4090 GPUs, including the model configurations, training, and evaluation procedures. **Training**. Following previous works (Golov et al., 2013; Golov et al., 2014; Golov et al., 2015), we used a merged dataset DFBW including 8,594 high-quality images (800 DIV2K (Golov et al., 2014), 2,650 Flickr2K (Fischer et al., 2016), 400 BSD500 (Golov et al., 2014), and 4,744 WED (Golov et al., 2014)). The training process lasted for 400 epochs. As previously mentioned, a blind Gaussian noise was added to a high-quality image. Moreover, we employed progressive learning following Restormer (Golov et al., 2016). The patch size for random cropping was initialized as 64\(\times\)64 (batch size: 64) and then increased to 96\(\times\)96 (batch size: 32) and 128\(\times\)128 (batch size: 16) after 100 and 200 epochs, respectively. As emphasized in Section 3.3, a random patch at a certain iteration was all the same for all models. After random cropping, we augmented the data by random horizontal flipping and rotation (90\({}^{\circ}\), 180\({}^{\circ}\), 270\({}^{\circ}\)). The learning rate was initialized as \(0.0004\), which is halved after \(\{200,300,350,375\}\) epochs. For the first 20 epochs, there was warmup phase (Golov et al., 2014) that linearly increased the learning rate from 0.0 to 0.0004. We used Adam (Kingmare et al., 2014) optimizer. **Evaluation**. We reported PSNR (dB) and SSIM (Kingmare et al., 2014) on the standard benchmark test datasets as metrics. The test sets for color DN include C BSD68 (Golov et al., 2014), Kodak24 (Golov et al., 2014), McMaster (McMaster et al., 2016), and Urban100 (Golov et al., 2014). The performances on Set12 (Golov et al., 2014), BSD68 (Golov et al., 2014), and Urban100 (Golov et al., 2014) for grayscale DN were evaluated. The noise levels \(\sigma\) of evaluation were 15, 25, and 50. ### Main Results of Baselines As shown in Table (a)a, we compare our fairly trained lightweight Transformer baselines for color blind Gaussian denoising (DN). We witnessed two interesting points in this table. In terms of the **original task** of each model, the networks from lightweight super-resolution (SR) field generally perform better than the counterparts stemming from large DN. This differences result from a reason that the methods from lightweight SR were already designed to perform efficiently. It implies that lightening deep neural networks is beyond simply reducing the number of parameters. Therefore, we discuss this issue in Section 4.4 to provide some considerations and insights when designing a effective lightweight network. Although not covered in this work, more sophisticated skills, such as quantization (Golov et al., 2013; Golov et al., 2014; Golov et al., 2015; Golov et al., 2015) or network pruning (Golov et al., 2014; Golov et al., 2015; Golov et al., 2015), may be also considered. 
### Main Results of Baselines

As shown in Table 3(a), we compare our fairly trained lightweight Transformer baselines for color blind Gaussian denoising (DN). We observe two interesting points in this table. In terms of the **original task** of each model, the networks from the lightweight super-resolution (SR) field generally perform better than their counterparts stemming from large DN. This difference arises because the methods from lightweight SR were already designed to perform efficiently. It implies that lightening deep neural networks goes beyond simply reducing the number of parameters. Therefore, we discuss this issue in Section 4.4 to provide some considerations and insights for designing an effective lightweight network. Although not covered in this work, more sophisticated techniques, such as quantization (Golov et al., 2013; Golov et al., 2014; Golov et al., 2015; Golov et al., 2015) or network pruning (Golov et al., 2014; Golov et al., 2015; Golov et al., 2015), may also be considered.

Next, with respect to the **network architecture**, the non-hierarchical structure (recall Table 1) results in better performance at lower noise levels. The non-hierarchical ART-light performs the best among the networks from large DN (below the dashed line) at \(\sigma=15,25\). As demonstrated in (Golov et al., 2015), this is because reconstructing a high-quality image from higher-resolution features is more straightforward than from smaller features. However, the situation changes when recovering highly distorted images (\(\sigma=50\)): ART-light only shows results similar to Uformer-light and CAT-light, and the other algorithms of self-attention or FFN arranged in Table 1 affected this challenging task. Meanwhile, NGswin seems to overcome the issues of a hierarchical network through several crucial components designed efficiently (see Section 4.4.1). In addition, Restormer-light shows low reconstruction performance. It employs channel self-attention to capture the global dependency of every pixel instead of the local spatial self-attention adopted in the other baselines. While the large DN model (Restormer (Seng et al., 2017)) achieved its goal with a large number of parameters, Restormer-light lacks the capacity to consider sufficient spatial information under the parameter constraint (around one million). This is discussed in Section 4.4.2.

Figure 4. Trends of the training loss of each model. The similar movements across all models at each epoch indicate that identical patches are used at a given iteration. Note that the training losses of NGswin and ELAN-light are compared in Section 4.4.3 to describe the instability of ELAN-light.

Figure 3. Examples of randomly cropped patches according to a random seed \(\alpha\) or \(\beta\). The random seed \(\beta\) selects more regions that are challenging to recover than \(\alpha\). In extreme cases, \(\alpha\) leads to lower performances, as in Table 4.

Secondarily, we also provide lightweight Transformer baselines for grayscale blind Gaussian denoising in Table 3(b). The results were similar to color denoising. Interestingly, however, CAT-light recorded outstanding results, especially on the BSD68 dataset. From this result, we draw the possibility that a task- or dataset-oriented architecture can be designed intentionally. The visual comparisons are supplied in Figure 5.

### Analysis of Randomness

As recorded in Table 4, the PSNR scores of all models on all datasets increased with a new seed, except for Restormer-light on Urban100. SSIM values for all but Restormer-light also increased. For example, NGswin with the new seed \(\beta\) outperformed SwinIR-light using the original seed \(\alpha\) (refer to Table 3). In turn, ELAN-light with \(\beta\) surpassed NGswin using \(\alpha\). This demonstrates that a vast number of trials cannot solve the problem of randomness, at least in the image denoising task. Please note that these overall improved results are not attributable to any novel or smart approach. Rather, they show that an accidental selection of the random seed can give more successful results. In contrast to previous works that overlooked this problem, our attempt to fairly prepare the training patches and compare the models based on this fairness is compelling. To support our findings, we verify the true cause of these results by comparing the results from randomly cropped data and randomly initialized weights in Table 5. The latter could not make relatively meaningful differences when the randomly cropped patches were kept identical at a given iteration. As a result, it is necessary to consider and control the training data resulting from randomness for a truly fair comparison.
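A minimal sketch of the fairness protocol (our own illustration; the file name and helper are hypothetical): the crop start points and augmentation flags are sampled once, saved, and replayed for every other model so that each iteration sees identical patches.

```python
import json
import random

def sample_patch_params(img_h: int, img_w: int, patch: int) -> dict:
    """Draw one set of crop/augmentation parameters for a training iteration."""
    return {
        "top": random.randint(0, img_h - patch),    # vertical start point
        "left": random.randint(0, img_w - patch),   # horizontal start point
        "hflip": random.random() < 0.5,             # random horizontal flip
        "rot90": random.choice([0, 1, 2, 3]),       # 0, 90, 180, 270 degrees
    }

# First training run: record the parameters of every iteration.
params = [sample_patch_params(480, 320, 64) for _ in range(10_000)]
with open("patch_params.json", "w") as f:
    json.dump(params, f)

# Subsequent runs: load and replay the identical parameters per iteration.
with open("patch_params.json") as f:
    replayed = json.load(f)
```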
### Empirical Analysis of Components

#### 4.4.1. Hierarchical Structure

Hierarchical structures have been widely employed in general image restoration (IR) tasks for network efficiency (Krizhevsky et al., 2012; Krizhevsky et al., 2012; Krizhevsky et al., 2012; Krizhevsky et al., 2012; Krizhevsky et al., 2012).

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Method & Seed & \multicolumn{2}{c|}{CBSD68 [\(\uparrow\)]} & Kodak24 [\(\uparrow\)] & McMaster [\(\uparrow\)] & Urban100 [\(\uparrow\)] \\ \hline ELAN-light & \(\alpha\) & 28.07 & 0.7957 & 29.35 & 0.8028 & 29.51 / 0.8277 & 28.67 / 0.8596 \\ \cline{2-9} & \(\beta\) & **28.20** & **0.8020** & **29.4** & **0.8020** & **29.65** & **0.8338** & **28.55** \\ \hline NGswin & \(\alpha\) & 28.13 & 0.8011 & 29.42 / 0.807 & 29.50 / 0.8392 & 28.75 / 0.8646 \\ \cline{2-9} & \(\beta\) & **28.27** & **0.8027** & **29.58** & **0.8114** & **29.75** & **0.8362** & **28.90** / **0.8671** \\ \hline Restormer-light & \(\alpha\) & 28.04 & **0.7974** & 29.19 / **0.804** & **29.31** & **0.826** & **28.36** & **0.8359** \\ \cline{2-9} & \(\beta\) & **28.11** & **0.7751** & **29.25** & **0.8028** & 29.35 / 0.8248 & 28.28 / 0.8533 \\ \hline Uformer-light & \(\alpha\) & 28.11 & 0.7968 & 29.26 / 0.8020 & 29.46 / 0.8259 & 28.33 / 0.8551 \\ \cline{2-9} & \(\beta\) & **28.12** & **0.7986** & **29.34** & **0.8051** & **29.51** & **0.8299** & **28.44** / **0.8591** \\ \hline \end{tabular} \end{table} Table 4. Study on randomness. The random seed \(\alpha\) is the one our baselines follow. Another seed \(\beta\) differs from \(\alpha\). Results marked with the same seed mean that identical patches and corresponding augmentations are used at a given iteration. PSNR / SSIM are evaluated with \(\sigma=50\).
Table 3. Quantitative comparison (PSNR / SSIM) of the lightweight baseline Transformers for blind Gaussian denoising at \(\sigma=15,25,50\): (a) color DN and (b) grayscale DN.

Among our LWDN Transformer baselines, NGswin, Restormer-light, Uformer-light, and CAT-light utilize this U-Net [51] based architecture (recall Table 1). However, the layers taking and producing lower-resolution features lose the spatial details of high-frequency information. Considering that the degradation in other IR tasks (_e.g._, deraining, demosaicing) follows a relatively homogeneous pattern, preserving high-frequency details is particularly crucial in the denoising task to recover edges and textures destroyed by heterogeneous random noise. Thus, the hierarchical denoisers tend to fall behind the non-hierarchical structures when the parameter budget is kept similar. The fact that the non-hierarchical SwinIR-light is the best baseline highlights the importance of this issue. Although Restormer [68], Uformer [63], and CAT [82] (_i.e._, large DN models) tried to overcome it by enlarging their model size, they suffered from too many parameters (26M, 51M, and 26M, respectively). This strategy is not reasonable in lightweight IR tasks that strictly constrain the network size (around 1M parameters in this paper). Nevertheless, the hierarchical NGswin avoids a significant drop in performance. On this point, we investigate the U-Net components that can compensate for the drawbacks efficiently.
In Table 6, we contrast NGswin with the other hierarchical baselines in terms of the main layers of the U-Net. First, NGswin places a dense connectivity [20] between encoder layers, while there are no specific connections in the others. This cascading mechanism conveys the information of the previous layers efficiently [2]. Second, the input to the bottleneck layer is also different. After the encoder stages, NGswin introduces a bottleneck taking merged multi-scale features. It is named SCDP: pixel-Shuffle, Concatenation, Depthwise convolution, and Point-wise projection. SCDP can enhance the performance with negligible extra parameters. Third, NGswin exploits an asymmetric single decoder that is smaller than the encoder. It not only greatly increases the network efficiency but also takes advantage of high-resolution features.

As shown in Table 7, we conduct an ablation study applying these robust U-shaped components to Uformer-light, to inspect the potential of hierarchical structures. First of all, the features from the shallow module and each encoder layer are densely connected. The performance gains slightly with a few additional parameters. Next, we replaced the plain bottleneck with a modified SCDP. We transformed some steps of the original SCDP due to the fundamental structural differences between NGswin and Uformer-light. As this bottleneck only took the features before downsizing (_i.e._, the direct outputs from each encoder level), the 3rd downsizing layer was no longer required. Therefore, we could reduce the parameters yet further enhance reconstruction accuracy.

\begin{table} \begin{tabular}{c|c|c|c} \hline Method & Encoder Connection & Bottleneck Input & Decoder Structure \\ \hline Restormer-light & None & Last encoder output & Symmetric \\ Uformer-light & None & Last encoder output & Symmetric \\ CAT-light & None & Last encoder output & Symmetric \\ \hline NGswin & Dense connection [20] & Merged multi-scale encoder features & Asymmetric \\ \hline \end{tabular} \end{table} Table 6. The differences between the hierarchical LWDN Transformers.

Figure 5. The visual comparison of denoising results of our seven baseline Transformers and a large model. While the large SwinIR recovers degraded images the best, our baselines can generally produce comparable results for human perception with much fewer parameters.

The performances of the enhanced Uformer-light were comparable to NGswin and ELAN-light (refer to Table 3). Finally, we changed the symmetric decoder into an asymmetric one. The three decoder levels were fused into one level, which allows more encoder layers to be included. The network depth shifts from \([2,4,2,2,2,4,2]\) to \([4,4,2,2,8]\). Despite the deeper depth, removing the existing decoders that took quite large channels enabled the number of parameters to be almost halved compared to the baseline. This transformation also improved the performance. This demonstrates that the lightweight hierarchical network has the potential to progress further.
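To make the SCDP bottleneck described above concrete, here is a rough PyTorch sketch written by us (it simplifies the released NGswin implementation): lower-resolution encoder features are pixel-shuffled to the finest resolution, concatenated, passed through a depthwise convolution, and projected point-wise. The channel sizes are placeholders.

```python
import torch
import torch.nn as nn

class SCDPBottleneck(nn.Module):
    """Pixel-Shuffle, Concatenation, Depthwise conv, Point-wise projection (sketch)."""

    def __init__(self, channels=(16, 32, 64), out_channels=64):
        super().__init__()
        # Encoder level i is upscaled by 2**i to reach the finest resolution.
        self.shuffles = nn.ModuleList(
            [nn.Identity()] + [nn.PixelShuffle(2 ** i) for i in range(1, len(channels))]
        )
        merged = sum(c // (4 ** i) for i, c in enumerate(channels))
        self.depthwise = nn.Conv2d(merged, merged, 3, padding=1, groups=merged)
        self.pointwise = nn.Conv2d(merged, out_channels, 1)

    def forward(self, feats):
        # feats: encoder outputs ordered from the finest to the coarsest resolution.
        up = [shuffle(f) for shuffle, f in zip(self.shuffles, feats)]
        x = torch.cat(up, dim=1)  # concatenation of the merged multi-scale features
        return self.pointwise(self.depthwise(x))
```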
#### 4.4.2. Spatial vs. Channel Self-Attention

It is ideal to involve every pixel of the feature maps in the spatial self-attention (SP-SA) computation, as done in ViT (Fan et al., 2017) and IPT (Fan et al., 2018), but the very high resolution of inputs for image restoration tasks leads to a quadratic increase in time-complexity. Thus, the original versions (Fan et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2020; Wang et al., 2021) of our baselines employed local window-based SP-SA, except for Restormer (Fan et al., 2017). Restormer utilized channel self-attention (CH-SA) to take advantage of the global2 information, as local SP-SA is insufficient for considering global context. The time-complexity3 of typical local SP-SA and CH-SA are:

Footnote 2: In this section, the term “global” expresses that it involves all pixels of the feature maps in the computation of self-attention, not some pixels within a “local” window.

Footnote 3: We omit other components proposed in each model, and softmax.

\[\begin{split}\Omega(\text{local SP-SA})&=4H_{i}W_{i}C_{i}^{2}+2M^{2}H_{i}W_{i}C_{i},\\ \Omega(\text{CH-SA})&=4H_{i}W_{i}C_{i}^{2}+2H_{i}W_{i}C_{i}^{2}/L_{i},\end{split} \tag{3}\]

where \(H_{i}\), \(W_{i}\), and \(C_{i}\) denote the height, width, and channels of the feature maps in the \(i\)-\(th\) Transformer block, and \(M\) is the size of the local window. \(L_{i}\) is the number of multi-heads. CH-SA looks more efficient than SP-SA, as the main differences can be abbreviated as \(M^{2}\) and \(C_{i}/L_{i}\) in the second terms. However, there is a general trend that as the time-complexity increases, so does the network capacity. In other words, the capacity of CH-SA being inversely proportional to \(L_{i}\) means that adding more parallel multi-heads, which attend to various spatial details from different perspectives (Fan et al., 2018), reduces the network capacity. In models without a parameter constraint (_i.e._, in larger models), this can be overcome by increasing the channels. On the other hand, under a lightweight circumstance, the channels are highly reduced, which limits the increase of parallel multi-heads in order to conserve capacity. The inevitably limited (fewer) multi-heads, in turn, decrease the ability to attend to different parts of the input. Correspondingly, CH-SA lacks the capability to capture and preserve semantic information in the spatial dimension compared to SP-SA (Table 3(a)).

To reinforce our claims, we conducted an ablation study in Figure 6(a). While the other structures and hyper-parameters were kept the same as the baseline, we modified two components: the space of self-attention and the number of channels. First, we tried to exploit global SP-SA following the original aim of Restormer, but the hardware was unable to endure the massive complexity. CH-SA of Restormer-light, therefore, was replaced with the local square window-based SP-SA adopted in SwinIR-light, Uformer-light, and NGswin. The result shows that local SP-SA is superior to CH-SA under the lightweight condition. The PSNR on the McMaster dataset gains 0.3 dB with negligible extra parameters and time-complexity. Second, we increased the channels while keeping CH-SA. Despite a notable improvement with over twice the parameters, increasing the channels did not match SP-SA, which exposes the superiority of local SP-SA again. Plus, when both modifications were applied, it barely outperformed SwinIR-light with 2.58 times more parameters. Finally, we compare the models in both large and lightweight sizes. Figure 6(b) shows that CH-SA is effective without parameter constraints, as mentioned before, whereas its effectiveness dwindles due to insufficient spatial comprehension in the lightweight field.
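The two expressions in Eq. (3) can be compared numerically with the small helper below (our own sketch; the window size \(M\) and head count \(L_i\) used in the example call are placeholders, not values taken from the baselines).

```python
def attention_complexity(h: int, w: int, c: int, window: int, heads: int):
    """Evaluate the local SP-SA and CH-SA terms of Eq. (3) for one block."""
    local_sp_sa = 4 * h * w * c ** 2 + 2 * (window ** 2) * h * w * c
    ch_sa = 4 * h * w * c ** 2 + 2 * h * w * c ** 2 / heads
    return local_sp_sa, ch_sa

# Example: a 64x64 feature map with 60 channels (placeholder window/head values).
sp_sa, ch_sa = attention_complexity(64, 64, 60, window=8, heads=6)
print(f"local SP-SA: {sp_sa:.3e}  CH-SA: {ch_sa:.3e}")
```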
#### 4.4.3. Excessive weight sharing

ELAN-light (Wang et al., 2019) employs many weight sharing methods. First, it proposed accelerated self-attention, which shares the _query_ and _key_ in the computation of self-attention (_i.e._, \(Q=K\)). Second, once a shallower layer calculates the attention scores \((\mathrm{softmax}(\frac{QK^{T}}{\sqrt{D}}),\;Q=K,\;D:\text{dimension})\), a consecutive layer shares them instead of producing them separately. Third, ELAN-light employed shift-convolution (Wang et al., 2019), where several elements, whose original spatial locations and channels differ from each other, share the weight of a linear projection. However, we find that the excessive weight sharing of this network leads to unstable learning (Fan et al., 2018), as depicted in Figure 7. The training becomes stable when ELAN-light discards the weight sharing methods. Excessive weight sharing results in limited network flexibility and weak representation of diverse inputs. We hypothesize that these flaws may let a particular data point (an image patch) produce hypertrophied (overgrown) gradients during back-propagation. This phenomenon causes the network parameters to momentarily diverge from optimal points, bringing out an abnormal loss.

Table 7. Ablation study applying the U-shaped components (dense connection, multi-scale bottleneck, asymmetric decoder) to Uformer-light, reporting \#Params and PSNR on CBSD68, Kodak24, McMaster, and Urban100.

Certainly, mild weight sharing in a neural network is beneficial for some purposes, such as memory- and computation-efficiency. Therefore, since weight sharing leads to a trade-off between efficiency and flexibility, it is expected that future works aim to systematically find the optimal point of this trade-off. Some regularization strategies, such as gradient clipping (Wang et al., 2017; Wang et al., 2018), or neural architecture search (NAS) methods (Wang et al., 2018) can be helpful for handling this issue.

#### 4.4.4. Still useful CNN

Despite the long-range dependency of the self-attention mechanism, a meticulous composition of CNN layers is still relevant for image restoration tasks. Unlike high-level vision tasks (_e.g._, classification, object detection), low-level tasks mainly aim to reconstruct each distorted pixel. As this recovery process requires the information in the surrounding areas of each pixel (Wang et al., 2017; Wang et al., 2018; Wang et al., 2018), CNN, which is conventionally good at extracting local features, is essential. Figure 8 visualizes the effect of variants of the reconstruction (tail) module, which is composed of only convolutional layers. In this experimental setting, we increased the number of convolutional layers or their kernel size. The extra CNN layers added to the tail module output the same channels as the input features (the kernel size was fixed at \(3\times 3\)). When the kernel size increased, the number of layers was kept at \(2\). As a result, the performance was proportional to the number of CNN layers in the tail module, while the effect of the kernel size varied case by case.
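The tail-module variants can be sketched as follows (our own illustration rather than the exact implementation); the number of convolutional layers and the kernel size are the two factors varied in Figure 8.

```python
import torch.nn as nn

def make_tail(channels: int = 16, out_channels: int = 3,
              num_layers: int = 2, kernel_size: int = 3) -> nn.Sequential:
    """Reconstruction (tail) module built only from convolutional layers."""
    pad = kernel_size // 2
    layers = []
    for _ in range(num_layers - 1):
        # Intermediate layers keep the same channel width as the input features.
        layers += [nn.Conv2d(channels, channels, kernel_size, padding=pad),
                   nn.ReLU(inplace=True)]
    layers.append(nn.Conv2d(channels, out_channels, kernel_size, padding=pad))
    return nn.Sequential(*layers)
```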
#### 4.4.5. A Supplement

In Figure 9, we supply the training losses of all experiments in Section 4.4. Considering the similar movements of all of them, our crucial goal, truly fair training, is achieved.

## 5. Conclusion

This work presented seven Transformer baselines for lightweight denoising (LWDN), which had been unexplored until recently. We aimed to control the randomness and train all models in a truly fair manner, because the patches randomly selected from a training image were found to be remarkably influential on the recovery performance. Based on our baselines, the empirical studies on different components delivered considerations for LWDN with Transformers. We verified the potential of the hierarchical network to be further improved with advanced elements, such as a dense connection, a multi-scale bottleneck, and an asymmetric decoder. It was also shown to be more effective to utilize local window-based spatial self-attention in lightweight tasks rather than channel self-attention, unlike in models without a parameter constraint. Besides, excessive weight sharing made the learning unstable, and the design of convolutions remained relevant to denoising tasks. In closing, we hope this work can encourage succeeding researchers to develop this field by using our baselines and findings.

**Acknowledgements.** This paper was supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant (No.2022-0-00956) and Korea Health Industry Development Institute (KHIDI) grant (No. H122C1983) funded by the Korea government (MSIT).

Figure 8. Study on tail variants. We increase the number of CNN layers or the kernel size. PSNR is evaluated on Urban100 (Wang et al., 2018) with \(\sigma=50\).

Figure 7. Trends of training loss. The \(\varnothing\) mark denotes removal of weight sharing in the model. The training of ELAN-light becomes unstable at some epochs. However, ELAN-light without weight sharing is trained stably.

Figure 9. Training loss of all experiments in Section 4.4. (a) Table 4. (b) Table 5. (c) Table 7. (d) Figure 6. (e), (f) Figure 8. Note: the legends of (b) mean (data seed, init seed), which reveals that only the data seed leads to similar trends of loss.
2308.01508
Circumventing Concept Erasure Methods For Text-to-Image Generative Models
Text-to-image generative models can produce photo-realistic images for an extremely broad range of concepts, and their usage has proliferated widely among the general public. On the flip side, these models have numerous drawbacks, including their potential to generate images featuring sexually explicit content, mirror artistic styles without permission, or even hallucinate (or deepfake) the likenesses of celebrities. Consequently, various methods have been proposed in order to "erase" sensitive concepts from text-to-image models. In this work, we examine five recently proposed concept erasure methods, and show that targeted concepts are not fully excised from any of these methods. Specifically, we leverage the existence of special learned word embeddings that can retrieve "erased" concepts from the sanitized models with no alterations to their weights. Our results highlight the brittleness of post hoc concept erasure methods, and call into question their use in the algorithmic toolkit for AI safety.
Minh Pham, Kelly O. Marshall, Niv Cohen, Govind Mittal, Chinmay Hegde
2023-08-03T02:34:01Z
http://arxiv.org/abs/2308.01508v2
# Circumventing Concept Erasure Methods For Text-to-Image Generative Models ###### Abstract Text-to-image generative models can produce photo-realistic images for an extremely broad range of concepts, and their usage has proliferated widely among the general public. Yet, these models have numerous drawbacks, including their potential to generate images featuring sexually explicit content, mirror artistic styles without permission, or even hallucinate (or deepfake) the likenesses of celebrities. Consequently, various methods have been proposed in order to "erase" sensitive concepts from text-to-image models. In this work, we examine seven recently proposed concept erasure methods, and show that targeted concepts are not fully excised from any of these methods. Specifically, we devise an algorithm to learn special input word embeddings that can retrieve "erased" concepts from the sanitized models with no alterations to their weights. Our results highlight the brittleness of post hoc concept erasure methods, and call into question their use in the algorithmic toolkit for AI safety. ## 1 Introduction Motivation.Text-to-image models [1; 2; 3; 4; 5; 6; 7; 8] have garnered significant attention due to their exceptional ability to synthesize high-quality images based on text prompts. Such models, most prominently Stable Diffusion (SD) [7] and DALL-E 2 [4], have been adopted in a variety of commercial products spanning application realms ranging from digital advertising to graphics to game design. In particular, the open-sourcing of Stable Diffusion has democratized the landscape of image generation technology. This shift underlines the growing practical relevance of these models in diverse real-world applications. However, despite their burgeoning popularity, these models come with serious caveats: they have been shown to produce copyrighted, unauthorized, biased, and potentially unsafe content [9; 10]. What is the best way to ensure that text-to-image models do not produce sensitive or unsafe concepts? Dataset pre-filtering [11] may present the most obvious answer. However, existing filtering procedures are imperfect and may exhibit a large proportion of false negatives. See the extensive studies reported in [12] on how LAION-400M, a common dataset used in training text-image models, contains numerous offensive image samples which persist after applying standard NSFW filters. Even if perfect data pre-filtering were possible, substantial resources would be required to retrain large models from scratch in response to issues unearthed post-training. As a result, several _post hoc_ concept-erasure methods have emerged of late. Some advocate inference guidance [13; 14]. Others require fine-tuning the weights on an auxiliary subset of training data [15; 16; 17]. These may be categorized as more practical alternatives to full model-retraining with a stripped-down version of the original training data. Many of these methods are accompanied by public releases of the weights of the "sanitized" models. Such concept erasure methods are purported "to permanently remove [targeted concepts] from the weights"; moreover, they are presented as "not easy to circumvent since [the method] modifies weights" [15]. An array of results on several test instances across use cases (object removal, artistic style forgetting, avoidance of NSFW content, avoiding likeness of specific people) seem to support the efficacy of these methods. 
Our contributions.Our main contribution in this paper is to show that: _Post hoc concept erasure in generative models provides a false sense of security_. We investigate seven recently announced concept-erasure methods for text-to-image generative models: (i) Erased Stable Diffusion [15], (ii) Selective Amnesia [16], (iii) Forget-me-not [17], (iv) Ablating Concepts [18], (v) Unified Concept Editing [19], (vi) Negative Prompt [14], and (vii) Safe Latent Diffusion [13]. All of these were either published or appeared online in the first 9 months of 2023. Somewhat surprisingly, we show that all seven techniques can be circumvented. In all cases, the very same "concept-erased" models -- with zero extra training or fine-tuning -- may produce the erased concept with a suitably constructed (soft) prompt. Therefore, the seemingly-safe model may still be used to produce sensitive or offensive content. Overall, our results indicate that there may be a fundamental brittleness to post hoc erasure methods, and entirely new approaches for building (and evaluating) safe generative models may be necessary. See Figure 1 for examples. Techniques.Our approach stems from the hypothesis that existing concept erasure methods may be, in reality, performing some form of _input filtering_. More specifically, in these methods, the modified generative models produced by these methods are evaluated on a limited subset of text inputs: the original offending/sensitive text, and related prompts. However, this leaves the model vulnerable to more sophisticated text prompts. In particular, we design individual _Concept Inversion_ (CI) "attack" techniques to discover special word embeddings that can recover erased concepts when fed to the modified model. Through the application of CI, we provide evidence that these unique word embeddings outmaneuver concept erasure methods across various use cases such as facial likeness, artistic style, object-types, and NSFW concepts. Therefore, it is not the case that these concepts have been permanently removed from the model; these still persist, albeit remapped to new embeddings. Implications.Our extensive experiments below highlight two key points: 1. Our results call into question the premise that existing erasure methods (fully) excise concepts from the model. Our results show that this premise is not correct and that the results in these previous works on concept erasure should be scrutinized carefully. Figure 1: **Concept erasure methods fail to excise concepts from text-to-image models. This figure shows results from ESD [15], which is a variant of Stable Diffusion trained to avoid generating NSFW content, specific artist styles, and specific objects, like trucks (\(2^{nd}\) row). We circumvent this method by generating the “erased” concepts (\(3^{rd}\) row) by designing special prompts.** 2. We call for stronger evaluation methodologies for concept erasure methods. Measuring the degree of concept erasure in text-to-image models is tricky, since there are potentially a vast number of prompts that a motivated (and moderately well-equipped) attacker can use as inputs. As a first step to mitigate this issue, we recommend evaluating models in terms of our CI attacks during evaluation, and not merely limited to evaluating over mild variations of the original text prompts. Overall, our findings shine a spotlight on the considerable challenges in sanitizing already trained generative AI models (such as Stable Diffusion) and making them safe for wide public use. 
## 2 Background Denoising Diffusion Models.Diffusion models belong to a category of generative models that sample from a distribution via an iterative Markov-based denoising process [20; 21]. The process begins with a sampled Gaussian noise vector, denoted as \(x_{T}\), and undergoes a series of \(T\) denoising steps to ultimately restore the final data, referred to as \(x_{0}\). In practical applications, the diffusion model is trained to predict the noise \(\epsilon_{t}\) at each timestep, \(t\), utilized to generate the progressively denoised image, \(x_{t}\). Latent diffusion models (LDM) [7] offer improved efficiency by operating in a lower dimensional space learned by an autoencoder. The first component of LDM consists of an encoder \(\mathcal{E}\) and a decoder \(\mathcal{D}\) that have been pre-trained on a large collection of images. During the training of LDM, for an image \(x\), the encoder learns to map \(x\) into a spatial latent code \(z=\mathcal{E}(x)\). The decoder maps such latent codes back to the original images such that \(\mathcal{D}(\mathcal{E}(x))\approx x\). The second component is a diffusion model trained to produce codes within the learned latent space. Given a conditional input \(c\), the LDM is trained using the following objective function: \[\mathcal{L}=\mathbb{E}_{z\sim\mathcal{E}(x),t,c,e\sim\mathcal{N}(0,1)}\left[ \|\epsilon-\epsilon_{\theta}(z_{t},c,t)\|_{2}^{2}\right], \tag{1}\] Here \(z_{t}\) is the latent code for time \(t\), and \(\epsilon_{\theta}\) is the denoising network. At inference time, a random noise tensor is sampled and gradually denoised to produce a latent \(z_{0}\), which is then transformed into an image through the pre-trained decoder such that \(x^{\prime}=\mathcal{D}(z_{0})\). [22] propose a classifier-free guidance technique is used during inference and requires that the model be jointly trained on both conditional and unconditional denoising. The unconditional and conditional scores are used to create the final latent \(z_{0}\). There, we start with \(z_{T}\sim\mathcal{N}(0,1)\) which is transformed to obtain \(\tilde{\epsilon}_{\theta}(z_{t},c,t)=\epsilon_{\theta}(z_{t},t)+\alpha( \epsilon_{\theta}(z_{t},c,t)-\epsilon_{\theta}(z_{t},t))\,,\) to get \(z_{T-1}\). This process is repeated sequentially until \(z_{0}\) is produced. Machine Unlearning.The conventional goal in machine learning is to foster generalization while minimizing reliance on direct memorization. However, contemporary large-scale models possess the capacity for explicit memorization, whether employed intentionally or as an inadvertent byproduct [23; 24; 25]. The possibility of such memorization has led to the development of many works in machine unlearning [26; 27], the core aim of which is to refine the model to behave as though a specific set of training data was never presented. Mitigating Undesirable Image Generation.Numerous methods have been proposed to discourage the creation of undesirable images by generative models. One initial approach is to exclude certain subsets of the training data. However, this solution can necessitate the retraining of large-scale models from scratch, which can be prohibitive. An alternative put forward by [13; 14] involves manipulating the inference process in a way that steers the final output away from the target concepts. Yet another approach employs classifiers to alter the output [10; 11; 28]. 
Since inference guiding methods can be evaded with sufficient access to model parameters [29], subsequent works [15; 16; 17; 18; 19] suggest fine-tuning Stable Diffusion models. [30] study the capability of generating unsafe images and hateful memes of various text-to-image models. The authors then propose a new classifier that outperforms existing built-in safety checkers of these models. Diffusion-based Inversion.Image manipulation with generative networks often requires _inversion_[31; 32], the process of finding a latent representation that corresponds to a given image. For diffusion models, Dhariwal & Nichol [33] demonstrate that the DDIM [34] sampling process can be inverted in a closed-form manner, extracting a latent noise map that will produce a given real image. More recent works [35; 36; 37; 38] try to invert a user-provided concept to a new pseudo-word in the model's vocabulary. The most relevant approach for our work is Textual Inversion [36] which learns to capture the user-provided concept by representing it through new "words" in the embedding space of a frozen text-to-image model without changing the model weights. In particular, the authors designate a placeholder string, \(c_{*}\), to represent the new concept the user wishes to learn. They replace the vector associated with the tokenized string with a learned embedding \(v_{*}\), in essence "injecting" the concept into the model vocabulary. The technique is referred to as Textual Inversion and consists of finding an approximate solution to the following optimization problem: \[v_{*}=\operatorname*{arg\,min}_{v}\mathbb{E}_{z\sim\mathcal{E}(x),c_{*},e\sim \mathcal{N}(0,1),t}\big{[}\|\epsilon-\epsilon_{\theta}(z_{t},c_{*},t)\|_{2}^{ 2}\big{]}.\] ## 3 Preliminaries Basic setup and threat model.For the remainder of the paper, we will leverage inversion techniques to design an "attack" on concept-erased models. We assume the adversary has: (1) access to the weights and components of the erased model, (2) knowledge of the erasure method, (3) access to example images with the targeted concept (say via an image search engine), and (4) moderately significant computational power. A trivial approach to "un-erase" an erased concept would be via fine-tuning a sanitized model on sufficiently many example images. Therefore, we also assume that: (5) the adversary cannot modify the weights of the erased model. To show that our CI attack is a reliable tool for establishing the existence of concepts in a model, we conduct two experiments to investigate whether Textual Inversion (TI) by itself can generate a concept that the model has not captured during training. If TI can hallucinate totally novel concepts, then even data filtering before training might not be able to avoid producing harmful/copyrighted content. In the first experiment, we compare TI performance on concepts that are better represented in the training data of Stable Diffusion 1.4, versus those that are likely not present. In the second experiment, we conducted a more controlled study by training two diffusion models on MNIST [39] from scratch. We include all the training classes in the first run and exclude one class in the second run. In both experiments, we find that Textual Inversion works significantly worse when the concept is not well represented in the training set of the generative model. See Figure 14 in the Appendix. ## 4 Circumventing Concept Erasure ### Experimental setup In this section, we examine seven (7) different concept erasure methods. 
To the best of our knowledge, this list constitutes all the concept erasure methods for Stable Diffusion models published up to September 19, 2023. We design CI procedures tailored to each erasure method that search the space of word embeddings to recreate the (purportedly) erased visual concepts. Importantly, our approach relies solely on the existing components of the post-erasure diffusion models. For these experiments, wherever possible we use the pre-trained models released by the authors unless explicitly stated otherwise; for concepts where erased models were not publicly available, we used public code released as-is by the authors to reproduce their erasure procedure. In each subsection, we start by describing the approach, then show how to attack it using Concept Inversion. We interleave these with results, and reflect on their implications. Finally, we show evidence that current concept erasure methods are likely performing input filtering, and demonstrate the transferability of the learned word embeddings. Our code is available for reproducibility purposes at [https://nyu-dice-lab.github.io/CCE/](https://nyu-dice-lab.github.io/CCE/)

### Evaluation Protocol

For each concept erasure method discussed below, we initially deploy it to erase 4 concept categories: art style, object, ID, and NSFW content. We use Stable Diffusion 1.4 (SD 1.4) for all our experiments. We assume that the adversary can access a small number of examples of the targeted concept from Google Images; see the Appendix for details.

Art style: We select 6 styles from modern artists and artistic topics that have been reported to be captured by SD 1.4: the movie series "Ajin: Demi Human", Thomas Kinkade, Tyler Edlin, Van Gogh, Kelly McKernan, and Kilian Eng. We generate images from the erased models using the prompt "A painting in the style of [_artist name_]". After performing CI, we generate images by replacing [_artist name_] with \(c_{*}\), the special placeholder string associated with the learned word embedding. In addition to qualitative results, we follow [15] and conduct a human study to measure the effectiveness of our CI methods. In particular, for each artist, we collect 10 images of art created by that artist from Google Images. We then generate 10 images from the erased model using the standard concept name, and 10 images using CI, per style and per concept erasure method. Participants were shown 5 real reference images from the same artist and another image of the same style (either real, from the erased model, or from CI). They were then asked to estimate, on a five-point Likert scale, their confidence level that the experimental image has the same style as the reference images. Our study consists of 50 participants, with 96 responses per participant.

Objects: Following Gandikota _et al._[15], we investigate the Imagenette [40] dataset, which comprises ten easily identifiable classes (cassette player, chain saw, church, etc.). We evaluate CI methods by examining the top-1 predictions of a ResNet-50 ImageNet classifier on 500 generated images. We generate images from the erased models using the prompt "A photo of a [_object name_]". For CI, we generate images by replacing [_object name_] with the special string \(c_{*}\).

ID: Following Heng & Soh [16], we select "Brad Pitt" and "Angelina Jolie" as identity concepts. We then utilize the GIPHY celebrity detector [41] for Concept Inversion evaluation.
We generate 500 images from the erased models using the prompt "A photo of a [_person name_]". For CI, we generate the same number of images by replacing [_person name_] with the special placeholder string \(c_{*}\). NSFW content:Introduced by Schramowski _et al._[13], the I2P dataset comprises 4703 unique prompts with corresponding seeds, which (to date) is the definitive benchmark for measuring the effectiveness of NSFW concept erasure. This process involves generating images using the prompts and seeds and subsequently utilizing NudeNet [28] to classify the images into various nudity classes. The I2P benchmark is effective as its prompts do not necessarily contain words strictly related to nudity. Hence, an effective erasure method on this benchmark requires some degree of robustness to prompt selection. To evaluate each concept erasure method, we first used SD 1.4 to generate 4703 images using the I2P dataset. We used NudeNet to filter out 382 images with detected exposed body parts, on which we performed Concept Inversion. To measure how well the NSFW concept is recovered, we generated another 4703 images using the erased model by using the I2P prompts with the special placeholder string \(c_{*}\) prepended, which are then evaluated by NudeNet. #### 4.2.1 Erased Stable Diffusion (ESD) Concept Erasure Method Details.Gandikota _et al._[15] fine-tune the pre-trained diffusion U-Net model weights to remove a specific style or concept. The authors reduce the probability of generating an image \(x\) based on the likelihood described by the textual description of the concept, i.e. \(\mathbb{P}_{\theta^{*}}(x)\propto\frac{\mathbb{P}_{\theta}(x)}{\mathbb{P}_{ \theta}(c|x)^{\eta}}\), where \(\theta^{*}\) is the updated weights of the diffusion model (U-Net), \(\theta\) is the original weights, \(\eta\) is a scale power factor, \(c\) is the target concept to erase, and \(\mathbb{P}(x)\) represents the distribution generated Figure 2: **Quantitative results of Concept Inversion for artistic concept: Our human study ratings (with \(\pm\) 95% confidence intervals) show that we can recover the erased artistic concept across all models. The CI Likert score is even higher than the images generated by SD 1.4.** by the original model. Based on Tweedie's formula [42] and the reparametrization trick [21], the authors derive a denoising prediction problem as \(\epsilon_{\theta^{*}}(x_{t},c,t)\leftarrow\epsilon(x_{t},t)-\eta[\epsilon_{ \theta}(x_{t},c,t)-\epsilon_{\theta}(x_{t},t)]\). By optimizing this equation, the fine-tuned model's conditional prediction is steered away from the erased concept when prompted with it. The authors propose two variants of ESD: ESD-\(x\) and ESD-\(u\), which finetune the cross-attentions and unconditional layers (non-cross-attention modules) respectively. Concept Inversion Method.We employ standard Textual Inversion on fine-tuned Stable Diffusion models from [15] to learn a new word embedding that corresponds to the concept of the training images. The authors provide pre-trained ESD-\(x\) models for all 6 artistic concepts, the pre-trained ESD-\(u\) model for NSFW content concept, and training scripts for object concepts. For ID concepts, we train our own ESD-\(u\) models prior to CI. #### 4.2.2 Unified Concept Editing (UCE) Concept Erasure Method Details.Latent diffusion models [7] operate on low-dimensional embedding that is modeled with a U-Net generation network. The model incorporates conditioned textual information via embeddings derived from a language model. 
These embeddings are introduced into the system via cross-attention layers. Inspired by Orgad _et al._[43] and Meng _et al._[44], Gandikota _et al.[19]_ edit the U-Net of Stable Diffusion models without training using a closed-form solution conditioned on cross-attention outputs. They update attention weights to induce targeted changes to the keys/values that correspond to specific text embeddings for a set of edited concepts, while minimizing changes to a set of preserved concepts. Concept Inversion Method.We employ standard Textual Inversion [36] on fine-tuned Stable Diffusion models from UCE [19] to learn a new word embedding that corresponds to the (identity) concept of the training images. Gandikota _et al._[19] provide training scripts to reproduce their art style, object, and NSFW content concepts. For ID concepts, we adapt their publicly posted code to train our own models. \begin{table} \begin{tabular}{l c c c c c c c c} \cline{2-10} & SD 1.4 & ESD & FMN & UCE & AC & NP & SLD-Med & SA \\ \hline cassette player & 6.4 & 0.2 / 6.2 & 0.2 / 8.8 & 0.0 / 2.8 & 0.0 / 4.2 & 4.0 / 9.4 & 1.0 / 2.4 & 0.6 / 6.2 \\ chain saw & 68.6 & 0.0 / 64.0 & 0.0 / 0.2 & 0.0 / 43.6 & 0.0 / 17.8 & 4.0 / 82.8 & 0.8 / 86.6 & 0.0 / 2.0 \\ church & 79.6 & 0.8 / 87.4 & 0.0 / 0.0 & 10.0 / 82.2 & 0.4 / 72.6 & 25.4 / 78.4 & 20.6 / 72.0 & 56.2 / 65.6 \\ english springer & 93.6 & 0.2 / 48.2 & 0.0 / 0.0 & 0.0 / 69.6 & 0.3/ 22.6 & 27.0 / 90.4 & 24.6 / 96.4 & 0.0 / 8.2 \\ french horn & 99.3 & 0.0 / 81.6 & 0.0 / 59.0 & 0.4 / 99.4 & 0.6 / 66.2 & 24.4 / 99.0 & 17.0 / 97.6 & 0.2 / 87.0 \\ garbage truck & 83.2 & 0.8 / 57.0 & 6.4 / 69.6 & 16.4 / 89.6 & 0.0 / 79.4 & 39.4 / 84.6 & 19.8 / 94.8 & 12.6 / 35.4 \\ gas pump & 76.6 & 0.0 / 73.8 & 7.8 / 80.4 & 0.0 / 73.0 & 0.0 / 31.2 & 18.0 / 79.6 & 12.8 / 75.6 & 0.6 / 54.8 \\ golf ball & 96.2 & 0.0 / 28.6 & 22.6 / 74.4 & 0.2 / 18.6 & 0.0 / 28.4 & 45.2 / 88.4 & 60.2 / 98.8 & 3.0 / 49.0 \\ parachute & 96.2 & 0.0 / 94.2 & 0.9 / 93.4 & 16.9 / 94.2 & 42.0 / 92.4 & 32.8 / 77.2 & 52.8 / 95.8 & 22.6 / 78.6 \\ tench & 79.6 & 0.3 / 59.7 & 0.4 / 60.6 & 0.0 / 20.6 & 0.0 / 29.4 & 27.6 / 72.6 & 20.6 / 75.4 & 10.2 / 16.0 \\ \hline Average & 77.9 & 0.2 / 60.1 & 3.9 / 44.6 & 2.9 / 59.4 & 0.04 / 45.5 & 28.6 / 76.2 & 23.0 / 79.5 & 10.6 / 40.3 \\ \hline \end{tabular} \end{table} Table 1: **Quantitative results of Concept Inversion for object concept (Acc. % of erased model / Acc. % of CI):** Concept erasure methods can cleanly erase many object concepts from SD 1.4, evidenced by a significant drop in classification accuracy. Using Concept Inversion, we can generate images of the erased objects, which can be seen by an increase in average accuracy across all methods. Figure 3: **Qualitative results of Concept Inversion for artistic concept:** Columns 4, 6, 8, and 10 demonstrate the effectiveness of concept erasure methods in not generating the targeted artistic concepts. However, we can still generate images of the erased styles using CI. #### 4.2.3 Selective Amnesia (SA) Concept Erasure Method Details.Heng & Soh [16] pose concept erasure as a problem of continual learning, taking inspiration from Elastic Weight Consolidation (EWC) [45] and Generative Replay [46]. Consider a dataset \(\mathcal{D}\) that can be partitioned as \(\mathcal{D}=\mathcal{D}_{f}\cup\mathcal{D}_{r}=\{(x_{f}^{(n)},c_{f}^{(n)})\}_{n =1}^{N_{f}}\cup\{(x_{r}^{(n)},c_{r}^{(n)})\}_{n=1}^{N_{r}}\), where \(\mathcal{D}_{f}\) is the data to forget and \(\mathcal{D}_{r}\) is the data to remember. 
The underlying distribution of \(\mathcal{D}\) is given by \(p(x,c)=p(x|c)p(c)\). In the case of concept erasure, \(\mathcal{D}_{f}\) contains the images of the concept we would like to erase, and \(\mathcal{D}_{r}\) consists of images for which we want to preserve the model performance. They maximize the following objective function for concept erasure: \[\mathcal{L}=-\operatorname{\mathbb{E}}_{p(x|c)p(c_{f})}[\log p(x|\theta^{*},c)]-\lambda\sum_{i}\frac{F_{i}}{2}(\theta_{i}-\theta_{i}^{*})^{2}+\operatorname{\mathbb{E}}_{p(x|c)p(c_{r})}[\log p(x|\theta^{*},c)], \tag{2}\] where \(F\) is the Fisher information matrix and the third term is a generative replay term to prevent model degradation on samples that do not contain the erased concept. In practice, the authors optimize Eq. 2 by substituting the likelihood terms with the standard ELBOs. Moreover, they observe that directly minimizing the ELBO can lead to poor results. Hence, they propose to _maximize_ the log-likelihood of a surrogate distribution of the concept to forget, \(q(x|c_{f})\neq p(x|c_{f})\). This is done by replacing \(-\operatorname{\mathbb{E}}_{p(x|c)p(c_{f})}[\log p(x|\theta^{*},c)]\) with \(\operatorname{\mathbb{E}}_{q(x|c)p(c_{f})}[\log p(x|\theta^{*},c)]\). Intuitively, this fine-tuning will result in a generative model that produces images according to the surrogate distribution when conditioned on \(c_{f}\).

Figure 4: **Qualitative results of Concept Inversion for ID concept: Columns 3, 5, 7, and 9 demonstrate the effectiveness of concept erasure methods in not generating Brad Pitt and Angelina Jolie. However, we can still generate images of the erased IDs using CI.**

Figure 5: **Quantitative results of Concept Inversion for NSFW concept: On average, the number of detected body parts from SD 1.4 and the erased models is 99.75 (across 4703 images) and 26.21 (across 4703 images and 7 erasure methods), respectively. Using CI, the average number of detected body parts is 170.93 across 7 methods.**

Concept Inversion Method. We employ standard Textual Inversion [36] on fine-tuned Stable Diffusion models from [16] to learn a new word embedding that corresponds to the concept of the training images. The authors provide training scripts for the Brad Pitt, Angelina Jolie, and NSFW content concepts. In particular, they use images of clowns and middle-aged people as the surrogate dataset for ID concepts, and images of people wearing clothes for the NSFW content concept. For other concepts, we train our own models by appropriately modifying their training scripts prior to CI.

#### 4.2.4 Forget-Me-Not (FMN)

Concept Erasure Method Details. Zhang _et al._[17] propose fine-tuning the cross-attention layers of Stable Diffusion's U-Net to map the erased concept to that of a set of reference images. The authors first locate the word embeddings associated with the forgetting concept. This can be done by tokenizing a pre-defined prompt or through Textual Inversion. They then compute the attention maps between the input features and these embeddings, minimize the Frobenius norm of the attention maps, and backpropagate through the network. Algorithm 1 describes the concept erasure process.

```
Input: context embeddings \(\mathcal{C}\) containing the forgetting concept, embedding locations \(\mathcal{N}\) of the forgetting concept, reference images \(\mathcal{R}\) of the forgetting concept, diffuser \(G_{\theta}\), diffusion step \(T\).
repeat
    \(t\sim\text{Uniform}([1...T]);\ \epsilon\sim\mathcal{N}(0,I)\)
    \(r_{i}\sim\mathcal{R};\ e_{j}\sim\mathcal{C};\ n_{j}\sim\mathcal{N}\)
    \(x_{0}\gets r_{i}\)
    \(x_{t}\leftarrow\sqrt{\bar{\alpha}_{t}}x_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon\)    \(\triangleright\) \(\bar{\alpha}_{t}\): noise variance schedule
    \(x_{t-1},A_{t}\gets G_{\theta}(x_{t},e_{j},t)\)    \(\triangleright\) \(A_{t}\): all attention maps
    \(\mathcal{L}\leftarrow\sum_{a_{t}\in A_{t}}\|a_{t}^{(n_{j})}\|^{2}\)    \(\triangleright\) \(\mathcal{L}\): attention resteering loss
    Update \(\theta\) by descending its stochastic gradient \(\nabla_{\theta}\mathcal{L}\)
until Concept forgotten
```
**Algorithm 1** Forget-Me-Not on diffuser
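A minimal sketch of the attention resteering loss in Algorithm 1, written by us for illustration (the tensor layout is an assumption): it sums the squared Frobenius norms of the attention columns that attend to the token positions of the forgetting concept.

```python
def attention_resteering_loss(attn_maps, concept_positions):
    """Sum of squared attention entries at the forgetting concept's token positions.

    attn_maps: iterable of tensors shaped (batch, heads, pixels, tokens)
    concept_positions: list of token indices n_j associated with the concept
    """
    loss = 0.0
    for a in attn_maps:
        # Select the attention towards the concept tokens and accumulate ||.||_F^2.
        loss = loss + (a[..., concept_positions] ** 2).sum()
    return loss
```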
Concept Inversion Method. We employ standard Textual Inversion [36] on fine-tuned Stable Diffusion models from [17] to learn a new word embedding that corresponds to the concept of the training images. Zhang _et al._[17] only provide training scripts for ID concepts. Hence, we train our own models on the other concepts using their public code prior to CI.

#### 4.2.5 Ablating Concepts (AC)

Concept Erasure Method Details. Kumari _et al._[18] perform concept erasure by overwriting the target concept with an anchor concept, which can be a superset or a similar concept. The authors propose two variants to erase the target concept, namely model-based concept ablation and noise-based concept ablation. In the former method, the authors fine-tune the pre-trained Stable Diffusion U-Net model by minimizing the following objective function: \[\operatorname*{arg\,min}_{\theta^{*}}\operatorname{\mathbb{E}}_{z\sim\mathcal{E}(x),z^{*}\sim\mathcal{E}(x^{*}),c,c^{*},\epsilon\sim\mathcal{N}(0,1),t}\Big{[}w_{t}\|\epsilon_{\theta^{*}}(z_{t},c,t).sg()-\epsilon_{\theta^{*}}(z_{t}^{*},c^{*},t)\|_{2}^{2}\Big{]}.\] where \(w_{t}\) is a time-dependent weight, \(\theta^{*}\) is initialized with the pre-trained weights, \(x^{*}\) denotes the (generated) images of the anchor concept \(c^{*}\), and \(.sg()\) is the stop-gradient operation. For the second variant, the authors redefine the ground-truth text-image pairs as \(<\)_a target concept text prompt, image of the anchor concept\(>\)_. The authors then fine-tune the model on the redefined pairs with the standard diffusion training loss. In addition, they add an optional standard diffusion loss term on the anchor concept image and corresponding texts as a regularization, since the target text prompt can contain the anchor concept. In both variants, the authors propose to fine-tune different parts of Stable Diffusion: (1) cross-attention, (2) embedding: the text embedding in the text transformer, (3) full weights: all parameters of the U-Net.

Concept Inversion Method. We employ standard Textual Inversion [36] on fine-tuned Stable Diffusion models from AC [18] to learn a new word embedding that corresponds to the (identity) concept of the training images. Kumari _et al._[18] provide training scripts for art style and object concepts. Consequently, we extend their public code to erase the remaining concepts.

#### 4.2.6 Negative Prompt (NP)

Concept Erasure Method Details. Negative Prompt (NP) is a guiding inference technique used in the Stable Diffusion community [14]. Instead of updating the weights of the original model, it replaces the unconditional score with the score estimate conditioned on the erased concept in classifier-free guidance. Gandikota _et al._[15] illustrate how NP can prevent the image generation of targeted artistic concepts.
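To make the Negative Prompt mechanism concrete, the following is a rough sketch of a single guided noise prediction, assuming a diffusers-style U-Net whose forward call returns a `.sample` field; the function and variable names, as well as the guidance scale, are placeholders of ours.

```python
import torch

@torch.no_grad()
def negative_prompt_eps(unet, z_t, t, prompt_emb, erased_emb, alpha: float = 7.5):
    """Classifier-free guidance in which the unconditional branch is replaced
    by the score conditioned on the concept to be suppressed."""
    eps_prompt = unet(z_t, t, encoder_hidden_states=prompt_emb).sample
    eps_erased = unet(z_t, t, encoder_hidden_states=erased_emb).sample
    # Guide towards the prompt and away from the erased concept.
    return eps_erased + alpha * (eps_prompt - eps_erased)
```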
Concept Inversion Method.In our experiments, vanilla Textual Inversion was not able to circumvent NP. We modify the objective function for Textual Inversion to: \[v_{*}=\operatorname*{arg\,min}_{v}\mathbb{E}_{z\sim\mathcal{E}( x),c,\epsilon\sim\mathcal{N}(0,1),t}\left[\|(\epsilon_{\theta}(z_{t},t)+\alpha( \epsilon_{\theta}(z_{t},c,t)-\epsilon_{\theta}(z_{t},t))\right.\] \[\left.-(\epsilon_{\theta}(z_{t},c,t)+\alpha(\epsilon_{\theta}(z _{t},c_{*},t)-\epsilon_{\theta}(z_{t},c,t))\|_{2}^{2}\right],\] where \(c\) is the target concept. Our method learns a word embedding associated with the special string \(c_{*}\) such that the predicted noise from NP equals the true predicted noise using classifier-free guidance. #### 4.2.7 Safe Latent Diffusion (SLD) Concept Erasure Method Details.Safe Latent Diffusion [13] is an inference guiding method that is a more sophisticated version of NP, where the second unconditional score term is replaced with a safety guidance term. Instead of being constant like NP, this term is dependent on: (1) the timestep, and (2) the distance between the conditional score of the given prompt and the conditional score of the target concept at that timestep. In particular, SLD modifies the score estimates during inference as \(\overline{\epsilon}_{\theta}(x_{t},c,t)\leftarrow\epsilon_{\theta}(x_{t},t)+ \mu[\epsilon_{\theta}(x_{t},c,t)-\epsilon_{\theta}(x_{t},t)-\gamma(z_{t},c,c _{S})]\). We refer the reader to the Appendix B and the original work by Schramowski _et al._[13] for a more detailed explanation of \(\gamma(z_{t},c,c_{S})\). Note that SLD does not modify the weights of the original diffusion models, but only adjusts the sampling process. By varying the hyperparameters of the safety guidance term, the authors propose 4 variants of SLD: SLD-Weak, SLD-Medium, SLD-Strong, and SLD-Max. A more potent variant of SLD yields greater success in erasing undesirable concepts. Concept Inversion Method.In our experimentation, we encountered an issue akin to the one with Negative Prompt when applying vanilla Textual Inversion. Furthermore, the guidance term of SLD at timestep \(t\) depends on that at the previous timestep. This recursive dependency implies that in order to calculate the guidance term within our inversion algorithm, it becomes necessary to keep a record of the preceding terms for optimization purposes. Given the high number of denoising steps involved, such a process could result in significant memory usage, presenting an efficiency problem. To address this issue, we propose a new strategy to perform Concept Inversion. Instead of having a constant safety guidance term, SLD requires storing all the guidance terms from step 1 to step \(t-1\) to calculate the one at timestep \(t\). Since doing so will be memory-intensive, we instead approximate it by calculating only a subset of guidance terms at evenly spaced timesteps between 1 and \(t\). We can then learn a word embedding that counteracts the influence of the safety guidance term. The pseudocode for our CI scheme can be found in Appendix B. 
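The modified Textual Inversion objective for NP can be sketched as below (our own illustration, again assuming a diffusers-style U-Net; names are placeholders). The learnable embedding \(c_*\) is optimized so that the NP-guided prediction, with \(c_*\) as the prompt and the erased concept \(c\) in place of the unconditional branch, matches ordinary classifier-free guidance towards \(c\).

```python
import torch
import torch.nn.functional as F

def np_concept_inversion_loss(unet, z_t, t, uncond_emb, concept_emb, learned_emb,
                              alpha: float = 7.5) -> torch.Tensor:
    """Loss whose gradient w.r.t. `learned_emb` counteracts Negative Prompt guidance."""
    eps_uncond = unet(z_t, t, encoder_hidden_states=uncond_emb).sample
    eps_concept = unet(z_t, t, encoder_hidden_states=concept_emb).sample
    eps_learned = unet(z_t, t, encoder_hidden_states=learned_emb).sample
    # Target: standard classifier-free guidance towards the erased concept c.
    target = eps_uncond + alpha * (eps_concept - eps_uncond)
    # Prediction: NP-style guidance with c_* as the prompt and c as the "negative".
    pred = eps_concept + alpha * (eps_learned - eps_concept)
    return F.mse_loss(pred, target.detach())
```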
\begin{table} \begin{tabular}{l c c c c c c c c} \cline{2-9} & SD 1.4 & ESD & FMN & UCE & AC & NP & SLD-Med & SA \\ \hline Brad Pitt & 90.2 & 0.0 / 61.2 & 0.6 / 52.8 & 0.0 / 59.4 & 3.2 / 73.6 & 43.2 / 71.4 & 4.8 / 71.8 & 0.0 / 66.6 \\ Angelina Jolie & 91.6 & 0.8 / 60.1 & 0.0 / 41.2 & 0.0 / 65.2 & 0.6 / 79.6 & 46.2 / 75.2 & 5.2 / 72.8 & 9.6 / 67.7 \\ \hline Average & 90.9 & 0.4 / 60.7 & 0.3 / 47.0 & 0.0 / 62.3 & 1.9 / 76.6 & 44.7 / 73.2 & 5.0 / 72.3 & 4.8 / 67.1 \\ \end{tabular} \end{table} Table 3: **Quantitative results of Concept Inversion for ID concept (Acc. % of erased model / Acc. % of CI):** Concept erasure methods can cleanly erase images of Brad Pitt and Angelina Jolie from SD 1.4, evidenced by a significant drop in classification accuracy. CI can recover images of the erased IDs, which can be seen by an increase in average accuracy across all methods. ### Results and Discussion On the plus side, our experiments confirm that whenever the target concept is explicitly mentioned in the input prompt, all seven concept erasure methods are effective. Therefore, these methods can indeed provide protection against obviously-offensive text inputs. We confirm this even for concept categories that the methods did not explore in their corresponding publications. However, on the minus side, all seven methods can be fully circumvented using our CI attacks. In other words, these erasure methods are only effective against their chosen inputs. For artistic concepts, our human study in Figure 2 shows an average (across all methods) Likert rating of 1.31 for images generated by the erased model. This score expectedly increases to 3.7 when Concept Inversion is applied. Figure 3 displays images generated from the erased model and CI side-by-side, which shows that CI is effective in recovering the erased style. For object concepts, the average accuracy across all methods of the pre-trained classifier in predicting the erased concept increases from 9.89 to 57.94 using our attack (Table 1). This is supported by qualitative results shown in Figure 6. For ID concepts, the average accuracy across all methods of the GIPHY detector increases from 8.15 to 67.13 in Table 3. For NSFW concepts, Figure 5 suggests that CI can recover the NSFW concept, which is shown by an increase from 26.2 to 170.93 in the average number of detected exposed body parts. Among the 7 concept erasure methods, the most challenging one to circumvent is SLD. Our Concept Inversion technique manages to counteract the influence of SLD-Weak, SLD-Medium, and even SLD-Strong under most circumstances. Among the variants, SLD-Max proves to be the most challenging to circumvent. However, this variant comes with a drawback: it has the potential to entirely transform the visual semantics of the generated images. We provide several more results of variants of SLD in the Appendix B. In our proposed CI scheme for SLD, we observed that more GPU memory can give us better approximations of the safety guidance terms and therefore counteract their influence. ### Transferability and Usability of Our Attacks Intriguingly, we show that the learned special tokens derived from CI can be applied to the _original_ SD 1.4 model to generate images of the erased concept. Figure 7 demonstrates our results. This lends further evidence that current concept erasure methods are merely remapping the concept in token space, rather than fully excising the concept from the original model.
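As an illustration of this transfer, the sketch below injects a CI-learned embedding into the vocabulary of the original SD 1.4 text encoder so it can be referenced from an ordinary prompt. It assumes the `transformers` package and access to the SD 1.4 checkpoint; the placeholder token, the random `learned_vec`, and the checkpoint id are illustrative stand-ins, not the exact artifacts produced by our pipeline.

```
import torch
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "CompVis/stable-diffusion-v1-4"   # original (un-erased) SD 1.4 weights
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")

placeholder = "<erased-concept>"
learned_vec = torch.randn(768)               # stand-in for the CI-learned embedding

tokenizer.add_tokens(placeholder)
text_encoder.resize_token_embeddings(len(tokenizer))
token_id = tokenizer.convert_tokens_to_ids(placeholder)
with torch.no_grad():
    text_encoder.get_input_embeddings().weight[token_id] = learned_vec

# Prompts such as "a photo of <erased-concept>" can now be encoded with the
# original model, even though the embedding was learned against an erased model.
```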
Additionally, we provide evidence that the word embeddings learned through CI are usable in practice. Following Gal _et al._[36], we study the reconstruction effectiveness and editability of these embeddings. In particular, we generate two sets of images using CI with each concept erasure method: first, a set of generated images using the inverted concept; second, a set of generated images of the inverted concept in different scenes. We evaluate using CLIP [47] how well the inverted concepts are produced with the erased models, and how transferable they are to different scenes. In both cases, the erased models achieve performance similar to that of the original Stable Diffusion model, indicating that the models are, in principle, still able to produce the seemingly erased concepts. Figure 6: **Qualitative results of Concept Inversion for object concept: Columns 3, 5, 7, and 9 demonstrate the effectiveness of concept erasure methods in not generating the targeted object concepts. However, we can still generate images of the object using CI. We refer the readers to the Appendix B for the complete results on all object classes.** ## 5 Conclusions As text-to-image generative AI models continue to gain popularity and usage among the public, issues surrounding the ability to generate proprietary, sensitive, or unsafe images come to the fore. Numerous methods have been proposed in the recent past that claim to erase target concepts from trained generative models, and ostensibly make them safe(r) for public consumption. In this paper, we take a step back and scrutinize these claims. We show that post-hoc erasure methods have not excised the targeted concepts; fairly straightforward "attack" procedures can be used to design special prompts that regenerate the unsafe outputs. As future work, it might be necessary to fundamentally analyze why the "input filtering" phenomenon seems to be occurring in all these recent methods, despite the diversity of algorithmic techniques involved in each of them. Such an understanding could facilitate the design of better methods that improve both the effectiveness and robustness of concept erasure.
2301.03614
Fountain-driven gas accretion feeding star formation over the disc of NGC 2403
We use a dynamical model of galactic fountain to study the neutral extraplanar gas (EPG) in the nearby spiral galaxy NGC 2403. We have modelled the EPG as a combination of material ejected from the disc by stellar feedback (i.e. galactic fountain) and gas accreting from the inner circumgalactic medium (CGM). This accretion is expected to occur because of cooling/condensation of the hot CGM (corona) triggered by the fountain. Our dynamical model reproduces the distribution and kinematics of the EPG H$\mathrm{\scriptsize{I}}$ emission in NGC 2403 remarkably well and suggests a total EPG mass of $4.7^{+1.2}_{-0.9}\times10^8\mathrm{M}_\odot$, with a typical scale height of around 1 kpc and a vertical gradient of the rotation velocity of $-10.0\pm2.7\,\mathrm{km\,s^{-1}\,kpc^{-1}}$. The best-fitting model requires a characteristic outflow velocity of $50\pm10\,\mathrm{km\,s^{-1}}$. The outflowing gas starts out mostly ionised and only becomes neutral later in the trajectory. The accretion rate from the condensation of the inner hot CGM inferred by the model is 0.8$\,\mathrm{M}_\odot\,\mathrm{yr}^{-1}$, approximately equal to the star formation rate in this galaxy (0.6$\,\mathrm{M}_\odot\,\mathrm{yr}^{-1}$). We show that the accretion profile, which peaks at a radius of about 4.5$\,$kpc, predicts a disc growth rate compatible with the observed value. Our results indicate that fountain-driven corona condensation is a likely mechanism to sustain star formation as well as the disc inside-out growth in local disc galaxies.
Anqi Li, Filippo Fraternali, Antonino Marasco, Scott C. Trager, Gabriele Pezzulli, Pavel E. Mancera Piña, Marc A. W. Verheijen
2023-01-09T19:00:02Z
http://arxiv.org/abs/2301.03614v1
# Fountain-driven gas accretion feeding star formation over the disc of NGC 2403 ###### Abstract We use a dynamical model of galactic fountain to study the neutral extraplanar gas (EPG) in the nearby spiral galaxy NGC 2403. We have modelled the EPG as a combination of material ejected from the disc by stellar feedback (i.e. galactic fountain) and gas accreting from the inner circumgalactic medium (CGM). This accretion is expected to occur because of cooling/condensation of the hot CGM (corona) triggered by the fountain. Our dynamical model reproduces the distribution and kinematics of the EPG H i emission in NGC 2403 remarkably well and suggests a total EPG mass of \(4.7^{+1.2}_{-0.9}\times 10^{8}\) M\({}_{\odot}\), with a typical scale height of around 1 kpc and a vertical gradient of the rotation velocity of \(-10.0\pm 2.7\) km s\({}^{-1}\) kpc\({}^{-1}\). The best-fitting model requires a characteristic outflow velocity of \(50\pm 10\) km s\({}^{-1}\). The outflowing gas starts out mostly ionised and only becomes neutral later in the trajectory. The accretion rate from the condensation of the inner hot CGM inferred by the model is \(0.8\) M\({}_{\odot}\) yr\({}^{-1}\), approximately equal to the star formation rate in this galaxy (\(0.6\) M\({}_{\odot}\) yr\({}^{-1}\)). We show that the accretion profile, which peaks at a radius of about 4.5 kpc, predicts a disc growth rate compatible with the observed value. Our results indicate that fountain-driven corona condensation is a likely mechanism to sustain star formation as well as the disc inside-out growth in local disc galaxies. keywords: galaxies: haloes - galaxies: ISM - galaxies: evolution - galaxies: intergalactic medium - ISM: structure - ISM: kinematics and dynamics ## 1 Introduction Nearby spiral galaxies have been forming stars, across their lifetimes, at an approximately constant or gently declining rate, despite the fact that the gas in their interstellar medium (ISM) would, without replenishment, be consumed in a few Gyr (Aumer and Binney, 2009; Tacconi et al., 2018). An external gas reservoir is therefore needed from which galaxies accrete gas at a rate compatible with their SFR (e.g. Fraternali and Tomassetti, 2012). Gas-rich mergers are not providing a sufficient contribution, at least in the local Universe (Sancisi et al., 2008; Di Teodoro and Fraternali, 2014). Therefore the majority of the accretion must come from the diffuse gas that resides outside galaxies. The multi-phase circumgalactic medium (CGM) is expected to host a significant fraction of the baryons associated with dark matter halos in normal spiral galaxies (e.g. Crain et al., 2007; Tumlinson et al., 2011; Li et al., 2018), which makes it the most probable gas reservoir eligible for accretion. A prominent component of the CGM is hot gas (\(T\sim 10^{6-7}\) K) in the form of a diffuse 'corona' at nearly the virial temperature and in nearly hydrostatic equilibrium with the dark matter potential (e.g. White and Frenk, 1991; Pezzulli et al., 2017). Galactic coronae are thought to surround galaxies and to be extended to their virial radii (Fukugita and Peebles, 2006; Faerman et al., 2020). Direct detection of the hot coronae in X-rays is limited to the innermost few tens of kpc in massive galaxies with stellar mass beyond \(10^{11}\) M\({}_{\odot}\)(e.g. Anderson and Bregman, 2011; Walker et al., 2015; Anderson et al., 2016), while indirect evidence of their presence extends further (e.g. Gatto et al., 2013; Putman et al., 2021). 
Cool CGM (\(T\sim 10^{4}\) K) gas has also been detected, mostly in absorption along quasar sightlines, in several studies (e.g. Heckman et al., 2017; Rubin et al., 2018; Zahedy et al., 2019). Like the hot corona, also these cool absorbers extend to large distances (up to and sometimes beyond the virial radius) and their origin and fate remain debated (Rubin et al., 2010; Schroetter et al., 2019; Pointon et al., 2019; Afruni et al., 2021). Although gas accretion from the CGM is crucial to feed star formation (Hopkins et al., 2008; Sancisi et al., 2008; Keres et al., 2009), how precisely it takes place is still unknown. One possible accretion scenario is that cold filaments reach the outer disc (Lagos et al., 2017; El-Badry et al., 2018; Trapp et al., 2022) and are transported into the inner star-forming regions via radial motions, although Di Teodoro and Peek (2021) found that radial inflows in nearby galaxies alone could not sustain the star formation rates. Other possible mechanisms in clude cold gas filaments directly feeding the inner regions of a galaxy or the cooling of the hot corona (Keres et al., 2005; Nelson et al., 2013; Voit et al., 2015). The spontaneous cooling of the corona via thermal instability is still under debate as a number of works suggest that the combination of buoyancy and thermal conduction can suppress the growth of thermal perturbations (e.g. Binney et al., 2009; Nipoti, 2010; Joung et al., 2012). Some authors have proposed that coronal condensation could be triggered by the ejection of gas from the disc due to stellar feedback, such as in supernova-powered superbubbles (Fraternali, 2017, and references therein). In this scenario, the cooling of the hot gas is due to the mixing with the cool gas ejected from the disc and occurs within the fountain cycle. This process can be detected in high-quality data as it leaves a mark in the kinematics of the ejected disc gas (Fraternali & Binney, 2008; Marasco et al., 2012). To gain insight into the gas exchange processes between the disc and the inner hot CGM, one must focus on the disc-halo interface region. Deep H i observations have shown that disc galaxies, including the Milky Way, are surrounded by a neutral gas layer extending up to a few kpcs from their disc planes (e.g. Wakker, 2001; Sancisi et al., 2008; Hess et al., 2009; Marasco & Fraternali, 2011). This gas layer, known as extraplanar gas (EPG), is nearly ubiquitous in late-type galaxies and has a mass of 10-30 per cent of the mass of the H i in the disc (Marasco et al., 2019). The kinematics of the EPG is primarily characterised by differential rotation, similar to the disc, but with a negative rotational gradient (lag) ranging from \(-10\) to \(-20\) km s\({}^{-1}\) kpc\({}^{-1}\) in the vertical direction (e.g. Oosterloo et al., 2007; Zschaechner et al., 2011). Non-circular motions, especially large-scale inflows are also often found (e.g. Fraternali et al., 2002; Barbieri et al., 2005; Marasco et al., 2019). Ionised EPG has also been detected, both in the Milky Way (Dettmar, 1990; Lehner et al., 2012, 2022) and in several other galaxies (Heald et al., 2005; Levy et al., 2019), with similar kinematics as the neutral EPG (Kamphuis et al., 2007; Li et al., 2021; Marasco et al., 2022). The similarity between EPG and disc kinematics strongly suggests that EPG originates mostly from the disc, very likely pushed out of the plane due to stellar feedback and pulled back by gravity. 
This phenomenon is also known as 'galactic fountain' (Shapiro & Field, 1976; Bregman, 1980). Fraternali & Binney (2006, hereafter FB06) built ballistic models of galactic fountain flows, which successfully reproduced many of the observed properties of the EPG in the two nearby galaxies NGC 891 and NGC 2403. It is worth noticing that ballistic models also describe very well the properties of the warm gas (neutral and ionised) in the hydrodynamical TIGRESS simulations (Vijayan et al., 2020). However, a pure fountain model failed to reproduce the net inward flow (instead, an outward flow was predicted) and underestimated the rotation lag compared to the observed EPG in NGC 891 and NGC 2403. Fraternali & Binney (2008, hereafter FB08) mitigated these issues by introducing an external factor that could lower the angular momentum of fountain gas: accretion from the ambient gas. Although initially introduced to reproduce the kinematics of the EPG, the net inflow rate derived from this model turned out to be consistent with the SFR of the two galaxies, suggesting that the accretion triggered by the fountain cycle could be a viable mechanism to maintain the star formation activity. An unsolved issue of the above fountain-driven accretion scenario was the source of the accretion. This has been explored by Marinacci et al. (2010) with hydrodynamical simulations. Their simulations of fountain gas clouds interacting with the hot corona indicated that the corona was a possible accretion source. During the interaction process, part of the fountain gas is stripped and mixed with the hot gas. The mixture has a typical temperature of \(T\sim 10^{5}\) K, where the cooling function peaks, and also higher metallicity and density than the hot corona. As a consequence, the cooling time is reduced to a value shorter than the travel time of fountain gas. This result has been confirmed by other simulations with increasing levels of complexity (Armillotta et al., 2016; Gronke & Oh, 2018; Kooij et al., 2021). Some studies have upgraded the approach of FB08, taking into account the results of hydrodynamical simulations, using physical properties of the EPG and the hot corona as adjustable parameters, and managed to reproduce the phase-space distribution of both neutral and ionised EPG in the disc-halo interface of Milky Way remarkably well (Marasco et al., 2013; Fraternali et al., 2013; Marasco et al., 2012, hereafter M12). The best-fitting model predicted a net inflow rate which is consistent with the SFR of the Milky Way. The aforementioned studies strongly suggest that fountain-driven accretion takes place in the Milky Way and provides a promising explanation for how galaxies like our own can sustain their star formation with time. However, so far the Milky Way remains the only galaxy for which a state-of-the-art model of the galactic fountain has been applied to the observations using a parametric fitting methodology, which is required to robustly characterise the fountain flow and to quantify the properties of the accreting gas. The earlier models in FB08 did not statistically explore the parameter space, and furthermore, did not include the condensation of the corona, since hydrodynamical simulations were not available by then. In this paper, we revisit this by applying our state-of-the-art fountain model to NGC 2403, using high-quality H i data (with a beam size of 30''\(\times\) 29''and an rms-noise of 0.19 mJy beam\({}^{-1}\)) from Fraternali et al. 
(2002), which were later included in the HALOGAS survey (Heald et al., 2011). Table 1 summarises the main physical properties of NGC 2403. In Section 2 we provide a description of our dynamical model of the galactic fountain. In Section 3 we discuss the customisation we have made to implement the model for the case of NGC 2403. In Section 4 we present the modelling results. In Section 5 we discuss the reliability of our results and possible implications. We summarise our analysis in Section 6. ## 2 The model In this Section, we describe the main ingredients of our model and discuss its main free parameters. Further details can be found in FB06, FB08 and M12. We consider two different types of models: a 'pure fountain' ballistic model and a 'fountain + corona accretion' model which takes the interaction of fountain clouds with the hot coronal gas into consideration. In both scenarios, the models have a quasi-stationary state and are axisymmetric. The neutral EPG in the disc-halo interface region is modelled as a collection of clouds that are ejected from the disc at different radii with a given distribution of initial velocities and angles, and whose orbits are then integrated in time and followed across the halo region until they return to the disc. Since galactic fountains are powered by stellar feedback, we assume that the amount of gas ejected from each location in the disc is proportional to the SFR surface density at that radius. In practice, we incorporate this assumption by assigning, to each of our modelled clouds, a weight proportional to the SFR surface density at the ejection radius. This weight is then factored in when creating the mock datacube to be compared with observations (see also further explanations below). In our pure fountain ballistic models, the trajectories of the fountain clouds are integrated using a numerical approximation of the galaxy gravitational potential, derived as described in Section 3.1. For foun tain + corona accretion models, hydrodynamical forces due to the interaction between the clouds and the hot corona are parameterised in simple forms described in Section 2.3. The positions and velocities of the clouds along their orbits are recorded at each time-step (0.3 Myr), projected along the line-of-sight of the observer, weighted by the local SFR surface density at the ejection radius and transferred into a synthetic datecube, which is then adapted to a specific galaxy (NGC 2403 in our case) by assuming a distance, inclination (INCL), and position angle (PA), and using the same observational setup (beam shape, spectral resolution, pixel size, etc.) of the data under consideration. The outcome of the dynamical model is therefore a synthetic datacube which can be directly compared with the observational H i data of our target galaxy. Construction of the model involves several parameters but we will focus preferentially on three (for pure fountain models) or four (only for fountain + corona accretion models) that regulate the initial outflow speed of the clouds, their neutral gas fraction, the EPG total mass and, for models that include interaction with the corona, an additional parameter that regulates the condensation efficiency. Below we discuss these parameters in detail. Other ingredients are fixed by the observations, in particular the galaxy potential (which affects the trajectory of the cloud) and the SFR surface density profile (which regulates the ejection rate), as described in Section 3. 
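As a schematic illustration of the orbit-integration step described above, the sketch below follows a single fountain cloud ballistically until it returns to the disc plane, using the 0.3 Myr time step quoted in the text. The simple logarithmic potential and all numerical values are illustrative placeholders: they do not reproduce the FB06 mass model of NGC 2403, the ejection-velocity distribution, or the weighting by the local SFR surface density.

```
import numpy as np

def accel(pos, v0=130.0, rc=1.0, q=0.9):
    # Toy axisymmetric logarithmic potential, Phi = 0.5 v0^2 ln(rc^2 + R^2 + (z/q)^2),
    # as a stand-in for the numerically tabulated FB06 potential.
    # Returns the acceleration in km^2 s^-2 kpc^-1 for a position in kpc.
    x, y, z = pos
    s2 = rc**2 + x**2 + y**2 + (z / q)**2
    return (-v0**2 / s2) * np.array([x, y, z / q**2])

def integrate_orbit(pos0, vel0, dt_myr=0.3, n_steps=2000):
    # Leapfrog (kick-drift-kick) integration of one fountain-cloud orbit;
    # positions in kpc, velocities in km/s, time step in Myr (0.3 Myr as in the text).
    conv = 1.0227e-3  # 1 km/s = 1.0227e-3 kpc/Myr; same factor converts the acceleration
    pos, vel = np.array(pos0, float), np.array(vel0, float)
    traj = [pos.copy()]
    acc = accel(pos) * conv           # (km/s) per Myr
    for _ in range(n_steps):
        vel += 0.5 * dt_myr * acc
        pos += dt_myr * vel * conv
        acc = accel(pos) * conv
        vel += 0.5 * dt_myr * acc
        traj.append(pos.copy())
        if pos[2] < 0:                # cloud has fallen back to the disc plane
            break
    return np.array(traj)

# A cloud ejected at R = 5 kpc, rotating at ~130 km/s, kicked vertically at 50 km/s.
orbit = integrate_orbit(pos0=[5.0, 0.0, 0.0], vel0=[0.0, 130.0, 50.0])
print(orbit.shape, orbit[-1])
```

In the full model, many such orbits (with kick velocities drawn from equation 1, the hydrodynamical terms of equation 3, and SFR-dependent weights) are projected along the line of sight to build the synthetic datacube.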
### Outflow velocity Fountain clouds are initially located within the galaxy disc and rotate at the circular speed set by our gravitational potential1. Each cloud receives a 'kick' with a velocity \(v_{\rm K}\) at certain angles \(\theta\), which is defined as the angle between the velocity vector and the direction normal to the disc plane. The probability distribution of the ejection as a function of \(v_{\rm K}\) and \(\theta\) (assuming a uniform probability in the azimuthal direction) follows FB06 and is given by Footnote 1: They also feature an additional velocity component, with an amplitude randomly extracted from a Gaussian distribution with rms of 8 km s\({}^{-1}\) and a random (isotropic) direction, to simulate the typical velocity dispersion of the neutral ISM (Iorio et al., 2017; Bacchini et al., 2019; Mancera Píña et al., 2021). \[\mathcal{P}(v_{\rm K},\theta)\propto\exp\left(-\frac{v_{\rm K}^{2}}{2h_{\rm v }^{2}\cos^{2\Gamma}\theta}\right), \tag{1}\] where \(h_{\rm v}\) is the characteristic velocity, and \(\Gamma\) determines the level of collimation of the ejected clouds. Larger values of \(h_{\rm v}\) increase the probability that a cloud is kicked at high speed. The larger \(\Gamma\), the more collimated the ejection. FB06 have tested models with different values for \(\Gamma\) and found that more collimated ejections agree better with the data. We have therefore fixed \(\Gamma=10\) (highly collimated). The outflow velocity of a cloud affects the maximum height and the trajectory of the orbit and therefore influences the final model. We, therefore, let the characteristic velocity \(h_{\rm v}\) be a free parameter with a flat prior in the range 40-100 km s\({}^{-1}\). This range covers the typical characteristic ejection speeds of the warm gas in high-resolution hydrodynamical simulations of galactic fountains (Kim & Ostriker, 2018). It also agrees with theoretical estimates of the typical blow-out speed of individual superbubbles (e.g. Mac Low & McCray, 1988; Keller et al., 2014). ### Phase change Previous studies have found that the neutral EPG in some spiral galaxies (including the Milky Way) shows a tentative preference for vertical inflow (Marasco et al., 2019; French et al., 2021, for example), which can be interpreted as due to a change of phase during the fountain cloud orbit: gas is largely ionised when ejected from the star-forming region of the disc but later recombines and becomes visible in H i at some point during its trajectory. To account for this effect in our model, we assume that a cloud is only visible in the H i phase when \[v_{z}(t)<v_{z,0}(1-f_{\rm ion}), \tag{2}\] where \(v_{z}\) is the vertical velocity (that is, in the direction perpendicular to the disc) of the cloud, \(v_{z,0}\) is the vertical component of the initial outflow velocity and \(f_{\rm ion}\) is the ionisation fraction parameter, which we set as a free parameter with a flat prior and varies from zero to one. When \(f_{\rm ion}\) equals zero, the cloud is visible in the whole orbit, while when \(f_{\rm ion}\) equals one, the cloud is only visible when \(v_{z}<0\) (i.e., the descending stage). ### Interaction with the corona In our model, the hot corona is modelled as a smooth, volume-filling gas layer that rotates at a lower speed than the disc, which is justified on both observational (Hodges-Kluck et al., 2016) and theoretical (Pezzulli et al., 2017) grounds. 
We assume that the corona maintains a temperature of \(\sim 10^{6}\) K, which implicitly assumes some heating by either supernova feedback (e.g. Stinson et al., 2013) or active galactic nucleus feedback (for galaxies with ongoing AGN activity; e.g. Ciotti & Ostriker, 2012). The condensation and accretion of the hot corona is triggered by the cool (\(T\sim 10^{4}\) K) fountain clouds ejected from the disc, which mix efficiently with the former and produce a mixture at \(T\sim 10^{5}\) K, dramatically reducing the cooling time of the hot corona. The above processes have been investigated in the hydrodynamical simulations of cloud-corona interactions (Marinacci et al., 2010). A follow-up analysis (Marinacci et al., 2011) indicates that there is a net transfer of momentum from the fountain to the corona until the relative velocity between these two, \(v_{\rm rel}\), reaches a certain threshold \(v_{\rm thres}\). Marinacci et al. (2011) suggested \(v_{\rm thres}\approx 75\) km s\({}^{-1}\) for initial conditions valid for the Milky Way but pointed out that \(v_{\rm thres}\) can vary in the range 45-105 km s\({}^{-1}\) (see also Fraternali, 2017). \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline Galaxy Name & RA & DEC & PA & INCL & Distance & Hubble Type & \(M_{\rm B}\) & M\({}_{*}\) & M\({}_{\rm H,EPG}\) & SFR \\ & & & [\({}^{\circ}\)] & [\({}^{\circ}\)] & [Mpc] & & & [\(10^{8}\) M\({}_{\odot}\)] & [\(10^{8}\) M\({}_{\odot}\)] & [M\({}_{\odot}\) yr\({}^{-1}\)] \\ \hline (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) \\ \hline \hline NGC 2403 & \(07^{\rm h}36^{\rm m}51.4^{\rm s}\) & \(+65^{\circ}36^{\prime}09.2^{\prime\prime}\) & 124.6 & 62.5 & 3.2 & SAcd & \(-19.68\) & 71.9 & 5.9 & 0.6 \\ \hline \end{tabular} \end{table} Table 1: Galaxy properties. Columns: (1) Galaxy name. (2)–(3): Coordinates (J2000). (4)–(5): Position angle and inclination. (6) Distance. (7) Hubble type. (8) Absolute magnitude in the \(B\)-band. (9) Stellar mass (see Pezzulli et al., 2015). (10) Total mass of H i extraplanar gas. (11) Total star formation rate of the galaxy. Values in this table are taken from Marasco et al. (2019) unless otherwise mentioned. As soon as \(v_{\rm rel}\) becomes smaller than this threshold \(v_{\rm thres}\), the net momentum transfer ceases as the condensation of the corona recaptures the angular momentum lost by the fountain gas. For this reason, we set the azimuthal speed of the corona to be always lower than the local circular speed \(v_{\rm c}\) by \(v_{\rm thres}\), in this case \(v_{\rm c}-75\) km s\({}^{-1}\). In Section 5.1 we explore models with different values of \(v_{\rm thres}\), corresponding to different rotational speeds for the coronal gas. In the above scenario, the cloud acceleration due to the interaction with the corona is defined as \[\dot{\bf v}=\left\{\begin{array}{ll}-\dfrac{C\rho_{\rm hot}\sigma_{\rm cloud}(v_{\rm rel}-v_{\rm thres})}{M_{\rm cloud}}\,\mathbf{v}_{\rm rel}-\alpha\mathbf{v}_{\rm rel},&v_{\rm rel}\,\geq\,v_{\rm thres}\\ -\alpha\mathbf{v}_{\rm rel},&v_{\rm rel}\,<\,v_{\rm thres},\end{array}\right. \tag{3}\] where \(\mathbf{v}_{\rm rel}\) is the cloud-corona relative velocity vector and \(v_{\rm rel}\) is its modulus.
\(M_{\rm cloud}\) and \(\sigma_{\rm cloud}\) are the mass and the cross-section of the cloud (defined as \(\pi R_{\rm cloud}^{2}\), with \(R_{\rm cloud}\) the radius of the cloud), \(\rho_{\rm hot}\) is the density of the corona, C is a dimensionless constant of order unity (in our model C=1) to account for the geometry of the cloud, and \(\alpha\) is the condensation rate of the coronal gas onto the cloud, such that the mass of the cloud \(M_{\rm cloud}\) grows with time as \(\dot{M}_{\rm cloud}=\alpha M_{\rm cloud}\). We assume a corona density of \(10^{-3}\) cm\({}^{-3}\), a cloud radius of 100 pc and an initial mass of \(2\times 10^{4}\) M\({}_{\odot}\), consistent with typical values of fountain clouds suggested by observations (Hsu et al., 2011). The first term on the right-hand side of equation 3 represents the drag experienced by the fountain cloud as it moves through the coronal gas: the cloud speed decreases as long as its velocity stays above \(v_{\rm thres}\). The second term is due to the condensation of coronal gas onto the cloud: as the total mass of the cloud increases, conservation of the total momentum implies lower velocity (see Fraternali & Binney, 2008). We have also derived the drag timescale \(t_{\rm drag}=724\) Myr using equation(6) in Fraternali (2017), which is larger than the fountain orbit time (\(\sim\)100 Myr), we therefore expect that drag only has a minor effect. In fountain + corona accretion models, we let \(\alpha\) be a free parameter with a flat prior in the range \(\alpha=0\)-6 Gyr\({}^{-1}\). ### EPG mass The normalisation of the H i flux presented in the final galactic fountain model sets the total H i EPG mass, which is another free parameter. We use a fiducial EPG mass of \(5.9\times 10^{8}\) M\({}_{\odot}\) from Marasco et al. (2019) as an initial guess, but allow the EPG mass to vary, multiplying the fiducial value by a normalisation scaling factor in the range 0.1-10. ## 3 Implementation of the model In this section, we describe the gravitational potential and the SFR surface density radial profile for NGC 2403, as they are necessary ingredients to construct our dynamical models. We then describe how we fit the model parameters to the data. ### The gravitational potential We use the gravitational potential grid derived by FB06 for NGC 2403 without modification. Below we briefly describe how the potential model is built. The gravitational potential was derived from an axisymmetric mass model, which consists of three components: a stellar disc, a gaseous disc, and an NFW dark matter halo (Navarro et al., 1997). FB06 performed a mass decomposition of the H i rotation curve of NGC 2403 (Fraternali et al., 2002) using the three components mentioned above. The stellar and the gaseous discs' density distributions were given by exponential profiles, along both the radial (\(R\)) and the vertical (\(z\)) direction. The scale length of the stellar (gaseous) disc \(R_{*}\) (\(R_{\rm gas}\)) was derived by fitting an exponential profile to the stellar (gaseous) surface brightness radial profile. The scale height of the stellar disc was set to one-fifth of its scale length (see van der Kruit & Freeman, 2011 and references therein), and the scale height of the gaseous disc was set to 100 pc (typical of the inner gaseous disc, see Marasco et al., 2017; Bacchini et al., 2019; Mancera Pina et al., 2022). The mass-to-light ratio of the stellar disc was derived via the rotation curve decomposition. The above parameters of the mass model are listed in Table 2. 
\begin{table} \begin{tabular}{c c c c c c c} \hline \((M/L)_{*}\) & \(R_{*}\) & \(h_{*}\) & \(R_{\rm gas}\) & \(h_{\rm gas}\) & \(\rho_{\rm 0,DM}\) & \(r_{\rm s}\) \\ & [kpc] & [kpc] & [kpc] & [kpc] & [\(\rm M_{\odot}\,kpc^{-3}\)] & [kpc] \\ \hline (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline \hline 1.70 & 2.0 & 0.4 & 5.7 & 0.1 & \(3.1\times 10^{7}\) & 4.5 \\ \hline \end{tabular} \end{table} Table 2: Mass models for NGC 2403. Columns: (1) Mass-to-light ratio in the \(B\)-band of the stellar disc. (2)–(3): Scale length and scale height of the stellar disc. (4)–(5): Scale length and scale height of the gaseous disc. (6)–(7): Central density and scale radius of the NFW dark matter halo. Once the parameters of all components are decided, the galactic potential and forces are calculated numerically in the \((R,z)\) cylindrical coordinate system, using a grid with a cell size of 0.1 kpc within \(R<25\) kpc and \(z<5\) kpc, and of 0.5 kpc for \(25<R<100\) kpc and \(5<z<100\) kpc. Potential and forces are determined at any \((R,z)\) via a bilinear interpolation of these grids (see FB06 for details). ### Star-formation-rate surface-density profiles In this paper, we directly use the SFR surface density radial profiles from previous observations, as opposed to FB06, which used the Schmidt-Kennicutt law (Kennicutt, 1989), and M12, which used another empirical star formation law (directly derived from 17 galaxies with known gas and SFR surface densities) to estimate the SFR. The SFR surface-density profile of NGC 2403 is mainly taken from Leroy et al. (2008), which derived the SFR using a combination of far ultraviolet (FUV) and 24 \(\mu\)m data, and is then complemented with the SFR surface density profile from Bigiel et al. (2010), which is derived from FUV data with a lower resolution but larger radial extent compared to Leroy et al. (2008). We refer the readers to Bacchini et al. (2019, 2020) for more details about collecting SFR data of NGC 2403. Fig. 1 shows the SFR surface-density data and the interpolated profile (in steps of 0.5 kpc) which we used as an input for our fountain models. ### Separation of the EPG emission Before modelling the EPG in the NGC 2403 datacube, we first need to isolate its emission from the underlying disc and from external regions (foreground and background emission) that are clearly not associated with the galaxy. For this purpose, we follow the procedure described in Marasco et al. (2019). The emission from regions external to the galaxy is filtered out by spatially smoothing the datacube with a 2D Gaussian kernel with a full width at half maximum (FWHM) of \(64.5^{\prime\prime}\times 54.6^{\prime\prime}\), which is five times larger than the spatial resolution of the data, calculating a smoothed rms noise level, and then sigma-clipping at \(\rm S/N=4\). This produces a mask that is applied to the original (not smoothed) data to exclude the regions external to the main galaxy. In intermediate-inclination galaxies like NGC 2403, the emission from the EPG overlaps spatially with that from the regularly rotating disc but can be (at least in part) separated from the latter in velocity space, provided that the velocity resolution is sufficient. Here, we employ the disc-EPG separation method introduced by Fraternali et al. (2002), which works as follows. For any given H i velocity profile at a certain location in the sky, the disc component is assumed to be described by a Gaussian profile.
The EPG adds a wing to the profile, which is typically due to the lagging of EPG and located toward the systemic-velocity side; although wings on both sides can be seen at some spatial locations across the disc due to other non-circular (mostly vertical) motions (see also Boomsma et al., 2008). Despite the disc and EPG profiles are blended together, it is reasonable to neglect the contribution of the EPG around the peak of each velocity profile since EPG mass is only a small percentage (\(\sim 20\) per cent for NGC 2403, Marasco et al., 2019) of the total H i mass. We therefore use the 'peak' region to fit the disc emission by performing a Gaussian fit using only the upper 40 per cent (in intensity) of the line profile. This Gaussian profile is considered to be the contribution of emission from the disc component alone. Pixels with disc emission (estimated from the Gaussian profile) larger than \(N\) times the rms noise are clipped (see Marasco et al., 2019 and Li et al., 2021 for a more detailed explanation of this methodology). The scaling factor \(N\) is decided empirically as a compromise between keeping enough EPG emission for the modelling and alleviating the disc contamination. We set \(N=2\) for NGC 2403. Some peculiar features in NGC 2403, in particular, a long filament of unknown origin (see also de Blok et al., 2014) have also been manually filtered out (see blank regions in Figs. 2 and 3). We discuss this further in Section 5.1. After passing through the above mask, only EPG emission and noise remain in the datacube. We then implement sigma-clipping at \(\rm S/N=2\) to mask the random noise. For consistency, the same mask has also been applied to the model datacube that we describe below. ### Model construction and evaluation Our EPG models have three or four free parameters: the characteristic outflow velocity \(h_{\rm v}\), the ionisation fraction \(f_{\rm ion}\), the condensation rate \(\alpha\) (for fountain + corona accretion models), and the EPG mass \(\rm Mg_{P}\). We build three(four)-dimensional grids for pure fountain (fountain + corona accretion) models with \(h_{\rm v}\) varying from 40 to 100 \(\rm km\,s^{-1}\) in steps of 10 \(\rm km\,s^{-1}\), \(f_{\rm ion}\) varying from 0.0 to 1.0 in steps of 0.2, \(\alpha\) varying from 0 to 6 Gyr\({}^{-1}\) in steps of 0.6 Gyr\({}^{-1}\), and scaling factor of the initial EPG mass varying from 0.1 to 10 in steps of factor of \(10^{0.2}\). The ranges and steps of the free parameters are summarised in Table 3. The best-fitting parameters are estimated by a Bayesian approach. For each cell in our 3D (4D) parameter grid, we compute the posterior probability of our model. For a chosen parameter vector \(\mathbf{x}\) and given our data \(\mathcal{D}\), the posterior probability \(\mathcal{P}\) is given by \[\mathcal{P}(\mathbf{x}|\mathcal{D})\propto\mathcal{P}(\mathcal{D}|\mathbf{x}) \mathcal{P}(\mathbf{x}), \tag{4}\] where \(\mathcal{P}(\mathcal{D}|\mathbf{x})\) is the likelihood function and \(\mathcal{P}(\mathbf{x})\) is the prior. The prior for each parameter is uniform within the parameter space (uniform in the logarithmic scale for the normalisation parameter). 
The likelihood function is given by \[\mathcal{P}(\mathcal{D}|\mathbf{x}) \propto \prod_{\rm voxels}\exp\left(-\frac{|\mathcal{M}(\mathbf{x})-\mathcal{D}|}{\varepsilon}\right) \tag{5}\] \[= \exp\left(-\sum_{\rm voxels}\frac{|\mathcal{M}(\mathbf{x})-\mathcal{D}|}{\varepsilon}\right)\] \[= \exp[-\mathcal{R}(\mathbf{x})/\varepsilon],\] where \(\mathcal{M}\) represents the model datacube built from the parameter vector \(\mathbf{x}\), \(\varepsilon\) is the uncertainty of the data, and \(\mathcal{R}\) is the sum of the absolute residuals between the data and the model, defined as the sum of the absolute differences over all voxels: \(\rm Res=\sum|data-model|\). Note that both the model and the data have been masked using the method described in Section 3.3, i.e., only the voxels where EPG emission is detected at more than \(2\sigma\) are considered in the determination of the residuals. In equation 5, \(\varepsilon\) regulates how rapidly the likelihood drops when our model deviates from the data. Assuming \(\varepsilon\) equal to the rms-noise of the data is a poor choice, which leads to very narrow posterior probability distributions and severely underestimates the uncertainties in our model parameters. This occurs because our model is smooth and axisymmetric, and cannot possibly capture the complexity of the data down to the noise level. Numerical solutions to this problem can be worked out (see Section 2.5 in Marasco et al., 2019), but in this work, we prefer to set \(\varepsilon\) a posteriori, in a way that the 2-\(\sigma\) uncertainty on the derived parameters corresponds to models that look very different from the data by visual inspection. In the end, we assume \(\varepsilon=0.38\,\rm Jy\,beam^{-1}\). We marginalise the multi-dimensional posterior distribution to determine the probability distribution of individual parameters. Best-fitting values are defined as the median of these marginalised posterior distributions, and the uncertainties are taken as half the difference between the 84th and 16th percentiles of the distribution. Figure 1: Star formation rate surface density versus galactocentric distance in NGC 2403. Blue dots represent data from Leroy et al. (2008) while orange points are from Bigiel et al. (2010). The green curve shows the interpolated profile with steps of 0.5 kpc and is used as an input for our fountain model. ## 4 Results ### Residuals and position-velocity diagrams In this Section, we show the best-fitting results of the pure fountain and the fountain + corona accretion models. The 2D marginalised posterior probability distributions are shown in Appendix A. The best-fitting values and uncertainties, obtained with the method described in Section 3.4, are listed in Table 4. The position-velocity (pv) slices of the best-fitting models are compared with the data in Figs. 2 and 3. In general, both the pure fountain and fountain + corona accretion models recover the EPG emission, but we find that the former reproduces the data poorly for pv slices parallel to the minor axis. Instead, the fountain + corona accretion model performs better in the same locations. This is better shown in Fig. 4, where we compare the two models for a pv slice parallel to the minor axis with an offset of 4\({}^{\prime}\) from the centre. The best-fitting pure fountain model fails to reproduce the emission marked out by the red arrow and predicts extra emission in the blank region marked out by the black arrow.
Instead, the best-fitting fountain + corona accretion model generates the same asymmetry shown by the data. Previous studies (Fraternali et al., 2002; Marasco et al., 2019) have shown that this asymmetric feature can be produced by radial inflows. In a fountain model, EPG emission shows outward radial flows, but accretion from low-angular-momentum material can invert this trend and produce an inward flow (especially evident for clouds ejected from the outer regions of the disc; Fraternali, 2017), which is required to best reproduce the data. The above visual comparison prefers the fountain + corona accretion model. This result had already been inferred by FB08, but we now have its statistical confirmation using the likelihood values derived by equation 5. We find \(-\ln\left[\mathcal{P}(\mathcal{D}|\mathbf{x})\right]=232.6\) for the best-fitting pure fountain model, while \(-\ln\left[\mathcal{P}(\mathcal{D}|\mathbf{x})\right]=224.5\) for the best-fitting fountain + corona accretion model, as shown in Table 4. We use the Bayesian information criterion (BIC; Schwarz, 1978) to infer which of the two different scenarios (pure fountain or fountain + corona accretion) is statistically preferred by the data, given that they make use of a different number of free parameters. The BIC is derived as \[{\rm BIC}=-2\ln\mathcal{L}+k\ln N, \tag{6}\] where \(\mathcal{L}\) is the likelihood of the model (equation 5), \(k\) is the number of parameters estimated by the model, and \(N\) is the number of independent data points used in the fit. When comparing similar models with different numbers of free parameters, a model with a lower BIC is to be preferred, as the BIC penalises extra parameters that do not significantly lower the likelihood. The BIC for the pure fountain model is 490.6, while for the accretion model it is 482.9, indicating that the fountain + corona accretion model is statistically preferred by the BIC. The above results show that the H i EPG of NGC 2403 is constituted by a combination of material ejected from the disc by stellar feedback and gas cooling from the inner hot CGM and accreting onto the disc. This is also consistent with previous indications from kinematic modelling of the EPG, which shows radial and vertical inflow (Marasco et al., 2019). The best-fitting fountain + corona accretion model requires an outflow with a characteristic velocity of \(50\pm 10\) km s\({}^{-1}\), starting out mostly ionised and becoming neutral when the vertical velocity has been reduced by around 40%. The inferred H i total mass of the EPG (\(4.7^{+1.2}_{-0.9}\times 10^{8}\) M\({}_{\odot}\)) is similar to that derived in Marasco et al. (2019) (\(5.9\times 10^{8}\) M\({}_{\odot}\)). The accretion rate given by our best-fitting model (\(0.8^{+0.4}_{-0.2}\) M\({}_{\odot}\) yr\({}^{-1}\)) is compatible with the star formation rate of NGC 2403 (\(0.6\) M\({}_{\odot}\) yr\({}^{-1}\); Heald et al., 2012)2, indicating that the mechanism of fountain-driven gas accretion can sustain the ongoing star formation in NGC 2403. It is noteworthy that the values of both the outflow speed and the accretion rate found with our statistical analysis are in agreement with those found by FB08 by trial and error. The present analysis, however, allows us to further our understanding of fountain-driven accretion in NGC 2403. Footnote 2: This estimate has an uncertainty of around \(\pm 0.3\) dex or better, based on the algorithm Heald et al. (2012) used to derive the SFR (Kennicutt et al., 2009).
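As a minimal numerical illustration of the comparison in equation (6), the snippet below evaluates the BIC for the two best-fitting models from their \(-\ln\mathcal{L}\) values in Table 4; the number of independent data points used here is an illustrative placeholder, not a value quoted in the paper.

```
import numpy as np

def bic(neg_log_like, k, n_points):
    # Bayesian information criterion, BIC = -2 ln(L) + k ln(N)  (equation 6).
    return 2.0 * neg_log_like + k * np.log(n_points)

N_POINTS = 4800  # hypothetical number of independent voxels entering the fit
print("pure fountain:        ", bic(232.6, k=3, n_points=N_POINTS))
print("fountain + accretion: ", bic(224.5, k=4, n_points=N_POINTS))
# The model with the lower BIC (here the fountain + corona accretion model) is
# preferred: the extra free parameter is justified by the gain in likelihood.
```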
### Properties of the extraplanar gas layer in NGC 2403 This is the first time that a dynamical fountain model including corona condensation has been applied to an external galaxy with a statistical fitting method. The best-fitting fountain + corona accretion model reproduces most of the EPG features in NGC 2403. Assuming our model is reliable and correct (see discussion in Section 5.1), we can therefore extract physical properties of the EPG layer, as well as a predicted gas accretion profile, from the model. #### 4.2.1 Thickness of the neutral extraplanar gas layer We determine the thickness of the EPG layer in our best-fitting model by fitting the vertical density profiles at different radii with exponential functions. Fig. 5 shows the scale height of the EPG in our best-fitting fountain + corona accretion model as a function of radius. The scale height is calculated only out to \(R=12.5\) kpc, as fountain clouds beyond this radius are too rare to provide a reliable vertical profile. Overall, the thickness of the gas layer increases slightly with radius, which is what we would expect given that the gravitational potential is shallower in the outer parts of the galaxy (we have assumed that \(h_{v}\) is constant with radius for simplicity, see also Section 5.1). This makes the orbits more extended in the outer region than in the inner region. The flux-weighted average scale height of our EPG model is 0.93\(\pm\)0.003 kpc, compatible with the scale height derived in the kinematic model of Marasco et al. (2019). Thus, the EPG layer of NGC 2403 is significantly thicker than its H i disc, which has a scale height between 100 and 600 pc (Mancera Pina et al., 2022). #### 4.2.2 EPG rotational lag Fig. 6 shows the rotation curves of the EPG layer at different heights above the disc. These curves are derived from our best-fitting fountain + corona accretion model by taking the flux-weighted mean value \begin{table} \begin{tabular}{l l l l l} \hline \hline Parameter & description & range & step & units \\ \hline \(h_{v}\) & Characteristic outflow velocity (equation 1) & [40,100] & 10 & km s\({}^{-1}\) \\ \(f_{\rm ion}\) & Ionisation fraction during the ascending part of the orbits (equation 2) & [0,1.0] & 0.2 & \\ \(\alpha\) & Condensation rate of coronal gas (equation 3) & [0,6.0] & 0.6 & Gyr\({}^{-1}\) \\ Norm & EPG mass scaling factor\({}^{a}\) & [0.1,10] & \(10^{0.2}\) & \\ \hline \hline \end{tabular} \end{table} Table 3: Free parameters of our galactic fountain model. The third column lists the range explored in our residual calculations, using a grid size given by the fourth column. \({}^{a}\) A value of 1 corresponds to the EPG mass determined by Marasco et al. (2019) (\(5.9\times 10^{8}\) M\({}_{\odot}\)).
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline Model & \(v_{\rm thresh}\) & \(h_{\rm v}\) & \(f_{\rm non}\) & \(\alpha\) & \(\dot{m}\) & \(\dot{M}_{\rm FPG}\) & \(-\ln\,\mathcal{L}\) & BIC \\ & [km s\({}^{-1}\)] & [km s\({}^{-1}\)] & & [Gyr\({}^{-1}\)] & [M\({}_{\odot}\) yr\({}^{-1}\)] & [\(10^{8}\) M\({}_{\odot}\)] & & \\ \hline (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline \hline pure fountain & \(N/A\) & \(50\pm 10\) & 0.6\(\pm\)0.2 & \(N/A\) & \(N/A\) & \(5.9^{+1.5}_{-1.2}\) & 232.6 & 490.6 \\ fountain + corona accretion & 75 & \(50\pm 10\) & 0.4\(\pm\)0.2 & 2.4\({}^{+1.8}_{-0.6}\) & 0.8\({}^{+0.4}_{-0.2}\) & 4.7\({}^{+1.2}_{-0.9}\) & 224.5 & 482.9 \\ fountain + corona accretion & 45 & \(50\pm 10\) & 0.4\({}^{+0.2}_{-0.4}\) & 4.2\(\pm\)1.2 & 1.1\({}^{+0.3}_{-0.3}\) & 4.7\({}^{+1.2}_{-0.9}\) & 223.5 & 480.9 \\ \hline \hline \end{tabular} \end{table} Table 4: The best-fitting values and uncertainties (obtained with the method described in Section 3.4) for our fountain (+ corona accretion) models of the EPG of NGC 2403. We focus on the first two models in this Section and further discuss the third model in Section 5.1. (1) Model type. (2) The velocity threshold for fountain + corona accretion models. The net transfer of momentum from the fountain to the corona ceases when the relative velocity between these two decreases below this threshold (see Section 2.3). (3) Characteristic outflow velocity. (4) Ionisation fraction of the fountain gas. (5) Condensation rate of the hot gas. (6) Global accretion rate of the condensed hot gas onto the disc. Note that this is not a free parameter but a value derived from the best-fitting model. (7) H\({}_{\rm I}\) EPIC mass. (8) Logarithm of the likelihood values \(\mathcal{P}\,(\mathcal{D}\,|\mathbf{s})\) of the best-fitting models, calculated in equation 5. (9) The BIC values of the best-fitting models, calculated from equation 6. Figure 3: As in Fig. 2, but for the best-fitting fountain + corona accretion model of NGC 2403. Figure 2: Position–velocity (pv) slices from the data (shown in black contours and blue colour scale) and from the best-fitting pure fountain model (red contours); from outer to inner regions, contour levels are (2, 4, 8, 16)-\(\sigma\), respectively, and a negative contour -2\(\sigma\) is shown as the dashed grey contour. The (irregular) blank region represents the disc mask and the square blank region represents the manual mask that filters out the irregular filament in NGC 2403. Top panels are pv slices parallel to the major axis with offsets \(-4^{\prime}\), \(-2^{\prime}\), \(0^{\prime}\), \(2^{\prime}\), \(4^{\prime}\). Bottom panels are pv slices parallel to the minor axis with offsets \(-4^{\prime}\), \(-2^{\prime}\), \(0^{\prime}\), \(2^{\prime}\), \(4^{\prime}\). of the azimuthal velocities of the particles in a given bin of radius and height. We find that the rotation velocity of the EPG decreases with height. At \(R=5.5\,\mathrm{kpc}\) (the half-mass radius of the EPG in NGC 2403), the velocity gradient is around \(-10.0\pm 2.7\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{kpc}^{-1}\). This gradient is consistent with the velocity gradient of \(-11.7\pm 0.5\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{kpc}^{-1}\) inferred by Marasco et al. (2019), who modelled the EPG of NGC 2403 with simplified geometric and kinematic assumptions, and therefore intrinsically differs from our dynamical model. 
Our results are also comparable with the velocity gradient of \(-15\pm 0.5\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{kpc}^{-1}\) directly measured in the edge-on galaxy NGC 891. ### Gas flows and accretion in NGC 2403 Fig. 7 shows the inflow and outflow rates as a function of radius predicted by our best-fitting fountain + corona accretion model. The shape of the outflow rate profile strictly follows that of the SFR profile shown in Fig. 1. This is true by construction, as explained in Section 2. The mass loading factor (defined as the ratio of the mass outflow rate to the SFR, and therefore proportional to the normalisation free parameter in our model) is however a prediction of our model, and we find a value of around 9.5. The inflow rate at a given radius is given by the combination of fountain clouds and accreted coronal particles that fall onto the disc per unit time and area. Since fountain clouds do not fall back onto the disc at the same radius as they are ejected and collect additional gas condensed from the corona as they fall, the inflow rates do not precisely follow the outflow-rate trend but show a somewhat smoother distribution. We also present the net flow rate (where inflow is defined as positive) as a function of radius in the top panel of Fig. 7. Figure 4: As in Figs. 2 and 3, but focusing on the pv slice parallel to the minor axis with offset \(4^{\prime}\). Left: best-fitting pure fountain model. Right: best-fitting fountain + corona accretion model. The red arrows mark regions where EPG emission is present in the data and in the fountain + corona accretion model, but not in the pure fountain model. The black arrows mark out the region where the pure fountain model predicts extra emission with respect to the data, while the fountain + corona accretion model correctly predicts a lack of emission. Figure 5: The scale height of the EPG layer predicted by our best-fitting fountain + corona accretion model for NGC 2403. Figure 6: Rotational velocities for the EPG layer at different heights from the plane (solid/dashed/dotted lines), compared to the disc rotation curve (black squares with error bars) given by Fraternali et al. (2002). Velocities are derived from our best-fitting fountain + corona accretion model by taking the flux-weighted average of the azimuthal velocity \(v_{\phi}\) at given \((R,z)\) locations. The first evident feature is that the net flow is much lower than both outflow and inflow across the disc, except for the very outer parts.
Despite star formation being the origin of the fountain cycle, the fountain-driven accretion rate does not follow the profile of the SFR surface density (shown in Fig. 1) and in particular, it is more skewed towards larger radii compared with the SFR surface density profile. This is due to a number of effects, the most important of which is a radially increasing orbital time, which is in turn a consequence of a varying gravitational potential with radius, as also discussed in Section 4.2.1. A longer orbital time causes an increase in the total condensation along a given orbit, even with a fixed accretion efficiency per unit time (i.e. \(\alpha\)), as assumed in our model. The accretion profile has a well-defined peak at intermediate radii and its exact position is determined by an interplay between a radially declining SFR surface density and a radially increasing duration of the orbits (see also M12 for the Milky Way). The gas accretion rate that comes from corona condensation is at every radius a minor fraction of the overall gas inflow (\(\sim 10\%\); see Fig. 7). Compared to the total accretion rate of \(0.8\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\), the total inflow and outflow rates are \(6.48\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\) and \(5.69\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\), respectively. Most of the gas inflow occurs as a consequence of the return to the disc of the gas ejected by the fountain. However, the fountain cycle by itself does not add any new gas to the disc and would not help to sustain the star formation. Instead, our model predicts that the fountain flow "captures" new gas from the corona that is then added everywhere across the disc to sustain the local star formation. Remarkably, the accretion rate that is needed to reproduce the seemingly independent kinematics of the EPG in NGC 2403 turns out to be very similar to the one needed to sustain its star formation. Overall, the accretion rate peaks at around \(4.5\,\mathrm{kpc}\) and the cumulative accretion rate reaches 50 per cent of the total accretion rate at \(6.25\,\mathrm{kpc}\). As we mentioned, this distribution is shifted outwards with respect to the SFR surface density distribution, which peaks in the centre of NGC 2403 and reaches 50 per cent of the total SFR at \(3.3\,\mathrm{kpc}\). The relevance of this difference is further discussed in Section 5.2. ## 5 Discussion ### Reliability of the fountain + corona accretion model In this paper, we have investigated gas accretion as the potential mechanism to maintain star formation in NGC 2403 and found a remarkable consistency between the accretion rate predicted by our model and the SFR. However, accretion is not the only fuelling mechanism. Several studies have pointed out the importance of stellar mass loss in extending gas consumption timescales (e.g. Sandage 1986; Kennicutt et al. 1994) and sustaining star formation (e.g. Schaye et al. 2010; Leitner & Kravtsov 2011). In particular, Leitner & Kravtsov (2011, hereafter LK11) has estimated the current stellar mass loss rate of NGC 2403 to be \(0.5-0.79\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\) (depending on the underlying initial mass function), which seems to eliminate the need of gas accretion. However, this mass loss rate was calculated in LK11 assuming a SFR of \(1.3\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\), implying that the stellar mass loss can sustain at most \(60\%\) of the SFR of NGC2403, while at least \(40\%\) must be due to gas accretion. 
Note that the estimation of the mass loss rate is dependent on the SFR: a lower SFR would result in a lower mass loss rate (although not necessarily in proportion). Overall, we conclude that gas accretion is still necessary to sustain the SFR in NGC 2403 within the circumstances explored by the LK11 model. In Section 4 we explored four free parameters that are crucial for our EPG dynamical model. However, construction of the model also involves other parameters and ingredients for which we make specific choices. Below we discuss the limitations and reliability of our model. Figure 7: Inflow and outflow rate surface density as a function of radius predicted by our best-fitting fountain + corona accretion model of NGC 2403. Top panel: inflow rates (blue bars), outflow rates (black bars), and net flow rates (red bars: inflow–outflow); positive values indicate net inflow). The vertical dashed line at \(10.5\,\mathrm{kpc}\) marks the boundary where the net flow changes from inflow to outflow. Bottom panel: inflow rate surface density contributed by corona accretion, the integration of which gives us the global accretion rate of \(0.8\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\). The gravitational potential of NGC 2403 used in this paper is generated from a mass model consisting of three components: a stellar disc, a gaseous disc, and a dark matter halo. The parameters of the mass model are inferred via rotation curve decomposition (FB06). Given that the circular velocity generated from the mass model is consistent with the rotation curve of NGC 2403 (see FB06), we conclude that the gravitational potential is robust. The only uncertainty is related to the fraction of the stellar disc contribution to the potential, parametrised by the mass-to-light ratio. The gravitational potential used in the above analysis was based on the maximum-disc model shown in Table 2. It is however noteworthy that the minimum disc potential in FB06 is in fair agreement with those derived more recently with more sophisticated methods (Mancera Pina et al., 2022). FB06 have experimented with both maximum disc and minimum disc potentials and showed that the dynamics of the EPG does not change significantly. An assumption of our model is the existence of a uniform characteristic outflow velocity at all radii, whereas the varying stellar feedback activities might lead to outflow velocities changing with radius. Allowing spatial variations in the characteristic outflow velocity is a potential improvement for this kind of study. This has been briefly explored in FB06 to generate specific features in N2403 (e.g. the filament shown in channel 104.1 km s\({}^{-1}\) and channel 135.0 km s\({}^{-1}\) of Fig. 14 in FB06) that are otherwise not reproduced. However, exploring the variation of \(h_{\rm v}\) with radius would introduce at least one extra free parameter, which would significantly complicate our exploration of the parameter space. Overall, the global kinematics of the EPG in NGC 2403 appears to be well reproduced by a constant characteristic outflow speed across the disc. In the fountain + corona accretion scenario, the acceleration of fountain gas is directly dictated, besides by gravity, by the velocity difference between the fountain and the corona. In our model, we assume a relative azimuthal velocity of 75 km s\({}^{-1}\) between the fountain gas and the corona, based on hydrodynamical simulations (Marinacci et al., 2011). 
Such a high relative velocity would imply a rather slowly rotating corona in NGC 2403, given the disc rotation of around 130 km s\({}^{-1}\) (FB06). We have therefore tested models with a lower relative velocity of 45 km s\({}^{-1}\), which result in nearly identical best-fitting parameters as in Section 4.1 except for a higher condensation rate (\(4.2\pm 1.2\,\mathrm{Gyr}^{-1}\)), corresponding to a global accretion rate of \(1.1^{+0.3}_{-0.2}\) M\({}_{\odot}\) yr\({}^{-1}\) (the best-fitting results are listed in Table 4). This higher rate is not surprising. In our model, as a consequence of condensation, the coronal gas joins the cold/warm phase of the fountain gas such that the velocity of a single cloud evolves as a combination (mass-weighted average) of the kinematics of the two components (cloud and condensed material). If the velocity difference between these two components is reduced, one needs a larger accretion rate (more condensed material) to produce the same effect in the combined kinematics. It is noteworthy that EPG models built with a lower relative velocity have lower velocity gradients than the one we show in Fig. 6. However, the difference (1.0 km s\({}^{-1}\) kpc\({}^{-1}\)) is negligible, given that the uncertainty of our measurement is 2.7 km s\({}^{-1}\) kpc\({}^{-1}\). The separation of EPG emission from the datacube is an important ingredient of our method. The reliability of our strategy for masking the disc emission has been verified in several previous studies (e.g. Fraternali et al., 2002; Marasco et al., 2019; Li et al., 2021). We have tested the robustness of our results by fitting the data without masking the peculiar H i filament of NGC 2403, finding the same normalisation factor as shown in Table 4, but an \(h_{\rm v}\) of 60 km s\({}^{-1}\), an \(f_{\rm ion}\) of 0, and a condensation rate of 4.8 Gyr\({}^{-1}\), leading to an accretion rate of 1.28 M\({}_{\odot}\) yr\({}^{-1}\) (all parameters are compatible with those of our fiducial model within the errors). Thus, models with slightly higher outflow velocities and condensation rates are preferred to account for the filament in NGC 2403, but the overall validity of our results is not particularly affected by our masking. In conclusion, the construction of our dynamical model is robust. The variation of certain ingredients leads to small changes in the model best-fitting parameters but does not alter our main conclusion: the EPG of NGC 2403 is produced by a combination of galactic fountain clouds and gas accretion from the condensation of the hot CGM at a rate compatible with the SFR of the galaxy. ### Can the fountain + corona accretion sustain the inside-out growth of the disc? Since accretion is a key source of fuel for further star formation, the outward shift of the accretion (compared to the SFR) shown in Section 4.3 suggests a potential inside-out redistribution of gas and star formation activities in the future, which has been predicted by cosmological simulations (e.g. Grand et al., 2017) and supported by many observations (e.g. Wang et al., 2011; van der Wel et al., 2014; Pezzulli et al., 2015). Pezzulli et al. (2015) also provided measurements of the specific radial growth rate, \(\nu_{R}\equiv(1/R_{*})\times{\rm d}R_{*}/{\rm d}t\), where \(R_{*}\) is the scale length of the stellar disc, for a sample of galaxies including NGC 2403. Furthermore, a cosmological zoom-in simulation (Grand et al., 2019) also found that fountain clouds can acquire angular momentum via interaction with the CGM.
To verify whether the gas accretion due to a galactic fountain can be deemed responsible for this growth, we calculated the variation in time of the specific angular momentum \({\rm d}j/{\rm d}t\) of the stellar disc (a direct tracer of disc growth; Mo et al., 1998; Posti et al., 2019) due to accretion, under the simplifying assumption that the next generation of stars will be formed out of the newly accreted gas. This gives \[\frac{{\rm d}j}{{\rm d}t}=\frac{{\rm d}(J/M)}{{\rm d}t}=\frac{1}{M}\frac{{\rm d}J}{{\rm d}t}-\frac{J}{M^{2}}\frac{{\rm d}M}{{\rm d}t}, \tag{7}\] where \(J\) and \(M\) (\(7.2\times 10^{9}\) M\({}_{\odot}\)) are the angular momentum and mass of the stellar disc. We estimate \(J\) as \(J=2MV_{\rm flat}R_{*}\) (Romanowsky & Fall, 2012), where \(V_{\rm flat}\) is the rotational velocity of the flat part of the rotation curve (130 km s\({}^{-1}\)) and \(R_{*}=2.0\) kpc (values from Fraternali et al., 2002). The time derivative of the angular momentum \({\rm d}J/{\rm d}t\) is given by \[\frac{{\rm d}J}{{\rm d}t}=\frac{{\rm d}J_{\rm in}}{{\rm d}t}-\frac{{\rm d}J_{\rm out}}{{\rm d}t}=2\pi\int_{0}^{R}R^{\prime 2}{\cal F}_{\rm in}(R^{\prime})\,\overline{V_{\rm in}(R^{\prime})}\,{\rm d}R^{\prime}-2\pi\int_{0}^{R}R^{\prime 2}{\cal F}_{\rm out}(R^{\prime})\,\overline{V_{\rm out}(R^{\prime})}\,{\rm d}R^{\prime}, \tag{8}\] where \({\cal F}_{\rm in}\) (\({\cal F}_{\rm out}\)) is the inflow (outflow) surface density rate given in Section 4.3, and \(\overline{V_{\rm in}(R^{\prime})}\) (\(\overline{V_{\rm out}(R^{\prime})}\)) is the average rotational velocity of all cloud particles falling onto (ejected from) the disc at radius \(R^{\prime}\), obtained from our model by tracking the outflow and inflow radius and velocity of all fountain clouds. The time derivative of the mass, \({\rm d}M/{\rm d}t\), is by definition the accretion rate of new gas given by the model. Applying the above equations to our best-fitting model, we obtain \({\rm d}j/{\rm d}t=-2.6\times 10^{-8}\) km s\({}^{-1}\) kpc yr\({}^{-1}\). This would indicate that the gas accreted through the fountain cannot be solely responsible for the observed inside-out growth of the disc. Part of this growth should then be ascribed to gas that is already present in the disc. This is a viable option, as the gas in the disc is known to be located, on average, at larger radii compared to the stellar component (e.g. Fraternali et al., 2002). This solution is, however, only partly satisfactory, as the gas reservoir at these large radii would, without replacement, be consumed on a relatively short timescale (a few Gyr; see e.g. Fraternali & Tomassetti, 2012), implying that the growth of the disc would not be sustainable in the long term. With these considerations in mind, we stress that our calculation of \(\mathrm{d}j/\mathrm{d}t\), presented above, very much depends on the value that we are assuming for the rotational speed of the corona, which is, as we discussed above, very uncertain. Interestingly, when assuming that the rotational lag between the fountain and the hot gas is \(45\,\mathrm{km\,s^{-1}}\) (the third model in Table 4), we obtain \(\mathrm{d}j/\mathrm{d}t=1.5\times 10^{-8}\,\mathrm{km\,s^{-1}}\,\mathrm{kpc\,yr^{-1}}\), which indicates an inside-out growth.
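The same calculation can be sketched numerically. The radial profiles below are hypothetical placeholders (the real \({\cal F}_{\rm in}\), \({\cal F}_{\rm out}\) and mean velocities come from the particle tracking in our model), but the structure of Eqs. (7) and (8) is as shown.

```python
import numpy as np

# Hypothetical radial profiles; the actual ones come from tracking the fountain clouds.
R = np.linspace(0.25, 15.0, 60)                  # kpc
F_in  = 8e-3 * np.exp(-R / 3.0)                  # inflow surface density rate  [Msun/yr/kpc^2]
F_out = 7e-3 * np.exp(-R / 2.5)                  # outflow surface density rate [Msun/yr/kpc^2]
V_in  = np.full_like(R, 120.0)                   # mean rotation of infalling clouds [km/s]
V_out = np.full_like(R, 130.0)                   # mean rotation of ejected clouds   [km/s]

dR = R[1] - R[0]                                 # uniform grid spacing; crude rectangle rule
# Eq. (8): dJ/dt from the inflowing and outflowing angular-momentum fluxes
dJ_dt = 2 * np.pi * dR * (np.sum(R**2 * F_in * V_in) - np.sum(R**2 * F_out * V_out))
# dM/dt is the net accretion rate of new gas predicted by the model
dM_dt = 2 * np.pi * dR * (np.sum(R * F_in) - np.sum(R * F_out))

# Stellar-disc values quoted in the text
M = 7.2e9                                        # Msun
J = 2 * M * 130.0 * 2.0                          # J = 2 M V_flat R_*  [Msun km/s kpc]

# Eq. (7)
dj_dt = dJ_dt / M - (J / M**2) * dM_dt           # km/s kpc per yr
print(f"dj/dt = {dj_dt:.2e} km/s kpc per yr")
```

Dividing \({\rm d}j/{\rm d}t\) by \(j=2V_{\rm flat}R_{*}\simeq 520\) km s\({}^{-1}\) kpc then gives the specific growth rate discussed next.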
Combining the current value of the specific angular momentum \(j\) and its derivative \(\mathrm{d}j/\mathrm{d}t\), we can easily derive the specific angular momentum growth rate, which we define (following Pezzulli et al., 2015) as \(\nu_{j}\equiv(1/j)\times\mathrm{d}j/\mathrm{d}t\). We find a value of \(\nu_{j}=2.88\times 10^{-2}\,\mathrm{Gyr^{-1}}\), in excellent agreement with the specific radial growth rate \(\nu_{R}=(2.93\pm 0.16)\times 10^{-2}\,\mathrm{Gyr^{-1}}\) measured by Pezzulli et al. (2015) for NGC 2403. The two quantities \(\nu_{j}\) and \(\nu_{\mathrm{R}}\) are comparable and are in fact expected to be equal, as long as the rotation curve of the galaxy can be considered approximately stationary with time\({}^{3}\). We have therefore found that our model with a reduced rotational lag is in remarkable quantitative agreement with the galactic fountain being the main source of the observed inside-out growth in NGC 2403. Footnote 3: This is immediately seen by taking the time derivative of the equation \(j=2V_{\mathrm{flat}}R_{*}\). It is important to note that, in the absence of triggered condensation, a galactic corona would be expected to cool in the very inner parts, where its density tends to be higher, thus producing the accretion of low angular momentum gas that would then need to be expelled via strong feedback (e.g. Brook et al., 2012). Instead, when the cooling is triggered by the fountain, the location of the bulk of the gas accretion is naturally shifted to outer radii for the reasons described in Section 4.3. This phenomenon had been indicated as plausibly compatible with the inside-out growth of discs (Pezzulli & Fraternali, 2016), but this is the first time that quantitative evidence is provided. ## 6 Conclusion In this work, we have modelled the distribution and kinematics of the neutral extra-planar gas (EPG) in the late-type nearby galaxy NGC 2403 using a dynamical model of the galactic fountain. In this model, stellar feedback continuously ejects gas from the galaxy disc, which travels through the halo and falls back to the disc. This gas cycle brings metal-rich and cold/warm gas to mix and interact with the hot corona, significantly reducing its cooling time and leading to the condensation and accretion of some coronal gas onto the disc. Due to the angular momentum exchange between the fountain clouds and the corona, this interaction is expected to leave a signature in the kinematics of the H i gas at the disc-halo interface. The application of our models to the data leverages this signature to infer, along with other parameters, the efficiency of the condensation process and the accretion rate of coronal gas onto the disc. While these models have been applied extensively to the EPG of the Milky Way (Marasco et al., 2012, 2013; Fraternali et al., 2013, 2015), so far applications to external galaxies were limited to the preliminary studies of FB06 and FB08, which included neither a rotating corona nor a statistically meaningful exploration of the parameter space. This study presents the first detailed application of the current fountain accretion framework to an external galaxy. Our results are summarised as follows: 1. The galactic fountain framework can reproduce most of the neutral EPG features in NGC 2403. A model where the fountain clouds interact with the hot corona is statistically preferred over a pure fountain model without interaction with the hot CGM. 2.
The best-fitting model requires a fountain with a characteristic outflow velocity of \(50\pm 10\,\mathrm{km\,s^{-1}}\), with the gas being ionised for some time after ejection and then recombining. Recombination appears to occur on average when its vertical velocity has been reduced by about 40 per cent. 3. The H i EPG in NGC 2403 inferred from the best-fitting model has a total EPG mass of \(4.7^{+1.2}_{-0.9}\times 10^{8}\,\mathrm{M_{\odot}}\), with an average scale height of \(0.93\pm 0.003\,\mathrm{kpc}\) and a vertical gradient in rotational velocity of \(-10.0\pm 2.7\,\mathrm{km\,s^{-1}}\,\mathrm{kpc^{-1}}\). Our values are compatible with a previous estimate of Marasco et al. (2019), which was derived with simpler phenomenological approaches. 4. Our model predicts a condensation rate of \(2.4\,\mathrm{Gyr^{-1}}\) (\(4.2\,\mathrm{Gyr^{-1}}\) ) for the hot CGM, leading to a total accretion rate of \(0.8\,\mathrm{M_{\odot}\,yr^{-1}}\) (\(1.1\,\mathrm{M_{\odot}\,yr^{-1}}\)) when assuming the rotational lag between the fountain and the hot gas is \(75\,\mathrm{km\,s^{-1}}\) (\(45\,\mathrm{km\,s^{-1}}\)), similar to the star formation rate \(0.6\,\mathrm{M_{\odot}\,yr^{-1}}\) of NGC 2403, suggesting corona accretion as a viable mechanism to maintain the star-formation rate in this galaxy. 5. The accretion rate surface density profile predicted by our model is radially more extended than the star-formation-rate surface density. We have also shown that, if the rotation velocity of the corona is larger than a certain threshold, the specific angular momentum growth rate predicted by our model is in excellent agreement with the observed inside-out growth rate in NGC 2403. The fountain-driven accretion process can therefore be responsible for the inside-out growth of its stellar disc. ## Acknowledgements The authors would like to thank an anonymous referee for helpful comments and Cecilia Bacchini for collecting and providing the H i, H\({}_{2}\), and star-formation-rate data of NGC 2403. AL was supported by the Netherlands Research School for Astronomy (Nederlandse Onderzoekschool voor Astronomie, NOVA), Network 1, Project 10.1.5.9 WEAVE. GP acknowledges support from the Netherlands Research School for Astronomy (Nederlandse Onderzoekschool voor Astronomie, NOVA) through project 10.1.5.18. ## Data Availability The data underlying this article were obtained by Fraternali et al. (2002) with the CS configuration of the VLA and were later included in the HALOGAS survey, which is available at [https://www.astron.nl/halogas](https://www.astron.nl/halogas).
2308.00616
The role of frequency and impedance contrasts in bandgap closing and formation patterns of axially-vibrating phononic crystals
Bandgaps, or frequency ranges of forbidden wave propagation, are a hallmark of Phononic Crystals (PnCs). Unlike their lattice counterparts, PnCs taking the form of continuous structures exhibit an infinite number of bandgaps of varying location, bandwidth, and distribution along the frequency spectrum. While these bandgaps are commonly predicted from benchmark tools such as the Bloch-wave theory, the conditions that dictate the patterns associated with bandgap symmetry, attenuation, or even closing in multi-bandgap PnCs remain an enigma. In this work, we establish these patterns in one-dimensional rods undergoing longitudinal motion via a canonical transfer-matrix-based approach. In doing so, we connect the conditions governing bandgap formation and closing to their physical origins in the context of the Bragg condition (for infinite media) and natural resonances (for finite counterparts). The developed framework uniquely characterizes individual bandgaps within a larger dispersion spectrum regardless of their parity (i.e., odd vs even bandgaps) or location (low vs high-frequency), by exploiting dimensionless constants of the PnC unit cell which quantify the different contrasts between its constitutive layers. These developments are detailed for a bi-layered PnC and then generalized for a PnC of any number of layers by increasing the model complexity. We envision this mathematical development to be a future standard for the realization of hierarchically-structured PnCs with prescribed and finely tailored bandgap profiles.
Hasan B. Al Ba'ba'a, Mostafa Nouh
2023-07-13T21:33:41Z
http://arxiv.org/abs/2308.00616v3
The role of frequency and impedance contrasts in bandgap closing and formation patterns of axially-vibrating phononic crystals ###### Abstract Bandgaps, or frequency ranges of forbidden wave propagation, are a hallmark of Phononic Crystals (PnCs). Unlike their lattice counterparts, PnCs taking the form of continuous structures exhibit an infinite number of bandgaps of varying location, bandwidth, and distribution along the frequency spectrum. While these bandgaps are commonly predicted from benchmark tools such as the Bloch-wave theory, the conditions that dictate the patterns associated with bandgap symmetry, attenuation, or even closing in multi-bandgap PnCs remain an enigma. In this work, we establish these patterns in one-dimensional rods undergoing longitudinal motion via a canonical transfer-matrix-based approach. In doing so, we connect the conditions governing bandgap formation and closing to their physical origins in the context of the Bragg condition (for infinite media) and natural resonances (for finite counterparts). The developed framework uniquely characterizes individual bandgaps within a larger dispersion spectrum regardless of their parity (i.e., odd vs even bandgaps) or location (low vs high-frequency), by exploiting dimensionless constants of the PnC unit cell which quantify the different contrasts between its constitutive layers. These developments are detailed for a bi-layered PnC and then generalized for a PnC of any number of layers by increasing the model complexity. We envision this mathematical development to be a future standard for the realization of hierarchically-structured PnCs with prescribed and finely tailored bandgap profiles. keywords: phononic crystals, wave dispersion, bandgap, symmetry + Footnote †: journal: Journal of Sound and Vibration ## 1 Introduction A bandgap, in solid-state physics, is an energy gap in the electronic band structure in which no electronic states exist [1]. Nearly four decades ago, the birth of photonic crystals gave way to photonic bandgaps, frequency ranges in which all optical modes are absent [2; 3]. Several years later, phononic crystals--a class of periodic elastoacoustic structures exhibiting forbidden wave propagation within given frequency regimes--extended the definition of bandgaps to the structural dynamics field [4]. Ever since, phononic bandgaps have played a central role in several engineering applications ranging from vibroacoustic control [5] and tunable materials [6], to topological mechanics [7] and nonreciprocal wave phenomena [8]. In its basic form, a phononic crystal (PnC) is a multi-layered composite where the layers self-repeat over an extended spatial domain. Rooted in the origins of periodic structure theory, studies depicting the unique wave propagation properties of PnCs predate the use of the term itself [9]. The most common one-dimensional PnC configuration involves two alternating materials (or a single material with alternating cross sections) forming a unit cell, often denoted as a diatomic or bi-layered PnC, in which bandgaps arise from Bragg scattering effects at the material (or geometric) interfaces. For an infinite medium, these Bragg bandgaps are a direct function of the structural periodicity and span one or more well-defined frequency ranges which can be predicted by a Bloch-wave analysis of the unit cell [10]. 
Increasing the number of unit cell layers (or atoms) gives rise to additional features which are uniquely defined by the sequencing and permutations of these individual layers [11]. Bandgap engineering, the science of manipulating phononic parameters within the infinite design space to achieve bandgaps of prescribed characteristics (e.g., bounds, location, attenuation level, targeted modes, directionality, and topological nature, among others) has significantly evolved [12]. In pursuit of such goal, studies have utilized geometric properties [13; 14], material anisotropy [15; 16], damping [17; 18], viscoelasticity [19], inertance [20], pillared surfaces [21], topology optimization [22], and machine learning [23] as tunable knobs in an attempt to tune and achieve maximum control over the bandgap emergence process. While the applications and utility of these bandgaps in novel and imaginative realizations of PnCs remain an active research area, especially with recent advances in manufacturing and fabrication, the physics underpinning the existence, formation mechanisms, and evolution of phononic bandgaps show intriguing phenomena which continue to be separately explored. Notable among these is the underlying connection between the dispersion relation of an infinite PnC relating the wavenumber of a wave to its frequency, and dictating the frequency-dependent phase and group velocities of a dispersive medium (of which a PnC is one) [24], and the structural resonances of a finite PnC where size and boundary effects become intrinsic to the dynamical problem [25; 26]. This interplay between the mathematical description of infinite and finite media, and the ability to recover one from the other [27], was used to develop the theory of truncation resonances in finite PnCs by identifying a set of unique natural frequencies which avert dispersion branches at the infinite limit of the constitutive unit cell [28; 29]. Furthermore, understanding the origination process of bandgaps in PnCs and the different ways in which wave attenuation manifests itself in finite periodic media has enabled phononic bandgaps to be artificially emulated in non-periodic lattices [30], or generated through radically different mechanisms such as inertial amplification [31; 32]. Phononic bandgaps are accurately predicted from the conventional Bloch-wave analysis. However, PnCs made of solid continua exhibit a large number of bandgaps which vary in width, strength, and distribution, thus giving rise to the notion of "bandgap patterns". As this work will show, these patterns are not random and bandgap arrangements in continuous PnCs are far from arbitrary. More importantly, bandgaps that obey certain conditions can be made to vanish (i.e., close by virtue of the preceding and following dispersion branches touching each other), thus rendering the mere existence of such bandgaps in phononic crystals not guaranteed. Instead of deploying numerical tools to seek bandgaps of desirable parameters, this work develops a generalized analytical framework which derives and unravels bandgap patterns and closing conditions in one-dimensional PnC rods undergoing longitudinal motion. This framework is then used to establish general rules which govern bandgap widths and folding frequencies, and connects deformational mode shapes of the culminating PnC to its constitutive layers. 
In doing so, we explain the conditions driving bandgap closing and connect them to physical origins in the context of the Bragg condition (for infinite media) and natural resonances (for finite counterparts). These developments are detailed for a bi-layered PnC and then generalized for a PnC of any number of layers by increasing the mathematical complexity, while retaining the fully-analytical nature of the model. The need to tailor phononic dispersion profiles have already been shown to play an instrumental role in metamaterial applications [33; 34; 35; 36]. As such, we envision this mathematical development to be a future standard for designing bandgaps in PnCs with versatile and precisely targeted bandgap profiles. ## 2 Mathematical foundation ### PnC configuration Starting with the most general case, we consider a continuum PnC in the form of a one-dimensional rod which consists of self-repeating unit cells, where each cell is comprised of \(L\) material layers, as depicted in Figure 1(a). In this work, we exclude any flexural and torsional waves, and focus on longitudinal motion described by the continuous function \(u(x,t)\). In the proposed PnC, each layer has unique geometrical and mechanical properties that do not necessarily match the rest. The \(s^{\text{th}}\) layer of the unit cell has a mechanical impedance \(z_{s}=A_{s}\sqrt{E_{s}\rho_{s}}\) and a sonic speed \(c_{s}=\sqrt{E_{s}/\rho_{s}}\), where \(E_{s}\), \(\rho_{s}\), and \(A_{s}\) are the elastic modulus, density, and cross-sectional area, respectively (\(s=1,2,\dots,L\)). The lumped parameter (spring-mass) description of this model is commonly referred to as a polyatomic PnC [11], with each layer within the unit cell denoted as an "atom". The unit cell's total length is \(\ell=\sum_{s}\ell_{s}\), which is analogous to the lattice constant of a one-dimensional polyatomic PnC. ### Transfer matrix method The dispersion relation of the aforementioned unit cell can be analytically derived via the transfer matrix method (TMM). The transfer matrix \(\mathbf{T}\) obtains the displacement \(u\) and force \(f\) at the end of cell \(i\) from their counterparts at the end of cell \(i-1\), such that: \[\begin{cases}u_{i}\\ f_{i}\end{cases}=\mathbf{T}\begin{cases}u_{i-1}\\ f_{i-1}\end{cases} \tag{1}\] where \(\mathbf{T}\) is computed from a series multiplication of the transfer matrices of the individual layers: \[\mathbf{T}=\mathbf{T}_{L}\mathbf{T}_{L-1}\dots\mathbf{T}_{1} \tag{2}\] Starting with the one-dimensional wave equation which describes axial waves in the rod, the transfer matrix of the \(s^{\text{th}}\) layer \(\mathbf{T}_{s}\) in Eq. (2) can be derived as [37]: \[\mathbf{T}_{s}=\begin{bmatrix}\cos(k_{s}\ell_{s})&\frac{1}{z_{s}\omega}\sin( k_{s}\ell_{s})\\ -z_{s}\omega\sin(k_{s}\ell_{s})&\cos(k_{s}\ell_{s})\end{bmatrix} \tag{3}\] where \(k_{s}=\omega/c_{s}\) denotes the wavenumber within an individual layer, which is a function of the angular frequency \(\omega\). ## 3 Bi-layered PnCs ### Dispersion Analysis A special case of the aforementioned structure is the bi-layered PnC rod (i.e., \(L=2\)), which will be considered here in detail. In such a case, the transfer matrix of a unit cell \(\mathbf{T}=\mathbf{T}_{2}\mathbf{T}_{1}\) is the product of the transfer matrices of the two layers. 
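For concreteness, a minimal numerical sketch of Eqs. (1)-(3) is given below (the layer properties are illustrative placeholders, not values used in this work); it assembles the unit-cell transfer matrix and classifies a frequency as propagating or attenuating using the trace condition \(\mathrm{tr}(\mathbf{T})=2\cos(q)\) discussed below.

```python
import numpy as np

def layer_T(omega, E, rho, A, ell):
    """Transfer matrix of a single uniform layer, Eq. (3)."""
    c = np.sqrt(E / rho)           # sonic speed of the layer
    z = A * np.sqrt(E * rho)       # mechanical impedance of the layer
    k = omega / c                  # layer wavenumber
    return np.array([[np.cos(k * ell),             np.sin(k * ell) / (z * omega)],
                     [-z * omega * np.sin(k * ell), np.cos(k * ell)]])

def cell_T(omega, layers):
    """Unit-cell transfer matrix, Eq. (2): T = T_L ... T_2 T_1."""
    T = np.eye(2)
    for layer in layers:           # layers listed from 1 to L
        T = layer_T(omega, *layer) @ T
    return T

# Illustrative bi-layered cell: (E [Pa], rho [kg/m^3], A [m^2], ell [m]) per layer.
layers = [(2.4e9, 1040.0, 1e-4, 0.06),   # a polymer-like layer
          (69e9, 2700.0, 1e-4, 0.04)]    # an aluminium-like layer

for f in (1e3, 5e3, 2e4):                # a few trial frequencies in Hz
    w = 2 * np.pi * f
    half_trace = 0.5 * np.trace(cell_T(w, layers))
    if abs(half_trace) <= 1.0:           # propagating: real Bloch wavenumber
        print(f"{f:8.0f} Hz  pass band, q = {np.arccos(half_trace):.3f} rad")
    else:                                # evanescent: inside a bandgap
        print(f"{f:8.0f} Hz  bandgap,  |q_I| = {np.arccosh(abs(half_trace)):.3f}")
```

Sweeping the frequency on a fine grid and repeating the classification reproduces dispersion diagrams of the type shown in Figure 1(b).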
The resultant \(\mathbf{T}\) can be simplified by introducing \(\omega_{s}=c_{s}/\ell_{s}\) (where \(s=1,2\)) and two non-dimensional parameters, namely the frequency and impedance contrasts, respectively, as follows: \[\alpha=\frac{\frac{1}{\omega_{1}}-\frac{1}{\omega_{2}}}{\frac{1}{\omega_{1}}+\frac{1}{\omega_{2}}} \tag{4a}\] \[\beta=\frac{z_{1}-z_{2}}{z_{1}+z_{2}} \tag{4b}\] which both range from \(-1\) to \(1\) depending on the choice of unit cell parameters. By defining an average impedance of both layers \(z=(z_{1}+z_{2})/2\) and using the definition of the impedance contrast \(\beta\), the impedance of each layer can be written as: \[z_{1,2}=z(1\pm\beta) \tag{5}\] where \(+\) (\(-\)) is for the first (second) layer. We also define a non-dimensional frequency \(\Omega=\omega/\omega_{0}\), where the normalization constant \(\omega_{0}\) represents the harmonic mean of \(\omega_{1}\) and \(\omega_{2}\), and is given by: \[\omega_{0}=\frac{2}{\frac{1}{\omega_{1}}+\frac{1}{\omega_{2}}} \tag{6}\] The harmonic mean can then be combined with the definition of \(\alpha\) to give: \[\frac{1}{\omega_{1,2}}=\frac{1}{\omega_{0}}(1\pm\alpha) \tag{7}\] where, once again, \(+\) (\(-\)) is for the first (second) layer. Using these definitions, the transfer matrix \(\mathbf{T}\) of the bi-layered cell can be rewritten as: \[\mathbf{T}=\begin{bmatrix}d_{-}&\frac{1}{z\omega(1-\beta^{2})}t_{-}\\ -z\omega t_{+}&d_{+}\end{bmatrix} \tag{8}\] where \[d_{\pm}=\frac{1}{1\pm\beta}\Big{(}\cos(2\Omega)\pm\beta\cos(2\Omega\alpha)\Big{)} \tag{9a}\] \[t_{\pm}=\sin(2\Omega)\pm\beta\sin(2\Omega\alpha) \tag{9b}\] Finally, the dispersion relation can be found from \(\mathbf{T}\) via \(\mathrm{tr}(\mathbf{T})=2\cos(q)\), where \(\mathrm{tr}(\mathbf{T})\) is the trace of the matrix \(\mathbf{T}\) (a detailed process outlining the origin of the dispersion relation is provided in **Appendix A**). Also, \(q=\tilde{q}\ell=q_{\mathrm{R}}+\mathbf{i}q_{\mathrm{I}}\) is the non-dimensional wavenumber of the PnC rod, which is the product of the wavenumber \(\tilde{q}\) and the unit cell length \(\ell\), and \(q_{\mathrm{R}}\) and \(q_{\mathrm{I}}\) denote its real and imaginary components, respectively. Using Eq. (9a), the dispersion relation is obtained as: \[\cos(2\Omega)-\beta^{2}\cos(2\alpha\Omega)-(1-\beta^{2})\cos(q)=0 \tag{10}\] Note that a positive or negative value of \(\alpha\) or \(\beta\) does not change the resulting dispersion relation as long as their magnitudes remain the same. This fact can be inferred from the dispersion relation in Eq. (10), where \(\alpha\) is in the argument of the even cosine function and \(\beta\) is squared. Equation (10) can be depicted analytically by reformulating it as \(q=\cos^{-1}[\Phi(\Omega)]\), where \[\Phi(\Omega)=\frac{1}{(1-\beta^{2})}\left[\cos(2\Omega)-\beta^{2}\cos(2\Omega\alpha)\right] \tag{11}\] An interesting feature of the function \(\Phi(\Omega)\) is its association with the frequencies of maximum attenuation inside Bragg bandgaps, which can be found by evaluating the roots of the derivative \(\partial\Phi(\Omega)/\partial\Omega=0\), analogous to lumped PnCs [11; 25], yielding: \[\sin(2\Omega)-\beta^{2}\alpha\sin(2\alpha\Omega)=0 \tag{12}\] Figure 1(b) shows two dispersion diagrams for a bi-layered PnC rod (with \(\alpha=2/\pi\) and \(\beta=0.8\)) and a uniform rod with two identical layers (i.e., \(\alpha=\beta=0\)).
The uniform rod exhibits linear dispersion, a hallmark feature of longitudinal elastic waves in rods [38], while the PnC's dispersion relation is nonlinear, culminating in dispersive behavior. The bandgaps in the PnC case align with the \(q_{\mathrm{I}}\neq 0\) regions. Odd and even-numbered bandgaps are color-coded for easier interpretation. Their analytical derivation and emergence conditions are established next.

Figure 1: (a) Single unit cell of a multi-layered PnC rod of \(L\) layers with the geometric and material parameters indicated. (b) Dispersion diagram of a bi-layered (\(L=2\)) PnC rod with \(\alpha=2/\pi\) and \(\beta=0.8\). The linear dispersion diagram of a uniform rod (\(\alpha=\beta=0\)) is also provided for reference. Bandgap regions are shaded and color-coded depending on their parity, i.e., odd versus even.

### Bandgap closing in bi-layered PnCs Bandgap limits for odd and even-numbered bandgaps can be found by substituting \(q=\pi\) and \(q=0\), respectively, in the dispersion relation shown in Eq. (10), resulting in the following expressions: \[\big{(}\cos(\Omega)-\beta\cos(\alpha\Omega)\big{)}\big{(}\cos(\Omega)+\beta\cos(\alpha\Omega)\big{)}=0 \tag{13a}\] \[\big{(}\sin(\Omega)-\beta\sin(\alpha\Omega)\big{)}\big{(}\sin(\Omega)+\beta\sin(\alpha\Omega)\big{)}=0 \tag{13b}\] which amount to a multiplication of two terms, each of which gives one bandgap limit. It is also evident that a bandgap only emerges if the roots of each of these terms are different. Thus, by equating both terms in each equation, the conditions that render a bandgap width equal to zero can be obtained (commonly referred to as zero-width bandgaps [39]). These conditions are summarized by the following equations: \[\beta\cos(\alpha\Omega)=0 \tag{14a}\] \[\beta\sin(\alpha\Omega)=0 \tag{14b}\] for odd and even-numbered bandgaps, respectively. Consider the cases that satisfy Eq. (14) when \(\beta\neq 0\). Starting with odd-numbered bandgaps, a zero-width bandgap needs to satisfy \(\cos(\alpha\Omega)=0\), a condition which is guaranteed at the following frequencies: \[\Omega=\frac{(2r-1)\pi}{2\alpha};\ \ \text{for}\ \alpha\neq 0 \tag{15}\] for \(r\in\mathbb{N}^{+}\), where \(\mathbb{N}^{+}\) are all natural numbers without zero. The second requirement is that such frequencies in Eq. (15) have to satisfy the dispersion relation at \(q=\pi\), which can be checked by plugging Eq. (15) in (10) and setting \(q=\pi\), which, after simplification, gives: \[\cos\Big{(}\frac{\pi}{\alpha}(2r-1)\Big{)}=-1 \tag{16}\] Solving for \(\alpha\), we arrive at the following expression: \[\alpha=\frac{\alpha_{n}}{\alpha_{d}}=\frac{(2r-1)}{(2p-1)} \tag{17}\] for \(p\in\mathbb{N}^{+}\), indicating that some odd-numbered bandgaps close when \(\alpha\) is a ratio of odd integers. The frequencies at which the bandgap closes can be found by combining Eqs. (17) and (15), yielding: \[\Omega_{p}=\frac{\pi}{2}(2p-1) \tag{18}\] It should be noted, however, that if a specific ratio of \(\alpha=\alpha_{n}/\alpha_{d}\) is imposed, only select combinations of \(r\) and \(p\) will maintain such a ratio. In this scenario, not every \(p\in\mathbb{N}^{+}\) is guaranteed to fulfill this ratio of \(\alpha\), and Eq.
(18) needs to be updated to: \[\Omega_{p}=\frac{\pi}{2}\alpha_{d}(2p-1);\ \ \text{for}\ p\in\mathbb{N}^{+} \tag{19}\] to guarantee bandgap closing.

Figure 2: Dispersion diagram (left) and the corresponding width (\(\Delta\Omega\)) of the first 11 bandgaps (right) for a bi-layered PnC rod with (a) \(\alpha=0\), (b) \(\alpha=1/3\), (c) \(\alpha=1/2\), and (d) \(\alpha=2/3\). \(\beta=-0.75\) is used for all cases. Regions of the same shading color indicate bandgaps of identical width at a given \(\alpha\), while the labeled frequencies (e.g., \(\pi\), \(2\pi\), etc.) denote bandgap closings.

To illustrate, consider the case where \(\alpha=1/3\) (i.e., \(\alpha_{d}=3\)). A combination of \(r=1\) and \(p=2\) therefore closes one bandgap at \(\Omega_{p}=3\pi/2\), while a combination of \(r=2\) and \(p=5\) closes another bandgap at \(\Omega_{p}=9\pi/2\). However, even though \(p=3\) belongs to \(p\in\mathbb{N}^{+}\), there exists no \(r\in\mathbb{N}^{+}\) that satisfies the chosen \(\alpha\). As a result, plugging \(p=3\) into Eq. (18) would **not** result in a bandgap closing frequency for the PnC described by this \(\alpha\), making it prudent to use Eq. (19) instead. Similarly, we analyze even-numbered bandgaps to find \(\alpha\) values at which they vanish. Knowing that \(\sin(\alpha\Omega)=0\) must be met for such a case, we have \(\alpha=0\) and \[\Omega=\frac{r\pi}{\alpha};\ \ \text{for}\ \alpha\neq 0 \tag{20}\] Plugging Eq. (20) back in (10) and setting \(q=0\), the values of \(\alpha\) that correspond to zero-width even-numbered bandgaps are also rational and given by: \[\alpha=\frac{\alpha_{n}}{\alpha_{d}}=\frac{r}{p} \tag{21}\] It should be noted that all rational values of \(\alpha\) will close even-numbered bandgaps, occurring at the following frequencies: \[\Omega_{p}=\pi p\alpha_{d};\ \ \text{for}\ p\in\mathbb{N}^{+} \tag{22}\] Figure 2 shows examples of dispersion relations with \(\beta=-0.75\) and different values of \(\alpha\), namely, (a) \(\alpha=0\), (b) \(\alpha=1/3\), (c) \(\alpha=1/2\), and (d) \(\alpha=2/3\). The left panel of the figure displays the full dispersion diagrams, while the right panel graphically represents the bandgap width of all eleven bandgaps in the range \(\Omega\in[0,6\pi]\). In all of the shown cases, even-numbered bandgaps that occur at the frequencies described in Eq. (22) are expected to close for all \(\alpha\) values that are rational. The first case of \(\alpha=0\), however, is a special case where all even-numbered bandgaps close, while all odd-numbered bandgaps remain open and maintain identical widths, as can be inferred from the right subplot of Figure 2(a). Note that for \(\alpha=0\) and \(\beta\neq 0\), the maximum attenuation can be found in closed form at the frequencies \(\Omega=r\pi/2\) (see **Appendix B** for more details on the \(\alpha=0\) case). For the second case of \(\alpha=1/3\), the numerator and denominator of \(\alpha\) are odd integers. It is therefore expected to see odd and even-numbered bandgaps calculated from Eqs. (19) and (22) closed. The closing takes place at multiples of \(\Omega=1.5\pi\). The third and fourth cases of \(\alpha=1/2\) and \(\alpha=2/3\), respectively, have numerator and denominator values with different parities. As a result, only even-numbered bandgaps are expected to close, which takes place at \(\Omega=2\pi,4\pi\) and \(\Omega=3\pi\), respectively.
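These closing rules are straightforward to verify numerically. The short sketch below (illustrative \(\alpha\) and \(\beta\) values) evaluates the two factors of Eqs. (13a) and (13b) at the candidate frequencies of Eqs. (19) and (22); a bandgap closes when both factors vanish simultaneously.

```python
import numpy as np
from fractions import Fraction

def edge_factors_even(Om, a, b):
    """Two factors of Eq. (13b) (q = 0); each vanishing factor is one bandgap limit."""
    return np.sin(Om) - b * np.sin(a * Om), np.sin(Om) + b * np.sin(a * Om)

def edge_factors_odd(Om, a, b):
    """Two factors of Eq. (13a) (q = pi)."""
    return np.cos(Om) - b * np.cos(a * Om), np.cos(Om) + b * np.cos(a * Om)

beta = -0.75                                    # closings do not depend on beta (as long as beta != 0)
for alpha in (Fraction(1, 3), Fraction(2, 3)):
    a_n, a_d = alpha.numerator, alpha.denominator
    a = float(alpha)

    Om_even = np.pi * 1 * a_d                   # first even-numbered closing, Eq. (22)
    f1, f2 = edge_factors_even(Om_even, a, beta)
    print(f"alpha={alpha}: even closing at Omega={Om_even:.3f}, factors ~ {f1:.1e}, {f2:.1e}")

    if a_n % 2 == 1 and a_d % 2 == 1:           # odd closings require a ratio of odd integers
        Om_odd = np.pi / 2 * a_d * (2 * 1 - 1)  # first odd-numbered closing, Eq. (19)
        f1, f2 = edge_factors_odd(Om_odd, a, beta)
        print(f"alpha={alpha}: odd closing at Omega={Om_odd:.3f}, factors ~ {f1:.1e}, {f2:.1e}")
```

Both factors return values at machine precision (effectively zero) for the rational contrasts above, while no odd-numbered closing is attempted for \(\alpha=2/3\), consistent with the parity rule just discussed.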
As can be observed from the right panel of Figure 2, the bandgap width profiles exhibit a wave-like behavior for all considered values of \(\alpha\), which perfectly repeats itself. Additionally, these profiles are noted to be mirror-symmetric around the closing points. Finally, we emphasize that the order of the bandgaps that close is always related to \(\alpha\), except for the special case of \(\alpha=0\). Specifically, the order of closed bandgaps is equal to multiples of \(\alpha_{d}\) if both numerator and denominator are odd, while equal to twice the multiples of \(\alpha_{d}\) otherwise. ### Rational versus irrational \(\alpha\) values Following this discussion of the role of rational \(\alpha\) values in bandgap closure, it is imperative to understand the different consequences of PnCs with rational and irrational \(\alpha\) values that are close in magnitude. Consider two bi-layered unit cells of an identical impedance contrast \(\beta=-0.75\), with \(\alpha=2/3\) for the first, which is the rational value that corresponds to the dispersion diagram shown in Figure 2(d), and \(\alpha=2/\pi\) for the second, which is the irrational value used to construct the dispersion diagram shown in Figure 1(b).

Figure 3: Bandgap width for the first 120 bandgaps of a bi-layered PnC rod with: (a) a rational value of \(\alpha=2/3\) and (b) an irrational value of \(\alpha=2/\pi\). \(\beta=-0.75\) is used for both cases. The PnC with the rational \(\alpha\) maintains a perfectly periodic pattern of bandgap widths that repeats itself every 6 bandgaps. The PnC with the irrational \(\alpha\) has an aperiodic profile of bandgap widths which can be observed by tracking the changes in the widths of the \(5^{\text{th}}\) (light blue marker), \(6^{\text{th}}\) (dark blue marker), and \(11^{\text{th}}\) (orange marker) every cycle of 11 bandgaps.

The bandgap widths for the first 120 bandgaps of both PnCs are computed in Figure 3. It is immediately noticed that the rational (\(\alpha=2/3\)) case maintains a perfectly periodic pattern of bandgap widths that repeats itself every 6 bandgaps (which is twice \(\alpha_{d}\), as explained earlier). On the other hand, the bandgap widths corresponding to the irrational (\(\alpha=2/\pi\)) case clearly move further away from zero as the bandgap number increases, indicating the absence of a bandgap closing pattern due to \(\alpha\) not being an exact rational number. Despite the absence of a bandgap closing pattern, Figure 3(b) still shows a near-periodic profile with a period of 11 bandgaps. This is understandable because the closest rational approximation of \(\alpha=2/\pi\approx 2/(22/7)\approx 7/11\) (using the known approximation of \(\pi\)) reveals that this system should closely mimic one which exhibits bandgap closing at multiples of \(\alpha_{d}=11\).
Both cases yield \(\sin(k_{s}\ell_{s})=0\), which in turn provides the following set of natural frequencies [28]: \[\omega=\pi n_{s}\omega_{s};\ \ \ \ \ n_{s}=1,2,3,\ldots \tag{23}\] where \(n_{s}\) indicates the order of the vibrational mode in the complete set of non-zero natural frequencies. Using Eq. (7) and the substitution \(\alpha=\alpha_{n}/\alpha_{d}\), combined with Eq. (23), we arrive at: \[(\alpha_{d}-\alpha_{n})n_{1}=(\alpha_{d}+\alpha_{n})n_{2} \tag{24}\] Equation (24) captures the physical meaning behind rational values of \(\alpha\) in a concise manner. It indicates that for any rational \(\alpha\) value, a natural frequency of the first of the two layers of the order \((\alpha_{d}-\alpha_{n})n_{1}\) matches a natural frequency of the second layer of the order \((\alpha_{d}+\alpha_{n})n_{2}\), since \(\alpha_{d}\) and \(\alpha_{n}\) are integers. These natural frequencies must satisfy \(\Omega_{p}\) in Eqs. (19) and (22). Consequently, along with the harmonic mean in Eq. (6), these two equations can be utilized to find an exact solution for \(n_{1}\) and \(n_{2}\) for a prescribed rational value of \(\alpha\). Rearranging Eq. (6), it can be seen that: \[\frac{\omega_{0}}{\omega_{1}}+\frac{\omega_{0}}{\omega_{2}}=2 \tag{25}\] which, in conjunction with Eq. (23) at \(\omega=\omega_{0}\Omega_{p}\), becomes: \[\pi(n_{1}+n_{2})=2\Omega_{p} \tag{26}\] Solving Eqs. (24) and (26) simultaneously and plugging in the expressions for \(\Omega_{p}\) in Eqs. (19) and (22), we arrive at the following expressions for \(n_{1,2}\): \[n_{s}=\frac{1}{2}(2p-1)(\alpha_{d}\pm\alpha_{n}) \tag{27a}\] \[n_{s}=p(\alpha_{d}\pm\alpha_{n}) \tag{27b}\] for odd and even-numbered bandgaps, respectively, and with \(+\) (\(-\)) denoting the solution for \(s=1\) (\(s=2\)). Note that if the sign of \(\alpha\) flips, the solutions corresponding to the first PnC layer become those of the second layer and vice versa. Interestingly, this discussion of the physical meaning of rational \(\alpha\) values also has a connection to the "Bragg condition", as will be derived next. ### Connection to Bragg condition Bandgaps in PnCs are known to be size-dependent and initiate near frequencies described by the Bragg condition, which provides the proportional relationship between the size of a PnC and the wavelength [40; 41]. A Bragg condition can also be defined for each of the individual layers of a bi-layered PnC (since the individual layer can be thought of as a special PnC unit cell with zero-width bandgaps) as: \[\ell_{s}=n_{s}\frac{\lambda_{s}}{2} \tag{28}\] where \(\lambda_{s}=2\pi/k_{s}\) is the wavelength. Recalling that \(k_{s}=\omega/c_{s}\) for rods and rearranging Eq. (28) in terms of the angular frequency \(\omega\), it can be seen that the frequencies corresponding to the Bragg condition are given by \(\omega=\pi n_{s}\omega_{s}\), which perfectly align with the frequencies derived in Eq. (23). In other words, the frequencies corresponding to the Bragg condition of an individual layer are also the natural frequencies of a finite uniform rod comprised of that particular layer with free-free or fixed-fixed boundaries. By making use of Eq. (7), a non-dimensional form of the Bragg condition of each individual layer in a bi-layered PnC can be written as a function of the frequency contrast between the two layers \(\alpha\), as follows: \[\Omega=\frac{n_{s}\pi}{1\pm\alpha} \tag{29}\] where \(+\) (\(-\)) is for \(s=1\) (\(s=2\)). 
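As a numerical illustration of Eq. (27), the short sketch below (for an assumed contrast \(\alpha=1/3\); the values are not tied to any particular material pair) shows that the resulting mode orders satisfy the matching condition of Eq. (24), the harmonic-mean relation of Eq. (26), and reach the Bragg condition of Eq. (29) at the same frequency for both layers.

```python
import numpy as np

a_n, a_d = 1, 3                                   # illustrative rational contrast, alpha = 1/3
alpha = a_n / a_d

for p in (1, 2):
    # Mode orders for odd-numbered closings, Eq. (27a); a_d +/- a_n is even when both are odd
    n1 = (2 * p - 1) * (a_d + a_n) // 2
    n2 = (2 * p - 1) * (a_d - a_n) // 2
    Omega_p = np.pi / 2 * a_d * (2 * p - 1)       # closing frequency, Eq. (19)

    matched_modes = (a_d - a_n) * n1 == (a_d + a_n) * n2          # Eq. (24)
    mean_ok = np.isclose(np.pi * (n1 + n2), 2 * Omega_p)          # Eq. (26)
    # Bragg condition, Eq. (29): both layers reach it at the same non-dimensional frequency
    bragg_1 = n1 * np.pi / (1 + alpha)
    bragg_2 = n2 * np.pi / (1 - alpha)
    print(f"p={p}: n1={n1}, n2={n2}, Eq.(24) {matched_modes}, Eq.(26) {mean_ok}, "
          f"Bragg match {np.isclose(bragg_1, bragg_2)} at Omega={bragg_1:.3f}")
```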
Recalling that a bandgap of a bi-layered PnC can only close if \(\alpha\) is a rational number (as proven in Section 3.2), it can be seen that a rational \(\alpha\) is guaranteed if the frequencies of the Bragg condition for the individual layers 1 and 2 are matched, i.e., when \(\Omega\) in Eq. (29) becomes identical for both the plus and minus solutions. As a result of this matching, \(\alpha\) takes the following expression: \[\alpha=\frac{n_{1}-n_{2}}{n_{1}+n_{2}} \tag{30}\] thus ensuring that \(\alpha\) is a rational number, and further cementing the connection between the Bragg condition and the bandgap closing condition in a bi-layered PnC rod. ### Mode shapes at bandgap closing It is now established that the Bragg bandgaps of a bi-layered PnC close when the Bragg condition frequencies of the two constitutive layers match. As a direct consequence of that condition, the natural frequencies of the bi-layered PnC unit cell become those of the individual layers at bandgap closing. We therefore formulate analytical expressions for the deformational mode shapes of the bi-layered PnC (often referred to as the unit cell Bloch modes [42]) which correspond to bandgap closing frequencies, for a complete understanding of these scenarios. The general solution of the displacement and internal force of the \(s^{\text{th}}\) layer of the PnC rod can be written as: \[u_{s}(x)=a_{s}\cos(k_{s}x)+b_{s}\sin(k_{s}x) \tag{31a}\] \[f_{s}(x)=E_{s}A_{s}k_{s}\big{(}b_{s}\cos(k_{s}x)-a_{s}\sin(k_{s}x)\big{)} \tag{31b}\] To obtain solutions for the coefficients \(a_{s}\) and \(b_{s}\), a total of four equations are needed, which are found from the displacement and force continuity conditions between the PnC layers. To do so in a bi-layered PnC, we set \(x=0\) at the interface of the two layers. The displacement and force continuity conditions yield: \[a_{1}-a_{2}=0 \tag{32a}\] \[(1+\beta)b_{1}-(1-\beta)b_{2}=0 \tag{32b}\] By using the next interface to get the two remaining equations, we get \(u_{1}(\ell-\ell_{1})=u_{2}(\ell_{2})\) and \(f_{1}(\ell-\ell_{1})=f_{2}(\ell_{2})\), which can be expressed as \(u_{1}(\ell-\ell_{1})=\mathrm{e}^{\mathrm{i}q}u_{1}(-\ell_{1})\) and \(f_{1}(\ell-\ell_{1})=\mathrm{e}^{\mathrm{i}q}f_{1}(-\ell_{1})\), respectively, by virtue of Bloch's theorem.

Figure 4: (a) A graphical summary of the relationship of the mode shapes (for given boundary conditions) of a PnC unit cell to that of its individual layers at bandgap closing frequencies. (b) _Left:_ Dispersion diagram of a bi-layered PnC rod unit cell for the case when \(|\alpha|=1/3\) and \(\beta=-0.75\). Two sets of folded lines represent the dispersion diagrams of the two individual layers of the PnC. Red dashed lines indicate the locations at which the two sets fold at the same frequency, indicating a bandgap closing of the bi-layered PnC as shown. _Right:_ Mode shapes of a bi-layered PnC rod unit cell using the same parameters. The mode shapes are shown for the three frequencies which correspond to bandgap closings within the range \(\Omega\in(0,6\pi)\), namely \(3\pi/2\), \(3\pi\), and \(9\pi/2\). Repeated modes exist at the bandgap closings due to two dispersion branches touching at that point. Changing the sign of \(\alpha\) flips the mode shapes as illustrated in the right panel of the figure. Specifically, the deformation shape of layer 1 at \(\alpha=1/3\) becomes that of layer 2 at \(\alpha=-1/3\), and vice versa. Mode shapes calculated via the finite element method are superimposed as dashed lines for validation.
At the special cases of \(\sin(k_{s}\ell_{s})=0\), these two continuity conditions simplify to: \[a_{1}\cos(n_{1}\pi)\mathrm{e}^{\mathrm{i}q}-a_{2}\cos(n_{2}\pi)=0 \tag{33a}\] \[(1+\beta)\cos(n_{1}\pi)\mathrm{e}^{\mathrm{i}q}b_{1}-(1-\beta)\cos(n_{2}\pi)b _{2}=0 \tag{33b}\] Solving Eqs. (32) and (33) simultaneously gives: \[\big{(}\cos(n_{1}\pi)\mathrm{e}^{\mathrm{i}q}-\cos(n_{2}\pi)\big{)}a_{1}=0 \tag{34a}\] \[\big{(}\cos(n_{1}\pi)\mathrm{e}^{\mathrm{i}q}-\cos(n_{2}\pi)\big{)}b_{1}=0 \tag{34b}\] Equation (34) represents an eigenvalue problem with the eigenvectors being \(\{a_{1}\ b_{1}\}^{\mathrm{T}}=\{1\ 0\}^{\mathrm{T}}\) and \(\{a_{1}\ b_{1}\}^{\mathrm{T}}=\{0\ 1\}^{\mathrm{T}}\). As such, we arrive at two different mode shape equations. The first corresponds to \(a_{1}=a_{2}=1\) and \(b_{1}=b_{2}=0\), i.e., \[u_{1}(x) =\cos\left(n_{1}\pi\frac{x}{\ell_{1}}\right);\ \ \ \ x\in[-\ell_{1},0] \tag{35a}\] \[u_{2}(x) =\cos\left(n_{2}\pi\frac{x}{\ell_{2}}\right);\ \ \ \ x\in(0,\ell_{2}] \tag{35b}\] which, interestingly, is independent of the impedance contrast \(\beta\) between the two PnC layers. The mode shapes in Eq. (35) mandate that the normalized amplitude at the interface is equal to one. Similarly, a second solution is found by using Eq. (32b) and applying \(b_{1}=1-\beta\) and \(a_{1}=a_{2}=0\), resulting in the following mode shape: \[u_{1}(x) =(1-\beta)\sin\left(n_{1}\pi\frac{x}{\ell_{1}}\right);\ \ \ \ x\in[-\ell_{1},0] \tag{36a}\] \[u_{2}(x) =(1+\beta)\sin\left(n_{2}\pi\frac{x}{\ell_{2}}\right);\ \ \ \ x\in(0,\ell_{2}] \tag{36b}\] showing that mode shapes from this second solution exhibit an amplitude of zero at the interface between the layers. Upon examining the modes shapes in Eqs. (35) and (36), it becomes clear that the deformation "shape" of each layer is independent of the other as inferred from the argument of the cosine and sine functions in both equations. The "amplitude", however, of mode shapes obtained from Eq. (36) depends on the impedance contrast between the two layers, as implied by the \((1\mp\beta)\) coefficient. The independent deformation shapes in each layer are attributed to the fact that Eq. (35) is merely a combination of free-free mode shapes for layers 1 and 2 if they are to be stand-alone structures. Likewise, Eq. (36) describes fixed-fixed mode shapes for layers 1 and 2, only scaled by the \((1\mp\beta)\) term. This intriguing relationship between the mode shapes of a PnC and those of its constitutive layers at bandgap closing is graphically summarized in Figure 4(a). The rightmost panel of Figure 4(b) shows the mode shapes for the case of \(|\alpha|=1/3\) and \(\beta=-0.75\). Here, we chose \(\ell_{1}/\ell=0.6\) and thus \(\ell_{2}/\ell=0.4\). The two modes derived earlier are normalized such that the maximum amplitude is unity and they are plotted at the three frequencies corresponding to bandgap closings within the range \(\Omega\in(0,6\pi)\), namely \(3\pi/2\), \(3\pi\), and \(9\pi/2\). The spatial frequency of the mode shapes is controlled by the value of \(n_{1,2}\). As can be inferred from Eq. (27), flipping the sign of \(\alpha\) switches the values of \(n_{1}\) and \(n_{2}\). 
This is graphically shown in Figure 4(b) where the deformation shape of layer 1 at \(\alpha=1/3\) becomes identical to that of layer 2 at \(\alpha=-1/3\), and vice versa, at any of the three bandgap closing frequencies shown (Understandably, the deformation shape spans a shorter or larger distance when the sign of \(\alpha\) is swapped to accommodate for the different lengths of the individual layers). For validation, all of the analytically-obtained results shown in Figure 4(b) are verified via a finite element model (FEM) implementing two-node rod elements [43], and shown as dashed lines in all the plotted mode shapes. The leftmost panel of Figure 4(b) shows several unique features of the dispersion diagrams of the bi-layered PnC and its two constitutive layers. The latter are given by two sets of folded lines described by \(\Omega(1\pm\alpha)=q\) (black and blue dashed lines). The red dashed lines indicate the locations at which the two sets fold at the same frequency, indicating a bandgap closing of the bi-layered PnC as shown. Finally, it can also be shown that \(n_{1,2}\) indicate precisely the number of folded lines that the dispersion relations of the individual layers have up to each bandgap closing of the bi-layered unit cell. For example, consider the first bandgap closing at \(\Omega=3\pi/2\). The dispersion relation \(\Omega(1+\alpha)=q\) before and up to that frequency consists of exactly two folded lines, while \(\Omega(1-\alpha)=q\) consists of one folded line, indicating values of \(n_{1}=2\) and \(n_{2}=1\). Using these values of \(n_{1,2}\), it immediately follows that \(\alpha=1/3\) by using Eq. (30), as expected. ### Bandgap transitions with varying \(\alpha\) and \(\beta\) The observations drawn in sections 3.2 and 3.3 regarding rational and irrational values of \(\alpha\) are in fact independent of the chosen value of \(\beta\). As a demonstration, Figure 5 shows the width \(\Delta\Omega\) of the first six bandgaps over the entire range of \(\alpha\) and \(\beta\) values. The following observations can be made: 1. Confirming the bandgap closing rules observed in Figure 2, the number of the zero-width bandgap is directly related to the value of \(\alpha\). For example, the fourth bandgap (which is even-numbered) closes at \(\alpha=\pm 1/2\) as expected, with the closed bandgap number being twice the denominator value (\(\alpha_{d}=2\)). Similarly, the fifth bandgap (which is odd-numbered) closes at \(\alpha=\pm 1/5\) and \(\alpha=\pm 3/5\), which are both ratios of odd numbers, and with the closed bandgap number matching the denominator value (\(\alpha_{d}=5\)). Finally, the sixth bandgap closes at both \(\alpha=\pm 1/3\) and \(\alpha=\pm 2/3\), which represent rational values of identical and different numerator-denominator parity, respectively. 2. Including the limiting case of \(|\alpha|=1\), the number of times a given bandgap closes is equal to its number plus one. For instance, the third bandgap closes four times at \(\alpha=\pm 1/3\) and \(\alpha=\pm 1\). 3. The special case of \(\alpha=0\) results in the closing of all even-numbered bandgaps, further confirming the result of Figure 2(a). 4. The special case of \(\beta=0\) forces all bandgaps to close regardless of the value of \(\alpha\) (as reported in [39]). It is also of interest to understand how bandgap limits behave as the value of \(\beta\) changes at a given \(\alpha\), as shown in Figures 6(a) and (b). 
Bragg bandgaps initiate with a non-zero impedance contrast \(\beta\) at frequencies at which the linear dispersion relation folds within the irreducible Brillouin zone, as shown in Figure 6(a), and grow in width (\(\Delta\Omega\)) with higher contrast values. As the contrast \(\beta\) approaches the limiting value of unity, the dispersion branches become flat. The growth of \(\Delta\Omega\) with increasing magnitude of \(\beta\) is further emphasized in Figure 6(b), and is shown to be symmetric about \(\beta=0\). It can be seen that even or odd-numbered bandgaps close when the two solutions of Eq. (29) match, regardless of the value of \(\beta\). These closings are denoted with dashed lines in Figure 6(b). The behavior is quite different when observing the evolution of bandgap limits with a varying \(\alpha\) at specific values of \(\beta\), which is depicted in Figure 6(c). As the value of \(\alpha\) changes, the locus of the bandgap limits oscillates in a manner which increases at higher frequencies. Furthermore, the amplitude (i.e., frequency width) of these oscillations grows as the value of \(\beta\) increases. These oscillatory profiles have nodal points at the locations where the bandgap limit curves intersect, which represent rational values of \(\alpha\). At such nodes, the curves corresponding to the Bragg condition established in Eq. (29), and shown as dotted black lines, also intersect, thus constituting the requisite condition for bandgap closing.

Figure 5: Contours depicting the width \(\Delta\Omega\) of the first six bandgaps of a bi-layered PnC rod across the entire design space of \(\alpha\) and \(\beta\). Bandgap numbers 1 through 6 are placed at the right top corner of each subplot. As can be seen, \(\beta=0\) closes all bandgaps regardless of the value of \(\alpha\). \(\alpha=0\) closes all even-numbered bandgaps. Additionally, zero-width bandgaps occur at select rational values of \(\alpha\) indicated by the vertical dashed lines, the location of which depends on the bandgap number.

Figure 6: (a) Dispersion diagram of a bi-layered PnC rod with \(\alpha=1/2\) and the two layers having a varying impedance contrast ranging from \(\beta=0\) to \(\beta=1\). The \(\beta=0\) case, which is shown as dashed orange lines in all plots for reference, represents a bi-layered PnC unit cell with zero impedance contrast between its two layers (i.e., two layers with the same impedance) and shows no bandgap emergence (i.e., \(\Delta\Omega=0\)). As \(\beta\) increases, bandgaps initiate at the folding frequencies and the bandgap width \(\Delta\Omega\) continues to grow until the dispersion branches become completely flat at \(\beta=1\). Evolution of bandgap limits of a bi-layered PnC rod for: (b) varying \(\beta\) at specific \(\alpha\) values of \(0\), \(1/3\), \(1/2\), and \(2/3\), and (c) varying \(\alpha\) at specific \(\beta\) values of \(0.1\), \(0.3\), \(0.5\), and \(0.7\). In (b), the dashed lines represent frequencies where bandgap limits match the Bragg conditions derived in Eq. (29), which are evidently not a function of \(\beta\). These conditions are a function of \(\alpha\) and are, therefore, tracked via the same dashed lines in (c). Whenever the two solutions in Eq. (29) match, these dashed lines intersect, resulting in identical lower and upper bandgap frequencies, i.e., a closing of the corresponding bandgap.

## 4 Generalizing bandgap closing conditions to multi-layered PnCs While the bandgap closing conditions derived thus far have been mathematically proven for a bi-layered PnC, it is imperative to likewise demonstrate that similar features emerge in a PnC with an arbitrary \(L>2\) number of layers. To generalize our findings, consider a multi-layered unit cell of a PnC rod where all of the constitutive layers have distinct mechanical and geometrical properties. Analogous to the theoretical framework developed earlier, the frequencies corresponding to the Bragg conditions of each of the individual layers can be matched as follows: \[n_{1}\omega_{1}=n_{2}\omega_{2}=\cdots=n_{L}\omega_{L} \tag{37}\] which can be alternatively written as: \[\frac{\omega_{s}}{\omega_{j}}=\frac{n_{j}}{n_{s}} \tag{38}\] such that \(j\neq s\). To locate the frequencies where bandgaps close, we need to pursue a non-dimensional parameter which is reminiscent of the frequency contrast \(\alpha\) in the bi-layered PnC. To do so, a generalized harmonic mean can be introduced as follows: \[\omega_{0}=\left(\frac{\sum_{s=1}^{L}\frac{1}{\omega_{s}}}{L}\right)^{-1} \tag{39}\] which can be rearranged to read: \[\frac{L}{\omega_{0}}=\sum_{s=1}^{L}\frac{1}{\omega_{s}} \tag{40}\] Next, the frequencies \(\omega_{s}\) can be rewritten as a function of the \(j^{\text{th}}\) frequency \(\omega_{j}\) by using Eq. (38), which, after a few mathematical manipulations, becomes: \[\frac{1}{\omega_{j}}=\frac{L}{\omega_{0}}\frac{n_{j}}{\sum_{s=1}^{L}n_{s}} \tag{41}\] Note that Eq. (40) is recovered if all the components \(1/\omega_{j}\) in Eq. (41) are added. Also, it can be clearly seen that the last term in Eq. (41) is a rational number and, as a result, can be written in a form similar to \(\alpha\) as follows: \[\frac{\alpha_{j}}{\alpha_{d}}=\frac{n_{j}}{\sum_{s=1}^{L}n_{s}} \tag{42}\] giving the denominator \(\alpha_{d}=\sum_{s=1}^{L}n_{s}\) the same role it played in the bi-layered PnC case, defining the frequency at which a bandgap closes using the equation: \[\Omega_{p}=\frac{1}{L}\pi p\alpha_{d} \tag{43}\] The numerator \(\alpha_{j}=n_{j}\), on the other hand, provides the number of branches a linear dispersion relation of an individual layer has between Bragg conditions. A couple of additional remarks can be made: 1. If \(\alpha_{d}\) is odd, odd and even-numbered bandgaps located at the frequencies given by Eq. (43) will close.

\begin{table} \begin{tabular}{l l l} \hline Material & Density & Young's Modulus \\ \hline ABS & 1040 kg/m\({}^{3}\) & 2.4 GPa \\ Aluminum & 2700 kg/m\({}^{3}\) & 69 GPa \\ Brass & 8530 kg/m\({}^{3}\) & 110 GPa \\ Magnesium Alloy & 1800 kg/m\({}^{3}\) & 42 GPa \\ Steel & 7850 kg/m\({}^{3}\) & 210 GPa \\ \hline \end{tabular} \end{table} Table 1: Material properties used in the multi-layered PnC rod unit cells used in Figure 7.

Figure 7: Dispersion diagrams of PnC rods with multi-layered unit cells showing bandgap closings at \(L\Omega_{p}=\pi p\alpha_{d}\). (a-c) Three-layered unit cells (\(L=3\)) with the following \(n_{1}\)-\(n_{2}\)-\(n_{3}\) harmonic combinations: (a) 1-1-1 (bandgap closings at \(L\Omega_{p}=3\pi p\)), (b) 1-2-1 (bandgap closings at \(L\Omega_{p}=4\pi p\)), and (c) 1-2-3 (bandgap closings at \(L\Omega_{p}=6\pi p\)). (d-e) Four-layered unit cells with the following \(n_{1}\)-\(n_{2}\)-\(n_{3}\)-\(n_{4}\) harmonic combinations: (d) 1-1-1-1 (bandgap closings at \(L\Omega_{p}=4\pi p\)) and (e) 1-2-2-1 (bandgap closings at \(L\Omega_{p}=6\pi p\)). (f) Five-layered unit cell with the following \(n_{1}\)-\(n_{2}\)-\(n_{3}\)-\(n_{4}\)-\(n_{5}\) harmonic combination: 1-1-2-1-1 (bandgap closings at \(L\Omega_{p}=6\pi p\)).
Note that \(p\in\mathbb{N}^{+}\) represents all non-zero natural numbers. However, an even \(\alpha_{d}\) will only close even-numbered bandgaps according to the same equation. 2. If the chosen values of \(n_{s}\) have a common factor, a cancellation of the common factor is required for Eq. (43) to correctly predict the frequencies at which bandgaps close. For example, if \(n_{1}=2\), \(n_{2}=4\), and \(n_{3}=6\) in a three-layered PnC (i.e., \(L=3\)), the number 2 is a common factor. As such, bandgap closing frequencies should be computed using \(n_{1}=1\), \(n_{2}=2\), and \(n_{3}=3\) instead. Figure 7 shows examples of multi-layered PnC with \(L=3,4,\text{and }5\). The materials used in the three-layered PnC rod are ABS, Aluminum, and Steel in that particular order. The four-layered and five-layered PnC rods add Brass and Magnesium alloy, respectively. All material properties are listed in Table 1. In all cases, the area and total length of all layers are constant and equal to \(A_{s}=400\) mm\({}^{2}\) and \(\ell=100\) mm, respectively. The individual lengths of the layers are calculated by combining \(\ell=\sum_{s=1}^{L}\ell_{s}\) and \(\omega_{s}=\ell_{s}/c_{s}\) with Eq. (37). These equations can be cast into a matrix form as follows: \[\begin{bmatrix}\frac{1}{c_{1}n_{1}}&1&1&\cdots&1\\ \frac{1}{c_{1}n_{1}}&-\frac{1}{c_{2}n_{2}}&0&\cdots&0\\ \frac{1}{c_{1}n_{1}}&0&-\frac{1}{c_{3}n_{3}}&\ddots&\vdots\\ \frac{1}{c_{1}n_{1}}&\vdots&\ddots&\ddots&0\\ \frac{1}{c_{1}n_{1}}&0&\cdots&0&-\frac{1}{c_{L}n_{L}}\end{bmatrix}\begin{split} \ell_{1}\\ \ell_{2}\\ \vdots\\ \ell_{L}\end{split}=\begin{cases}\ell\\ 0\\ \vdots\\ 0\end{cases} \tag{44}\] Starting with \(L=3\), we show combinations of the harmonics \(n_{1}\)-\(n_{2}\)-\(n_{3}\) of (a) 1-1-1, (b) 1-2-1, and (c) 1-2-3. For these PnC configurations, we predict bandgap closings to occur at \(\alpha_{d}=\sum_{s=1}^{L}n_{s}\) which correspond to \(L\Omega_{p}=3\pi p,4\pi p\), and \(6\pi p\) for (a), (b), and (c), respectively, as seen in the respective figures. Similar behavior can be seen in the \(L=4\) (1-1-1-1 and 1-2-2-1) and \(L=5\) (1-1-2-1-1) cases in (d) through (f). The bandgap width in all cases (a)-(f) has a periodic pattern which repeats itself in between bandgap closings. Additionally, the dispersion branches remain mirror symmetric about the bandgap closings, which is synonymous with the bi-layered (\(L=2\)) case. Finally, for completeness, we point out that the existence of two identical layers in a multi-layered unit cell of a PnC rod affects the calculation of \(\alpha_{j}/\alpha_{d}\) in Eq. (42). This is because of a hidden common factor that exists between the chosen vibrational modes that are intended to be matched. When faced with such a special case, the sum of the modes of the identical layers can be calculated and they can then be treated as a single continuous layer, even if they are not adjacent to one another. For instance, consider a four-layered unit cell made of Aluminum-Steel-Aluminum-Brass. We choose \(n_{1}=1\), \(n_{2}=8\), \(n_{3}=3\), and \(n_{4}=4\) and then calculate \(n^{\prime}=n_{1}+n_{3}=4\) for a collective mode for the Aluminum layers. As such, the common factor between \(n_{2}\), \(n_{4}\) and \(n^{\prime}\) is 4, and, as a result, \(\alpha_{d}=\sum_{s}n_{s}/4\). 
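The generalized closing condition can also be checked against a direct transfer-matrix calculation. The sketch below assembles the unit cell of the 1-2-3 ABS/Aluminum/Steel example of Figure 7(c) using the material data of Table 1, the standard \(2\times 2\) state-vector (displacement, axial force) transfer matrix of a uniform rod segment, and layer lengths obtained from matching the layer resonances (Eq. (37)) together with \(\sum_{s}\ell_{s}=100\) mm; the non-dimensional frequency is taken here as \(\Omega=\omega\sum_{s}(\ell_{s}/c_{s})/L\), an assumption made so that the closings fall at \(L\Omega_{p}=\pi p\alpha_{d}\) as in Eq. (43). At every predicted closing \(L\Omega_{p}=6\pi p\), the trace of the unit-cell transfer matrix equals \(\pm 2\), i.e., the two Bloch branches touch.

```python
import numpy as np

# Sketch: check the closing condition L*Omega_p = pi*p*alpha_d for the 1-2-3 cell of Figure 7(c).
rho = np.array([1040.0, 2700.0, 7850.0])   # kg/m^3 (ABS, Aluminum, Steel), from Table 1
E   = np.array([2.4e9, 69e9, 210e9])       # Pa, from Table 1
n   = np.array([1, 2, 3])                   # matched harmonics n_1-n_2-n_3
A   = 400e-6                                # cross-sectional area, m^2
c   = np.sqrt(E / rho)                      # sonic speeds
ell = 0.100 * n * c / np.sum(n * c)         # lengths with sum = 100 mm and l_s/c_s proportional to n_s
tau = ell / c                               # per-layer travel times

def trace_T(omega):
    """Trace of the unit-cell transfer matrix at angular frequency omega."""
    T = np.eye(2)
    for s in range(3):
        k = omega / c[s]
        z = E[s] * A * k                    # axial stiffness factor E*A*k of layer s
        Ts = np.array([[np.cos(k * ell[s]),      np.sin(k * ell[s]) / z],
                       [-z * np.sin(k * ell[s]), np.cos(k * ell[s])]])
        T = Ts @ T
    return np.trace(T)

L, alpha_d = 3, int(np.sum(n))              # alpha_d = 6 for the 1-2-3 cell
for p in (1, 2):
    Omega_p = np.pi * p * alpha_d / L       # predicted closing frequency, Eq. (43)
    omega = L * Omega_p / np.sum(tau)       # convert back to a physical angular frequency
    print(f"p = {p}:  L*Omega_p = {L * Omega_p / np.pi:.1f}*pi,  |tr T| = {abs(trace_T(omega)):.6f}")
# Expected |tr T| = 2 (up to rounding) at each predicted closing, matching the bandgap
# closings at L*Omega_p = 6*pi*p seen in Figure 7(c).
```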
## 5 Concluding Remarks The qualitative and quantitative criteria governing bandgap formation, distribution, and closing conditions were established in a generalized class of rod-based phononic crystals (PnCs) undergoing longitudinal deformations. A transfer-matrix-based approach was used to generate the wave dispersion profiles and develop expressions for bandgap limits and frequencies of maximum attenuation. By implementing two non-dimensional contrast parameters, a frequency contrast \(\alpha\) and an impedance contrast \(\beta\), which stem from the parameters of the PnC's constitutive layers, the conditions that lead to diminishing bandgaps were derived in closed form, showing that \(\alpha\) being a rational number is a necessary condition for bandgap closing in bi-layered PnCs. Furthermore, it was shown that, depending on the parity of the integer numerator and denominator values of \(\alpha\), the pattern and frequency location of the bandgap closing can be predicted as a function of the rational number \(\alpha\). It was found that the bandgap widths \(\Delta\Omega\) of a PnC with a rational \(\alpha\) exhibit a periodic profile, which perfectly repeats itself every time a bandgap closes. This pattern was correlated to the resonances of the individual layers of the PnC, and it was proven that matching the natural frequencies of the individual layers (if they were to be treated as stand-alone entities) forces a bandgap to close at the same frequencies. An additional connection was made between bandgap closing criteria and the physics underlying the mode shapes of the individual layers forming the PnC unit cell at different boundary conditions. The conclusions drawn from the bi-layered case were generalized to a PnC comprised of three or more layers, where it was similarly shown that the dispersion branches of the multi-layered PnC exhibit mirror symmetry about the frequencies at which bandgaps close. In fully analytical terms, it was also proven that the resonance matching condition for bandgap closing is independent of the number of layers forming the PnC's unit cell. Resolving the patterns and formation mechanisms of multi-bandgap dispersion profiles is particularly useful for a wide array of new and exciting topics, given the rising interest in exciting applications that require an understanding of how bandgaps close (e.g., topological transition). In tandem, the need to finely tailor phononic band structures remains highly critical for a broad range of elastoacoustic metamaterials. As such, the developments established herein provide a great asset for bandgap engineering in future configurable and tunable PnCs. ## Acknowledgements The authors acknowledge the support of this work by the US National Science Foundation through CMMI research award no. 1847254 (CAREER). ## Appendix A Deriving the dispersion relation from the transfer matrix When operating under linear reciprocal conditions, the determinant of the transfer matrix is unity [44]. By combining \(|\mathbf{T}-\lambda\mathbf{I}|=0\), where \(\lambda\) is an eigenvalue of \(\mathbf{T}\), with \(|\mathbf{T}|=1\), the following characteristic equation is derived: \[\lambda^{2}-\mathrm{tr}(\mathbf{T})\lambda+1=0 \tag{12}\] where, once again, \(\mathrm{tr}(\mathbf{T})\) is the trace of \(\mathbf{T}\). As a result, the eigenvalues of \(\mathbf{T}\) are derived from the roots of Eq. 
(12), leading to: \[\lambda_{\pm}=\frac{\mathrm{tr}(\mathbf{T})}{2}\pm\sqrt{\left(\frac{\mathrm{tr}(\mathbf{T})}{2}\right)^{2}-1} \tag{13}\] Examining Eq. (13) reveals that \(\lambda_{-}+\lambda_{+}=\mathrm{tr}(\mathbf{T})\). Upon expressing the eigenvalues as an exponential function of the wavenumber, i.e., \(\lambda_{\pm}=e^{\pm\mathbf{i}q}\), we arrive at the following dispersion relation: \[\mathrm{tr}(\mathbf{T})=2\cos(q) \tag{14}\] ## Appendix B Special case of \(\boldsymbol{\alpha=0}\) A special configuration of the bi-layered PnC rod takes place when the frequency contrast \(\alpha\) is equal to zero. This is realized when the ratio of the length of layer 1 to that of layer 2 is equal to the sonic speed ratio between the two layers, i.e., \[\frac{\ell_{1}}{\ell_{2}}=\frac{c_{1}}{c_{2}} \tag{15}\] Upon substituting \(\alpha=0\) in the dispersion relation of Eq. (10), \(\Omega\) can be obtained analytically as follows: \[\Omega=k\pi+\frac{1}{2}\cos^{-1}\left(\beta^{2}+(1-\beta^{2})\cos(q)\right) \tag{16a}\] \[\Omega=(k+1)\pi-\frac{1}{2}\cos^{-1}\left(\beta^{2}+(1-\beta^{2})\cos(q)\right) \tag{16b}\] for \(k\in\mathbb{N}_{0}\), where \(\mathbb{N}_{0}\) is the set of all natural numbers including zero. The bandgap lower and upper limits can be respectively found using the following expressions: \[\Omega_{l}=k\pi+\frac{1}{2}\cos^{-1}\left(2\beta^{2}-1\right) \tag{17a}\] \[\Omega_{u}=(k+1)\pi-\frac{1}{2}\cos^{-1}\left(2\beta^{2}-1\right) \tag{17b}\] which confirms that all bandgaps have an identical width that is given by: \[\Delta\Omega=\pi-\cos^{-1}\left(2\beta^{2}-1\right) \tag{18}\] Finally, as stated in Sec. 3.1, setting \(\alpha=0\) in Eq. (12) shows that \(\Omega=r\pi/2\) is the frequency of maximum attenuation inside the bandgap. The attenuation constant corresponding to this frequency is given by: \[q_{\mathrm{I}}=\Im\left[\cos^{-1}\left(\frac{\beta^{2}+1}{\beta^{2}-1}\right)\right] \tag{19}\]
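As a quick numerical cross-check of Eqs. (17)-(18), the snippet below scans the \(\alpha=0\) dispersion relation written as \(\cos(q)=\left(\cos(2\Omega)-\beta^{2}\right)/\left(1-\beta^{2}\right)\) (the inversion of Eq. (16), assumed here) and measures the bandgap widths directly; \(\beta=0.6\) is an arbitrary choice.

```python
import numpy as np

beta = 0.6
Om = np.linspace(1e-6, 2 * np.pi, 400001)
cq = (np.cos(2 * Om) - beta**2) / (1 - beta**2)      # alpha = 0 dispersion, inverted Eq. (16)
stop = np.abs(cq) > 1.0                               # bandgap: no real Bloch wavenumber
edges = Om[np.flatnonzero(np.diff(stop.astype(int)))]
widths = edges[1::2] - edges[0::2]
print("numerical gap widths :", np.round(widths, 4))
print("Eq. (18) prediction  :", round(np.pi - np.arccos(2 * beta**2 - 1), 4))
# Only the odd-numbered gaps register (alpha = 0 closes all even-numbered gaps);
# both measured widths should match the single value predicted by Eq. (18).
```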
2304.09281
Optimal Eigenvalue Approximation via Sketching
Given a symmetric matrix $A$, we show from the simple sketch $GAG^T$, where $G$ is a Gaussian matrix with $k = O(1/\epsilon^2)$ rows, that there is a procedure for approximating all eigenvalues of $A$ simultaneously to within $\epsilon \|A\|_F$ additive error with large probability. Unlike the work of (Andoni, Nguyen, SODA, 2013), we do not require that $A$ is positive semidefinite and therefore we can recover sign information about the spectrum as well. Our result also significantly improves upon the sketching dimension of recent work for this problem (Needell, Swartworth, Woodruff FOCS 2022), and in fact gives optimal sketching dimension. Our proof develops new properties of singular values of $GA$ for a $k \times n$ Gaussian matrix $G$ and an $n \times n$ matrix $A$ which may be of independent interest. Additionally we achieve tight bounds in terms of matrix-vector queries. Our sketch can be computed using $O(1/\epsilon^2)$ matrix-vector multiplies, and by improving on lower bounds for the so-called rank estimation problem, we show that this number is optimal even for adaptive matrix-vector queries.
William Swartworth, David P. Woodruff
2023-04-18T20:37:45Z
http://arxiv.org/abs/2304.09281v1
# Optimal Eigenvalue Approximation via Sketching

###### Abstract

Given a symmetric matrix \(A\), we show from the simple sketch \(GAG^{T}\), where \(G\) is a Gaussian matrix with \(k=O(1/\epsilon^{2})\) rows, that there is a procedure for approximating all eigenvalues of \(A\) simultaneously to within \(\epsilon\|A\|_{F}\) additive error with large probability. Unlike the work of (Andoni, Nguyen, SODA, 2013), we do not require that \(A\) is positive semidefinite and therefore we can recover sign information about the spectrum as well. Our result also significantly improves upon the sketching dimension of recent work for this problem (Needell, Swartworth, Woodruff, FOCS 2022), and in fact gives optimal sketching dimension. Our proof develops new properties of singular values of \(GA\) for a \(k\times n\) Gaussian matrix \(G\) and an \(n\times n\) matrix \(A\) which may be of independent interest. Additionally we achieve tight bounds in terms of matrix-vector queries. Our sketch can be computed using \(O(1/\epsilon^{2})\) matrix-vector multiplies, and by improving on lower bounds for the so-called rank estimation problem, we show that this number is optimal even for adaptive matrix-vector queries.

## 1 Introduction

Estimating the eigenvalues of a real symmetric matrix has numerous applications in data analysis, engineering, optimization, spectral graph theory, and many other areas. As modern matrices may be very large, traditional algorithms based on the singular value decomposition (SVD), subspace iteration, or Krylov methods may be too slow. Therefore, a number of recent works have looked at the problem of creating a small summary, or sketch, of the input matrix, so that from the sketch one can approximate each of the eigenvalues well. Indeed, in the realm of sublinear algorithms, this problem has been studied in the streaming model [1], the sampling and property testing models [1, 2, 3, 4, 10, 11, 12], and matrix-vector and vector-matrix-vector query models [1, 13, 14, 15, 16]; the latter model also contains so-called bilinear sketches. In this work we focus on designing linear sketches for eigenvalue estimation. Namely, we are interested in estimating the spectrum of a real symmetric matrix \(A\) up to additive error \(\epsilon\|A\|_{F}\) via the bilinear sketch \(GAG^{T}\), where \(G\in\mathbb{R}^{k\times n}\) is a matrix of i.i.d. \(N(0,1/k)\) random variables, i.e., Gaussian of mean zero and variance \(1/k\). The algorithm should succeed with large constant probability in estimating the entire spectrum. This is a very natural sketch, and unsurprisingly it has been used before, both in [1] to estimate eigenvalues with an additive error of roughly \(\epsilon\sum_{i=1}^{n}|\lambda_{i}(A)|\), where \(\lambda_{i}(A)\) are the eigenvalues of \(A\), and in [16] for testing if a matrix is positive semidefinite (PSD). We note that the additive error of \(\epsilon\|A\|_{1}=\epsilon\sum_{i=1}^{n}|\lambda_{i}(A)|\) can be significantly weaker than our desired \(\epsilon\left\|A\right\|_{F}\) error, as \(\left\|A\right\|_{F}\) can be as small as \(\frac{\left\|A\right\|_{1}}{\sqrt{d}}\). This is analogous to the \(\ell_{2}\) versus \(\ell_{1}\) guarantee for heavy hitters in the data stream model, see, e.g., [14]. It may come as a surprise that \(GAG^{T}\) has any use at all for achieving additive error in terms of \(\epsilon\|A\|_{F}\)! Indeed, the natural way to estimate the \(i\)-th eigenvalue of \(A\) is to output the \(i\)-th eigenvalue of \(GAG^{T}\), and this is exactly what the algorithm of [1] does.
However, by standard results for trace estimators, see, e.g., [17] and the references therein, the trace of \(GAG^{T}\) is about the trace of \(A\), which can be a \(\sqrt{d}\) factor larger than \(\|A\|_{F}\), and thus the estimation error can be much larger than \(\epsilon\|A\|_{F}\). This is precisely why [1] only achieves additive \(\epsilon\|A\|_{1}\) error with this sketch. Moreover, the work of [16] does use sketching for eigenvalue estimation, but uses a different, and much more involved sketch based on ideas for low rank approximation of PSD matrices [10], and achieves a much worse \(\tilde{O}(k^{2}/\epsilon^{12})\) number of measurements to estimate each of the top \(k\) eigenvalues, including their signs, up to additive error \(\epsilon\|A\|_{F}\). Here we use \(\tilde{O}()\) notation to suppress \(\operatorname{poly}(\log(n/\epsilon))\) factors. Note that for \(k>1/\epsilon^{2}\), one can output \(0\) as the estimate to \(\lambda_{k}\), and thus the sketch size of [16] is \(\tilde{O}(1/\epsilon^{16})\). To achieve error in terms of \(\|A\|_{F}\), the work of [1] instead considers the sketch \(GAH^{T}\), where \(G,H\in\mathbb{R}^{k\times n}\) are independent Gaussian matrices. However, the major issue with this sketch is it inherently loses sign information of the eigenvalues. Indeed, their algorithm for reconstructing the eigenvalues uses only the sketched matrix, while forgetting \(G\) and \(H\) (more specifically they only use the singular values of this matrix). However the distributions of \(G\) and \(H\) are invariant under negation, so the sketch alone cannot even distinguish \(A\) from \(-A.\) In addition to this, even if one assumes the input \(A\) is PSD, so that the signs are all positive, their result for additive error \(\epsilon\|A\|_{F}\) would give a suboptimal sketching dimension of \(k=\tilde{O}(1/\epsilon^{3})\); see further discussion below. ### Our Contributions Optimal Sketching Upper Bound.We obtain the first optimal bounds for eigenvalue estimation with the natural \(\epsilon\|A\|_{F}\) error via sketching. We summarize our results compared to prior work in Table 1. We improve over [1, 16] in the following crucial ways. Qualitatively, we drop the requirement that \(A\) is PSD. As mentioned, the eigenvalues of our sketch \(GAG^{T}\) may not be good approximations to the eigenvalues of \(A\). In particular, we observe that the sketched eigenvalues concentrate around \(\frac{1}{k}\operatorname{Tr}(A)\), which could be quite large, on the order of \(\frac{\sqrt{d}}{k}\left\|A\right\|_{F}\). By shifting the sketched eigenvalues by \(-\frac{1}{k}\operatorname{Tr}(A)\) via an additional trace estimator we compute, this enables us to correct for this bias, and we are able to show that the resulting eigenvalues are good approximations to those of \(A.\) In order to perform this correction we in fact require the sketched eigenvalues to concentrate around \(\frac{1}{k}\operatorname{Tr}(A)\). Obtaining this concentration is where we require Gaussianity in our argument1. We leave it as an open question to obtain similar concentration from common sketching primitives. Footnote 1: However in the appendix we give a faster sketch for PSD matrices. Comparison with existing work. Quantitatively, the analysis of [1] for the related \(GAH^{T}\) sketch works by splitting the spectrum into a "head" containing the large eigenvalues, and a "tail" containing the remaining eigenvalues. 
The authors then incur an additive loss from the operator norm of the tail portion of the sketch, and show that the head portion of the sketch approximates the corresponding eigenvalues to within a multiplicative error. Notably, their multiplicative constant is uniform over the large eigenvalues. This is a stronger guarantee than we need. For example, to approximate an eigenvalue of \(1/2\) to within \(\epsilon\) additive error, we need a \((1\pm O(\epsilon))\) multiplicative guarantee. However to approximate an eigenvalue of \(2\epsilon\) to within \(\epsilon\) additive error, a \((1\pm O(1))\) multiplicative guarantee suffices. In other words, smaller eigenvalues require less stringent multiplicative guarantees to achieve the same additive guarantee. We leverage this observation in order to get a uniform _additive_ guarantee for the large eigenvalues, while not relying on a uniform multiplicative guarantee. Thus, we improve the worst-case \(k=O(1/\epsilon^{3})\) bound of [1] to a \(k=O(1/\epsilon^{2})\) bound for an \(\epsilon\|A\|_{F}\) error guarantee. Indeed, one can show if the eigenvalues of \(A\) are, in non-increasing order, \[\frac{c_{d}}{\sqrt{1}},\frac{c_{d}}{\sqrt{2}},\frac{c_{d}}{\sqrt{3}},\frac{c_ {d}}{\sqrt{4}},\ldots,\frac{c_{d}}{\sqrt{d}},\] where \(c_{d}=O(\log^{-1/2}d)\) so that \(\left\|A\right\|_{F}=1\), then \(O(1/\epsilon^{3})\) is the bound their Theorem 1.2 and corresponding Lemma 3.5 would give. To see this, their Lemma 3.5, which is a strengthening of their Theorem 1.2, states that for \(i=1\ldots k\), \[\left|\lambda_{i}^{2}(GAH^{T})-\lambda_{i}^{2}(A)\right|\leq\alpha\lambda_{i} ^{2}(A)+O\left(\lambda_{k}^{2}(A)\right)+O\left(\frac{\alpha^{2}}{k}\left\|A_ {-k}\right\|_{F}^{2}\right), \tag{1}\] with sketching dimension \(O(k/\alpha^{2})\) on each side (and hence \(O(k^{2}/\alpha^{4})\) total measurements). Suppose \(\left\|A\right\|_{F}=O(1)\) and that we would like to use this bound to approximate \(\lambda_{\ell}(A)>\alpha\) to within \(\epsilon\) additive error. After adjusting for the squares, this is equivalent to bounding the left-hand side of \begin{table} \begin{tabular}{c c c} \hline \hline \multicolumn{1}{c}{Sketching dimension} & Reference & Notes \\ \hline \(\tilde{O}(1/\epsilon^{6})\) & [1] & Loss sign information \\ \(\tilde{O}(1/\epsilon^{16})\) & [20] & \\ \(\Omega(1/\epsilon^{4})\) & [20] & Lower bound \\ \(O(1/\epsilon^{4})\) & **Our Work** & \\ \hline \hline \end{tabular} \end{table} Table 1: Our work and prior work on estimating each eigenvalue of an arbitrary symmetric matrix \(A\) up to additive \(\epsilon\|A\|_{F}\) error. (1) by \(O(\epsilon\lambda_{\ell})\) for \(i=\ell.\) Obtaining such a bound from (1) requires that the first two terms on the right-hand side are bounded by \(O(\epsilon\lambda_{\ell}(A))\), i.e., that \(\alpha\leq O(\epsilon/\lambda_{\ell}(A))\) and \(\lambda_{k}^{2}(A)\leq O(\epsilon\lambda_{\ell}(A))\). For the spectrum above, we must therefore take \(k\gtrsim c_{d}\frac{\sqrt{\ell}}{\epsilon}\), which results in a sketching dimension of \[\frac{k}{\alpha^{2}}\approx\frac{c_{d}\sqrt{\ell}}{\epsilon}\cdot\frac{\lambda _{\ell}(A)^{2}}{\epsilon^{2}}=\frac{c_{d}^{3}}{\epsilon^{3}\sqrt{\ell}}\] on each side. Thus for this spectrum, [1] requires a sketching dimension of \(O(1/\epsilon^{3})\) (up to \(\log d\) factors) to approximate the largest eigenvalues of \(A\) to \(\epsilon\) additive error. 
Indeed this bound does not achieve \(O(1/\epsilon^{2})\) sketching dimension, unless \(\ell\gtrsim 1/\epsilon^{2}\), at which point \(\lambda_{\ell}(A)\leq O(\epsilon)\) and does not need to be approximated by our algorithm. We note that while [15] could also report the signs of the approximate eigenvalues, their \(\tilde{O}(1/\epsilon^{16})\) sketch size makes it considerably worse for small values of \(\epsilon\). In contrast, our sketching dimension \(k\) is optimal among all non-adaptive bilinear sketches, due to the proof of part 1 of Theorem 31 of [15] applied with \(p=2\). Indeed, the proof of that theorem gives a pair of distributions on matrices \(A\) with \(\|A\|_{F}=\Theta(1)\) for which in one distribution \(A\) is PSD, while in the other it has a negative eigenvalue of value \(-\Theta(\epsilon)\). That theorem shows \(\Omega(1/\epsilon^{4})\) non-adaptive vector-matrix-vector queries are required to distinguish the two distributions, which implies in our setting that necessarily \(k=\Omega(1/\epsilon^{2})\). Concentration of Singular Values with Arbitrary Covariance Matrices.Of independent technical interest, we give the first bounds on the singular values of \(GB\) for an \(n\times n\) matrix \(B\) and a (normalized) Gaussian matrix \(G\) with \(k\) rows when \(k\ll n\). When taken together, our upper and lower bounds on singular values show for any \(1\leq\ell\) and \(k\geq\Omega(\ell)\), that \[\sigma_{\ell}(GB)^{2}=\sigma_{\ell}(B)^{2}\pm O\left(\frac{1}{\sqrt{k}}\right) \left\|B\right\|_{F}^{2}. \tag{2}\] Although there is a large body of work on the singular values of \(GB\), to the best of our knowledge there are no quantitative bounds of the form above known. There is work upper bounding \(\|GB\|_{2}\) for a fixed matrix \(B\)[14], and classical work (see, e.g., [14]) which bounds all the singular values of \(G\) when \(B\) is the identity, but we are not aware of concrete bounds that prove concentration around \(\|GB\|_{F}^{2}\) of the form in (2) for general matrices \(B\) that we need. Optimal Adaptive Matrix-Vector Query Lower Bound.A natural question is whether adaptivity can further reduce our sketching dimension. We show that at least in the matrix-vector product model, where one receives a sequence of matrix-vector products \(Av^{1},Av^{2},\ldots,Av^{r}\) for query vectors \(v^{1},v^{2},\ldots,v^{r}\) that may be chosen adaptively as a function of previous matrix-vector products, that necessarily \(r=\Omega(1/\epsilon^{2})\). Note that our non-adaptive sketch \(GAG^{T}\) gives an algorithm in the matrix-vector product model by computing \(AG^{T}\), and so \(r=k=O(1/\epsilon^{2})\). This shows that adaptivity does not help for eigenvalue estimation, at least in the matrix-vector product model. Our hard instance is distinguishing a Wishart matrix of rank \(r\) from a Wishart matrix of rank \(r+2\) (the choice of \(r+2\) rather than \(r+1\) is simply for convenience). We first argue that for our pair of distributions, adaptivity does not help. This uses rotational invariance properties of our Wishart distribution, even conditioned on the query responses we have seen so far. In fact, our argument shows that without loss of generality, the optimal tester is a non-adaptive tester which just observes the leading principle submatrix of the input matrix \(A\). We then explicitly bound the variation distance between the distributions of a Wishart matrix of rank \(r\) and one of rank \(r+2\). 
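Returning briefly to the concentration property (2), a small empirical sanity check is sketched below; the test matrix is an arbitrary Gaussian choice and the check is illustrative rather than part of any proof.

```python
import numpy as np

# Sketch: empirical illustration of property (2), sigma_l(GB)^2 = sigma_l(B)^2 +/- O(1/sqrt(k)) ||B||_F^2.
rng = np.random.default_rng(0)
n, k = 1000, 400
B = rng.standard_normal((n, n)) / n               # arbitrary test matrix with ||B||_F = O(1)
G = rng.standard_normal((k, n)) / np.sqrt(k)      # Gaussian sketch, entries N(0, 1/k)
s_B  = np.linalg.svd(B, compute_uv=False)
s_GB = np.linalg.svd(G @ B, compute_uv=False)
fro2 = np.sum(s_B**2)                             # ||B||_F^2
for ell in (1, 5, 20):
    dev = abs(s_GB[ell - 1]**2 - s_B[ell - 1]**2)
    print(f"l = {ell:2d}:  |sigma_l(GB)^2 - sigma_l(B)^2| = {dev:.4f}   "
          f"(1/sqrt(k))*||B||_F^2 = {fro2 / np.sqrt(k):.4f}")
# The deviations should stay below a modest constant times ||B||_F^2 / sqrt(k), as in (2).
```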
We also give an alternative, but related proof based on distinguishing a random \(r\) dimensional subspace from a random \(r+2\) dimensional subspace, which may be of independent interest. As an example, we note that this lower bound immediately recovers the \(\Omega(1/\epsilon)\) matrix-vector lower bound for estimating the trace of a PSD matrix to within \((1\pm\epsilon)\) multiplicative error [14, 15], as well as the \(\Omega(1/\epsilon^{p})\) lower bound given in [13] for approximating the trace of \(A\) to additive \(\epsilon\left\|A\right\|_{p}\) error (however the bound in [13] is more refined as it captures the dependence on failure probability). These results substantially broaden a previous lower bound for the rank-estimation problem [12]. Whereas the hard instance in [12] requires some non-zero eigenvalues to be extremely small, we show that the rank estimation problem remains hard even when all nonzero eigenvalues have comparable size (or in fact, even when they are all equal). ### Additional Work on Sampling in the Bounded Entry Model Recent work has considered the spectral estimation problem for entry queries to bounded-entry matrices. The work of [15] gives an \(\widetilde{O}(1/\epsilon^{6})\) query algorithm for approximating all eigenvalues of a symmetric matrix to within \(\epsilon\left\|A\right\|_{F}\) additive error, given a row-norm sampling oracle. However it remains open whether this bound can be improved to \(\widetilde{O}(1/\epsilon^{4})\) even for principal submatrix queries. Our result shows that \(O(1/\epsilon^{4})\) queries is at least attainable under the much less restrictive model of vector-matrix-vector queries. In contrast to [15], our algorithm does not simply return the eigenvalues of our sketch. Indeed no such algorithm can exist as it would violate the one-sided lower bound of [13]. ## 2 Sketching Algorithm and Proof Outline ``` 0:\(A\in\mathbb{R}^{d\times d}\) real symmetric, \(k\in\mathbb{N}\). procedure spectrum_appx(\(A\),\(k\)) Sample \(G\in\mathbb{R}^{k\times d}\) with i.i.d. \(\mathcal{N}(0,1/k)\) entries. \(S\gets GAG^{T}\) For \(i=1,\ldots,k\), let \(\alpha_{i}=\lambda_{i}(S)-\frac{1}{k}\operatorname{Tr}(S)\) For \(i=k+1,\ldots,d\), let \(\alpha_{i}=0\) return \(\alpha_{1},\ldots,\alpha_{d}\) sorted in decreasing order endprocedure ``` **Algorithm 1** **Theorem 1**.: _Let \(A\in\mathbb{R}^{d\times d}\) be symmetric (not necessarily PSD) with eigenvalues \(\lambda_{1}\geq\ldots\geq\lambda_{d}\). For \(k\geq\Omega(1/\epsilon^{2})\), Algorithm 1 produces a sequence \((\mu_{1},\ldots,\mu_{d})\) such that \(\left|\mu_{i}-\lambda_{i}\right|<\epsilon\left\|A\right\|_{F}\) for all \(i\) with probability at least \(3/5\)._ ### Proof Outline A natural idea is to split the spectrum of \(A\) into two pieces, \(A_{1}\) and \(A_{2}\), where \(A_{1}\) consists of the large eigenvalues of \(A\) which are at least \(\epsilon\left\|A\right\|_{F}\) in magnitude, and where \(A_{2}\) contains the remaining spectral tail. The eigenvalues of \(GA_{2}G^{T}\) will all concentrate around \(\frac{1}{k}\operatorname{Tr}(A)\) up to \(O(\epsilon)\) additive error. We are then left with showing that the eigenvalues of \(GA_{1}G^{T}\) are \(O(\epsilon)\) additive approximations to the nonzero eigenvalues of \(A_{1}.\) In order to do this we prove upper and lower bounds on the eigenvalues of \(GA_{1}G^{T}\).
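Before sketching those bounds, a direct NumPy transcription of Algorithm 1 may help make the \(\frac{1}{k}\operatorname{Tr}(S)\) correction concrete. This is a sketch for illustration only; the test matrix and the parameter choices below are arbitrary and are not taken from the paper.

```python
import numpy as np

def spectrum_appx(A, k, rng):
    """Algorithm 1: approximate all eigenvalues of symmetric A to within ~eps*||A||_F, with k ~ 1/eps^2."""
    d = A.shape[0]
    G = rng.standard_normal((k, d)) / np.sqrt(k)    # i.i.d. N(0, 1/k) entries
    S = G @ A @ G.T
    lam_S = np.linalg.eigvalsh(S)[::-1]             # eigenvalues of the sketch, decreasing
    alpha = lam_S - np.trace(S) / k                 # debias: sketched eigenvalues cluster near Tr(S)/k
    out = np.concatenate([alpha, np.zeros(d - k)])  # alpha_i = 0 for i = k+1, ..., d
    return np.sort(out)[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    d, eps = 500, 0.1
    # symmetric test matrix: a few signed outlying eigenvalues plus a dense spectral bulk
    evals = np.concatenate([[0.6, 0.4, -0.5], rng.standard_normal(d - 3) / np.sqrt(d)])
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    A = (Q * evals) @ Q.T
    A /= np.linalg.norm(A, "fro")                   # normalize so ||A||_F = 1

    mu = spectrum_appx(A, k=int(4 / eps**2), rng=rng)
    lam = np.sort(np.linalg.eigvalsh(A))[::-1]
    print("max_i |mu_i - lambda_i| =", np.max(np.abs(mu - lam)))
    # Expected: a small multiple of eps * ||A||_F, with the signed outliers recovered,
    # in line with Theorem 1 (constants are not tracked in this sketch).
```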
For the upper bound (or lower bound if \(\lambda_{\ell}(A_{1})\) is negative) we give a general upper bound on the operator norm of \(GMG^{T}\) for a PSD matrix \(M\) with \(\left\|M\right\|_{F}\leq 1.\) By applying this result to various deflations of \(A_{1}\) we are able to give an upper bound on all eigenvalues of \(A_{1}\) simultaneously. For the lower bound, we first prove the analogous result in the PSD case where it is much simpler. We then upgrade to the general result. To get a lower bound on \(\lambda_{\ell}(GDG^{T})\) in the general case, we construct an \(\ell\) dimensional subspace \(S_{\ell}\) so that \(u^{T}GDG^{T}u\) is large for all unit vectors \(u\) in \(S_{\ell}.\) A natural choice would be to take \(S_{\ell}\) to be the image of \(GD_{+,\ell}G^{T},\) where \(D_{+,\ell}\) refers to \(D\) with all but the top \(\ell\) positive eigenvalues zeroed out. We would then like to argue that the quadratic form associated to \(GD_{-}G^{T}\) is small in magnitude uniformly over \(S_{\ell}\). Unfortunately it need not be as small as we require, due to the possible presence of large negative eigenvalues in \(D_{-}.\) We therefore restrict our choice of \(S_{\ell}\) to lie in the orthogonal complement of the largest \(r\) negative eigenvectors of \(GD_{-}G^{T}\). Since we restrict the choice of \(S_{\ell}\) we incur a cost, which damages our lower bound on \(\lambda_{\ell}(GD_{+}G^{T})\) slightly. However by choosing \(r\) carefully, we achieve a lower bound on \(\lambda_{\ell}(GDG^{T})\) of \(\lambda_{\ell}(D)-O(\epsilon).\) ## 3 Proof of Theorem 1 In this section and the next, we provide upper and lower bounds on the eigenvalues of a sketched \(d\times d\) matrix. We emphasize the results below will later be applied only to the matrix \(A_{1}\) which is rank \(O(1/\epsilon^{2}).\) Hence we will use the results below for \(d=O(1/\epsilon^{2}).\) ### Upper bounds on the sketched eigenvalues The following result is a consequence of Theorem 1 in [10] along with the remark following it. **Theorem 2**.: _Let \(G\in\mathbb{R}^{m\times n}\) have i.i.d. \(\mathcal{N}(0,1/m)\) entries, and let \(A\) and \(B\) be arbitrary matrices with compatible dimensions. With probability at least \(1-\delta\),_ \[\left\|A^{T}G^{T}GB-A^{T}B\right\|\leq\epsilon\sqrt{\left\|A\right\|^{2}+ \frac{\left\|A\right\|_{F}^{2}}{k}}\sqrt{\left\|B\right\|^{2}+\frac{\left\|B \right\|_{F}^{2}}{k}},\] _for \(m=O(\frac{1}{\epsilon^{2}}(k+\log\frac{1}{\delta}))\)._ **Lemma 3**.: _Let \(D\in\mathbb{R}^{d\times d}\) have eigenvalues \(\lambda_{1}\geq\ldots\geq\lambda_{d}\geq 0\) where \(\left\|D\right\|_{F}\leq 1\). Let \(G\in\mathbb{R}^{t\times d}\) have \(\mathcal{N}(0,1/t)\) entries. The bound_ \[\left\|GD^{1/2}\right\|^{2}\leq\lambda_{1}+O\left(\frac{1}{\sqrt{m}}\right)\] _holds with probability at least \(1-\frac{1}{20}2^{-\min(m,1/\lambda_{1}^{2})},\) provided that \(t\geq\Omega(m+d).\)_ Proof.: We first decompose \(D\) into two parts \(D=D_{1}+D_{2}\) where \(D_{1}\) contains the eigenvalues of \(D\) larger than \(\lambda_{1}/2\) and \(D_{2}\) contains the eigenvalues which are at most \(\lambda_{1}/2.\) Let \(x\) be an arbitrary unit vector and partition its support according to \(D_{1}\) and \(D_{2}\) so that \(x=x_{1}+x_{2}\). 
This allows us to write \[x^{T}D^{1/2}G^{T}GD^{1/2}x =x_{1}^{T}D_{1}^{1/2}G^{T}GD_{1}^{1/2}x_{1}+x_{2}^{T}D_{2}^{1/2}G^ {T}GD_{2}^{1/2}x_{2}\] \[\quad+2x_{1}^{T}D_{1}^{1/2}G^{T}GD_{2}^{1/2}x_{2}\] \[\leq\left\|x_{1}\right\|^{2}\left\|D_{1}^{1/2}G^{T}GD_{1}^{1/2} \right\|+\] \[\quad\left\|x_{2}\right\|^{2}\left\|D_{2}^{1/2}G^{T}GD_{2}^{1/2}\right\|\] \[\quad+2\left\|x_{1}\right\|\left\|x_{2}\right\|\left\|D_{1}^{1/2} G^{T}GD_{2}^{1/2}\right\|.\] We bound each of these operator norms in turn by using Theorem 2 above. Note that \(D_{1}\) has support of size at most \(4/\lambda_{1}^{2}\) since \(\left\|D_{1}\right\|_{F}^{2}\leq 1\), and so \(\operatorname{Tr}(D_{1})\leq\frac{4}{\lambda_{1}}.\) Taking \(k=\frac{1}{\lambda_{1}^{2}}\), \(\epsilon=\frac{1}{\sqrt{m\lambda_{1}}}\), and \(\delta=\frac{1}{60}2^{-1/\lambda_{1}^{2}}\) in Theorem 2 and applying the triangle inequality, we get \[\left\|D_{1}^{1/2}G^{T}GD_{1}^{1/2}\right\| \leq\lambda_{1}+\epsilon\left(\left\|D_{1}^{1/2}\right\|^{2}+ \frac{\left\|D_{1}^{1/2}\right\|_{F}^{2}}{k}\right)\] \[\leq\lambda_{1}+\epsilon\left(\lambda_{1}+\frac{\operatorname{Tr} (D_{1})}{k}\right)\] \[\leq\lambda_{1}+\epsilon\left(\lambda_{1}+\frac{4}{\lambda_{1}k}\right)\] \[\leq\lambda_{1}+\frac{5}{\sqrt{m}}\] Similarly for the second term, we note that \(\operatorname{Tr}(D_{2})\leq\frac{\lambda_{1}}{2}n\), and apply Theorem 2 with \(k=d\), \(\epsilon=1/4\), and \(\delta=\frac{1}{60}2^{-m}\) to get \[\left\|D_{2}^{1/2}G^{T}GD_{2}^{1/2}\right\| \leq\frac{\lambda_{1}}{2}+\epsilon\left(\frac{\lambda_{1}}{2}+ \frac{\operatorname{Tr}(D_{2})}{k}\right)\] \[\leq\frac{\lambda_{1}}{2}+\frac{1}{4}\left(\frac{\lambda_{1}}{2} +\frac{\operatorname{Tr}(D_{2})}{d}\right)\] \[\leq\frac{\lambda_{1}}{2}+\frac{1}{4}\left(\frac{\lambda_{1}}{2} +\frac{\lambda_{1}}{2}\right)\] \[=\frac{3}{4}\lambda_{1}.\] For the third term we choose \(k=\sqrt{d}/\lambda_{1}\), \(\epsilon=1/(\sqrt{\lambda_{1}}m^{1/4})\), and \(\delta=\frac{1}{60}2^{-\sqrt{m}/\lambda_{1}}\) which gives \[\left\|D_{1}^{1/2}G^{T}GD_{2}^{1/2}\right\| \leq\epsilon\sqrt{\lambda_{1}+\frac{\operatorname{Tr}(D_{1})}{k} }\sqrt{\frac{\lambda_{1}}{2}+\frac{\operatorname{Tr}(D_{2})}{k}}\] \[\leq\epsilon\sqrt{\lambda_{1}+\frac{\sqrt{d}}{k}}\sqrt{\frac{ \lambda_{1}}{2}+\frac{\sqrt{d}}{k}}\] \[\leq\epsilon\left(\lambda_{1}+\frac{\sqrt{d}}{k}\right)\] \[\leq 2\frac{\sqrt{\lambda_{1}}}{m^{1/4}}.\] Note that each application of Theorem 2 above allows \(G\) to have have \(\Theta(m)\) rows provided that \(m\geq d.\) Also note that each failure probability above is bounded by \(\frac{1}{60}2^{-\min(m,1/\lambda_{1}^{2})},\) since \(\frac{\sqrt{m}}{\lambda_{1}}\geq\min(m,\frac{1}{\lambda_{1}^{2}}).\) Thus we conclude with probability at least \(1-\frac{1}{20}2^{-\min(m,1/\lambda_{1}^{2})},\) that \[x^{T}D^{1/2}G^{T}GD^{1/2}x\leq\left(\lambda_{1}+\frac{5}{\sqrt{m}}\right)\|x_{ 1}\|^{2}+\frac{3}{4}\lambda_{1}\left\|x_{2}\right\|^{2}+4\frac{\sqrt{\lambda_{ 1}}}{m^{1/4}}\left\|x_{1}\right\|\left\|x_{2}\right\|.\] We view the right-hand expression as a quadratic form applied to the unit vector \((\left\|x_{1}\right\|,\left\|x_{2}\right\|).\) So its value is bounded by the largest eigenvalue of the \(2\times 2\) matrix \[M=\begin{pmatrix}\lambda_{1}+\frac{5}{\sqrt{m}}&\frac{2\sqrt{\lambda_{1}}}{m^ {1/4}}\\ \frac{2\sqrt{\lambda_{1}}}{m^{1/4}}&\frac{3}{4}\lambda_{1}\end{pmatrix}.\] Suppose that \(\lambda_{1}+\beta\) with \(\beta\geq 0\) is an eigenvalue of \(M.\) Then plugging into the characteristic polynomial gives 
\[\frac{4\lambda_{1}}{\sqrt{m}}=\left(\beta-\frac{5}{\sqrt{m}}\right)\left( \beta+\frac{\lambda_{1}}{4}\right)\geq\frac{\lambda_{1}}{4}\left(\beta-\frac{ 5}{\sqrt{m}}\right),\] from which it follows that \(\beta\leq O\left(\frac{1}{\sqrt{m}}\right)\) as desired. **Lemma 4**.: _Let \(D\in\mathbb{R}^{d\times d}\) (not necessarily PSD) have \(\left\|D\right\|_{F}\leq 1,\) and suppose \(\lambda_{\ell}(D)\geq 0.\) Let \(G\in\mathbb{R}^{k\times d}\) have i.i.d. \(\mathcal{N}(0,1/k)\) entries. Then with probability at least \(1-\frac{1}{20}2^{-\min(\ell,\epsilon^{-2})}\),_ \[\lambda_{\ell}(GDG^{T})\leq\lambda_{\ell}(D)+O\left(\epsilon\right),\] _for \(k\geq\Omega(d+\frac{1}{\epsilon^{2}}).\)_ First we have the following, where \(D_{+}\) and \(D_{-}\) denote the positive and negative semi-definite parts of \(D\): \[\lambda_{\ell}(GDG^{T}) =\lambda_{\ell}(GD_{+}G^{T}-GD_{-}G^{T})\] \[\leq\lambda_{\ell}(GD_{+}G^{T})\] \[=\lambda_{\ell}(D_{+}^{1/2}G^{T}GD_{+}^{1/2}).\] Let \(S_{d-\ell+1}\) be the span of a set of eigenvectors of \(D\) corresponding to \(\lambda_{\ell}(D),\ldots,\lambda_{d}(D).\) Then by Courant-Fischer2, Footnote 2: For example see [20] for a statement of the Courant-Fischer minimax theorem. \[\lambda_{\ell}(GDG^{T}) \leq\max_{v\in S_{d-\ell+1},\left\|v\right\|=1}v^{T}D_{+}^{1/2}G^ {T}GD_{+}^{1/2}v\] \[=\max_{v\in S_{d-\ell+1},\left\|v\right\|=1}\left\|GD_{+}^{1/2}v \right\|^{2}\] \[=\left\|GD_{+,-\left(\ell-1\right)}^{1/2}\right\|^{2},\] where \(D_{+,-\left(\ell-1\right)}\) is \(D_{+}\) with the top \(\ell-1\) eigenvalues zeroed out. Now Lemma 3 applies, and gives \[\lambda_{\ell}(GDG^{T})\leq\lambda_{\ell}(D_{+})+O\left(\epsilon\right)= \lambda_{\ell}(D)+O\left(\epsilon\right),\] with probability at least \(1-\frac{1}{20}2^{-\min(1/\epsilon^{2},1/\lambda_{\ell}(D)^{2})},\) for \(k\geq\Omega(d+\frac{1}{\epsilon^{2}}).\) Finally, note that \(\lambda_{\ell}(D)\leq\frac{1}{\sqrt{\ell}},\) so \[2^{-\min(1/\epsilon^{2},1/\lambda_{\ell}(D)^{2})}\leq 2^{-\min(1/\epsilon^{2}, \ell)}.\] ### Lower bounds on the sketched eigenvalues **Lemma 5**.: _Let \(M\in\mathbb{R}^{d\times d}\) be a PSD matrix with \(\left\|M\right\|_{F}\leq 1.\) Let \(G\in\mathbb{R}^{m\times d}\) have i.i.d. \(\mathcal{N}(0,\frac{1}{m})\) entries, where \(m\geq\Omega(d+\log(1/\delta))\). Also let \(S_{\ell}\) denote an arbitrary \(\ell\) dimensional subspace of \(\mathbb{R}^{m}.\) Then with probability at least \(1-\delta\), we have_ \[\max_{v\in S_{\ell},\left\|v\right\|=1}v^{T}GMG^{T}v\leq 3\frac{\ell}{m}\left\|M \right\|.\] Proof.: Let \(\Pi\in\mathbb{R}^{m\times\ell}\) has columns forming an orthonormal basis of \(S_{\ell}.\) Then we can write \[\max_{v\in S_{\ell},\left\|v\right\|=1}v^{T}GMG^{T}v=\left\|\Pi^{T}GMG^{T}\Pi \right\|.\] Using rotational invariance of \(G\) we note that \(\Pi^{T}G\) is distributed as \(\sqrt{\frac{\ell}{m}}\tilde{G}\) where \(\tilde{G}\in\mathbb{R}^{\ell\times d}\) has i.i.d. \(\mathcal{N}(0,\frac{1}{\ell})\) entries. 
Then \[\left\|\Pi^{T}GMG^{T}\Pi\right\|=\frac{\ell}{m}\left\|\tilde{G}M\tilde{G}^{T} \right\|=\frac{\ell}{m}\left\|M^{1/2}\tilde{G}^{T}\tilde{G}M^{1/2}\right\|,\] which by taking \((\epsilon,k)=(1,d)\) in Theorem 2 is bounded by \[\frac{\ell}{m}\left(\left\|M\right\|+\left(\left\|M^{1/2}\right\| ^{2}+\frac{\left\|M^{1/2}\right\|_{F}^{2}}{d}\right)\right) =\frac{\ell}{m}\left(\left\|M\right\|+\left(\left\|M\right\|+\frac {\operatorname{Tr}(M)}{d}\right)\right)\] \[\leq 3\frac{\ell}{m}\left\|M\right\|,\] with probability at least \(1-\delta.\) Note that we used the bound \(\operatorname{Tr}(M)\leq d\left\|M\right\|\) in the final step. **Lemma 6**.: _Let \(M\in\mathbb{R}^{d\times d}\) be PSD with \(\left\|M\right\|_{F}\leq 1\), and let \(G\in\mathbb{R}^{k\times d}\) have i.i.d. \(\mathcal{N}(0,\frac{1}{k})\) entries._ _By choosing \(k=\Theta(d+\frac{1}{\epsilon^{2}})\) the bound_ \[\lambda_{\ell}(GMG^{T})\geq\lambda_{\ell}(M)-\epsilon\] _holds with probability at least \(1-\frac{1}{40}2^{-\ell}.\)_ Proof.: Recall that the non-zero eigenvalues of \(GMG^{T}\) coincide with those of \(M^{1/2}G^{T}GM^{1/2}\), so \[\lambda_{\ell}(GMG^{T})=\lambda_{\ell}(M^{1/2}G^{T}GM^{1/2}).\] By the Courant-Fischer theorem, there exists an \(\ell\) dimensional subspace \(S_{\ell}\) of \(\mathbb{R}^{d}\) such that \(\left\|M^{1/2}x\right\|^{2}=x^{T}Mx\geq\lambda_{\ell}(M)\) for all \(x\in S_{\ell}.\) Now suppose that \(G\) is an \((\frac{\epsilon}{\lambda_{\ell}},\ell,\frac{1}{40}2^{-\ell})\)-OSE3, which can be achieved by taking Footnote 3: An \((\epsilon,k,\delta)\)-OSE refers to an oblivious embedding that has \(1\pm\epsilon\) distortion over any given \(k\) dimensional subspace with probability at least \(1-\delta.\) \[k=\Theta\left(\frac{\lambda_{\ell}^{2}}{\epsilon^{2}}\left(\ell+\log\frac{10} {2^{-\ell}}\right)\right).\] Since \(\left\|M\right\|_{F}\leq 1\), we have \(\lambda_{\ell}^{2}\leq\frac{1}{\ell}\), so in fact \(k=O(1/\epsilon^{2})\) above. Then with probability at least \(1-\frac{1}{10}2^{-\ell},\) the bound \[\left\|GM^{1/2}x\right\|^{2} \geq\left(1-\frac{\epsilon}{\lambda_{\ell}(M)}\right)\left\|M^{1/2 }x\right\|^{2}\] \[\geq\left(1-\frac{\epsilon}{\lambda_{\ell}(M)}\right)\lambda_{ \ell}(M)\] \[\geq\lambda_{\ell}(M)-\epsilon\] holds for all \(x\in S_{\ell}\). By the Courant-Fischer theorem, this implies that \(\lambda_{\ell}(M^{1/2}G^{T}GM^{1/2})\geq\lambda_{\ell}(M)-\epsilon\) as desired. **Lemma 7**.: _Suppose that \(D\in\mathbb{R}^{d\times d}\) is a (not necessarily PSD) matrix with \(\left\|D\right\|_{F}\leq 1\) and that \(G\in\mathbb{R}^{k\times d}\) has i.i.d. \(\mathcal{N}(0,1/k)\) entries. If \(\lambda_{\ell}(D)\geq 0,\) then with probability at least \(\frac{1}{20}2^{-\ell}\),_ \[\lambda_{\ell}(GDG^{T})\geq\lambda_{\ell}(D)-\epsilon,\] _for \(k\geq\Omega(d+\frac{1}{\epsilon^{2}}).\)_ Throughout the course of this argument we will need the parameters \(k\) and \(r\) to satisfy various inequalities. To streamline the proof we will list these assumptions here and later verify that they are satisfied with appropriate choices. The assumptions we will need are as follows: 1. \(k\geq c_{1}d,\) where \(c_{1}\geq 1\) is an absolute constant 2. \(k-r\geq\frac{c_{2}}{\epsilon^{2}}\) where \(c_{2}\) is an absolute constant 3. \(\frac{r}{k\sqrt{\ell}}\leq\epsilon\) 4. \(\frac{\ell}{k\sqrt{r}}\leq\epsilon\) To produce a lower bound on \(\lambda_{\ell}(GDG^{T})\) we will find a subspace \(S\) such that \(v^{T}GDG^{T}v\) is large for all unit vectors \(v\) in \(S\). 
First we write \(D=D_{+}-(D_{-,-r}+D_{-,+r})\) where \(D_{+}\) is the positive semi-definite part of \(D\), \(D_{-}\) is the negative semi-definite part of \(D\), \(D_{-,+r}\) denotes \(D_{-}\) with all but the top \(r\) eigenvalues zeroed out, and \(D_{-,-r}=D_{-}-D_{-,+r}\) (recall that \(r\) is the parameter from above which is to be chosen later). We also write \[GDG^{T} =GD_{+}G^{T}-GD_{-,+r}G^{T}-GD_{-,-r}G^{T}\] \[=G_{1}D_{+}G_{1}^{T}-G_{2}D_{-,+r}G_{2}^{T}-G_{3}D_{-,-r}G_{3}^{T}\] where each component is PSD, and where \(G_{1},G_{2},G_{3}\) consist of the columns of \(G\) corresponding to the nonzero entries of \(D_{+}\) and \(D_{-,+r}\) and \(D_{-,-r}\) respectively. In particular note that this decomposition shows that these three random matrices are mutually independent. Let \(W_{r}\subseteq\mathbb{R}^{k}\) denote the image of \(D_{-,+r}\) so that \(W_{r}^{\perp}=\ker(D_{-,+r}).\) Let \(\Pi_{W_{r}^{\perp}}\in\mathbb{R}^{k\times(k-r)}\) have columns forming an orthonormal basis for \(W_{r}^{\perp}.\) By rotational invariance of \(G\), \(G^{T}\Pi_{W_{r}^{\perp}}\) has i.i.d. \(\mathcal{N}(0,1/k)\) entries. Thus it follows that \[\Pi_{W_{r}^{\perp}}^{T}GD_{+}G^{T}\Pi_{W_{r}^{\perp}}\sim\frac{k-r}{k}\tilde{ G}D_{+}\tilde{G}^{T}\sim\left(1-\frac{r}{k}\right)\tilde{G}D_{+}\tilde{G}^{T},\] where \(\tilde{G}\in\mathbb{R}^{(k-r)\times d}\) has i.i.d \(\mathcal{N}(0,\frac{1}{k-r})\) entries. Now by Lemma 6, along with our second assumption above, we have \[\lambda_{\ell}(\tilde{G}D_{+}\tilde{G}^{T})\geq\lambda_{\ell}(D_{+})-\epsilon= \lambda_{\ell}(D)-\epsilon,\] with probability at least \(1-\frac{1}{40}2^{-\ell}.\) Thus with the same probability, we then have \[\lambda_{\ell}(\Pi_{W_{r}^{\perp}}^{T}GD_{+}G^{T}\Pi_{W_{r}^{\perp}})\geq\left( 1-\frac{r}{k}\right)(\lambda_{\ell}(D)-\epsilon)\geq\lambda_{\ell}(D)-2\epsilon,\] where the last inequality follows from our third assumption above, along with the observation that \(\lambda_{\ell}(D)\leq\frac{1}{\sqrt{\ell}}\) which comes from the assumption \(\left\|D\right\|_{F}\leq 1.\) If the above holds, then by the Courant-Fischer theorem, there exists a subspace \(S_{\ell}\subseteq W_{r}^{\perp}\subseteq\mathbb{R}^{k}\) such that \[x^{T}GD_{+}G^{T}x\geq\lambda_{\ell}(D)-2\epsilon \tag{3}\] for all \(x\in S_{\ell}.\) Note that the construction of \(S_{\ell}\) was independent of \(GD_{-,-r}G^{T}\) by the comment above. Thus we may apply Lemma 5, along with our first assumption, to conclude that with probability at least \(1-\frac{1}{40}2^{-d},\) \[\max_{v\in S_{\ell},\left\|v\right\|=1}v^{T}GD_{-,-r}G^{T}v\leq 3\frac{\ell}{k }\left\|D_{-,-r}\right\|\leq 3\frac{\ell}{k}\frac{1}{\sqrt{r}}. \tag{4}\] The last inequality holds because \(\left\|D_{-}\right\|_{F}=1,\) which implies that \(\lambda_{r}(D_{-})\leq\frac{1}{\sqrt{r}}.\) Now let \(u\in S_{\ell}\) be an arbitrary unit vector. We write \[uGDG^{T}u^{T}=u^{T}GD_{+}G^{T}u-u^{T}GD_{-,-r}G^{T}u-u^{T}GD_{-,+r}G^{T}u.\] The last term vanishes by design since \(x\in W_{r}^{\perp}.\) We then bound the first term using equation 3 and the second term using equation 4 to get \[uGDG^{T}u^{T}\geq(\lambda_{\ell}(D)-2\epsilon)-3\frac{\ell}{k}\frac{1}{\sqrt{ r}}\geq\lambda_{\ell}(D)-5\epsilon,\] where the second inequality is form the fourth assumption above. Our total failure probability in the argument above is at most \(\frac{1}{40}2^{-d}+\frac{1}{40}2^{-\ell}\leq\frac{1}{20}2^{-\ell}\) as desired. It remains to choose parameters so that our four assumptions are satisfied. 
For this we take \[k \geq\max\left(c_{1}d,\frac{c_{2}}{\epsilon^{2}}+\lfloor 2\ell \rfloor,\frac{2\sqrt{\ell}}{\epsilon}\right)\] \[r =\lfloor 2\ell\rfloor.\] Assumptions 1 and 2 clearly hold with this choice. For assumption 3, we have \[\epsilon k\sqrt{\ell}\geq\epsilon\frac{2\sqrt{\ell}}{\epsilon}\sqrt{\ell}=2 \ell\geq r,\] and for assumption 4, \[\epsilon k\sqrt{r}\geq\epsilon\frac{2\sqrt{\ell}}{\epsilon}\sqrt{2\ell-1}=2 \sqrt{\ell}\sqrt{2\ell-1}\geq\ell,\] since \(\ell\geq 1.\) Finally, since \(\ell\leq d,\) this gives a bound of \(k=O(d+\frac{1}{\epsilon^{2}})\) as desired (note the inequality \(\frac{\sqrt{d}}{\epsilon}\leq\max(d,1/\epsilon^{2})\) for bounding the last term in the max defining \(k\)). ### Controlling the Tail In this section we use Hanson-Wright4 to bound the effect of the tail eigenvalues of \(A\) on the sketch. Note that our application Hanson-Wright relies on Gaussianity of \(G\) in order for the entries of \(G^{T}u\) to be independent. Footnote 4: See [20] for a precise statement of Hanson-Wright. **Lemma 8**.: _Let \(Y\in\mathbb{R}^{d\times d}\) be symmetric (not necessarily PSD) with \(\left\|Y\right\|\leq\epsilon\) and \(\left\|Y\right\|_{F}\leq 1\). Let \(G\in\mathbb{R}^{k\times n}\) have i.i.d. \(\mathcal{N}(0,1/k)\) entries. For \(k\geq\Omega(1/\epsilon^{2})\) we have_ \[\left\|GYG^{T}-\frac{1}{k}\operatorname{Tr}(Y)I\right\|\leq O(\epsilon),\] _with probability at least \(29/30\)._ Proof.: Let \(u\in\mathbb{R}^{k}\) be an arbitrary fixed unit vector. Note that \(G^{T}u\) is distributed as \(\mathcal{N}(0,\frac{1}{k}I_{d})\) and so \[\mathbb{E}(u^{T}GYG^{T}u)=\frac{1}{k}\operatorname{Tr}(Y).\] Set \(\tilde{Y}=GYG^{T}-\frac{\operatorname{Tr}(Y)}{k}I.\) By Hanson-Wright, \[\operatorname{Pr}\left(\left|u^{T}\tilde{Y}u\right|\geq 30\epsilon\right) =\operatorname{Pr}\left(\left|u^{T}GYG^{T}u-\frac{1}{k} \operatorname{Tr}(Y)\right|\geq 30\epsilon\right)\] \[\leq 2\exp\left(-0.1\min\left(\frac{(30\epsilon)^{2}k^{2}}{\left\| Y\right\|_{F}^{2}},\frac{(30\epsilon)k}{\left\|Y\right\|_{2}}\right)\right)\] \[\leq 2\exp\left(-\min\left(90\epsilon^{2}k^{2},3k\right)\right).\] Note that in the final bound above we used the fact that \(\left\|Y\right\|_{2}\leq\epsilon\). Let \(\mathcal{N}\) be a net for the sphere in \(\mathbb{R}^{k}\) with mesh size \(1/3\), which may be taken to have size \(9^{k}\). By 4.4.3 in [20], \[\left\|G\tilde{Y}G^{T}\right\|_{2}\leq 3\sup_{x\in\mathcal{N}}|x^{T}G\tilde{Y}G ^{T}x|.\] By taking a union bound over the net and setting \(k\geq\Omega(1/\epsilon^{2})\), we then have \[\operatorname{Pr}\left(\left\|\tilde{Y}\right\|_{2}\geq 93\epsilon\right)\leq 2 \exp\left(-\min\left(90\epsilon^{2}k^{2},3k\right)\right)9^{k}\leq\frac{1}{30},\] for \(\epsilon<1\). ### Proof of Theorem 1 Proof.: By rescaling, it suffices to consider that case \(\left\|A\right\|_{F}=1.\) We start by decomposing \(A\) into two pieces \(A=A_{1}+A_{2}\), where \(A_{1}\) is \(A\) with all eigenvalues smaller than \(\epsilon\) in magnitude zeroed out. To handle the large eigenvalues, we apply Lemma 4 and Lemma 7. Suppose that \(A_{1}\) has \(n\) nonzero eigenvalues. Then we note that the nonzero eigenvalues of \(GA_{1}G^{T}\) have the same distribution as the eigenvalues of \(\tilde{G}\tilde{A}_{1}\tilde{G}^{T}\) where \(\tilde{A}_{1}\) is a symmetric \(n\times n\) matrix with eigenvalues the same as the nonzero eigenvalues of \(A_{1}\) and where \(\tilde{G}\in\mathbb{R}^{k\times n}\) has i.i.d. \(\mathcal{N}(0,1/k)\) entries. 
This effectively means that we may treat \(A_{1}\) has having dimension \(n\) when applying Lemma 4 and Lemma 7. By taking a union bound over the positive eigenvalues of \(A_{1}\) and applying Lemma 4 we get the upper bound \(\lambda_{\ell}(GA_{1}G^{T})\leq\lambda_{\ell}(A_{1})+O(\epsilon)\) uniformly for all \(\ell\) such that \(\lambda_{\ell}(A_{1})>0\), with failure probability at most \[\sum_{i=1}^{n}\frac{1}{20}2^{-\min(\ell,\epsilon^{-2})}\leq\frac{1}{20}\sum_{i= 1}^{n}2^{-\ell}\leq\frac{1}{20},\] where the the first inequality follows from the fact that \(\ell\leq n\leq 1/\epsilon^{2}\), which in turn holds since \(\left\|A_{1}\right\|_{F}\leq 1\). Similarly Lemma 7 gives the lower bound \(\lambda_{\ell}(GA_{1}G^{T})\leq\lambda_{\ell}(A_{1})-\epsilon\) uniformly for all \(\ell\) such that \(\lambda_{\ell}(A_{1})>0\), with failure probability at most \[\sum_{i=1}^{\ell}\frac{1}{20}2^{-\ell}\leq\frac{1}{20}.\] Thus with at least \(9/10\) probability, \(\left|\lambda_{\ell}(GA_{1}G^{T})-\lambda_{\ell}(A_{1})\right|\leq O(\epsilon)\) for all \(\ell\) such that \(\lambda_{\ell}(A_{1})>0\). By applying the above argument to \(-A_{1}\) we get the same guarantee for the negative eigenvalues, i.e. \(\left|\lambda_{k-\ell}(GA_{1}G^{T})-\lambda_{k-\ell}(A_{1})\right|\leq O(\epsilon)\) for all \(\ell\) such that \(\lambda_{k-\ell}(A_{1})<0.\) By a union bound, the positive and negative guarantees hold together with failure probability at most \(1/5\). Next we apply the tail bound of Lemma 8 to control the perturbations resulting from the tail. By the triangle inequality, \[\left\|GA_{2}G^{T}-\frac{1}{k}\operatorname{Tr}(GAG^{T})I\right\| \leq\left\|GA_{2}G^{T}-\frac{1}{k}\operatorname{Tr}(A_{2})I\right\|\] \[\qquad+\left\|\frac{1}{k}\operatorname{Tr}(A_{2})I-\frac{1}{k} \operatorname{Tr}(GAG^{T})I\right\|\] \[\leq\left\|GA_{2}G^{T}-\frac{1}{k}\operatorname{Tr}(A_{2})I\right\|\] \[\qquad+\frac{1}{k}\left|\operatorname{Tr}(A_{2})-\operatorname{Tr }(GA_{2}G^{T})\right|\] \[\qquad+\frac{1}{k}\left|\operatorname{Tr}(GA_{1}G^{T})\right|\] The first of these terms is bounded by \(O(\epsilon)\) with failure probability at most \(1/30\) by Lemma 8. The second term is easily bounded by \(O(\epsilon)\) with failure probability at most \(1/30\) since \(\operatorname{Tr}(GA_{2}G^{T})\) is a trace estimator for \(A_{2}\) with variance at \(O(\left\|A_{2}\right\|_{F})=O(1)\) (in fact the variance is even smaller). For the third term, note that \(A_{1}\) has at most \(1/\epsilon^{2}\) nonzero eigenvalues, so \(\operatorname{Tr}(A_{1})\leq\frac{1}{\epsilon}\left\|A\right\|_{F}\leq\frac{ 1}{\epsilon}\). Thus since \(\operatorname{Tr}(GA_{1}G^{T})\) is a trace estimator for \(A_{1}\), the third term is bounded by \(O(\epsilon)\) with failure probability at most \(1/30\). 
Thus we have the bound \[\left\|GA_{2}G^{T}-\frac{1}{k}\operatorname{Tr}(GAG^{T})I\right\|\leq O( \epsilon),\] with failure probability at most \(1/10.\) This gives the bound \[\lambda_{\ell}(GAG^{T}) =\lambda_{\ell}(GA_{1}G^{T}+GA_{2}G^{T})\] \[=\lambda_{\ell}\left(GA_{1}G^{T}+\frac{1}{k}\operatorname{Tr}( GAG^{T})I+GA_{2}G^{T}-\frac{1}{k}\operatorname{Tr}(GAG^{T})I\right)\] \[=\lambda_{\ell}\left(GA_{1}G^{T}+\frac{1}{k}\operatorname{Tr}( GAG^{T})I\right)\] \[\qquad\pm\left\|GA_{2}G^{T}-\frac{1}{k}\operatorname{Tr}(GAG^{T} )I\right\|_{2}\] \[=\lambda_{\ell}(GA_{1}G^{T})+\frac{1}{k}\operatorname{Tr}(GAG^{T })\pm O(\epsilon).\] Setting \(\widehat{\lambda}_{\ell}=\lambda_{\ell}(GAG^{T})-\frac{1}{k}\operatorname{Tr }(GAG^{T}),\) we therefore have \(\widehat{\lambda}_{\ell}=\lambda_{\ell}(GA_{1}G^{T})\pm O(\epsilon).\) Combining with the bounds above gives \(\widehat{\lambda}_{\ell}=\lambda_{\ell}(A_{1})\pm O(\epsilon)\) if \(\lambda_{\ell}(A_{1})>0\) and \(\widehat{\lambda_{k-\ell}}=\lambda_{k-\ell}(A_{1})\pm O(\epsilon)\) if \(\lambda_{k-\ell}(A_{1})>0.\) Thus there is a subset of \(n\) of the \(\widehat{\lambda_{\ell}}\)'s which provide an \(O(\epsilon)\) additive approximation to the set of eigenvalues of \(A\) which are at least \(\epsilon.\) The above bound shows that the remaining \(\widehat{\lambda_{\ell}}\)'s are bounded by \(O(\epsilon)\) and the result follows. ## 4 Lower bounds for eigenvalue estimation We will use the Wishart distribution throughout this section which is defined as follows. **Definition 9**.: _The \(n\) dimensional Wishart distribution with \(r\) degrees of freedom \(W(n,r)\) is the distribution of \(GG^{T}\) where \(G\in\mathbb{R}^{n\times r}\) has i.i.d. standard normal entries._ In this section we show that \(\Omega(r)\) matrix-vector queries are necessary to determine the rank of a matrix with all nonzero entries \(\Omega(1).\) Specifically we show that distinguishing between \(W(n,r)\) and \(W(n,r+2)\) requires \(\Omega(r)\) queries for \(r\leq O(n).\) In Appendix A we sketch a proof of a similar lower bound for determining the rank of the orthogonal projection onto a random subspace. For now we consider the following problem. **Problem 10**.: _Given a matrix \(A\) sampled from either \(\mathcal{D}_{1}=W(n,r)\) or \(\mathcal{D}_{2}=W(n,r+2)\) each with equal probability, decide between \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\) with at least \(2/3\) probability, using (possibly adaptive) matrix-vector queries to \(A\)._ We first make note of the following result, which is effectively a version of Lemma 13 from [1], adapted to Wishart matrices \(W(n,r)\) with \(n\) and \(r\) not necessarily equal. This will allow us to show that adaptivity is unhelpful, and hence reduce to studying the non-adaptive case. **Proposition 11**.: _Let \(A\sim W(n,r),\) and let \(k<r\leq n.\) Then the conditional distribution \(A|\{Ae_{1}=x_{1},\ldots,Ae_{k}=x_{k}\}\) can be written as_ \[M_{k}+\operatorname{diag}(0_{k\times k},W(n-k,r-k)),\] _where \(M_{k}\in\mathbb{R}^{n\times n}\) has rank at most \(k\) and depends only on \(x_{1},\ldots,x_{k}\). In particular \(M_{k}\) does not depend on \(r.\)_ Proof.: Write \(A=GG^{T}\) where \(G\in\mathbb{R}^{n\times r}\) has i.i.d. \(\mathcal{N}(0,1)\) entries. Write \(g_{1},g_{2},\ldots\) for the rows of \(G\). We first consider the conditional distribution \(A|\{Ae_{1}=x_{1}\}.\) In other words, we are conditioning on the events \(\langle g_{1},g_{i}\rangle=x_{1i}\) for all \(i\). 
By rotational invariance, we may additionally condition on \(g_{1}=\sqrt{x_{11}}e_{1}\) without changing the resulting distribution. Then for \(i>1\), the conditional distribution of \(g_{i}\) can be written as \(\frac{x_{1i}}{\sqrt{x_{11}}}e_{1}+h_{i}\) where \(h_{i}\) is distributed as \(\mathcal{N}(0,I_{r-1})\) in the orthogonal complement of \(e_{1}.\) It follows from this that we can write \[A|\{Ae_{1}=x_{1}\}\sim\frac{1}{x_{11}}x_{1}x_{1}^{T}+\operatorname{diag}(0,W(n-1,r-1)). \tag{5}\] So we have \(M_{1}=\frac{1}{x_{11}}x_{1}x_{1}^{T}\). Now we apply the above line inductively. For \(j<r\), let \(W_{j}\sim\operatorname{diag}(0_{j\times j},W(n-j,r-j)),\) and write \[A|\{Ae_{1}=x_{1},\ldots,Ae_{j+1}=x_{j+1}\} \sim(A|\{Ae_{1}=x_{1},\ldots Ae_{j}=x_{j}\})\,|\{Ae_{j+1}=x_{j+1}\}\] \[\sim(M_{j}+W_{j})|\{(M_{j}+W_{j})e_{j+1}=x_{j+1}\}\] \[\sim(M_{j}+W_{j})|\{W_{j}e_{j+1}=x_{j+1}-M_{j}e_{j+1}\}\] \[\sim(M_{j}+W_{j})|\{W_{j}e_{j+1}=v_{j+1}\}\] \[\sim M_{j}+(W_{j}|\{W_{j}e_{j+1}=v_{j+1}\})\] where we set \(v_{j+1}=x_{j+1}-M_{j}e_{j+1}\). By applying (5), \[W_{j}|\{W_{j}e_{j+1}=v_{j+1}\}\sim\frac{1}{v_{j+1,j+1}}v_{j+1}v_{j+1}^{T}+W_{j+1}.\] Hence we can take \[M_{j+1}=M_{j}+\frac{1}{v_{j+1,j+1}}v_{j+1}v_{j+1}^{T},\] and the induction is complete. **Proposition 12**.: _Of all (possibly adaptive) algorithms for Problem 10 which make \(k\leq r\) queries, there is an optimal such algorithm (in the sense of minimizing the failure probability), which queries on the standard basis vectors \(e_{1},\ldots,e_{k}\)._ Proof.: Let \(s\) be either \(r\) or \(r+2\) corresponding to which of \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\) is sampled from. By rescaling, we assume that only unit vectors are queried. We argue by induction. Since \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\) are rotationally invariant, we may without loss of generality take the first query to be \(e_{1}\). Now suppose inductively that there is an optimal \(k\) query algorithm \(\mathcal{A}\) whose first \(j\) queries are always \(e_{1},\ldots,e_{j}.\) Suppose on a fixed run, that \(Ae_{1}=x_{1},\ldots,Ae_{j}=x_{j}.\) By Proposition 11, we may write the resulting conditional distribution as \[A|\{Ae_{1}=x_{1},\ldots Ae_{j}=x_{j}\}=M_{j}+A_{j},\] where \(M_{j}\) depends deterministically on \(x_{1},\ldots,x_{j}\) (and not on \(s\)), and \(A_{j}\sim\operatorname{diag}(0_{j\times j},W(n-j,s-j)).\) Now since \(M_{j}\) is known to \(\mathcal{A}\), we may assume that on iteration \(j+1\), \(\mathcal{A}\) is given matrix-vector query access to \(A_{j}\), rather than to \(A\). Since the first \(j\) rows and columns of \(A_{j}\) are filled with zeros, we may assume that \(\mathcal{A}\) queries on a vector in \(\operatorname{span}\{e_{j+1},\ldots,e_{n}\}.\) Then by rotational invariance of \(W(n-j,s-j),\) we may take \(\mathcal{A}\) to query on \(e_{j+1}\) on iteration \(j+1.\) This completes the induction, and the claim follows. In light of the previous result, it suffices to consider non-adaptive queries. In fact we can make an even stronger claim. Let \(E_{k}\) denote the matrix with columns \(e_{1},\ldots,e_{k}.\) The previous proposition showed that an optimal tester only needs to observe \(AE_{k},\) the first \(k\) columns of \(A.\) In fact, only \(E_{k}^{T}AE_{k},\) the leading principal submatrix of \(A\) is relevant. We first state a simple fact that drives the argument.
**Proposition 13**.: _Let \(X\in\mathbb{R}^{k\times r_{1}}\) and \(Y\in\mathbb{R}^{k\times r_{2}}\) be fixed matrices such that \(XX^{T}=YY^{T}.\) Let \(v_{1}\in\mathbb{R}^{r_{1}}\) and \(v_{2}\in\mathbb{R}^{r_{2}}\) have i.i.d. standard normal entries. Then \(Xv_{1}\) and \(Yv_{2}\) have the same distribution._ Proof.: Suppose without loss of generality that \(r_{2}\geq r_{1}.\) Then since \(XX^{T}=YY^{T},\) there is an orthogonal matrix \(U\in\mathbb{R}^{r_{2}\times r_{2}}\) such that \[YU=[X,0_{k\times(r_{2}-r_{1})}].\] Now let \(g\in\mathbb{R}^{r_{2}}\) have i.i.d. standard normal entries. By rotational invariance \(Ug\in\mathbb{R}^{r_{2}}\) does as well. So \(YUg\) has the same distribution as \(Yv_{2}.\) Also \([X,0_{k\times(r_{2}-r_{1})}]g\) is distributed as \(Xv_{1},\) so \(Xv_{1}\) and \(Yv_{2}\) have the same distribution as desired. **Proposition 14**.: _Suppose that \(A_{1}\sim W(n,r)\) and \(A_{2}\sim W(n,r+2).\) Then for \(k\leq r\),_ \[\operatorname{TV}(A_{1}E_{k},A_{2}E_{k})=\operatorname{TV}(E_{k}^{T}A_{1}E_{k},E_{k}^{T}A_{2}E_{k}).\] Proof.: Let \(G_{1}\in\mathbb{R}^{k\times r}\) and \(H_{1}\in\mathbb{R}^{(n-k)\times r}\) have i.i.d. standard normal entries. Similarly let \(G_{2}\in\mathbb{R}^{k\times(r+2)}\) and \(H_{2}\in\mathbb{R}^{(n-k)\times(r+2)}\) have i.i.d. standard normal entries. By the definition of the Wishart distribution, the joint distribution of the entries of \(A_{1}E_{k}\) is precisely that of \((G_{1}G_{1}^{T},H_{1}G_{1}^{T})\) and similarly for \(A_{2}E_{k}.\) Hence, \[\operatorname{TV}(A_{1}E_{k},A_{2}E_{k})=\operatorname{TV}\left((G_{1}G_{1}^{T},H_{1}G_{1}^{T}),(G_{2}G_{2}^{T},H_{2}G_{2}^{T})\right).\] For a fixed matrix \(M\) of the appropriate dimensions, we consider the conditional distribution \(H_{i}G_{i}^{T}|\{G_{i}G_{i}^{T}=M\}\) for \(i=1,2.\) The rows of this random matrix are independent (since the rows of \(H_{i}\) are independent), and by Proposition 13 the distribution of each row is a function of \(M\). Hence it follows that \[H_{1}G_{1}^{T}|\{G_{1}G_{1}^{T}=M\}=H_{2}G_{2}^{T}|\{G_{2}G_{2}^{T}=M\}\] for all \(M\). Therefore, \[\operatorname{TV}\left((G_{1}G_{1}^{T},H_{1}G_{1}^{T}),(G_{2}G_{2}^{T},H_{2}G_{2}^{T})\right)=\operatorname{TV}(G_{1}G_{1}^{T},G_{2}G_{2}^{T}).\] Since \(E_{k}^{T}A_{i}E_{k}\) has the same distribution as \(G_{i}G_{i}^{T},\) the claim follows. Our problem is now reduced to that of determining the degrees of freedom of a Wishart from observing the top corner (which is itself Wishart). We will give a lower bound for this problem. Our proof uses the following version of Theorem 5.1 in [10]. **Theorem 15**.: _Let \(\alpha\in(0,1)\) be a constant, and let \(n,r\to\infty\) simultaneously, with \(n/r\to\alpha.\) Then_ \[\frac{\det(W(n,r))}{(r-1)(r-2)\ldots(r-n)}\to e^{\mathcal{N}(0,-2\log(1-\alpha))},\] _where the convergence is in distribution._ **Lemma 16**.: _Let \(\alpha=0.1\). There exists a constant \(c\) so that if \(r\geq c\), then_ \[\operatorname{TV}\left(W(\lfloor\alpha r\rfloor,r),W(\lfloor\alpha r\rfloor,r+2)\right)\leq 0.2.\] Proof.: We write \(n=\lfloor\alpha r\rfloor\) with the understanding that \(n\) is a function of \(r\). Let \(\mu_{n,r}\) be the measure on \(\mathbb{R}^{n(n+1)/2}\) associated to \(W(n,r)\), and let \(f_{n,r}\) be the corresponding density function (with respect to the Lebesgue measure). Also let \(\Delta_{+}\subseteq\mathbb{R}^{n(n+1)/2}\) be the PSD cone.
Then we have \[\operatorname{TV}(W(n,r),W(n,r+2)) =\int_{\Delta_{+}}\left(f_{n,r}(A)-f_{n,r+2}(A)\right)_{+}d\lambda =\int_{\Delta_{+}}\left(1-\frac{f_{n,r+2}(A)}{f_{n,r}(A)}\right)_{+}d\mu_{n,r}.\] We recall the following standard formula for the density of the Wishart distribution (see [1] for example): \[f_{n,r}(A)=\frac{(\det A)^{\frac{1}{2}(r-n-1)}e^{-\frac{1}{2}\operatorname{Tr}(A)}}{2^{\frac{1}{2}rn}\pi^{\frac{1}{4}n(n-1)}\prod_{i=1}^{n}\Gamma\left(\frac{1}{2}(r+1-i)\right)}.\] Cancelling and applying the identity \(\Gamma(x+1)=x\Gamma(x)\) gives \[\frac{f_{n,r+2}(A)}{f_{n,r}(A)} =\frac{\det A}{2^{n}}\prod_{i=1}^{n}\frac{\Gamma\left(\frac{1}{2}(r+1-i)\right)}{\Gamma\left(1+\frac{1}{2}(r+1-i)\right)} =\frac{\det A}{2^{n}}\prod_{i=1}^{n}\frac{1}{\frac{1}{2}(r+1-i)} =\frac{\det A}{r(r-1)\ldots(r-n+1)}.\] This gives \[\operatorname{TV}(W(n,r),W(n,r+2))=\mathbb{E}_{A\sim W(n,r)}\left(1-\frac{\det A}{r(r-1)\ldots(r-n+1)}\right)_{+}.\] Therefore it suffices to bound this expectation. Since \(\frac{r-n}{r}\to(1-\alpha)\) as \(r\to\infty\) we have from Theorem 15 that \[\frac{\det W(n,r)}{r(r-1)\ldots(r-n+1)}\to(1-\alpha)e^{\mathcal{N}(0,-2\log(1-\alpha))}.\] Therefore \[\operatorname{TV}(W(n,r),W(n,r+2))\to\mathbb{E}_{x\sim\mathcal{N}(0,-2\log(1-\alpha))}\left[1-(1-\alpha)e^{x}\right]_{+},\] where swapping the limit with the expectation was justified since the random variables in the limit were all bounded by \(1.\) This last expectation may be computed numerically to be approximately \(0.1815\) and the claim follows. **Theorem 17**.: _Suppose that \(r\geq C_{1}\) and \(n\geq C_{2}r\) for absolute constants \(C_{1}\) and \(C_{2}.\) Let \(\mathcal{A}\) be an adaptive algorithm making \(k\) matrix-vector queries, which correctly decides between \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\) with \(2/3\) probability. Then \(k\geq r/10.\)_ Proof.: Consider a protocol which makes \(k\) matrix-vector queries. By Proposition 12 and Proposition 14 it suffices to consider non-adaptive protocols which observe \(E_{k}^{T}AE_{k}\). Suppose that \(A\) is drawn from either \(\mathcal{D}_{1}\) or \(\mathcal{D}_{2}\); then \(E_{k}^{T}AE_{k}\) is distributed as \(W(k,r)\) or \(W(k,r+2)\) respectively. Lemma 16 now implies that distinguishing these distributions requires \(k\geq r/10\) as desired. **Corollary 18**.: _An algorithm which estimates all eigenvalues of any matrix \(A\) up to \(\epsilon\left\|A\right\|_{F}\) error, with \(3/4\) probability must make at least \(\Omega(1/\epsilon^{2})\) matrix-vector queries._ Proof.: The nonzero eigenvalues of \(W(n,r)\) are precisely the squared singular values of an \(n\times r\) matrix with i.i.d. Gaussian entries. So by standard bounds (see [22] for example), the nonzero eigenvalues of \(W(n,r)\) and \(W(n,r+2)\) are bounded between \(\frac{1}{2}n\) and \(2n\) with high probability as long as \(n\geq Cr\) for an absolute constant \(C\). Since \(W(n,r)\) has rank \(r\), the Frobenius norm of \(W(n,r)\) is bounded by \(2n\sqrt{r}\), and similarly for \(W(n,r+2).\) Thus setting \(\alpha=\frac{1}{10\sqrt{r+2}}\), we see that an algorithm which estimates all eigenvalues of a matrix to \(\alpha\left\|A\right\|_{F}\) additive error could distinguish \(W(n,r)\) from \(W(n,r+2)\), and hence by Theorem 17 must make at least \(r/10\) queries. The result follows by setting \(r=\Theta(1/\epsilon^{2})\). ## 5 Acknowledgements D. Woodruff would like to acknowledge partial support from a Simons Investigator Award. W. Swartworth was partially supported by NSF DMS #2011140.
The authors would also like to acknowledge Cameron Musco, Deanna Needell, and Gregory Dexter for helpful conversations when preparing this manuscript.
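As a practical companion to the estimator analyzed in the proof above, the following minimal NumPy sketch is offered purely as an illustration; it is not part of the original paper. The normalization of \(G\) (i.i.d. \(\mathcal{N}(0,1/k)\) entries, so that \(\operatorname{Tr}(GAG^{T})\) estimates \(\operatorname{Tr}(A)\)) and the choice \(k=\lceil 1/\epsilon^{2}\rceil\) are assumptions made for this sketch; the theorem's constants and failure probabilities are not tuned here.

```python
import numpy as np

def sketched_eigenvalues(A, eps, rng=None):
    """Illustrative sketch-and-shift eigenvalue estimator (assumes symmetric A, ||A||_F <= 1)."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    k = int(np.ceil(1.0 / eps**2))                 # sketch size, assumed Theta(1/eps^2)
    G = rng.standard_normal((k, n)) / np.sqrt(k)   # scaling so Tr(G A G^T) estimates Tr(A)
    S = G @ A @ G.T                                # k x k sketch, obtainable from k matrix-vector products with A
    lam = np.linalg.eigvalsh(S)                    # eigenvalues of the sketch
    shift = np.trace(S) / k                        # remove the bias contributed by the tail A_2
    return np.sort(lam - shift)[::-1]              # estimates \hat{lambda}_ell, sorted descending
```

Under the assumptions above, the largest (in magnitude) returned values play the role of the \(\widehat{\lambda}_{\ell}\) in the analysis: they approximate the outlying eigenvalues of \(A\) to additive \(O(\epsilon)\), while the remaining values are themselves \(O(\epsilon)\).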
2302.12249
MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes
Neural radiance fields enable state-of-the-art photorealistic view synthesis. However, existing radiance field representations are either too compute-intensive for real-time rendering or require too much memory to scale to large scenes. We present a Memory-Efficient Radiance Field (MERF) representation that achieves real-time rendering of large-scale scenes in a browser. MERF reduces the memory consumption of prior sparse volumetric radiance fields using a combination of a sparse feature grid and high-resolution 2D feature planes. To support large-scale unbounded scenes, we introduce a novel contraction function that maps scene coordinates into a bounded volume while still allowing for efficient ray-box intersection. We design a lossless procedure for baking the parameterization used during training into a model that achieves real-time rendering while still preserving the photorealistic view synthesis quality of a volumetric radiance field.
Christian Reiser, Richard Szeliski, Dor Verbin, Pratul P. Srinivasan, Ben Mildenhall, Andreas Geiger, Jonathan T. Barron, Peter Hedman
2023-02-23T18:59:07Z
http://arxiv.org/abs/2302.12249v1
# MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes ###### Abstract. Neural radiance fields enable state-of-the-art photorealistic view synthesis. However, existing radiance field representations are either too compute-intensive for real-time rendering or require too much memory to scale to large scenes. We present a Memory-Efficient Radiance Field (MERF) representation that achieves real-time rendering of large-scale scenes in a browser. MERF reduces the memory consumption of prior sparse volumetric radiance fields using a combination of a sparse feature grid and high-resolution 2D feature planes. To support large-scale unbounded scenes, we introduce a novel contraction function that maps scene coordinates into a bounded volume while still allowing for efficient ray-box intersection. We design a lossless procedure for baking the parameterization used during training into a model that achieves real-time rendering while still preserving the photorealistic view synthesis quality of a volumetric radiance field. + Footnote †: Work done while interning at Google.
While a surface representation needs to be queried at most once along each ray, rendering a ray through a volume may require many samples. With state of the art neural or hybrid representations, each of these sample queries is very expensive to evaluate, either in terms of compute or memory bandwidth. As a result, methods that work for scenes with limited extent (single objects in space or forward-facing scenes) typically do not scale up to larger unbounded scenes. All neural or hybrid volumetric methods must address two fundamental trade-offs that arise from these constraints: * _Volume vs. surface?_ Purely volumetric rendering models are most amenable to gradient-based optimization and produce excellent view synthesis results [14]. On the other hand, increasing sparsity and moving closer [13, 12] or completely [11, 15] to a surface-like representation degrades image quality but results in compact representations that are cheap to render. * _Memory bound vs. compute bound?_ The most compact representations (such as the MLP network in Mildenhall et al. [16] or the low-rank decomposition in Chen et al. [16]) require many FLOPS to query, and the fastest representations (such as the sparse 3D data structures used in Yu et al. [17] and Hedman et al. [18]) consume large amounts of graphics memory. One approach to this trade-off is to embrace a slower, more compact volumetric model for optimization and to subsequently "bake" it into a larger but faster representation for rendering. However, baking often affects the representation or rendering model which can lead to a large drop in image quality. Though this can partially be ameliorated by fine-tuning the baked representation, fine-tuning does not easily scale to larger scenes, as computing gradients for optimization requires significantly more memory than rendering. The goal of our work is to find a representation that is well suited for both optimization and fast rendering. Our solution is a single unified radiance field representation with two different underlying _parametrizations_.
In both stages, our memory-efficient radiance field (MERF) is defined by a combination of a voxel grid [13, 14] and triplane data structure [10]. During optimization, we use the NGP hash grid structure [12] to compress our parameterization, which allows for differentiable sparsification and provides an inductive bias that aids convergence. After optimization, we query the recovered NGP to explicitly bake out the MERF and create a binary occupancy grid to accelerate rendering. Critically, both the NGP-parameterized and baked MERF _represent the same underlying radiance field function_. This means that the high quality achieved by the optimized MERF carries over to our real-time browser-based rendering engine. ## 2. Related Work As our goal is real-time view synthesis in large unbounded scenes, this discussion is focused on approaches that accelerate rendering or reconstruct large spaces. For a comprehensive overview of recent view synthesis approaches, please refer to Tewari et al. [16]. Early methods for real-time large-scale view synthesis either captured a large number of images and interpolated them with optical flow [1] or relied heavily on hand-made geometry proxies [15, 17]. Later techniques used inaccurate, but automatically reconstructed geometry proxies [10], and relied on screen-space neural networks to compensate for this [12, 13]. Neural Radiance Fields (NeRF) [14] facilitated higher quality reconstructions by representing the full scene volume as a multi-layer perceptron (MLP). This volumetric representation can easily model thin structures and semi-transparent objects and is also well-suited to gradient-based optimization. NeRF was quickly extended to reconstruct large scenes by reconstructing crowdsourced data [11], tiling the space with NeRF networks [11, 12], and reconstructing the scene in a warped domain where far-away regions are compressed [14, 15]. Later, fast radiance field reconstruction was achieved by representing the scene as a grid, stored either densely [16], as latent features to be decoded [13, 12], or as latent hash grids [12] implemented with specialized CUDA kernels [11]. While this dramatically reduces reconstruction time, accurate real-time rendering of large scenes has not yet been demonstrated at high resolutions. Other methods addressed real-time rendering by precomputing and storing (i.e. _baking_) NeRF's view-dependent colors and opacities in volumetric data structures [10, 11, 12, 13], or by splitting the scene into voxels and representing each voxel with a small separate MLP [15]. However, these representations consume a lot of graphics memory and are thus limited to objects, not scenes. Furthermore, these methods incur a quality loss during baking due to the mismatch between the slower rendering procedure used for training and the real-time rendering procedure used for inference. Alternatively, faster rendering can be achieved by extending the network to work with ray segments rather than points [10, 12] or by training a separate sampling network [14, 15, 16]. However, these approaches have not achieved real-time rates at high resolutions, likely because they require evaluating an MLP for each sample along a ray. Light field coordinates circumvent this problem and require just one MLP evaluation per ray [10, 11, 12, 13], this approach has only been demonstrated to work well within small viewing volumes. 
Similarly, multi-plane image [12, 13, 14] or multi-sphere image [10, 11] representations map well to graphics hardware and can be rendered in real-time, but also support only restricted camera motion. It is also possible to speed-up NeRF rendering by post-processing the output image with a convolutional neural network. This makes it possible to perform an expensive volumetric rendering step at a lower resolution and then upsample that result to the final desired resolution [10, 12, 13]. Wu et al. [16] combined this approach with baking and showed high-quality real-time rendering of large scenes. However, to achieve this, they required a 3D scan of the scene as input, and they used a CUDA implementation designed for workstation-class hardware. In contrast, our method only needs posed images as input and runs in a browser on commodity hardware such as laptops. Recent methods achieve extremely fast rendering by constraining the NeRF network evaluations to planes or polygons (Chen et al., 2022; Lin et al., 2022). While this works well for objects or limited camera motion, we show that these approaches introduce a loss in quality for large unbounded scenes. The problem of compressing NeRF reconstructions has also been explored in prior work. Several methods achieve this by post-processing an existing reconstruction through incremental pruning (Deng and Tartaglione, 2023) with vector quantization (Li et al., 2022). Takikawa et al. (2022) directly optimize for a compressed codebook representation of the scene. While these methods all report impressive compression ratios, they all rely on evaluating an MLP for each volume sample and are therefore too slow for real-time rendering of large scenes. Our approach works by projecting 3D samples onto three 2D projections that correspond to the cardinal axes. Similar representations, often referred to as _tri-planes_, have been explored for surface reconstruction from point clouds (Songyou Peng, 2020) and generative modelling of 3D scenes (DeVries et al., 2021) or faces (Chan et al., 2022). Recently TensoRF (Chen et al., 2022) use tri-planes for NeRF reconstruction. TensorRF decomposes the 3D scene volume into a sum of vector-matrix outer products, which makes it possible to directly train a compressed and high quality radiance field. However, TensoRF trades off memory footprint for more expensive queries that involve a large matrix multiplication. Our representation significantly speeds up the query time by removing the need for the matrix product while simultaneously halving the memory bandwidth consumption. ## 3. Preliminaries We begin with a short review of relevant prior work on radiance fields for unbounded scenes. A radiance field maps every 3D position \(\mathbf{x}\in\mathbb{R}^{3}\) and viewing direction \(\mathbf{d}\in\mathbb{S}^{2}\) to the volumetric density \(\tau\in\mathbb{R}_{+}\) at that location and the RGB color emitted from it along the view direction, \(\mathbf{c}\in\mathbb{R}^{3}\). 
The color of the ray emitted from point \(\mathbf{o}\) in the direction \(\mathbf{d}\) can then be computed using the radiance field by sampling points along the ray, \(\mathbf{x}_{i}=\mathbf{o}+t_{i}\mathbf{d}\), and compositing the corresponding densities \(\{\tau_{i}\}\) and colors \(\{\mathbf{c}_{i}\}\) according to the numerical quadrature approach of Max (1995): \[\mathbf{C}=\sum_{i}w_{i}\mathbf{c}_{i}\,,\ \ w_{i}=\alpha_{i}T_{i}\,,\ \ T_{i}=\prod_{j=1}^{i-1}(1-\alpha_{j})\,,\ \ \alpha_{i}=1-e^{-\tau_{i}\delta_{i}}\,, \tag{1}\] where \(T_{i}\) and \(\alpha_{i}\) denote transmittance and alpha values of sample \(i\), and \(\delta_{i}=t_{i+1}-t_{i}\) is the distance between adjacent samples. The original NeRF work parameterized a radiance field using a Multilayer Perceptron (MLP), which outputs the volume density and view-dependent color for any continuous 3D location. In order to reduce the number of MLP evaluations to one per ray, SNeRG uses a deferred shading model in which the radiance field is decomposed into a 3D field of densities \(\tau\), diffuse RGB colors \(\mathbf{c_{d}}\), and feature vectors \(\mathbf{f}\) (Hedman et al., 2021). SNeRG's deferred rendering model volumetrically accumulates the diffuse colors \(\{\mathbf{c}_{d,i}\}\) and features \(\{\mathbf{f}_{i}\}\) along the ray, similar to Equation 1: \[\mathbf{C}_{d}=\sum_{i}w_{i}\mathbf{c}_{d,i},\ \ \ \ \mathbf{F}=\sum_{i}w_{i}\mathbf{f}_{i}, \tag{2}\] and computes the ray's color as the sum of the accumulated diffuse color \(\mathbf{C}_{d}\) and the view-dependent color computed using a small MLP \(h\) that takes as input \(\mathbf{C}_{d}\), \(\mathbf{F}\), and the viewing direction \(\mathbf{d}\): \[\mathbf{C}=\mathbf{C}_{d}+h(\mathbf{C}_{d},\mathbf{F},\mathbf{d}). \tag{3}\] SNeRG uses a large MLP during training and bakes it after convergence into a block-sparse grid for real-time rendering. In order for radiance fields to render high quality unbounded scenes containing nearby objects as well as objects far from the camera, mip-NeRF 360 (Barron et al., 2022) uses a contraction function to warp the unbounded scene domain into a finite sphere: \[\text{contract}(\mathbf{x})=\begin{cases}\mathbf{x}&\text{if }\|\mathbf{x}\|_{2}\leq 1\\ \left(2-\frac{1}{\|\mathbf{x}\|_{2}}\right)\,\frac{\mathbf{x}}{\|\mathbf{x}\|_{2}}&\text{if }\|\mathbf{x}\|_{2}>1\end{cases} \tag{4}\] ## 4. Scene Representation In this section, we describe the MERF scene representation, which is designed to enable real-time volumetric rendering of unbounded scenes while maintaining a low memory footprint. ### Volume Parameterization MERF represents a scene using a 3D field of volume densities \(\tau\in\mathbb{R}_{+}\), diffuse RGB colors \(\mathbf{c_{d}}\in\mathbb{R}^{3}\), and feature vectors \(\mathbf{f}\in\mathbb{R}^{K}\), as shown in Figure 2. These quantities are rendered using the deferred shading model from SNeRG, described in Section 3. We parameterize this field with a low-resolution 3D \(L\times L\times L\) voxel grid \(V\) and three high-resolution 2D \(R\times R\) grids \(P_{x}\), \(P_{y}\), and \(P_{z}\), one for each of the cardinal \(yz\), \(xz\), and \(xy\) planes. Figure 2. Our scene representation. For a location \(\mathbf{x}\) along a ray: (1) We query its eight neighbors on a low-resolution 3D grid; and we project it onto each of the three axis-aligned planes, and then query each projection's four neighbors on a high-resolution 2D grid.
(2) The eight low-resolution 3D neighbors are evaluated and trilinearly interpolated while the three sets of four high-resolution 2D neighbors are evaluated and bilinearly interpolated, and the resulting features are summed into a single feature vector \(\mathbf{r}\). (3) The feature vector is split and nonlinearly mapped into three components: density \(\tau\), RGB color \(\mathbf{c_{d}}\), and a feature vector \(\mathbf{f}\) encoding view dependence effects. Each element of the low-resolution 3D grid and the three high-resolution 2D grids stores a vector with \(C=4+K\) channels. In our experiments, we use \(C=8\) and default to \(L=512\) and \(R=2048\). We define the continuous field of \(C\)-vectors as the sum of trilinearly interpolated vectors from the 3D grid and bilinearly interpolated vectors from the three 2D grids: \[\mathbf{t}(x,y,z)=\mathbf{V}(x,y,z)+\mathbf{P}_{x}(y,z)+\mathbf{P}_{y}(x,z)+\mathbf{P}_{z}(x,y), \tag{5}\] where \(\mathbf{V}\colon\mathbb{R}^{3}\to\mathbb{R}^{C}\) is a trilinear interpolation operator using the 3D grid values, and \(\mathbf{P}_{i}\colon\mathbb{R}^{2}\to\mathbb{R}^{C}\) is a bilinear interpolation operator using the grid perpendicular to the \(i\)th axis, for \(i\in\{x,y,z\}\). We split the \(C\)-vector at any 3D location into three components corresponding to density \(\tilde{\tau}\in\mathbb{R}\), diffuse color \(\tilde{\mathbf{c}}_{d}\in\mathbb{R}^{3}\), and view-dependence feature \(\tilde{\mathbf{f}}\in\mathbb{R}^{K}\), and then apply nonlinear functions to obtain the three values: \[\tau=\exp(\tilde{\tau}),\quad\mathbf{c}_{\mathbf{d}}=\sigma(\tilde{\mathbf{c}}_{d}),\quad\mathbf{f}=\sigma(\tilde{\mathbf{f}}), \tag{6}\] where \(\sigma\) is the standard logistic sigmoid function, which constrains colors and features to lie within \((0,1)\). Note that we apply the nonlinearities after interpolation and summation, which has been shown to greatly increase the representational power of grid representations (Karnewar et al., 2022; Sun et al., 2022). Naively discretizing the representation only after optimization would introduce a mismatch between the models used for training and for rendering. Prior work accounted for this by fine-tuning the baked representation (Yu et al., 2022), but fine-tuning requires the entire representation to be stored in memory and suffers from the aforementioned scalability issues. Instead, we simulate finite grid resolutions during training by querying the MLPs at virtual grid corners and interpolating the resulting outputs using bilinear interpolation for the high-resolution 2D grids and trilinear interpolation for the low-resolution voxel grid. In addition to requiring high-capacity representations, high-resolution large-scale scenes require dense samples along rays during volume rendering, which also significantly contributes to the training memory footprint. To address this, mip-NeRF 360 introduced a hierarchical sampling technique that uses "proposal" MLPs to represent coarse versions of scene geometry. A proposal MLP maps 3D positions to density values, which are converted into probability distributions along rays that are supervised to be consistent with the densities output by the NeRF MLP. These proposal distributions are used in an iterative resampling procedure that produces a small number of samples that are concentrated around visible scene content. While this proposal MLP hierarchical sampling strategy is effective for reducing the number of samples along each ray during training, the proposal MLP is too expensive to evaluate for real-time rendering purposes.
Instead, we use traditional empty space skipping during rendering, which also concentrates representation queries around surfaces. To avoid introducing a mismatch between training and rendering, we only bake content in regions considered occupied by the proposal MLP during training, as detailed in Section 5.3. ### Quantization To reduce our system's memory consumption at render time, we wish to quantize each of the \(C\) dimensions at every location in the grid to a single byte (see further discussion in Section 6). However, simply quantizing the optimized grid values after training creates mismatches between the optimized model and the one used for rendering, which leads to a drop in rendering quality as shown in Table 2 (b). Our solution to this is to quantize the \(C\) values at every location during optimization. That is, we nonlinearly map them to lie in \([0,1]\) using a sigmoid \(\sigma\), then quantize them to a single byte using a quantization function \(q\), and finally affinely map the result to the range \([-m,m]\), as: \[\tilde{\mathbf{t}}^{\prime}=2m\cdot q\big{(}\sigma(\tilde{\mathbf{t}})\big{)} -m\,, \tag{8}\] where we choose \(m=14\) for densities (which are computed using an exponential nonlinearity), and \(m=7\) for diffuse colors and features. Note that this only quantizes the values stored in the grid, and the non-linearities in Equation 6 are subsequently applied after linearly interpolating and summing these values. We implement the byte quantization function \(q\) as: \[q(x)=x+\mathcal{N}\left(\frac{\lfloor(2^{8}-1)x+\nicefrac{{1}}{{2}}\rfloor}{ 2^{8}-1}-x\right)\,, \tag{9}\] where \(\mathcal{N}(\cdot)\) is a stop-gradient, which prevents gradients from back-propagating to its input. This use of a stop-gradient allows us to obtain gradients for the non-differentiable rounding function by treating \(q\) as the identity function during the backward pass, which is referred to as the straight-through estimator, as used in (Bengio et al., 2013; Yin et al., 2019). ### Baking After training, we evaluate and store the MLP's outputs on discrete grids for real-time rendering. First, we compute a binary 3D grid \(\mathbf{A}\) indicating voxels that contributed to any training image (i.e., voxels should _not_ be stored if they correspond to occluded content, are not sampled by any training ray, or have low opacity). To populate \(\mathbf{A}\), we render all training rays and extract from them a set of weighted points \(\{(\mathbf{x}_{i},\mathbf{w}_{i})\}\), where \(\mathbf{x}_{i}\) is the point's position, and \(\mathbf{w}_{i}\) is the associated volume rendering weight from Equation 1. Note that these points cluster around surfaces in the scene as they are sampled with a proposal-MLP (Barron et al., 2022). We mark the eight voxels surrounding a given point \(\mathbf{x}_{i}\) as occupied if both the volume rendering weight \(\mathbf{w}_{i}\) and the opacity, \(\alpha_{i}\), exceed a threshold set to \(0.005\). To cull as aggressively as possible, we compute \(\alpha_{i}\) based on the distance between samples \(\delta_{i}\) used by the real-time renderer -- recall that it steps through contracted space with a small uniform step size. As the proposal-MLP often suggests steps larger than \(\delta_{i}\), computing \(\alpha_{i}\) this way leads to better culling. However, we still guarantee voxels which contribute a significant opacity value (\(\alpha_{i}>0.005\)) are not culled in the final sampling scheme. 
Note that while the opacity \(\alpha_{i}\) only depends on the density at \(\mathbf{x}_{i}\), the weight \(\mathbf{w}_{i}\) also depends on densities along the entire ray, making usage of \(\mathbf{w}_{i}\) necessary to account for visibility. We observe that the opacity check based on the real-time renderer's step size significantly decreases the fraction of the volume marked as occupied. Note that this is in addition to the sparsity already achieved by only considering voxels in locations that have been sampled by the proposal-MLP. In contrast, existing baking pipelines often do not consider the proposal-MLP and perform visibility culling with uniformly-spaced sample points. This often results in fog-like artifacts and floating blobs because the underlying 3D field can have arbitrary values in regions not sampled by the proposal-MLP. Table 2 demonstrates that our Proposal-MLP-aware baking pipeline is almost lossless. After computing the binary grid \(\mathbf{A}\), we bake the three high-resolution 2D planes and the low-resolution 3D voxel grid. Following SNeRG, we store this voxel grid in a block-sparse format, where we only store data blocks that contain occupied voxels. For empty space skipping, we create multiple lower resolution versions of the binary occupancy grid \(\mathbf{A}\) with max-pooling. To reduce storage, we encode textures as PNGs. ## 6. Real-Time Rendering We implement our real-time viewer as a Javascript 3D (three.js) web application, based on SNeRG's implementation, where rendering is orchestrated by a single GLSL fragment shader. For efficient ray marching, we employ a multi-resolution hierarchy of occupancy grids. The set of occupancy grids is created by max-pooling the full-resolution binary mask \(\mathbf{A}\) with filter sizes \(16,32\) and \(128\). For instance, if the base resolution is \(4096\), this results in occupancy grids of size \(256,128\) and \(32\), occupying a total of \(18\) MB of video memory. We leverage this multi-resolution hierarchy of occupancy grids for faster space skipping. Given any sample location, we query the occupancy grids in a coarse-to-fine manner. If any level indicates the voxel as empty, we can skip the corresponding volume until the ray enters a new voxel at that level and compute the new sample location using the efficient ray-AABB intersection discussed in Section 4.2. We only access the MERF scene representation for samples where all occupancy grid levels are marked as occupied. Finally, we terminate ray marching along a ray once the transmittance value \(T_{i}\) (defined in Equation 1) falls below \(2\times 10^{-4}\). To further decrease the number of memory accesses during rendering, we split textures into density and appearance (containing diffuse RGB colors and feature vectors) components. When accessing the MERF representation at any location, we first query the density component and only read the corresponding appearance component if the voxel opacity computed from the returned density is nonzero. Moreover, we obtain an additional 4\(\times\) speed-up by optimizing the deferred rendering MLP. More specifically, we conduct loop unrolling, modify the memory layout to facilitate linear accesses, and exploit fast _mat4_-multiplication. ## 7. Experiments We experimentally evaluate our model in terms of rendering quality, video memory consumption, and real-time rendering performance. 
We compare MERF to a variety of offline view synthesis methods (NeRF [17], mip-NeRF 360 [1], Stable View Synthesis [14], and Instant-NGP [17]), and real-time ones (Deep Blending [13], Mobile-NeRF [15], and SNeRG [16]). To make this evaluation as rigorous as possible we evaluate against an improved version of SNeRG, which we call SNeRG++. SNeRG++ uses many components of our approach: multi-level empty space skipping, an optimized MLP implementation, our improved baking pipeline, and post-activation interpolation (which increases model expressivity by allowing for intra-voxel discontinuities [18, 19]). Unless otherwise stated, for MERF we set the triplane resolution \(R\) to 2048 and the sparse grid resolution \(L\) to 512, and for SNeRG++ we set the grid resolution to 2048. For evaluation, we use the challenging mip-NeRF 360 dataset [1] which contains five outdoor and four indoor scenes. All scenes are unbounded and require a high resolution representation (\(2048^{3}\)) to be faithfully reproduced. If not indicated otherwise, reported metrics are averaged over runs from the five outdoor scenes. We evaluate rendering quality using peak-signal-to-noise-ratio (PSNR), SSIM [21], and LPIPS [19]. Footnote 3: [https://github.com/face](https://github.com/face) achieves similar quality to SNeRG++, while requiring a fraction of the memory. Additionally, in Figure 7 we see that ablating the 3D grid from MERF leads to a significant loss in quality. ### Real-time Rendering Evaluation Finally, we evaluate the rendering speed of MERF, MobileNeRF, SNeRG++, and Instant-NGP in frames per second (FPS). Note that MERF, Mobile-NeRF and SNeRG++ all run in the browser and use the view-dependence model introduced by Hedman et al. (Hedman et al., 2021). In contrast, Instant-NGP uses a different view-dependence model, and is implemented in CUDA and is therefore less portable across devices. For benchmarking the methods that include web viewers (MERF, Mobile-NERF, SNeRG++) we use an M1 MacBook Pro and set the rendering resolution to \(1280\times 720\). When evaluating against Instant-NGP, to make the comparison fair we use an RTX 3090 GPU (which Instant-NGP is optimized for) and increase the rendering resolution to \(1920\times 1080\) to demonstrate MERF's scalability on high-powered devices. As can be seen in Table 3, our method runs faster than SNeRG++ while consuming only one fifth of the memory. While MobileNeRF achieves higher frame rates on the Macbook than MERF, it requires twice as much video memory and reduces rendering quality (a 1.24 dB reduction in PSNR). This reduced quality is especially evident in background regions, as shown in Figure 6. From our experiment with the RTX 3090, we see that Instant-NGP does not achieve real-time performance (4 FPS), while MERF renders at frame rates well above 100. ### Limitations Since we use the view-dependence model introduced in SNeRG (Hedman et al., 2021), we also inherit its limitations: By evaluating view-dependent color once per ray, we are unable to faithfully model view-dependent appearance for rays that intersect with semi-transparent objects. Furthermore, since the tiny MLP has limited capacity, it may struggle to scale to much larger scenes or objects with complex reflections. Moreover, our method still performs volume rendering, which limits it to devices equipped with a sufficiently powerful GPU such as laptops, tablets or workstations.
Running our model on smaller, thermally limited devices such as mobile phones or headsets will require further reductions in memory and runtime. ## 8. Conclusion We have presented MERF, a compressed volume representation for radiance fields, which admits real-time rendering of large-scale scenes in a browser. By using novel hybrid volumetric parameterization, a novel contraction function that preserves straight lines, and a baking procedure that ensures that our real-time representation describes the same radiance field as was used during optimization, MERF is able to achieve faster and more accurate real-time rendering of large and complicated real-world scenes than prior real-time NeRF-like models. Out of all real-time methods, ours produces the highest-quality renderings for any given memory budget. Not only does it achieve 31.6% (MSE) higher quality in the outdoor scenes compared to MobileNeRF, the previous state-of-the-art, it also requires less than half of the GPU memory. ## Acknowledgments We thank Marcos Seefelder, Julien Philip and Simon Rodriguez for their suggestions on shader optimization. This work was supported by the ERC Starting Grant LEGO3D (850533) and the DFG EXC number 2064/1 - project number 390727645.
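To make the field query of Equations 5 and 6 concrete, the following NumPy sketch illustrates a MERF-style lookup. It is not the authors' implementation: the grid memory layout, the assumption that input coordinates are already contracted into \([0,1]^{3}\), the channel ordering (density, then diffuse RGB, then view-dependence features), and the use of SciPy's `map_coordinates` for tri/bilinear interpolation are all assumptions made for this sketch.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def merf_query(V, Px, Py, Pz, x, K=4):
    """Query a MERF-style field at a single point x = (xs, ys, zs) in [0, 1]^3.

    V:          (C, L, L, L) low-resolution voxel grid
    Px, Py, Pz: (C, R, R) high-resolution planes perpendicular to the x, y, z axes
    Returns (density tau, diffuse color c_d, view-dependence feature f).
    """
    C, L = V.shape[0], V.shape[1]
    R = Px.shape[1]
    xs, ys, zs = x
    # Trilinear lookup in the low-resolution 3D grid (order=1 -> linear interpolation).
    coords3d = np.array([[xs * (L - 1)], [ys * (L - 1)], [zs * (L - 1)]])
    t = np.array([map_coordinates(V[c], coords3d, order=1)[0] for c in range(C)])
    # Bilinear lookups in the three axis-aligned planes, summed as in Eq. (5).
    for P, (u, v) in ((Px, (ys, zs)), (Py, (xs, zs)), (Pz, (xs, ys))):
        coords2d = np.array([[u * (R - 1)], [v * (R - 1)]])
        t += np.array([map_coordinates(P[c], coords2d, order=1)[0] for c in range(C)])
    # Nonlinearities applied after interpolation and summation, as in Eq. (6).
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    tau = np.exp(t[0])          # density
    c_d = sigmoid(t[1:4])       # diffuse RGB
    f = sigmoid(t[4:4 + K])     # view-dependence features
    return tau, c_d, f
```

Note the design point the sketch makes explicit: the three plane lookups and the voxel lookup are simply summed into one \(C\)-vector before any nonlinearity is applied, which is what allows the baked grids and the training-time parameterization to represent the same function.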
2308.07179
Incorporating Annotator Uncertainty into Representations of Discourse Relations
Annotation of discourse relations is a known difficult task, especially for non-expert annotators. In this paper, we investigate novice annotators' uncertainty on the annotation of discourse relations on spoken conversational data. We find that dialogue context (single turn, pair of turns within speaker, and pair of turns across speakers) is a significant predictor of confidence scores. We compute distributed representations of discourse relations from co-occurrence statistics that incorporate information about confidence scores and dialogue context. We perform a hierarchical clustering analysis using these representations and show that weighting discourse relation representations with information about confidence and dialogue context coherently models our annotators' uncertainty about discourse relation labels.
S. Magalí López Cortez, Cassandra L. Jacobs
2023-08-14T14:39:02Z
http://arxiv.org/abs/2308.07179v1
# Incorporating Annotator Uncertainty into Representations of Discourse Relations ###### Abstract Annotation of discourse relations is a known difficult task, especially for non-expert annotators. In this paper, we investigate novice annotators' uncertainty on the annotation of discourse relations on spoken conversational data. We find that dialogue context (single turn, pair of turns within speaker, and pair of turns across speakers) is a significant predictor of confidence scores. We compute distributed representations of discourse relations from co-occurrence statistics that incorporate information about confidence scores and dialogue context. We perform a hierarchical clustering analysis using these representations and show that weighting discourse relation representations with information about confidence and dialogue context coherently models our annotators' uncertainty about discourse relation labels. ## 1 Introduction Discourse relations (DRs) are those relations such as Elaboration, Explanation, Narration, which hold between discourse units. The task of labeling DRs is known to pose difficulties for annotators (Spooren and Degand, 2010), as sometimes more than one interpretation may be possible (Scholman et al., 2022; Webber, 2013). Recent studies have shown that allowing for multiple labels in annotation can improve the performance of discourse parsers (Yung et al., 2022). Scholman et al. (2022) test different label aggregation methods in a crowdsourced corpus annotated by 10 workers and find that probability distributions over labels better capture ambiguous interpretations of discourse relations than majority class labels. (1) shows an example from their corpus, where the relation between the second and third sentences (in italics and bold, respectively), was interpreted as Conjunction by four annotators and Result by five annotators. 1. It is logical that our attention is focused on cities. _Cities are home to 80% of the 500 million or so inhabitants of the EU._ **It is in cities that the great majority of jobs, companies and centres of education are located.** (adapted from DiscoGeM, Europarl genre; Scholman et al., 2022, italics and bolding are ours.) Annotating the discourse relation between these two sentences with both Conjunction and Result captures different possible interpretations of the relation between these segments. For example, the two sentences may contain two conjoined facts about cities, but can also be perceived as describing a causal relation between the first and second sentence (i.e., as cities are home to the largest part of the population, most jobs, companies and educational institutions are located there). In this work, we investigate which relations are distributionally similar or co-occurring in multi-label annotations of spontaneous conversations. We are particularly interested in how novice annotators interpret discourse relation categories when annotating spoken conversational data. We collect annotations of DRs from Switchboard telephone conversations (Godfrey et al., 1992), allowing for multiple labels, and ask for confidence scores. We find that confidence scores vary significantly across dialogue contexts (single turn vs. pairs of turns produced by the same speaker vs. pairs of turns produced by different speakers). We incorporate information about these three dialogue context types and confidence scores into distributed representations of discourse relations. 
A clustering analysis shows that discourse relations that tend to occur across speakers cluster together, while discourse relations which tend to occur within a speaker, either in the same turn or different turns, form their own cluster. ## 2 Annotation of Discourse Relations Our analyses are built on the dataset collected in Lopez Cortez and Jacobs (2023), who selected 19 conversations from Switchboard1, a corpus consisting of telephone conversations between pairs of participants about a variety of topics (e.g. recycling, movies, child care). We chose this corpus because it contains informal, spontaneous dialogues, and because it has been used within linguistics in various studies on conversation Jaeger and Snider (2013); Reitter and Moore (2014). Footnote 1: We discarded the annotations from one conversation because the annotators did not follow the guidelines. ### Discourse Units An initial set of turns for annotation was selected by using spaCy's dependency parser Honnibal et al. (2020), version 3.3.1) to select turns with two or more ROOT or VERB tags. We define a turn as each segment of dialogue taken from Switchboard. We note that an utterance produced by one speaker (A) may take place during a continuous utterance by another speaker (B). Switchboard splits A's utterance into two turns in these cases. We return to this point in the Discussion. We manually segmented these turns into elementary discourse units (EDUs). The main criteria for segmenting turns into EDUs was that the unit performs some basic discourse function Asher and Lascarides (2003). By default, finite clauses are considered EDUs, as well as comment words like "Really?" or acknowledgments such as "Uh-huh" or "Yeah." Cases of interruptions and repairs were segmented if they constituted a turn in Switchboard, as in example (2a), and when they contained a verb, as in example (2b). Cases of repetition as in (2c) were not considered separate EDUs. We segmented disfluencies ("uh") and some non-verbal communication ("[laughter]") but we did not select these for discourse relation labeling. 1. [label=(2)] 2. B: \(\|\) So you don't see too many thrown out around the \(\|\) [laughter] \(\|\) streets. \(\|\) A: \(\|\) Really \(\|\) B: \(\|\) Or even bottles. \(\|\) 3. B: \(\|\) I think, \(\|\) uh, \(\|\) I wonder \(\|\) if that worked. \(\|\) 4. A: \(\|\) What kind of experience do you, do you have, then with child care? \(\|\) Because many EDUs are very short, we selected pairs of elementary discourse units and complex discourse units (CDUs) for discourse relation annotation. CDUs consist of two or more EDUs that constitute an argument to a discourse relation Asher and Lascarides (2003). We use the term _discourse units_ (DUs) to refer to both EDUs and CDUs. ### Dialogue Contexts We manually selected items for annotation across three different contexts: within a single turn, across two turns within a speaker, and across two immediately adjacent turns (two speakers). (3) shows an example for each context kind, with the first DU in italics and the second in bold. Example (3a) shows two discourse units within a speaker's turn. (3b) shows two discourse units uttered by the same speaker but that span across two different turns, interrupted by one turn. We did not include any constraint for the length of the interrupting turn. (3c) shows two DUs uttered by speakers in adjacent turns. We leave for future work the annotation of pairs of discourse units that may have a longer-distance relation with more turns in between DUs. 1. 
[label=(3)] 2. A: \(\|\)_and they discontinued them \(\|\)_because people were coming and dumping their trash in them. \(\|\)_ B: \(\|\) No, \(\|\)_I just, \(I\) noticed \(\|\) in Iowa and other cities like that, it's a nickel per aluminum can. \(\|\)_ A: \(\|\) Oh. \(\|\) B: \(\|\) **So you don't see too many thrown out around the \(\|\) [laughter] \(\|\) streets**. 3. A: \(\|\)_We live in the Saginaw area. \(\|\)_ B: \(\|\)**Saginaw? \(\|\)** ### Taxonomy of Discourse Relations The DRs chosen to annotate our corpus were adapted from the STAC corpus manual Asher et al. (2012); 2016). STAC is a corpus of strategic multiparty chat conversations in an online game. Table 1 shows the taxonomy used. We selected 11 DRs based on a pilot annotation by the first author, and added an "Other" category for relations not included in the list of labels. We focused on a small taxonomy to minimize the number of choices presented to our novice annotators. We refer readers to Lopez Cortez and Jacobs (2023) for details and examples of each relation in the taxonomy. Future work will include revising the taxonomy used. ### Annotation Procedure The annotation of discourse relations was done by students enrolled in a Computational Linguistics class. Students were divided into 19 teams of approximately 5 members each, and each team was assigned a conversation. The annotation was performed individually, but teams then discussed their work and wrote a report together. Annotators were trained using written guidelines, a quiz-like game, and a live group annotation demo. We used the annotation interface Prodigy Montani and Honnibal (2018). Each display presented the two target discourse units plus two context turns before and two after. Annotators also had access to the entire conversation throughout the annotation task. Below the text, the screen showed a multiple choice list of discourse relations plus the "Other" category. We allowed for the selection of multiple labels following previous findings that allowing for multiple labels better captures ambiguous interpretations of discourse relations Scholman et al. (2022) and improves the performance of discourse parsers Yung et al. (2022). Each display also asked for confidence scores in the range 1-5, corresponding to least to most confident. We did not pursue label-specific confidence scores but rather the confidence in the label(s) as a whole in the interest of minimizing annotator overhead. The results of this work show that per-label confidence scores or a slider-based approach may be informative and is a topic for future work. We include an example annotation item in Appendix C. ## 3 Dialogue Context as a Predictor of Confidence Scores First we sought to understand how discourse relations and dialogue context (as defined above) influence annotator confidence. Because our confidence ratings data has multiple observations for each annotator, each team and each DU, it is hierarchical and thus benefits from being analyzed using hierarchical mixed effects models. Due to the ordinal nature of the ratings data, we use the cumulative link approach CLMM; Liddell and Kruschke (2018); Howcroft and Rieser (2021) rather than model confidence scores as real-valued in linear regression. We first built a null model containing only random intercepts by annotator and compared it to a model containing an additional fixed effect and random slope by annotator for dialogue context type: single turn, across turns within speaker and across speakers (\(kind\), dummy coded). 
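As a schematic illustration of the kind of model specification just described (results of the actual comparison follow below), the sketch here fits a cumulative link (proportional odds) model of confidence on dummy-coded dialogue context using statsmodels. This is a simplified, fixed-effects-only stand-in: the paper's CLMM additionally includes random intercepts and slopes by annotator and random intercepts by DU pair, which `OrderedModel` does not support; the file and column names are hypothetical.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical input: one row per annotation, with an ordinal "confidence" score (1-5)
# and a categorical "kind" (single turn / within speaker / across speakers).
df = pd.read_csv("annotations.csv")

# Dummy-code the dialogue context, dropping one level as the reference category.
X = pd.get_dummies(df["kind"], drop_first=True).astype(float)

# Cumulative link (logit) model of confidence on context kind; fixed effects only.
model = OrderedModel(df["confidence"], X, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```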
A likelihood ratio test revealed a significant improvement in fit by adding kind as a predictor (\(\chi^{2}(7)=126.64,p<0.001\)). Adding random intercepts for DU pairs to account for annotation difficulty across DU pairs also led to a significant improvement in model fit beyond the model containing dialogue context kind (\(\chi^{2}(1)=195.01,p<0.001\)). This suggests that our annotators' confidence scores are sensitive to the context of DU pairs. Figure 1 shows mean confidence scores per context kind across discourse relations. Confidence scores within a speaker, both across and within turns, were similar (\(\beta=-0.13\), \(z=-0.56\), \(p=\text{n.s.}\)), while annotators were significantly more confident for relation annotation across speakers (\(\beta=0.63\), \(z=3.05\), \(p<.01\)). The CLMM revealed that annotators used confidence scores between 3 and 5 overall, except for the label "Other", for which they selected lower confidence scores. Background received lower confidence scores overall. Continuation, Contrast and Narration received higher scores for contexts within speaker. Comment and Result received higher scores for turns across speakers and single turn. For Elaboration and Explanation, mean confidence scores are very similar across the three contexts, with slightly higher scores for single turn and pairs of turns within speaker. Acknowledgement, Clarification Question ("clarificationq") and Question-Answer Pair ("qap") received higher scores for turns across speakers, which makes sense given the dialogic nature of these relations. However, these relations also received rather high confidence scores for single turn and pairs of turns within speaker, which is somewhat surprising. We suspect this might be due to the context turns included for each pair of DUs, which might have led annotators to choose relations between discourse units other than the pair of highlighted DUs. Future analysis will look closer at this aspect. \begin{table} \begin{tabular}{l|l} \hline \hline Acknowledgement & Elaboration \\ Background & Explanation \\ Clarification Question & Narration \\ Comment & Question-Answer Pair \\ Continuation & Result \\ Contrast & Other \\ \hline \hline \end{tabular} \end{table} Table 1: Taxonomy of discourse relations. Figure 1: Confidence scores per context kind across discourse relations. _qap_ stands for Question-Answer Pair and _clarificationq_ for Clarification Question. ## 4 Distributed Representations from Discourse Relation Annotations To model the similarity between discourse relations as perceived by annotators, we computed embedding representations of discourse relations. We extracted each of the \(n\) individual annotations containing the **r**elation-**c**onfidence \((r,c)\) tuples selected by a given annotator for a pair of DUs. We concatenate bag-of-relation vectors with one-hot encoded features representing the dialogue context kind, and multiply the count vector of annotated relations (either 1 or 0 for each relation) by the confidence score (1-5) for that pair of DUs. This weighting learns more from high confidence; an ideal reweighting may be possible with additional parameter search, possibly in conjunction with the CLMM outputs. For an \(n\times 1\) confidence ratings matrix \(C\), an \(n\times 12\) bag-of-relations matrix \(R\), and an \(n\times 3\) discourse context matrix \(D\) for each annotation, we obtain an annotation matrix \(A=C\times(R|D)\).
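A minimal sketch (not the released implementation) of this construction, together with the co-occurrence factorization, projection, and clustering steps described next, is given below; the toy data, matrix sizes, and hyperparameters are illustrative only.

```python
# Illustrative sketch of the relation-embedding pipeline: confidence-weighted
# annotation matrix A = C x (R|D), feature co-occurrence, low-rank
# factorization, 2-D projection, and hierarchical clustering.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from scipy.cluster.hierarchy import linkage, fcluster
import umap  # umap-learn

rng = np.random.default_rng(0)
n, n_rel, n_ctx = 200, 12, 3                      # annotations, relations, context kinds

C = rng.integers(1, 6, size=(n, 1)).astype(float)         # confidence scores 1-5
R = (rng.random((n, n_rel)) < 0.2).astype(float)           # bag of relations (0/1)
D = np.eye(n_ctx)[rng.integers(0, n_ctx, size=n)]          # one-hot context kind

A = C * np.hstack([R, D])                         # annotations x (12 + 3) features

# Co-occurrence between features; with A stored as annotations x features this
# is A.T @ A (equivalent to the paper's A . A^T under the transposed layout).
O = A.T @ A

# "PCA without shifting the intercept" corresponds to an uncentered truncated SVD.
vectors = TruncatedSVD(n_components=8, random_state=0).fit_transform(O)
rel_vectors = vectors[:n_rel]                     # keep the 12 relation rows

# 2-D UMAP projection, then agglomerative clustering on the coordinates.
coords = umap.UMAP(n_components=2, n_neighbors=5, random_state=0).fit_transform(rel_vectors)
Z = linkage(coords, method="average")
print(fcluster(Z, t=2, criterion="maxclust"))     # e.g. k = 2 clusters
```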
We then obtain a square co-occurrence matrix \(O\) such that \(O=A\cdot A^{T}\), which we factorize using Principal Component Analysis (without shifting the intercept following Levy and Goldberg, 2014). Each relation is thus represented as a vector that consolidates co-occurrences between all relations within a single annotator that are weighted by confidence score. We then projected these embeddings into two dimensions with UMAP (McInnes et al., 2018) and performed a hierarchical clustering analysis over the resulting coordinates due to the greater discriminability afforded by continuous distance metrics. Informally, the UMAP coordinates appear more gradient in the representational space when confidence was included (right panel) than when it was not included (left panel). When context is not included, the UMAP coordinates primarily represent the frequency of labels in our corpus, which we include in Appendix A. We visualize the UMAP coordinates in Figure 2a. Figure 2b shows a dendrogram with the output clusters, colored according to the optimal number of clusters (\(k=2\)), calculated using average silhouette widths (Levshina, 2022). There are two large clusters, one of which contains two sub-clusters with Background and Continuation, on the one hand, and Elaboration and Explanation on the other. In the other large cluster, Acknowledgement and Comment form a sub-cluster. These are very common relations between pairs of turns across speakers. Clarification Question and Question-Answer Pair form another sub-cluster, also common relations between pairs of turns across speakers, in close proximity to the Other label, which received a sub-cluster of its own. Narration and Contrast and Result, form the last sub-clusters, which we suspect is due in part to the frequencies of these relations (Schnabel et al., 2015). We include a dendrogram with the output clusters of a hierarchical clustering analysis performed with base bag-of-relations vectors (without context kind and confidence scores weight) in Figure 3 in Appendix B for comparison. Currently, we provide these results as a proof of concept of the feasibility and interpretability of noisy labels produced by novice annotators. Importantly, annotations weighted by confidence produce coherent clusters of discourse relations. We envision applications of DR embeddings to several domains including dialogue generation, such that appropriate responses to input are partially conditioned on a latent or mixed combination of DRs. ## 5 Related Work Annotation of discourse relations is usually done within Rhetorical Structure Theory (Mann and Thompson, 1987), as in the RST-DT (Carlson et al., 2003) and GUM (Zeldes, 2017) corpora, within Segmented Discourse Representation Theory (SDRT, Asher and Lascarides, 2003), as in the STAC (Asher et al., 2016) and Molweni (Li et al., 2020) corpora, or within the Penn Discourse Treebank framework (Prasad et al., 2008, 2014, 2018). We use a taxonomy adapted from SDRT, in particular, the STAC corpus. Annotators are usually trained to identify discourse relations using the framework's taxonomy. Some recent alternatives to explicitly collecting annotation of DRs include crowdsourcing by eliciting connectives (Yung et al., 2019; Scholman et al., 2022) or question-answer pairs (Pyatkin et al., 2020) rather than relations. In this work, we wanted to investigate how annotators perceive discourse relation categories, and therefore a connective insertion task would only provide indirect evidence. 
We train annotators on DR labeling and ask them to choose from a set of discourse relation labels. We allow for multiple labels to investigate what relations are more confusable or perceived as co-occurring (Marchal et al., 2022). ## 6 Discussion and Future Work In this study, we collected multiple annotations of discourse relations from a subset of the Switchboard corpus, together with confidence scores. We found that dialogue context had a significant effect on confidence scores. We computed embedding representations of DRs using co-occurrence statistics and weighted the vectors using context type and confidence scores, and found that these representations coherently model our annotators' uncertainty about discourse relation labels. Discourse units that occur across turns as defined by Switchboard do not necessarily occur across continuous utterances from the speaker's point-of-view. Obtaining information about whether same-speaker pairs of discourse units fall into the same or different utterances may help to explain additional variance in annotator confidence. Additionally, in this work, we investigated annotators' confidence on the annotation of adjacent turns. In future work, we plan to annotate discourse relations across longer-distance discourse units and to allow for hierarchical annotation. We expect that annotation confidence will also vary across longer-distance units and across different depths of annotation. In the future, we plan to use this information to run a larger scale annotation study of the Switchboard corpus to analyze discourse relation patterns in spoken dialogues. ## Limitations This work is limited by the size of the dataset and the taxonomy used in the annotation task. While we found that our annotators perceived some of the categories as more similar or confusable, future work can examine annotators' uncertainty in a larger set of discourse relations. The selection of DUs for annotation was also non-exhaustive. In future work, we plan to expand the selection procedure so that we include more distantly related DUs. We also note that the frequency of discourse relation labels and individual differences in confidence levels among annotators may bias the representations. We plan to look into these potential biases in future work. Figure 2: Dimensionality reduction and clustering of relation embeddings. ## Ethics Statement We are not aware of ethical issues associated with the texts used in this work. Students participated in the annotation task as part of course credit but annotation decisions were not associated with their performance in the course. ## Acknowledgements We would like to thank Jurgen Bohnemeyer and three anonymous reviewers for feedback on a previous version of this paper. We also thank the students who participated in the annotation task.
2305.16088
Social Sustainability of Digital Transformation: Empirical Evidence from EU-27 Countries
In the EU-27 countries, the importance of social sustainability of digital transformation (SOSDIT) is heightened by the need to balance economic growth with social cohesion. By prioritizing SOSDIT, the EU can ensure that its citizens are not left behind in the digital transformation process and that technology serves the needs of all Europeans. Therefore, the current study aimed firstly to evaluate the SOSDIT of EU-27 countries and then to model its importance in reaching sustainable development goals (SDGs). The current study, using structural equation modeling, provided quantitative empirical evidence that digital transformation in Finland, the Netherlands, and Denmark are respectively most socially sustainable. It is also found that SOSDIT leads the countries to have a higher performance in reaching SDGs. Finally, the study provided evidence implying the inverse relationship between the Gini coefficient and reaching SDGs. In other words, the higher the Gini coefficient of a country, the lower its performance in reaching SDGs. The findings of this study contribute to the literature of sustainability and digitalization. It also provides empirical evidence regarding the SOSDIT level of EU-27 countries that can be a foundation for the development of policies to improve the sustainability of digital transformation. According to the findings, this study provides practical recommendations for countries to ensure that their digital transformation is sustainable and has a positive impact on society.
Saeed Nosratabadi, Thabit Atobishi, Szilard Hegedűs
2023-05-25T14:21:01Z
http://arxiv.org/abs/2305.16088v1
# Social Sustainability of Digital Transformation: Empirical Evidence from EU-27 Countries ###### Abstract In the EU-27 countries, the importance of social sustainability of digital transformation (SOSDIT) is heightened by the need to balance economic growth with social cohesion. By prioritizing SOSDIT, the EU can ensure that its citizens are not left behind in the digital transformation process and that technology serves the needs of all Europeans. Therefore, the current study aimed firstly to evaluate the SOSDIT of EU-27 countries and then to model its importance in reaching sustainable development goals (SDGs). The current study, using structural equation modeling, provided quantitative empirical evidence that digital transformation in Finland, the Netherlands, and Denmark are respectively most socially sustainable. It is also found that SOSDIT leads the countries to have a higher performance in reaching SDGs. Finally, the study provided evidence implying the inverse relationship between the Gini coefficient and reaching SDGs. In other words, the higher the Gini coefficient of a country, the lower its performance in reaching SDGs. The findings of this study contribute to the literature of sustainability and digitalization. It also provides empirical evidence regarding the SOSDIT level of EU-27 countries that can be a foundation for the development of policies to improve the sustainability of digital transformation. According to the findings, this study provides practical recommendations for countries to ensure that their digital transformation is sustainable and has a positive impact on society. Keywords:digital transformation; digitalization; social sustainability; sustainable development goals; structural equation modeling; EU-27 countries + Footnote †: journal: Journal of LaTeX Templates ## 1 Introduction Digital transformation is a process that is happening across various sectors in European countries. It encompasses the use of digital technologies, such as the internet, mobile devices, big data and analytics, and artificial intelligence, to improve the way organizations and governments operate and deliver services to citizens (Aly 2022). The European Union has made digital transformation a priority and has implemented policies and initiatives to drive digitalization across the member states. This includes the Digital Single Market strategy, which aims to remove barriers to online trade and create a level playing field for businesses across the EU. Additionally, the European Commission also introduced the European Data Strategy and the European Artificial Intelligence Strategy to promote the use of data and AI for economic growth and societal benefits. The European Commission considers the targets of "more than 90% of SMEs reach at least a basic level of digital intensity" and "75% of EU companies using cloud/AI/big data" for the transition to digitalization by 2030 (Commission 2021). In specific sectors, digital transformation has had a significant impact. In the healthcare sector, for example, digitalization has led to the development of telemedicine and e-health services, which allow patients to receive medical treatment remotely and improve access to healthcare for citizens in rural areas (Gjellebeek et al. 2020). In the manufacturing sector, Industry 4.0 technologies (Frank et al. 2019), such as the internet of things (Haghnegahdar et al. 2022) and advanced robotics (Parmar et al. 2022), are being used to increase efficiency and re duce costs. 
In the field of education, digital transformation has led to the development of online learning platforms and the incorporation of digital tools in the classroom, which can improve access to education and personalize learning for students (Sousa et al. 2022). Another target of the European Commission for digital transformation by 2030 is for 80% of the population to have at least some digital skills, using the slogan "gigabit for all and 5G everywhere" (Commission 2021), because 40% of Europeans do not have basic digital skills. This highlights the need to ensure that all citizens have access to digital technologies and the skills to use them, in order to participate in the digital economy and benefit from digitalization. Automation and digitalization are expected to lead to the displacement of up to 14% of jobs in the EU by the end of the decade. This highlights the need to address the potential negative impacts of digitalization on employment and to ensure that workers are reskilled to adapt to the changing labor market. These statistics demonstrate the need to address the social sustainability of digital transformation (SOSDIT) in Europe, to ensure that digitalization benefits all citizens, and that technology is used to improve the lives of all members of society, rather than exacerbating existing social inequalities. SOSDIT, indeed, is about considering the social impact of technology in the process of digital transformation. Social sustainability in the context of digital transformation refers to the impact that technology and digitalization have on society as a whole. Iqbal et al. (Iqbal et al. 2021) define social sustainability as "a measure of the human's welfare". This includes ensuring that the benefits of digitalization are equitably distributed, and that technology is used to improve the lives of all members of society, rather than exacerbating existing social inequalities. Additionally, social sustainability in digital transformation also includes addressing the potential negative impacts of technology on employment and privacy. While digital transformation is seen as an opportunity to drive economic growth and improve citizens' lives, it is important to consider the SOSDIT in European countries to ensure that the benefits of digitalization are equitably distributed, and that technology is used to improve the lives of all members of society. Although both European countries and institutions influencing the development of European countries (such as the European Union and the European Commission) have understood the importance of SOSDIT and have defined goals in their development plans towards achieving SOSDIT, there are no criteria and metrics to evaluate the level of SOSDIT of a country. Hence, the fundamental question is: The first research question (RQ1): How socially sustainable is digital transformation across the EU-27 countries? On the other hand, achieving the Sustainable Development Goals (SDGs) is important for European countries. The SDGs provide a framework for addressing some of the most pressing global challenges, such as poverty, inequality, and climate change (Clemente-Suarez et al. 2022). By achieving the SDGs, European countries can contribute to creating a more sustainable and equitable world (D'Adamo et al. 2022). The SDGs are relevant to the economic, social, and environmental challenges that European countries are facing. Achieving the SDGs can help to drive economic growth, improve citizens' lives, and create more inclusive and resilient societies. 
Therefore, the second research question (RQ2) is: RQ2: Does SOSDIT lead EU-27 countries in achieving sustainable development goals? The present study was, in fact, conducted to answer these two research questions (i.e., RQ1 and RQ2). For this purpose, the current research aims to bridge the gap in the literature by developing a conceptual model to provide a tool for measuring SOSDIT and on the other hand, to provide quantitative empirical evidence to answer the questions raised. Therefore, the current article pursues two objectives: (1) to provide a model for assessing the SOSDIT at a country level and (2) to evaluate the effect of countries' performance in SOSDIT on their achievement of SDGs. The findings of this article not only theoretically contribute to the research literature of sustainability and digital transformation, but also provide quantitative empirical evidence to evaluate the SOSDIT of EU-27 countries. To do so, in the second section of this article, the subject literature as well as the development of hypotheses and the design of the conceptual model of this article have been elaborated. The third section of this article is dedicated to data collection and methodology applied for data analysis. The results of the quantitative analysis of the conceptual model as well as the test of the hypotheses are given in the fourth section of the article. The analysis, the implementation, and the limitations of the findings are presented in Sections 5 (i.e., Findings and Discussion) and 6 (Conclusion). ## 2 Theoretical Background Technology has a profound impact on society, and it is essential to ensure that technology is developed, deployed, and used in ways that promote social well-being and support human values (Felt, 2022). Failure to consider the social dimension of digital transformation can lead to negative consequences, such as digital divides (Reggi and Gil-Garcia, 2021), unequal access to technology (Tiku, 2021), data breaches (Seh et al., 2020), job losses (Bertani et al., 2020), cultural homogenization (Reid, 2006), and erosions of democratic governance (Clarke and Dubois, 2020). By addressing the social dimensions of digital transformation, society can ensure that technology is used to promote human development and support social progress. In other words, social sustainability of digital transformation should be able to evaluate the impact of technology on society and the ways in which society can ensure that the benefits of technology are accessible to everyone. Social sustainability refers to the maintenance and promotion of the well-being and quality of life of individuals and communities, with a focus on ensuring that social benefits and opportunities are equitably distributed and maintained over time (Afshari et al., 2022). In order to explain social sustainability, the researchers state different dimensions and aspects, of which four have been the most referred to, that are: (1) social inclusion (Clube and Tennant, 2022; Mirzoev et al., 2022), (2) human rights protection (Lozano, 2022; Trevino-Lozano, 2022), and (3) access to education (Leite, 2022; Singh and Singh, 2022). Social inclusion refers to the active engagement of all individuals and groups in society, regardless of their background, identity, or circumstances (Fante et al., 2022). This includes ensuring equal access to resources, services, and opportunities, as well as promoting diversity and reducing discrimination and prejudice. 
In the digital age, access to technology and the internet has the potential to greatly improve social inclusion by connecting people and providing access to information and resources that were previously out of reach. At the same time, however, the digital divide and unequal access to technology can deepen existing inequalities and exclusions, so it is important to ensure that everyone has access to the benefits of digital transformation. Hence, the concept of digital inclusion has been developed. Digital inclusion refers to the equal access and meaningful use of information and communication technologies (ICTs) by all members of society, regardless of age, gender, education, income, or other factors (Chohan and Hu, 2022). The goal of digital inclusion is to ensure that everyone can participate fully in the digital economy and society, and that the benefits of digital technologies are shared equitably. This includes ensuring access to the internet, digital devices, digital literacy skills, and digital content and services that meet the diverse needs of individuals and communities. Digital inclusion also aims to address the digital divide, which refers to the unequal distribution of technology and its benefits, and to ensure that everyone has the opportunity to participate in the digital world and benefit from its opportunities (Aissaoui, 2022). Therefore, it can be concluded that digital inclusion is one of the main aspects of SOSDIT. In fact, digital inclusion ensures equal access to technology and digital literacy skills for all members of society, which is important for bridging the digital divide and promoting equal opportunities. Accordingly, the first hypothesis of the current research is designed as follows: **H1.**_Digital inclusion is one of the factors of SOSDIT_. Human rights protection is an essential aspect of social sustainability, as it ensures that all individuals are treated with dignity, respect, and fairness, and have the freedom to participate in the decisions that affect their lives (Knebel et al. 2022). This includes the protection of civil, political, social, and economic rights, as well as the right to participate in the democratic process (Stone Sweet and Sandholtz 2023). On the other hand, digital transformation has significant implications for privacy, freedom of speech, and other human rights, and it is important to ensure that these rights are protected in the digital space (Kirchschlaeger 2019). For example, the collection and use of personal data, the impact of algorithmic decision-making, and the influence of misinformation and propaganda all raise important human rights concerns (Bharti and Aryal 2022). Besides, digital transformation has created new forms of risk and harm, such as online harassment and abuse (Francisco and Felmlee 2022), cyberbullying (Giumetti and Kowalski 2022), and exposure to harmful content (Donaldson et al. 2022; Katsaros et al. 2022). At the same time, digital technologies also offer new opportunities for promoting safety, such as by providing access to emergency services, enabling new forms of community support and resilience, and promoting digital literacy and awareness of online risks. When a country has strong privacy, data protection, and security laws and regulations in place, it means that citizens' personal information is protected from unauthorized access and misuse (Rusakova et al. 2020). 
This helps to ensure that citizens feel safe and secure when using digital services and technologies and can trust that their personal information will not be misused. Strong privacy, data protection, and security laws and regulations can also help to prevent discrimination and bias, as well as protect citizens from fraud and identity theft. Digital privacy and security are crucial for protecting personal data and information from misuse and unauthorized access, which is essential for maintaining trust in technology and safeguarding fundamental human rights. Therefore, the second hypothesis of this study can be developed as follows: **H2.**_Digital privacy and security is one of the factors of SOSDIT_. Access to education is crucial for social sustainability, as it provides individuals with the skills, knowledge, and perspectives necessary to participate in the modern world and contribute to the betterment of their communities (Ahel and Lingenau 2020). Education also supports economic growth, social mobility, and improved health outcomes. Digital skills are the ability to use digital technologies effectively, efficiently, and responsibly to find, evaluate, use, create, and communicate information (Morte-Nadal and Esteban-Navarro 2022; Timmaz et al. 2022). In order to participate in the digital economy and society, individuals need access to education and training that will help them develop digital skills (Ahel and Lingenau 2020). This includes not only formal education, but also training and support provided by employers, community organizations, and government agencies. Access to education provides individuals with the opportunity to develop digital skills, and digital skills are essential for accessing educational opportunities and benefiting from the digital transformation of education (Haleem et al. 2022). Therefore, the digital skills variable is a critical component of social sustainability of digital transformation. Digital skills address the impact of technology on employment and promote reskilling and upskilling to prepare workers for the digital economy, which is vital for supporting economic growth and social well-being (van Laar et al. 2019). Thus, the third hypothesis of this study is presented as follows: **H3.**_Digital skills is one of the factors of SOSDIT_. These aspects of social sustainability are important considerations in shaping the impact of digital transformation on society. To ensure that the benefits of digital technologies are equitably distributed and that the risks are mitigated, it is important to consider these aspects in the design and implementation of digital solutions. Sustainable digital transformation is an important aspect of achieving the Sustainable Development Goals (SDGs) set by the United Nations. The SDGs are a universal call to action to end poverty, protect the planet and ensure that all people enjoy peace and prosperity by 2030. Digital technologies are seen as a key enabler to achieve these goals, by improving access to information, education, healthcare, and economic opportunities, as well as by improving the efficiency and effectiveness of various sectors. Digital technologies have the potential to contribute significantly to achieving several of the Sustainable Development Goals (SDGs) set by the United Nations. 
For instance, SDG 1 (Kelikume 2021): No Poverty can be advanced by providing access to financial services and digital skills training to underserved communities, thereby creating new economic opportunities for individuals and communities. In addition, SDG 4 (Kalimullina et al. 2021): Quality Education can be achieved by providing access to online education and digital learning resources, which can help expand access to quality education for all. SDG 5 (ElMassah and Mohieldin 2020): Gender Equality can be promoted by providing access to digital services and technologies for women and girls and addressing digital gender gaps, thereby empowering women and girls to participate fully in the digital economy and society. SDG 8 (Myovella et al. 2020): Decent Work and Economic Growth can be advanced by creating new jobs and improving the productivity of existing jobs through the use of digital technologies. SDG 9 (Nobrega et al. 2021): Industry, Innovation and Infrastructure can be advanced by driving innovation and increasing access to digital technologies in various sectors, thereby helping to spur economic growth and development. SDG 11 (Perez-Martinez et al. 2023): Sustainable Cities and Communities can be advanced by using digital technologies to improve urban planning and management, thereby promoting more sustainable and livable communities. Finally, SDG 17 (Castro et al. 2021): Partnerships for the Goals can be advanced by fostering collaboration and sharing of knowledge and resources among various stakeholders through the use of digital technologies. Since SOSDIT has the potential to play a critical role in advancing the SDGs and promoting sustainable development, the fourth hypothesis of this study is as follows: **H4**. _The performance of countries in SOSDIT has a positive and direct effect on their performance in achieving SDGs._ The literature provides ample evidence that income inequality has a significant impact on achieving the SDGs. Kabeer and Santos (Kabeer and Santos 2017) argue that income inequality is often accompanied by other intersecting inequalities that can impede progress towards the SDGs. Similarly, Scherer et al. (Scherer et al. 2018) find that a reduction in income inequality is positively associated with achieving SDG 10, which aims to reduce inequalities within and among countries. Ghosh et al. (Ghosh et al. 2020) also report that reducing income inequality can contribute to achieving SDG 10 as well as SDG 11, on sustainable cities and communities, emphasizing the synergies between the goals. In addition, Heerink and Ma (Heerink and Jia 2006) suggest that rising income inequality can lead to lower health outcomes and possibly higher fertility rates (i.e., SDG 3). Nasrollahi et al. (Nasrollahi et al. 2018) further support this by demonstrating a negative and significant relationship between income inequality and the composite index of sustainable development. Based on these findings, we hypothesize that: **H5**. _The Gini coefficient of a country has a direct negative effect on the performance of countries in achieving the SDGs._ **H6**. _The Gini coefficient as a moderator variable affects the process of influencing SOSDIT on the achievement of SDGs._ The Gini coefficient, as a widely used measure of income inequality evaluation, reflects the distribution of income or consumption expenditure among individuals or households within a country. 
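For readers who want a concrete sense of the measure, the snippet below is an illustrative sketch of how a Gini coefficient can be computed from an income vector; the study itself takes country-level Gini values directly from the World Development Indicators rather than computing them from microdata.

```python
# Illustrative only: computing a Gini coefficient from individual incomes.
import numpy as np

def gini(incomes) -> float:
    """Gini coefficient in [0, 1]; 0 = perfect equality, 1 = maximal inequality."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Standard rank-based (Lorenz-curve) formula for a sample of incomes.
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

print(gini([1, 1, 1, 1]))      # 0.0  (perfect equality)
print(gini([0, 0, 0, 10]))     # 0.75 (highly unequal sample)
```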
By using the Gini coefficient, we aim to measure the extent of income inequality within countries and investigate its impact on the performance of countries in achieving the SDGs. It ranges from 0 to 1, with a value of 0 indicating perfect equality (everyone has the same income) and a value of 1 indicating perfect inequality (one person has all the income). Since the larger Gini coefficient represents greater inequality, it is expected to have a negative impact on the achievement of the SDGs, which is why this issue is mentioned in the fifth hypothesis. The graphical representation of the conceptual model and research hypotheses of this study are depicted in Figure 1. ## 3 Methodology ### Data Source In order to analyze the proposed conceptual model and evaluate the level of SOSDIT of the EU-27 countries, Eurostat data was used. EU-27 countries are Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden. In this study, the latest data available in the Eurostat database were used: it should be mentioned that the data collection was done in January 2023. In order to evaluate the digital inclusion variable, the data related to the ICT usage variable was used in this database, and the data related to ICT trust, security and privacy, and digital skills were used in this database to determine the digital privacy and security and digital skills variables, respectively. The data related to the Gini coefficient were collected from the World Development Indicators database, and finally, the SGD Index was used to evaluate the performance of the EU-27 countries in achieving the SDGs. In Table 1, the explanation of the data related to each variable is presented. \begin{table} \begin{tabular}{c c} \hline **Variables** & **Explanation** & **Question Code** \\ \hline Digital Inclusion Use of ICT at work and activities performed & Q1-1 \\ \hline \end{tabular} \end{table} Table 1: Description of the data used for each of the variables of the conceptual model. Figure 1: The proposed conceptual model and the hypotheses of the study. Work from home, from an external site or on the move Internet use by individuals Individuals frequently using the internet Smartphone has some security system, installed automatically or provided with the operating system (individuals who used internet in the past 3 months) Individuals know that cookies can be used to trace movements of people on the internet (3 months) Individuals manage access to personal data on the internet (3 months): read privacy policy statements before providing personal data Smartphone has some security system, installed automatically or provided with the operating system (All individuals) Smartphone has some security system, installed by somebody or subscribed to it (3 months) Individuals already lost information, documents, pictures or other kind of data on their smartphone as a result of a virus or other hostile type of programs (3 months) Individuals' level of digital skills (from 2021 onwards) Individuals who have used a search engine to find information Individuals who have sent an email with attached files Individuals who have posted messages to chat rooms, newsgroups or an online discussion forum Digital Skills Individuals who have used the internet to make phone calls Individuals who have used peer-to-peer file sharing for exchanging movies, music, etc. 
Employed ICT specialists -- total; Enterprises that provided training to develop/upgrade ICT skills of their personnel by NACE Rev.2 activity; Gini Coefficient -- GINI Coefficient; SDGs Index -- SDGs Index. ### Data Analysis To test the conceptual model proposed in this article, structural equation modeling based on partial least squares (SEM-PLS) was used, implemented in SmartPLS 4 software. SEM-PLS performs much better when evaluating models with little data (Becker et al., 2023). Since there were only 27 countries (i.e., 27 rows of data) for analysis, SEM-PLS was used. The SEM approach to evaluating conceptual models includes two stages. In the first stage, the measurement model is tested, and in the second stage, the structural model is evaluated. The measurement model refers to the relationship between the observable variables (the questionnaire questions) and the latent variables (the main variables that those questions represent). In order to evaluate the measurement model, validity and reliability tests, as well as factor analysis, are performed. The structural model, in turn, deals with the causal relationships between the latent variables (the main variables of the model), and to assess the structural model, path coefficients and coefficients of determination are checked. ## 4 Results ### Measurement Model In the present study, exploratory factor analysis was performed first, and its results are given in Table 2. It should be noted that two criteria should be considered for factor analysis: (1) the absolute value of the loading factors should be above 0.7 and (2) these loading factors should be significant in the confidence interval of at least 95% (Becker et al. 2023). The results of the factor analysis test show that all four questions selected to evaluate the latent variable of digital inclusion are above the threshold and are significant (\(p<0.05\)). In contrast, only one of the six questions considered to evaluate the digital privacy and security variable is both above 0.7 and significant (i.e., Q2-2). Since the absolute values of the loading factors of questions Q2-1 and Q2-5 are significant and equal to 0.636 and 0.638, respectively (very close to the threshold value), these questions were also used in the evaluation of the final model. In other words, in the present study, the loading factor threshold was considered equal to 0.6. A negative loading factor indicates an inverse relationship between the question and the variable. Because this question measures the level of security that users put in place when using digital tools, the more security strategies a user employs, the less secure he or she feels, which is why it relates inversely to the main variable. Since the loading factor of this question is significant (\(p<0.05\)), we tried not to ignore the importance of this question in the proposed model. Therefore, questions Q2-1, Q2-2, Q2-5 were used to evaluate the latent variable of digital privacy and security. Finally, the loading factors of six of the eight questions assigned to evaluate digital skills are above the threshold level (which is 0.6) and are significant (\(p<0.05\)). The loading factors of the SDG Index and Gini coefficient variables are 1 because only one question is assigned to each of them. After the factor analysis, the validity and reliability of the variables were measured.
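As a rough illustration of these checks (not the exact SmartPLS 4 computation), the sketch below applies the textbook formulas for Cronbach's alpha, composite reliability, and average variance extracted; the digital-inclusion loadings are taken from Table 2, and the function and variable names are illustrative.

```python
# Illustrative sketch of the measurement-model checks: Cronbach's alpha from
# raw indicator scores, and composite reliability (CR) / average variance
# extracted (AVE) from standardized loadings.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: observations x indicators (e.g. countries x questions);
    the raw country-level indicator matrix is not reproduced here."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings) -> float:
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1.0 - lam ** 2).sum())

def ave(loadings) -> float:
    lam = np.asarray(loadings, dtype=float)
    return float((lam ** 2).mean())

# Standardized loadings reported for digital inclusion (Q1-1 to Q1-4).
di_loadings = [0.882, 0.905, 0.941, 0.940]
print(composite_reliability(di_loadings))   # well above the 0.7 threshold
print(ave(di_loadings))                     # about 0.84, above the 0.5 threshold
```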
The results of the reliability test show that both Cronbach's alpha and composite relia \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{**Variables**} & \multirow{2}{*}{**Question Codes**} & \multirow{2}{*}{**Loading Factor**} & \multicolumn{3}{c}{**Standard**} & \multirow{2}{*}{**T Statistics**} & \multirow{2}{*}{\(p\) **Values**} \\ & & & & & **(STDEV)** & \\ \hline \multirow{4}{*}{Digital Inclusion} & Q1-1 & 0.882 & 0.88 & 0.041 & 21.369 & 0.00 \\ & Q1-2 & 0.905 & 0.907 & 0.028 & 31.862 & 0.00 \\ & Q1-3 & 0.941 & 0.942 & 0.015 & 63.208 & 0.00 \\ & Q1-4 & 0.94 & 0.94 & 0.017 & 56.546 & 0.00 \\ & Q2-1 & \(-\)0.636 & \(-\)0.599 & 0.254 & 2.508 & 0.012 \\ & Q2-2 & 0.750 & 0.717 & 0.171 & 4.386 & 0.00 \\ & Q2-3 & 0.179 & 0.155 & 0.342 & 0.522 & 0.601 \\ & Q2-4 & \(-\)0.407 & \(-\)0.365 & 0.311 & 1.31 & 0.19 \\ & Q2-5 & 0.638 & 0.602 & 0.216 & 2.95 & 0.003 \\ & Q2-6 & \(-\)0.446 & \(-\)0.452 & 0.18 & 2.485 & 0.013 \\ & Q3-1 & 0.836 & 0.829 & 0.07 & 11.864 & 0.00 \\ & Q3-2 & 0.958 & 0.955 & 0.019 & 50.504 & 0.00 \\ & Q3-3 & 0.914 & 0.914 & 0.023 & 39.239 & 0.00 \\ & Q3-4 & 0.574 & 0.525 & 0.206 & 2.789 & 0.005 \\ & Q3-5 & 0.638 & 0.601 & 0.176 & 3.631 & 0.00 \\ & Q3-6 & 0.226 & 0.179 & 0.246 & 0.919 & 0.358 \\ & Q3-7 & 0.895 & 0.898 & 0.03 & 29.422 & 0.00 \\ & Q3-8 & 0.805 & 0.805 & 0.073 & 10.99 & 0.00 \\ & SDGI & 1 & 1 & 0 & 0 & 0.00 \\ Gini Coefficient & GINI & 1 & 1 & 0 & 0 & 0.00 \\ \hline \hline \end{tabular} \end{table} Table 2: The result of measurement model test. bility (CR) of digital inclusion and digital skills are above the threshold level of 0.7. The acceptable threshold level for the average variance extracted (AVE) is 0.5. The AVE threshold value ensures that the questions assigned to a variable explain at least 50% of the variance of that variable (no other variables). Table 3 shows that the AVE values for digital inclusion and digital skills are above the threshold level of 0.5. However, the current study fails to provide the necessary reliability and validity to measure the latent variable of digital privacy and security, and this variable was removed from model--in other words, the hypothesis corresponding to this variable is rejected, which will be discussed in detail in the next part. ### Hypothesis Testing In SEM, the evaluation of the relationships between the main research variables (which are the latent variables) is called the structural model test. In the structural model test, the path coefficients should be statistically significant. The test of the structural model is actually the test of the hypotheses of this research. Testing the First, Second, and Third Hypotheses The first three hypotheses of this study indicate that digital inclusion, digital privacy and security, and digital skills shape the SOSDIT of a country. Of course, since the digital privacy and security variable could not achieve the required reliability and validity, despite the fact that its path coefficient (\(\beta\) = 0.201) is significant (\(p\) < 0.05) with a confidence interval of at least 95%, the second hypothesis of this study is not confirmed. Since this study employed the secondary data collected by Eurostat, failure to confirm the validity and reliability of the questions of this variable resulted in us removing the variable from the model because the authors of this article were not able to design different questions and recollect data in order to increase reliability and validity of this variable. 
Besides, the results of the structural model test show that the path coefficients of digital inclusion (\(\beta\) = 0.347) and digital skills (\(\beta\) = 0.500) are significant (\(p\) < 0.05). These results provide quantitative empirical evidence in support of the first and third hypotheses of this study. The summary of the test of the hypotheses of this research is given in Table 4. The fourth hypothesis of this study refers to the effect of SOSDIT on SDG Index. The result of this hypothesis test shows that the path coefficient of this variable to the SDG Index variable (\(\beta\) = 0.64) is significant (\(p\) < 0.001). Therefore, the fourth hypothesis of this research is also confirmed. On the other hand, the fifth hypothesis of this research refers to the influence of the Gini coefficient in the process of SOSDIT affecting the SDG Index, and the present study fails to provide quantitative empirical evidence to confirm this \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Hypotheses** & \(\beta\) & **Standard Deviation T Statistics** & \(p\) **Values** & **Result** \\ \hline Digital Inclusion \(>\) SOSDIT & 0.347 & 0.024 & 14.167 & 0.00 & Confirmed \\ Digital Privacy and security \(>\) SOSDIT & 0.201 & 0.051 & 3.929 & 0.00 & Not Confirmed \\ Digital Skills \(>\) SOSDIT & 0.5 & 0.04 & 12.402 & 0.00 & Confirmed \\ SOSDIT \(>\) SDG Index & 0.64 & 0.123 & 5.183 & 0.00 & Confirmed \\ Gini Coefficient \(>\) SDG Index & \(-\)0.308 & 0.137 & 2.252 & 0.024 & Confirmed \\ Gini Coefficient x SOSDIT \(>\) SDG Index & \(-\)0.088 & 0.141 & 0.625 & 0.532 & Not Confirmed \\ \hline \hline \end{tabular} \end{table} Table 4: Hypothesis testing results. \begin{table} \begin{tabular}{l c c c} \hline \hline **Variables** & **Cronbach’s Alpha** & **CR** & **AVE** \\ \hline Digital Inclusion & 0.937 & 0.938 & 0.841 \\ Digital Skills & 0.886 & 0.94 & 0.586 \\ Digital Privacy and Security & 0.513 & 0.601 & 0.295 \\ \hline \hline \end{tabular} \end{table} Table 3: The results of Cronbach’s alpha, CR, and AVE. hypothesis and this hypothesis is not confirmed. However, the results show that the Gini coefficient directly has a significant effect on the SDG Index, and since the path coefficient of this influencing process is negative (\(\beta\) = \(-\)0.308), the effect of this variable on the SDG Index is negative. In other words, the higher the Gini coefficient of a country, the lower its SDG Index. In Figure 2, the output of the SmartPLS software is presented, where loading factors (relationships between observable variables (yellow rectangles) and hidden variables (blue circles)), path coefficients (relationships between hidden variables), and also the magnitude of the coefficients of determination (R2) (which are the same numbers written in the blue circles/hidden variables) are shown. ### Answers to Research Questions RQ1: How socially sustainable is digital transformation in EU-27 countries? After confirming the first and third hypotheses of this study, it is possible to calculate the SOSDIT level of EU-27 countries. The average score of countries in the field of digital inclusion and digital skills is considered as the performance of those countries in SOSDIT. The performance of the 27 European Union member states is given in Table 5 and illustrated in Figure 3. The SOSDIT score can be between 0 and 1, where 1 is the highest score that a country can achieve in terms of SOSDIT, and the number 0 represents the weakest performance of a country in SOSDIT. 
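To make the scoring and the structural test concrete, the following is a simplified stand-in (not the SmartPLS estimation): a country-level SOSDIT score computed as the mean of min-max-rescaled digital-inclusion and digital-skills composites, followed by an ordinary least squares regression with a Gini x SOSDIT interaction in place of the bootstrapped PLS path coefficients. The file name and column names are hypothetical.

```python
# Simplified stand-in for the PLS structural model (hypothetical column names):
# SOSDIT as the mean of rescaled digital-inclusion and digital-skills scores,
# then an OLS with a Gini x SOSDIT interaction to mimic the moderation test.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("eu27_indicators.csv")   # hypothetical file, one row per country

def minmax(s: pd.Series) -> pd.Series:
    return (s - s.min()) / (s.max() - s.min())

df["sosdit"] = (minmax(df["digital_inclusion"]) + minmax(df["digital_skills"])) / 2

# Direct effects and the Gini-moderated effect on the SDG Index
# (the paper estimates these as PLS path coefficients, not OLS betas).
model = smf.ols("sdg_index ~ sosdit * gini", data=df).fit()
print(model.summary())

# Ranking countries by SOSDIT (cf. Table 5); assumes a 'country' column.
print(df.sort_values("sosdit", ascending=False)[["country", "sosdit"]].head(3))
```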
\begin{table} \begin{tabular}{l c c c} \hline \hline **Country** & **SOSDIT Score** & **Country** & **SOSDIT Score** \\ \hline Finland & 0.59 & Sweden & 0.50 \\ Netherlands & 0.57 & Czech Republic & 0.49 \\ Denmark & 0.56 & Ireland & 0.49 \\ Austria & 0.53 & Lithuania & 0.49 \\ Germany & 0.53 & Belgium & 0.48 \\ Cyprus & 0.52 & Italy & 0.48 \\ France & 0.51 & Slovenia & 0.48 \\ Hungary & 0.51 & Poland & 0.47 \\ Luxembourg & 0.51 & Slovakia & 0.47 \\ \hline \hline \end{tabular} \end{table} Table 5: SOSDIT score of EU-27 countries. Figure 2: The output of SmartPLS software – the research conceptual model test. According to the results, Finland, Netherlands, and Denmark have obtained the highest scores in SOSDIT, of 0.59, 0.57, and 0.56, respectively, and Romania, Bulgaria, and Portugal with scores of 0.41, 0.42, and 0.45 respectively, have had the weakest performance in SOSDIT. RQ2: Does SOSDIT lead EU-27 countries in achieving sustainable development goals? The hypotheses of this research have been tested and hypotheses one, three, and four of this research have been confirmed, and the effect of the Gini coefficient on the SDG Index has also been proven. The magnitude and intensity of the impact of digital inclusion, digital skills, and Gini coefficient on the SDG Index variable is measured with the R\({}^{2}\) coefficient. R\({}^{2}\) = 0.632 and it illustrates that the mentioned variables can explain 63% of the changes of the SDG Index, which is a very considerable amount. On the other hand, it is suggested to check the magnitude of the F-square statistic. F\({}^{2}\) is the change in R\({}^{2}\) caused by the removal of an exogenous variable from the model. According to **Cohen** (1988), values higher than 0.15 are desirable for this statistic. The summary of F\({}^{2}\) values is given in Table 6. Table 6 shows that the F\({}^{2}\) value of the effect of Gini coefficient on SOSDIT is less than the threshold, which represents the not considerable effect of the Gini coefficient on SOSDIT. Figure 3: Social sustainability of digital transformation in EU-27 countries. ## 5 Findings and Discussion Social sustainability of digital transformation refers to the ways in which digital technology is designed and used to support and promote social equity, fairness, and well-being, as well as to address social challenges such as inequality, poverty, and social exclusion. The development and deployment of digital skills are critical components of social sustainability as they enable individuals, organizations, and communities to participate in the digital economy and benefit from the opportunities it provides. The confirmation of the first and third hypotheses of this study made it possible to evaluate the level of social sustainability of digital transformation in European countries. The first hypothesis of the research illustrated that digital inclusion is crucial for the social sustainability of digital transformation in a country as it guarantees equal access to the benefits and opportunities provided by technology. By ensuring digital inclusion, the benefits of digital transformation can be shared equitably among all members of society. Digital technologies have the potential to bridge existing social and economic divides, and digital inclusion helps to prevent these divides from deepening by providing equal access to technology and digital skills. 
Additionally, digital technologies can improve health and well-being through telemedicine and access to health information, and digital inclusion makes sure that everyone can take advantage of these benefits, regardless of their location or financial situation. Similarly, digital technologies have the power to transform education, and digital inclusion helps to guarantee that everyone has access to these benefits, regardless of their background or circumstances. Hence, digital inclusion is a critical aspect of ensuring the social sustainability of digital transformation in a country. This study also shows that digital skills play a crucial role in the sustainability of digital transformation in a country, as they are essential for individuals, organizations, and communities to participate in the digital economy and benefit from the opportunities it provides. Without digital skills, individuals, organizations, and communities may be left behind, leading to digital inequality and exclusion. Digital skills are essential for individuals to participate in the digital economy, as many jobs now require a basic level of digital proficiency. The development of digital skills contributes to workforce development, improved access to information and services, increased entrepreneurship, and helps to bridge the digital divide by reducing digital inequality and increasing the participation of underprivileged communities and marginalized groups in the digital economy, ultimately contributing to overall economic stability and sustainability. In addition, the present study provides evidence that the degree of SOSDIT of a country affects its performance in achieving SDGs. SOSDIT is critical to ensuring that the benefits of digital technologies are shared equitably and that the negative impacts are mitigated for all members of society, ultimately contributing to the achievement of the SDGs. Besides, it is also found that the Gini coefficient has a negative impact on a country's performance in achieving the Sustainable Development Goals (SDGs). A high Gini coefficient indicates a large divide between the wealthy and the poor, where a small percentage of the population controls a large proportion of the wealth. This leads to several negative outcomes for the country's SDG Index. Firstly, it creates poverty and hardship for a large portion of the population, negatively impacting the SDGs of No Poverty and Reduced Inequalities. Secondly, it can result in decreased economic growth as the purchasing power of the majority of the population is reduced, negatively affecting the SDG of Decent Work and Economic Growth. Thirdly, it leads to inadequate access to \begin{table} \begin{tabular}{l c} \hline \hline & **F-Square** & **SDG Index** \\ \hline SOSDIT & 0.866 \\ Gini Coefficient & 0.174 \\ Gini Coefficient \(\star\) & SOSDIT & 0.02 \\ \hline \hline \end{tabular} \end{table} Table 6: Results of F-square test. basic services such as healthcare, education, clean water, and sanitation. Finally, high levels of income inequality can cause political instability, which can negatively impact a country's ability to achieve the SDGs. Thus, it is important for countries to address income inequality through policies that promote equitable distribution of wealth and resources, contributing to a more sustainable future. 
To encapsulate, the theoretical contributions of this study lie in the development of a conceptual model to evaluate the SOSDIT among EU-27 countries and the examination of the relationship between SOSDIT, the performance of countries in achieving the SDGs, and the Gini coefficient. The study provides a framework for understanding the importance of digital inclusion and digital skills as the building blocks of SOSDIT and highlights the direct impact of SOSDIT on the performance of countries in achieving the SDGs. The study also sheds light on the negative effect of the Gini coefficient on the performance of countries in achieving the SDGs. These contributions add to the existing literature on digital transformation, social sustainability, and the SDGs and provide valuable insights for policymakers, researchers, and practitioners. Based on the findings of this study, it is clear that an interdisciplinary approach is necessary to understand the complex relationship between digital inclusion, sustainability, and development. This study draws on insights from multiple fields, including economics, information systems, and sustainability studies. The developed conceptual model integrates these perspectives and provides a framework for analyzing the relationship between digital inclusion and the achievement of the SDGs. One critical reflection that this study systematizes is the need to view digital inclusion as a fundamental component of social sustainability. This view challenges traditional notions of sustainability that focus exclusively on environmental sustainability and recognizes that sustainable development must also include social and economic sustainability. This study shows that digital inclusion is a key enabler of social sustainability and can play a crucial role in achieving the SDGs. Another critical reflection that our study systematizes is the importance of recognizing the role of inequality in shaping the relationship between digital inclusion and sustainable development. These findings show that the Gini coefficient has a significant negative effect on the performance of countries in achieving the SDGs, highlighting the need to address inequality as part of efforts to promote sustainable development. This study underscores the importance of taking an intersectional approach that recognizes the ways in which different forms of inequality intersect and compound one another. ### Theoretical Contributions The theoretical contributions of this manuscript are multi-fold. Firstly, the concept of SOSDIT is introduced, which focuses on the ways in which digital technology can be designed and used to promote social equity, fairness, and well-being, and to address social challenges such as inequality, poverty, and social exclusion. This concept highlights the need to prioritize social sustainability in the design and deployment of digital technologies, which can help to ensure that the benefits of digital transformation are shared equitably among all members of society and that the negative impacts are mitigated for all. Secondly, the study identifies digital inclusion and digital skills development as critical components of SOSDIT. Digital inclusion refers to the need to ensure equal access to the benefits and opportunities provided by technology, while digital skills development is essential for individuals, organizations, and communities to participate in the digital economy and benefit from the opportunities it provides. 
These concepts highlight the importance of ensuring that all members of society have access to digital technologies and the skills needed to use them effectively, which can help to prevent the deepening of social and economic divides and contribute to the achievement of the SDGs. Thirdly, the study provides empirical evidence of the relationship between SOSDIT and the achievement of SDGs. The findings demonstrate that countries with a higher degree of SOSDIT have a higher performance in achieving SDGs, indicating the im portance of prioritizing social sustainability in the design and deployment of digital technologies. This provides a theoretical basis for policymakers to develop policies and strategies to promote social sustainability in the digital transformation process, ultimately contributing to a more sustainable future. Finally, the study identifies the negative impact of income inequality, as measured by the Gini coefficient, on a country's performance in achieving SDGs. This highlights the need to address income inequality through policies that promote equitable distribution of wealth and resources, which can contribute to achieving SDGs and promoting a more sustainable future. ### Practical Contributions The practical contributions of this study are significant as it provides important insights for policymakers, researchers, and practitioners to promote social sustainability in the digital transformation process. Firstly, the study highlights the importance of digital inclusion and digital skills as critical components of social sustainability. Policymakers can use these findings to design and implement policies that ensure equal access to technology and digital skills for all members of society, regardless of their background or circumstances. This can be achieved through initiatives such as free digital skills training programs, subsidized access to technology, and policies that ensure the availability of digital services in rural and underprivileged areas. Secondly, the study emphasizes the need to address income inequality through policies that promote equitable distribution of wealth and resources. Policymakers can use these findings to design policies that address income inequality, such as progressive taxation, social welfare programs, and investment in education and training. Thirdly, the study highlights the direct impact of SOSDIT on the performance of countries in achieving the SDGs. Policymakers can use these findings to prioritize SOSDIT in their national development plans, allocate resources to promote digital inclusion and digital skills, and design policies that ensure that the benefits of digital transformation are shared equitably among all members of society. ## 6 Conclusions In conclusion, this study highlights the critical role of SOSDIT in the achievement of the Sustainable Development Goals (SDGs) among EU-27 countries. Our findings demonstrate that digital inclusion and digital skills are the main factors of SOSDIT and significantly impact the ability of countries to attain SDGs. Our study also highlights the negative impact of income inequality, as measured by the Gini coefficient, on the performance of countries in achieving SDGs. These findings have important implications for policymakers and decision-makers, as they suggest that investment in digital inclusion and digital skills can have a positive impact on the sustainability of digital transformation and contribute to the achievement of the SDGs. 
Furthermore, reducing income inequality through progressive taxation, investment in education and job training programs, inclusive economic growth, and safety net programs for the most vulnerable populations, can also contribute to a more sustainable digital transformation and improved performance in achieving the SDGs. This study underscores the need for continued research and action to ensure a socially sustainable digital transformation that benefits all individuals and contributes to a more sustainable future. This study demonstrates the value of an interdisciplinary approach to understanding the complex relationship between digital inclusion, sustainability, and development. By systematically integrating insights from multiple fields, this study provides a framework for analyzing the relationship between digital inclusion and the achievement of the SDGs, highlighting the need to view digital inclusion as a fundamental component of social sustainability, and emphasizing the importance of addressing inequality in efforts to promote sustainable development. ### Practical Implications and Recommendations * Invest in digital infrastructure: Governments should invest in the development of digital infrastructure, such as high-speed internet access, to ensure that everyone has access to technology and digital skills. * Provide digital skills training: Governments should provide training and support to ensure that everyone has the skills and knowledge to use technology effectively. This includes training for individuals and organizations, as well as training for educators to ensure that digital skills are taught in schools. * Promote digital literacy: Governments should promote digital literacy and ensure that individuals have the skills and knowledge to use technology effectively. This can be achieved through education and training programs, as well as through public awareness campaigns. * Foster digital inclusion: Governments should foster digital inclusion by addressing issues such as the digital divide and ensuring that everyone has access to technology and digital skills. This can be achieved through public-private partnerships and community initiatives. * Investment in education and job training programs: Providing access to education and job training programs can help to equip individuals with the skills needed to secure well-paying jobs, increase their earning potential, and reduce income inequality. This can also lead to a reduction in the Gini coefficient and improve a country's performance in achieving the SDGs, especially the SDGs of Decent Work and Economic Growth, No Poverty, and Quality Education. By investing in education and job training programs, a country can provide opportunities for individuals to improve their lives and contribute to a more sustainable future. ### Recommendations for Future Research As with any study, there are limitations to the scope of research and the available data. This study provides valuable insights into the relationship between SOSDIT, the performance of countries in achieving the SDGs, and the Gini coefficient among EU-27 countries. However, there is a need for further research to build on these findings and to expand the understanding of this relationship beyond the EU-27. In this section, recommendations for future research are outlined that could help to address these limitations and provide a more comprehensive understanding of the role of digital transformation in achieving social sustainability and the SDGs. 
These recommendations include investigating regional disparities, examining the impact of technology adoption, studying the role of the private sector, and evaluating the effectiveness of policy interventions. * Investigation of regional disparities: This study focuses on EU-27 countries, but future research could explore regional disparities within countries and how they affect the performance of countries in achieving the SDGs. * Examination of the impact of technology adoption: Future research could explore the impact of technology adoption, such as the adoption of artificial intelligence and the internet of things, on SOSDIT and the performance of countries in achieving the SDGs. * Study of the role of the private sector: The private sector plays a critical role in digital transformation and the achievement of the SDGs. Future research could explore the role of the private sector in promoting SOSDIT and contributing to the achievement of the SDGs. * Evaluation of the effectiveness of policy interventions: Future research could evaluate the effectiveness of policy interventions aimed at promoting SOSDIT and the performance of countries in achieving the SDGs. This would provide valuable insights into what works and what does not, helping policymakers and decision-makers to make informed decisions. ### Limitations of the Study While this study provides valuable insights into the relationship between SOSDIT and the performance of countries in achieving the SDGs, it is important to note several limitations that may affect the interpretation of the results. Firstly, the study only focuses on EU-27 countries, which may limit the generalizability of the findings to other regions or countries. Future research could examine the applicability of these findings to other regions and expand the scope of analysis beyond the EU-27. Secondly, the study relies on available indicators to measure SOSDIT, the performance of countries in achieving the SDGs, and the Gini coefficient. The limitations of these indicators should be considered when interpreting the results, and future studies could explore alternative or additional indicators to provide a more comprehensive assessment of these concepts. Lastly, the study is limited by the availability and quality of data, as well as potential measurement errors or biases. Further research could address these limitations by collecting more comprehensive and accurate data, using alternative measurement approaches, or conducting case studies to provide a more nuanced understanding of the relationship between SOSDIT and the achievement of the SDGs. Conceptualization, S.N. and Sz.H.; methodology, T.A.; software, S.N.; validation, Sz.H.; formal analysis, S.N., T.A.; investigation, S.N.; data curation, T.A.; writing\(-\)original draft preparation, S.N., T.A.; writing\(-\)review and editing, Sz.H.; visualization, S.N.; supervision, Sz.H. This research received no external funding. Not applicable. The authors declare no conflict of interest.
2307.09909
Chit-Chat or Deep Talk: Prompt Engineering for Process Mining
This research investigates the application of Large Language Models (LLMs) to augment conversational agents in process mining, aiming to tackle its inherent complexity and diverse skill requirements. While LLM advancements present novel opportunities for conversational process mining, generating efficient outputs is still a hurdle. We propose an innovative approach that amends many issues in existing solutions, informed by prior research on Natural Language Processing (NLP) for conversational agents. Leveraging LLMs, our framework improves both accessibility and agent performance, as demonstrated by experiments on public question and data sets. Our research sets the stage for future explorations into LLMs' role in process mining and concludes with propositions for enhancing LLM memory, implementing real-time user testing, and examining diverse data sets.
Urszula Jessen, Michal Sroka, Dirk Fahland
2023-07-19T11:25:12Z
http://arxiv.org/abs/2307.09909v1
# Chit-Chat or Deep Talk: Prompt Engineering for Process Mining ###### Abstract This research investigates the application of Large Language Models (LLMs) to augment conversational agents in process mining, aiming to tackle its inherent complexity and diverse skill requirements. While LLM advancements present novel opportunities for conversational process mining, generating efficient outputs is still a hurdle. We propose an innovative approach that amends many issues in existing solutions, informed by prior research on Natural Language Processing (NLP) for conversational agents. Leveraging LLMs, our framework improves both accessibility and agent performance, as demonstrated by experiments on public question and data sets. Our research sets the stage for future explorations into LLMs' role in process mining and concludes with propositions for enhancing LLM memory, implementing real-time user testing, and examining diverse data sets. process mining, large language models, conversational agents ## 1 Introduction Process mining endeavours, increasingly prominent across various industry domains such as healthcare, manufacturing and supply chains, require stakeholder engagement with diverse skills [1]. This paper proposes a novel framework for creating conversational agents capable of directly extracting answers from event data, thus reducing the need for interaction with multiple stakeholders. Recent advancements in Large Language Models (LLM) have shown great potential in handling tasks such as interpreting nuanced questions from non-experts. Despite the apparent simplicity of their usage, the performance of LLMs is largely reliant on the construction of a well-crafted, nuanced prompt [2]. This paper explores the optimal methods for automating tasks typically performed by humans, specifically converting an end-user's question into an event data query. Conventionally, these tasks involve process analysts who comprehend the issues related to process optimization, domain experts who understand the process at hand, and data engineers who can effectively query the background data to generate accurate answers. The key challenge lies in assembling all necessary information to construct the query. By emulating and automating these tasks, this study aims to streamline the workflow and deliver results directly to the end-user. In this study, we have constructed a framework, supported by a Large Language Model (LLM), which emulates the tasks and skills associated with various process mining participants. We have developed prompts that emulate individual roles and used an orchestrator to integrate them. Finally, we employed the corpus of process mining questions collated by Barbieri et al. [3] and evaluated our framework using the BPI Challenge 2019 dataset [4]. Our findings indicate that in 77% of instances, LLMs were capable of fully or partially comprehending the question and outlining the appropriate solution. Furthermore, in 68% of cases, the model provided either the correct or a partially correct answer. This paper is organized as follows: Following this introduction, Chapter 2 provides an overview of the background and relevant literature in the field. Chapter 3 describes our proposed approach for integrating Large Language Models (LLMs) into conversational interfaces for process mining. Chapter 4 then illustrates an experimental evaluation to confirm the effectiveness of using LLMs for conversational querying in process mining. 
## 2 Background and relevant literature Process mining, a growing branch of data science, enables companies to analyze their business processes based on their event data. It provides insights into prevalent performance bottlenecks, inefficiencies, and compliance risks in operational work. Nonetheless, the discovered models are often complicated, underfit or Spaghetti-like diagrams [5], and the underlying event-log data can be challenging for non-experts to analyze [3]. This study aims to address this challenge by proposing a novel approach: integrating large language models (LLMs) into conversational interfaces for process mining querying. A large language model (LLM) is a natural language processing (NLP) tool that can understand and generate human-readable text [6]. This approach provides users, without technical or process-mining expertise, direct access to the insights derived from their event data. The success of many process mining projects often relies on effective interaction between diverse stakeholders [7]. The complexity of data and results, coupled with communication requirements, has amplified the interest in enhancing usability and understandability for all participants [8, 9]. Extending this, Dumas et al. suggest the need for not only improved usability but also the development of an Augmented Business Process Management System. This system would facilitate a conversationally actionable interface between humans and IT systems [10]. Recently, large language models (LLMs) such as GPT-4 have shown remarkable progress in natural language processing tasks. They have demonstrated their capability in generating human-like responses to complex queries and providing accurate answers to questions [11]. LLMs learn substantial linguistic and factual world knowledge from vast corpora of data. They can execute multiple tasks, but their performance can be highly variable and in some cases unsatisfactory [12]. The quality of the factual information obtained from the LLM depends on carefully designed and nuanced prompts [13]. To address this, prompt engineering has proven successful [2]. Prompt engineering in the context of large language models involves designing and optimizing prompts to evoke desired responses from these models. In essence, prompts serve as input texts that direct the language model to produce specific outputs. The efficacy of prompt engineering has been demonstrated across a range of applications. These include knowledge-based question answering [14], essay writing [15], sentiment analysis [15], and addressing medical challenge problems [16]. This study investigates the feasibility of using Large Language Models (LLMs) for conversational querying in process mining. It examines the challenges inherent to natural language processing (NLP) and explores strategies to mitigate some of these challenges, notably through prompt engineering and task orchestration based on varying outcomes. ## 3 Architecture and process for prompt engineering ### Generic approach One of the main challenges in implementing natural language interfaces for process mining analysis using LLMs is the creation of nuanced, context-specific task descriptions. These must generate SQL queries that are not only semantically correct but also precisely address the user's question. A common strategy employed to address such problems entails the generation of a prompt, including the user's question, followed by a request to generate SQL which answers the question. 
Such a prompt typically includes instructions regarding the format and characteristics expected in the SQL statement. The SQL is then executed against an event log database and the resulting solution, or an error, is fed back to the system, which decides whether to display it to the user or withhold it based on predefined criteria. This approach, however, underperforms in the following scenarios: 1. The question contains domain-specific terms. Effect: the LLM is unable to understand the specific terms and falls back to the generic meaning of the words. Often this leads to an incorrect interpretation of the question, and therefore the answer is inappropriate. 2. The SQL is malformed. Effect: an error occurs during execution of the SQL, and the user does not see an answer. 3. The question is complex and requires multiple SQL statements. Effect: the answer is malformed or meaningless SQL; alternatively, the generated SQL makes many assumptions and does not answer the actual question. 4. The data model is not standard or the LLM does not have sufficient information about its structure. Effect: the LLM hallucinates column names and makes assumptions about data formatting. The ensuing SQL statements either exhibit functional inadequacies, leading to erroneous outcomes, or generate outputs characterized by nonsensical or stochastic properties. In the subsequent sections, we present an architectural framework designed to effectively address the aforementioned limitations. ### Architecture To illustrate the architecture of our solution, consider the following scenario: we want an LLM to translate the question "What is the main bottleneck in my process in department A?" into a SQL query. The task requires specific information to execute. Necessary elements include: 1. The data structure, such as the case_concept_name field, timestamp, or event field names. 2. Process-mining-specific information, such as how the event log is built and what the meaning of case, activity or timestamp is in that context. 3. Domain-specific information, such as the interpretation of the term "process bottleneck", and instructions on how it is related to the underlying data. Also, a bottleneck can mean different things, such as a resource or time bottleneck. 4. Data set-specific information, such as the method for calculating duration if there are no _start, _end timestamps for each activity. 5. Mapping of data to domain knowledge, e.g. the representation of department A in the given data set. Figure 1 depicts the overall architecture of our framework. To generate a response, a customized _prompt_, integrating both general guidance on output structure and specific context, is created for each query. This necessitates an update to the eventlog _DB context_ (field names, structure, data types) for each new query. The LLM then establishes a list of required information and employs the _context ontology_ to enhance the prompt. Upon receiving the prompt, the LLM crafts an _SQL query_ to be run against _the Eventlog in the DB_. If the execution is successful, the user is provided with a response; if it fails, the LLM is given _feedback_--previous results, the SQL error, and instructions for correction. This iterative cycle continues until a satisfactory answer is achieved or a set loop limit is met (a minimal sketch of this loop is given below). ### Conceptualizing Interaction with LLM as a Process In the context of process mining, conversational agents face a significant challenge: they must effectively integrate diverse skill sets and knowledge domains to optimize company-specific processes. 
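To make the feedback loop of Figure 1 concrete, the following is a minimal sketch of such an orchestration step. The helper names (`build_prompt`, `ask_llm`), the use of `sqlite3` for query execution, and the retry budget are our illustrative assumptions rather than the authors' actual implementation; the escalation point mirrors the two-loop switch from GPT-3.5-turbo to GPT-4 described with Figure 3 below.

```python
"""Minimal sketch of the generate-SQL / execute / feedback loop of Figure 1.

All names below (build_prompt, ask_llm, the retry budget, the use of sqlite3)
are illustrative assumptions, not the authors' actual implementation.
"""
import sqlite3

MAX_LOOPS = 4        # assumed retry budget before asking the user for clarification
ESCALATE_AFTER = 2   # mirrors the two-loop switch from GPT-3.5-turbo to GPT-4


def build_prompt(question, db_context, ontology_hints, feedback=None):
    """Assemble a role-based prompt: schema, domain/process-mining context, prior errors."""
    parts = [
        "You are an experienced data engineer with expertise in databases and SQL queries.",
        f"Event log schema: {db_context}",
        f"Domain and process-mining context: {ontology_hints}",
        f"User question: {question}",
        "Return a single SQL query and explain your reasoning step by step.",
    ]
    if feedback:
        parts.append(f"Your previous attempt failed with: {feedback}. Correct the SQL.")
    return "\n".join(parts)


def answer_question(question, db_path, db_context, ontology_hints, ask_llm):
    """ask_llm(prompt, model) -> SQL string; supplied by the caller as an LLM client wrapper."""
    feedback = None
    for attempt in range(MAX_LOOPS):
        model = "gpt-3.5-turbo" if attempt < ESCALATE_AFTER else "gpt-4"
        sql = ask_llm(build_prompt(question, db_context, ontology_hints, feedback), model)
        try:
            with sqlite3.connect(db_path) as conn:
                return conn.execute(sql).fetchall()  # success: result shown to the user
        except sqlite3.Error as err:
            feedback = str(err)                      # failure: feed the error back
    return None  # loop budget exhausted; fall back to asking the user for more context
```

In practice, `ask_llm` would wrap whichever chat-completion client is used, and a successful result would be post-processed into a user-facing answer.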
Meeting this challenge requires data engineering expertise for understanding data contexts, process analysis capabilities to comprehend terms like "process model," "variant," "deviations," and to identify bottlenecks and improvement opportunities. Moreover, responding to domain-specific queries may require a domain expert's knowledge. Figure 1: Architecture of a conversational agent for process mining. Figure 2 illustrates the process we have developed to address the diversity of required skills and knowledge. One of the unique attributes of LLMs is their effectiveness in understanding problems when specific keywords are activated, such as "you are an experienced data engineer with expertise in databases and SQL queries (...)." By establishing a process and assigning roles to segregate duties and context, enabling the LLM to focus on its specific task, the outcomes of multiple queries exhibit greater specificity. In order to answer a specific question, a range of different prompts has to be created to query its different aspects. Once the context has been established, the initial prompt to the Data Engineer can be formulated. This prompt would only contain the necessary data and would instruct the LLM to construct the SQL query and elucidate the reasoning behind the sequential steps of obtaining an answer to the user's query1. Footnote 1: Additional prompt examples can be found at [https://tinyurl.com/chitchatdeeptalk](https://tinyurl.com/chitchatdeeptalk) The diagram in Figure 3 outlines the general process constructed in our proposed framework. Upon receiving a user's question, the application (orchestrator) first checks for similar questions in the database. These questions have previously been submitted to an LLM to generate embeddings for each of them. If any of these vectors shows a similarity greater than 0.9 with the new question, the orchestrator verifies the success of the previous answer. If successful, the SQL query is executed and the answer is immediately forwarded to the user. Otherwise, the orchestrator forwards the execution to the prompt creation task, where the context of the different perspectives of the question is assembled into one general prompt tailored to the question. Upon crafting the SQL query, it would be dispatched to the database via the orchestrator. In the event of errors, the Data Engineer prompt would be supplemented with the database's error information and executed again. Figure 2: The general process of prompt engineering. Upon successful query execution, the user would receive an answer to their question. After two loops, the model would be changed from GPT 3.5-turbo to GPT-4. The prompt creation component can also receive context from the most similar questions from the database. If it is not possible to create a correct answer, the orchestrator would ask the user to provide additional feedback to assist in answering the question. ## 4 Evaluation We developed our proposed conversational agent architecture and prompt engineering process using Python, employing GPT-3.5-turbo and GPT-4 as Large Language Models (LLMs). We conducted a study to assess the effectiveness of this approach, employing a real-world dataset and real-life questions. ### Data used in study Questions in Barbieri et al.'s work [3, 17] were sorted into four categories: process model, event log data, analysis, and advanced analysis, across 23 perspectives like bottleneck analysis and conformance checking. 
The question corpus, primarily in Portuguese and translated by volunteers, included malformed or unrelated queries. Rather than excluding these flawed questions, as was done by Barbieri et al. [3, 17], we tested them all on the LLM, adjusting only year-specific queries to "2019". This was done to explore the LLM's potential in comprehending complex or unclear texts, where traditional rule-based systems may struggle. The questions have been asked against the BPI Challenge 2019 dataset [4]2. Figure 3: Interaction between user and LLM as a process. Footnote 2: The dataset is a log of the purchase order handling process for a large multinational company operating from the Netherlands in the area of coatings and paints. The log contains a total of 1,595,923 events spread over 51,734 cases. The cases are created at the level of the position item of a purchase order. There are 42 different events in the eventlog. The events were executed mostly in 2018, 2019 and cases are spread over 13,881 variants. ### Initial Experiment Results We executed the experiment in two stages. The initial round, focusing on the corpus's first 100 questions, was designed to identify and address weaknesses in our methodology. After rectifying these, we progressed to the second round, evaluating the entire corpus. During the first round, we also fine-tuned our prompts and refined the architecture to minimize human intervention. Insights gained from this preliminary round were instrumental in enhancing our architectural approach for the second round involving the full corpus of questions. Details of these modifications and the enhanced process are discussed in Section 3. ### Adjusted setup and final experiment results To handle the complexity of defining expected answers, we manually evaluated the responses using certain criteria, dividing correct answers into **fully answered** and **partially answered** categories. * A **fully answered** question is one that correctly addresses the question, including any additional information provided by the LLM3. Footnote 3: For example, in response to a question like "Which tasks have the maximum duration," we would consider the LLM's answer correct if it not only identifies the tasks with the maximum duration but also provides additional information such as the specific duration of these tasks. * **Partially answered** questions are those where the executed database query didn't generate errors and the resulting answer required some expert interpretation4. Footnote 4: For instance, a query requesting the top variants in a process model might return all variants sorted by frequency, but fail to specify the top 3, 5, or 10 variants. These types of answers, which could be refined with extra feedback or contextual information for the LLM, were considered partially answered. In contrast, **wrongly answered** questions encompassed those that did not generate any code (resulting in SQL Server errors), as well as responses that, while calculated, could not be considered correct. In our evaluation process for the responses generated by the Large Language Model (LLM), we identified two primary criteria: **Understood** and **Partially Understood**. These criteria were assessed based on the logical chain of thought demonstrated by the model in its responses. * An **Understood** question is distinguished by the LLM's comprehensive and accurate response, suggesting a full grasp of the query. 
This entails proper problem identification and subsequent application of relevant operations to derive a solution5. Footnote 5: For instance, when asked to identify the primary bottlenecks in a process, the LLM correctly pinpoints them by measuring the durations between events, identifying cases that exceed the average duration, and ultimately, providing the specific bottleneck events. This line of reasoning displays a thorough understanding of the question. * Conversely, a **Partially Understood** question is characterized by the LLM's partial grasp of the question or its incomplete or partially correct response6. Footnote 6: A prime example is the query, "How many services do not conform to the online sales model?" Here, while the LLM rightly perceives the necessity to contrast event sequences with a predefined online sales model, it may fall short in comprehensively determining what qualifies as conformance to the model, or may not produce a complete count of non-conforming services. This yields a response that only partially addresses the question. Our findings indicate that the Chain-Of-Thought, particularly for Understood questions, is not just valuable for end-users but also enlightening for process analysts. It sheds light on how to break down complex questions into manageable parts and solve them systematically. For examples of "Understood" and "Partially Understood" questions with the LLM's reasoning behind them, refer to the file in the provided folder at [https://tinyurl.com/chitchatdeeptalk](https://tinyurl.com/chitchatdeeptalk). Table 1 summarizes how many of the 795 questions were (partially) answered or understood by the agent compared to the results of Barbieri et al. [17]. Though the proportion of fully or partially answered questions didn't increase significantly, utilizing the LLM permitted the assessment of a larger corpus of questions. In contrast to the method employed by Barbieri et al. [17], which solely examined whether the model comprehends the semantic essence of the question, our study additionally assessed the model's capacity to formulate logical reasoning for the correct solution. This approach provides a more in-depth perspective on the understanding capabilities of the LLM. For future research, an important objective would be to explore ways to effectively measure the understanding of LLMs to enhance the results. ### Improving the results with process orchestrator and one/few shot learning A large language model is already trained on a vast amount of data, and it is not possible to adjust the model itself in order to increase its performance on specific tasks. In the case of such models, a set of methods has been developed in order to tailor the model to the function it should fulfill. One of the methods is already explained in the Approach Chapter and consists of asking the LLM to explain the reasoning behind the answer (Chain-Of-Thought). \begin{table} \begin{tabular}{c c c c c} \hline \hline **Result** & **Count** & **Ratio** & **Count[17]** & **Ratio[17]** \\ \hline Answered & 285 & 36\% & 266 & 56\% \\ Partially answered & 254 & 32\% & 42 & 9\% \\ \hline \hline Understood & 155 & 19\% & 304 & 64\% \\ Partially understood & 459 & 58\% & 42 & 9\% \\ \hline \hline \end{tabular} \end{table} Table 1: Our experiments were conducted on a corpus of 795 questions, a substantial increase from the 476 questions evaluated in the study by Barbieri et al.[17]. 
These findings present a comparison between our results and those from the previous research. Similarly, additional methods used to fine-tune such models are zero- and few-shot learning7. Footnote 7: In the case of LLMs, zero-shot learning is achieved by providing the model with a prompt that describes only the task it needs to perform. The model then generates an output based on its understanding of the prompt and its pre-existing knowledge. Few-shot learning is achieved by providing the model with one or a few examples of the task it needs to perform, along with a prompt that describes the task. The model then generates an output based on its understanding of the prompt and the provided examples [13]. In our procedure, we utilized various methods to enhance the LLM's responses. The results of our experiments are displayed in Table 2. Our application's orchestrator attempted to execute the SQL code after each LLM response. In 61 instances, the GPT 3.5-Turbo Model produced the correct answer without additional shots (Zero Shot) and in 49 the answer was partially correct. If the code failed to execute, we asked the model to correct the error, supplying it with the error message. This method further improved 46 (Partially Answered) and 12 (Fully Answered) responses with GPT 3.5-Turbo. If these attempts still failed, the orchestrator involved the GPT-4 model, sending the entire conversation to it. If this approach did not yield the desired outcome, the orchestrator provided example code or generated additional context (one or few-shot). This improved an additional 61 cases in total for GPT 3.5-Turbo. This managed to enhance 193 partially correct responses for GPT 4.0 in few-shot mode, and a further 178 fully answered questions. It's evident from these results that GPT-4 performs best with few-shot learning. However, the significant cost associated with this model should not be overlooked. The GPT-3.5 Turbo model was capable of producing satisfactory results in 168 cases. The overall cost factor must be a part of the evaluation as well; for our case of answering approximately 800 questions, the total expenditure was around $60. In a real-world scenario with live data, this could translate to considerably higher costs. Therefore, the balance between accuracy and cost is essential when choosing between these solutions. ## 5 Conclusion In this paper, we've delved into the potential of Large Language Models (LLMs) in enhancing conversational agents for process mining. We've proposed a framework that fosters data-focused conversations and highlighted techniques that may boost model performance. \begin{table} \begin{tabular}{l r r} \hline \hline Zero vs Few Shot & GPT 3.5 & GPT 4.0 \\ \hline Zero Shot & 49 & 0 \\ Few Shot & 12 & 193 \\ **Sum (Partially Answered)** & **61** & **193** \\ Zero Shot & 61 & 0 \\ Few Shot & 46 & 178 \\ **Sum (Fully Answered)** & **107** & **178** \\ **Sum (Partially and Fully Answered)** & **168** & **371** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of GPT 3.5 and GPT 4.0 performance. We've demonstrated the value of incorporating supplementary data to improve LLM outcomes but underscore that this field still holds considerable untapped potential. We suggest future research could explore the idea of external memory for LLMs to retain context over extended interactions, and investigate the effectiveness of new prompt engineering methods. To further progress conversational agents, understanding the concept of "understanding" is critical. 
Detecting answers that are technically correct but semantically incorrect is another challenge, referred to as 'hallucination', that needs to be addressed. Live system testing with real users could provide valuable insights into real-world effectiveness. Moreover, studying system responses to varied user interactions could highlight its resilience and versatility. Examining additional datasets within the process mining domain may also yield unique challenges and insights, potentially refining our framework further. In conclusion, our research paves the way for harnessing LLMs in process mining, aiming to alleviate some barriers for non-expert users. We aspire that our work will inspire continued exploration in this area, leading to more intuitive, accessible, and efficient process mining tools. ## Acknowledgments Thanks to the developers of ACM consolidated LaTeX styles [https://github.com/borisveytsman/acmart](https://github.com/borisveytsman/acmart) and to the developers of Elsevier updated LaTeX templates [https://www.ctan.org/tex-archive/macros/latex/contrib/els-cas-templates](https://www.ctan.org/tex-archive/macros/latex/contrib/els-cas-templates).
2302.05735
Divergence-Based Domain Transferability for Zero-Shot Classification
Transferring learned patterns from pretrained neural language models has been shown to significantly improve effectiveness across a variety of language-based tasks, meanwhile further tuning on intermediate tasks has been demonstrated to provide additional performance benefits, provided the intermediate task is sufficiently related to the target task. However, how to identify related tasks is an open problem, and brute-force searching effective task combinations is prohibitively expensive. Hence, the question arises, are we able to improve the effectiveness and efficiency of tasks with no training examples through selective fine-tuning? In this paper, we explore statistical measures that approximate the divergence between domain representations as a means to estimate whether tuning using one task pair will exhibit performance benefits over tuning another. This estimation can then be used to reduce the number of task pairs that need to be tested by eliminating pairs that are unlikely to provide benefits. Through experimentation over 58 tasks and over 6,600 task pair combinations, we demonstrate that statistical measures can distinguish effective task pairs, and the resulting estimates can reduce end-to-end runtime by up to 40%.
Alexander Pugantsov, Richard McCreadie
2023-02-11T16:04:38Z
http://arxiv.org/abs/2302.05735v2
# Divergence-Based Domain Transferability for Zero-Shot Classification ###### Abstract Transferring learned patterns from pretrained neural language models has been shown to significantly improve effectiveness across a variety of language-based tasks, meanwhile further tuning on intermediate tasks has been demonstrated to provide additional performance benefits, provided the intermediate task is sufficiently related to the target task. However, how to identify related tasks is an open problem, and brute-force searching effective task combinations is prohibitively expensive. Hence, the question arises, _are we able to improve the effectiveness and efficiency of tasks with no training examples through selective fine-tuning?_ In this paper, we explore statistical measures that approximate the divergence between domain representations as a means to estimate whether tuning using one task pair will exhibit performance benefits over tuning another. This estimation can then be used to reduce the number of task pairs that need to be tested by eliminating pairs that are unlikely to provide benefits. Through experimentation over 58 tasks and over 6,600 task pair combinations, we demonstrate that statistical measures can distinguish effective task pairs, and the resulting estimates can reduce end-to-end runtime by up to 40%. ## 1 Introduction As the accuracy of neural models continues to increase, so does the computational cost of training and storing them. One approach of mitigating such cost is through using pretrained models to enhance performance on a downstream task, a paradigm commonly referred to as _transfer learning_. However, when and why transfer learning works is not concretely understood. Traditionally, selecting the best settings, i.e. tasks and hyperparameters, for transfer often involves an extensive trial-and-error process over many combinations and can quickly make the prospect of applying transfer learning undesirable. As such, it would be valuable to estimate whether a task pair combination will be effective pre-training, i.e. estimate the _transferability_ of a source task to a target task. The most optimal transferability metric would be resource-efficient, such that it is capable of accurately predicting the final performance of the model whilst minimising the amount of processing required to compute it. To this end, several works (Van Asch and Daelemans, 2010; Ruder and Plank, 2017; Ramesh Kashyap et al., 2021) have focused on estimating transferability prior to fine-tuning, using statistical measures of divergence between the underlying feature spaces of model pairs. Domain divergence measures are used to produce a notion of distance between pairs of domains by comparing their representations and have seen significant usage in works which investigate the correlation between their estimations and performance change (Van Asch and Daelemans, 2010; Ramesh Kashyap et al., 2021). Subsequent transfer learning works have also demonstrated that competitive model performance can be achieved on some target tasks even if no training samples for that task are available, an approach known as _zero-data/shot learning_(Larochelle et al., 2008). In this work, we investigate the effectiveness of domain divergence measures in estimating the performance of zero-shot classification models, wherein models further tuned on one source task are used to directly predict on the test set of a target task without any target training samples. 
Specifically, we leverage the information captured by these measures as features to an auxiliary learner, whose outputs are used to rank the most effective source model for transfer to a given target task. Through the analysis of 58 sentiment classification domains, we: (1) perform a correlation analysis between each independent measure and each source-target, macro-averaged \(F_{1}\)-score performance output; (2) and, for each target task, we train a series of auxiliary regression models to predict their projected performance; (3) we then convert these into rankings of source-target pairs and evaluate the capability of our learners to find the best source model for each given target domain. ## 2 Experiment Setup **Measures:** Ramesh Kashyap et al. (2021) provide categories of divergence measures, two of which we use in our work: _Geometric_ measures which calculate distances between continuous representations such as word embeddings and _Information-theoretic_ measures which capture the distance between representations such as frequency-based distributions over co-occurring n-grams. We do not report higher-order measures as in the aforementioned work, but instead report _moments_-based features, which better describe the characteristics of our individual term distributions--namely the mean, variance, skewness, and kurtosis of our distributions--as features to our learner. Following prior work (Tsvetkov et al., 2016; Ruder and Plank, 2017), we further complement the above measures by making use of several metrics that capture diversity and prototypicality, such as entropy-based features; in our work, these measures are used with probability distributions, and are, as such, categorised here as information-theoretic. Specifically, we use the following metrics: * **Geometric**: Cosine distance, \(l_{1}\)- (or Manhattan dist.) and \(l_{2}\)-norm (or Euclidean dist.). * **Information-theoretic**: Renyi and Jensen-Shannon divergences (Wong and You, 1985; Renyi et al., 1961), Bhattacharyya Coeff. (Bhattacharyya, 1943), Wasserstein distance (Kantorovich, 1960), Entropy and Renyi Entropy (Shannon, 1948; Renyi et al., 1961), Simpson's Index (Simpson, 1949). * **Moments-based**: Mean, variance, skewness, and kurtosis (\(\sigma^{n}\) where \(n\in[1..4]\)). **Representations:** To compute the above metrics, we use two different representations from prior work by Ruder and Plank (2017), specifically 1) discrete probabilities of the most common terms across domains, using a fixed-size vocabulary \(V\), where \(|V|=10,000\); and 2) a summation over probability-weighted term embeddings in each document, averaged to produce a single vector: 1. **Term Distributions (TD)** (Plank and van Noord, 2011): \(t\in\mathbb{R}^{|V|}\) where \(t_{i}\) is the probability of the \(i\)-th word in the vocabulary \(V\). 2. **BERT Embeddings (BE)** (Devlin et al., 2018): \(\frac{1}{n}\sum_{i}v_{w_{i}}\sqrt{\frac{a}{p(w_{i})}}\) where \(n\) is the number of words with embeddings in the document, \(v_{w_{i}}\) is the pretrained embedding of the \(i\)-th term, \(p(w_{i})\) its probability, and \(a\) is a smoothing factor used to discount frequent probabilities. Following guidelines by Ruder and Plank (2017), we use this representation with geometric-based measures only, as embedding vectors can be negative. Generally, since we are using these representations in a zero-shot setting, we compute divergences between the source-task training set (\(D_{S}\)) and the target-task test set (\(D_{T}\)). 
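As a concrete illustration of how these representations feed the divergence features, the sketch below computes the TD and BE representations and a handful of the geometric and information-theoretic measures for one source-target pair. The helper names, whitespace tokenization, and the default smoothing constant are our assumptions for illustration; only a subset of the measures listed above is shown.

```python
"""Illustrative sketch (not the authors' code) of divergence features between
D_S (source-task training set) and D_T (target-task test set)."""
from collections import Counter

import numpy as np
from scipy.spatial.distance import cosine, jensenshannon


def term_distribution(docs, vocab):
    """TD representation: relative frequencies over a fixed vocabulary (|V| = 10,000)."""
    vocab_set = set(vocab)
    counts = Counter(tok for doc in docs for tok in doc.split() if tok in vocab_set)
    freqs = np.array([counts[w] for w in vocab], dtype=float)
    return freqs / max(freqs.sum(), 1.0)


def weighted_bert_embedding(doc, embeddings, probs, a=1e-3):
    """BE representation: probability-weighted average of pretrained term embeddings."""
    vecs = [embeddings[w] * np.sqrt(a / probs[w]) for w in doc.split() if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else None


def divergence_features(td_source, td_target):
    """A few of the geometric / information-theoretic measures used as learner features."""
    return {
        "cosine": cosine(td_source, td_target),
        "l1": float(np.abs(td_source - td_target).sum()),
        "l2": float(np.linalg.norm(td_source - td_target)),
        # scipy's jensenshannon returns the distance; squaring gives the divergence
        "jensen_shannon": float(jensenshannon(td_source, td_target) ** 2),
    }
```

The resulting dictionary would form one row of the feature matrix later passed to the auxiliary regression model.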
Entropy and moments-based measures are not used to estimate divergence between domains but used only to compute within-domain characteristics, i.e. on individual term distributions. **Datasets and Domains:** We make use of two ratings prediction datasets with classes in the range 1-5 and, similarly to Zhang et al. (2015), reformulate the task as a binary sentiment classification task by merging the provided labels; 1-2: negative and 3-4: positive. We focus on similar, within-task (i.e. sentiment classification) datasets to (1) remove task variation as a variable, (2) and to highlight the effectiveness of using statistical measures to compute divergence between similar domains which may have very minute differences in semantics and other linguistic phenomena. The first is the _Amazon Product Reviews_ dataset, using the review title and review content fields as features and divide the dataset by the product category labels. As a supplementary contribution to our work, we create the _Multi-Domain Yelp Business Reviews_ dataset by extending the original _Reviews_ and _Business_ datasets provided by the _Yelp Dataset Challenge_, mapping top-level1 categories of businesses to their respective reviews. After filtering out low-sample (\(\leq 30,000\)) domains, we have 42 and 16 domains for the Amazon and Yelp datasets, respectively. Footnote 1: We determined which categories were top-level based on an article written by Yelp **Implementation Details:**We use BERT\({}_{base}\)(Devlin et al., 2018) as our base model in the experiments. With both runtime- and storage-efficiency in mind, we make use of adapter modules Pfeiffer et al. (2020) and train each of the domains as a source task adapter, leaving the rest of BERT's parameters frozen. More implementation and hyperparameter details can be found in Appendix A. We divide our experiments into two separate settings by source-task sample size, \(N_{S}\in[1000,25000]\). We train 116 source-task adapters (58 \(D_{S}\times 2\)\(N_{S}\) settings), and evaluate a total of 6,612 source-target combinations for analysis. For our auxiliary learner, we use an XGBoost Chen and Guestrin (2016) regression model. We split our training and test sets by the target task and train 2,900 regression models (for each of the 58 target domains, 2 sample sizes settings, 5 feature sets, and over 5 random seeds). ## 3 Experiments and Results To evaluate whether the aforementioned statistical measures are predictive of task pair transferability, we perform a correlation analysis between the source-target pairs within each domain, where we contrast the statistical measure (which provides information about \(D_{S}\), \(D_{T}\), or the differences between them) and the resultant performance (measured using macro-averaged \(F_{1}\)) when using \(D_{S}\) to tune a model for application on task \(T\). Table 1 reports Spearman's Rho (\(\rho\)) across all sample size settings for each statistical measure. Higher correlations (distance from 0) indicate increasing predictiveness of the statistical measure of transferability. 
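The per-measure correlation analysis reported in Table 1 can be sketched as follows; variable names are illustrative, and each position in the input lists refers to the same source-target pair.

```python
"""Sketch of the Table 1 analysis: Spearman's rho between a statistical measure
and zero-shot macro-F1 across source-target pairs. Names are illustrative."""
from scipy.stats import spearmanr


def table1_correlations(features_per_pair, f1_per_pair, alpha=0.05):
    """features_per_pair: {measure_name: [value for each (source, target) pair]}.
    f1_per_pair: zero-shot macro-F1 for the same pairs, in the same order."""
    rows = {}
    for name, values in features_per_pair.items():
        rho, p_value = spearmanr(values, f1_per_pair)
        rows[name] = (rho, p_value <= alpha)  # correlation and significance flag
    return rows
```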
Using the interpretation of Spearman's Rho (\(\rho\)) correlation coefficients by Dancey and Reidy (2007), we make the following observations: (1) Geometric measures exhibited a moderate-to-strong correlation for Term Distributions across both sample size settings, and strong correlations at \(N_{S}=25000\) for BERT Embeddings; (2) Between-domain Information-theoretic measures also showed moderate-to-strong performance correlations; (3) All entropy-based measures (aside from Simpson's Index for \(D_{T}\)) had a weak or negligible correlation with performance; (4) Out of all of the higher-order moments of Term Distributions, only the skewness and kurtosis of \(D_{T}\) (\(\sigma^{3}\) and \(\sigma^{4}\)) seemed to have a moderate relationship at \(N_{S}=1000\), and, generally, the moments of \(D_{T}\) seemed to be more correlated than those of \(D_{S}\). Overall, divergence measures with both representations seemed to be more predictive of source-target performances than entropy or moments-based metrics. However, since it is unlikely that each measure was independently capable of predicting performance, we trained a series of regression models for each target task, combining these measures. \begin{table} \begin{tabular}{|l|l||c|c|c|} \hline **Category** & **Measure** & **Term Distributions** & **BERT Embeddings** \\ \hline & & 1K & 25K & 1K & 25K \\ \hline \hline \multirow{4}{*}{Geometric} & Cosine Dist. & 0.3683\({}^{*}\) & 0.4801\({}^{*}\) & 0.3078\({}^{*}\) & -0.5792\({}^{*}\) \\ & \(L_{1}\) Dist. & 0.3097\({}^{*}\) & 0.6234\({}^{*}\) & -0.0792\({}^{*}\) & -0.4045\({}^{*}\) \\ \cline{2-5} & \(L_{2}\) Dist. & 0.3435\({}^{*}\) & 0.3515\({}^{*}\) & 0.0923\({}^{*}\) & -0.4228\({}^{*}\) \\ \hline \multirow{4}{*}{Info.} & Renyi Div. & 0.4766\({}^{*}\) & 0.4273\({}^{*}\) & -0.5194\({}^{*}\) \\ \cline{2-5} & Jensen-Shannon Div. & 0.3726\({}^{*}\) & -0.3914\({}^{*}\) \\ \cline{2-5} & Wasserstein Dist. & -0.2225\({}^{*}\) & -0.32266\({}^{*}\) \\ \cline{2-5} & Bhattacharyya Coeff. & 0.3700\({}^{*}\) & 0.5743\({}^{*}\) \\ \cline{2-5} & Entropy (\(D_{S}\)) & 0.1318\({}^{*}\) & 0.2275\({}^{*}\) \\ \cline{2-5} & Entropy (\(D_{T}\)) & -0.1603\({}^{*}\) & 0.0486\({}^{*}\) \\ \cline{2-5} & Renyi Entropy (\(D_{S}\)) & 0.1836\({}^{*}\) & -0.2284\({}^{*}\) \\ \cline{2-5} & Renyi Entropy (\(D_{T}\)) & 0.1618\({}^{*}\) & -0.0803\({}^{*}\) \\ \cline{2-5} & Simpson’s Index (\(D_{S}\)) & 0.0842\({}^{*}\) & 0.1359\({}^{*}\) \\ \cline{2-5} & Simpson’s Index (\(D_{T}\)) & 0.3127\({}^{*}\) & 0.1442\({}^{*}\) \\ \hline \multirow{4}{*}{Moments-based} & \(\sigma^{+}(D_{S})\) & -0.1321\({}^{*}\) & -0.1792\({}^{*}\) \\ & \(\sigma^{+}(D_{T})\) & -0.1254\({}^{*}\) & -0.2272\({}^{*}\) \\ \cline{1-1} & \(\sigma^{+}(D_{S})\) & -0.1289\({}^{*}\) & -0.1253\({}^{*}\) \\ \cline{1-1} \cline{2-5} & \(\sigma^{+}(D_{T})\) & -0.1794\({}^{*}\) & -0.2549\({}^{*}\) \\ \cline{1-1} \cline{2-5} & \(\sigma^{+}(D_{T})\) & 0.0106\({}^{*}\) & 0.0087 \\ \cline{1-1} \cline{2-5} & \(\sigma^{+}(D_{T})\) & -0.3823\({}^{*}\) & -0.2634\({}^{*}\) \\ \cline{1-1} \cline{2-5} & \(\sigma^{+}(D_{S})\) & 0.0006 & 0.0234\({}^{*}\) \\ \cline{1-1} \cline{2-5} & \(\sigma^{+}(D_{T})\) & -0.3491\({}^{*}\) & -0.2473\({}^{*}\) \\ \hline \end{tabular} \end{table} Table 1: Spearman's \(\rho\) correlations between each measure and source-target macro-averaged \(F_{1}\)-score performance. Asterisk denotes measure was statistically significant (\(P\leq 0.05\)). Figure 1: NDCG@K averaged across tasks for different feature sets. Higher is better. 
\(DIV\), \(H\), \(\sigma\) denote divergence-, entropy-, and moments-based measures, respectively. Specifically, we train an XGBoost (Chen and Guestrin, 2016) regression model (XGBRegressor) with each of the feature sets as our inputs, over five random seeds, for each of the 58 target domains and 2 sample size settings, producing 2,900 models for evaluation. Figure 1 shows the _Average NDCG@K_ values for each of these feature sets. We average the NDCG@K values across each of the 58 domains, and again over each of the 5 seeds. For both models, we achieve the best quality ranking using all of the features (\(ALL\)). Moreover, using divergence measures with both sets of representations (\(DIV_{TD,BE}\)) achieved a better ranking than using them in isolation (\(DIV_{TD}\) or \(DIV_{BE}\)) for both settings. It is also interesting to note that the feature set containing only the entropy and moments-based (\(H+\sigma\)) values achieves better performance than those estimated via divergence measures when the source sample size is significantly limited, coinciding with patterns found in our correlation analysis (Table 1); it may be the case that these features are more discriminative in cases where divergence measures are not as expressive. Finally, we evaluate the practical, downstream application of our regression models by considering how they may be used to reduce the search time in finding appropriate source models for transfer. For this experiment, we assume the user has a particular training budget \(K\) to train task pairs for transfer. The more task combinations that are tried, the more likely the user is to find a better-performing model for a particular task. We use our regression models to determine the order of task pairs to be tried, using the best feature set from our prior experiments (See Fig. 1). We compare with a random ordering of source-task models, which we average over five random seeds to reduce variance. Figure 2 shows the results of our experiments. For \(N_{S}=1000\), the best macro-averaged \(F_{1}\) performance score over all tasks is 0.8482 which, with a grid search over all task combinations, would require 4.7 hours of training. With our approach, we can achieve a 44% reduction in training time from 4.7 to 2.6 hours to achieve the same performance. For \(N_{S}=25000\), we can achieve the maximum score of 0.8899 through a grid search of all source-target combinations at a cost of 42.4 hours of training time. With our approach, we can achieve the same score with only 24.9 hours of training or a 41% reduction in training time. In determining the overall runtime of our approach, we factor in the computational cost associated with generating the features required to train our regression models. Our feature generation process consists of three stages: (1) the generation of term distributions and embedding representations, (2) the computation of statistical measures in Table 1, (3) and the execution of regression experiments using the \(ALL\) feature set. A total of 232 term distributions and an equivalent number of embedding representations (58 target domains each with separate training and test sets, in two different sample size settings) were generated. The generation of both sets of representations takes 5.7 minutes at \(N_{S}=1000\) and 45.9 minutes at \(N_{S}=25000\). The time taken to compute all statistical measures across both representations is 3 minutes at \(N_{S}=1000\) and 6.6 minutes at \(N_{S}=25000\). 
Finally, the time taken to run the regression experiments was 5.4 minutes in total. Despite the added computational cost, our approach has resulted in a substantial reduction in end-to-end runtime, boasting a 40% reduction at \(N_{S}=1000\) and a 39% reduction at \(N_{S}=25000\), demonstrating the efficiency of our approach and the value-add of predicting which task pairs are transferable beforehand. Figure 2: F1@K averaged across tasks vs. Total Runtime@K of source-task adapters. Higher is better. Runtime is reported in hours. ## 4 Conclusions and Future Work In this paper, we have shown that domain divergence measures and other statistical quantities are predictive of zero-shot transferability between tasks, and that this can be used to markedly reduce time when developing effective zero-shot models. Indeed, by predicting which source-target task pairs were likely transferable pre-tuning, we were able to reduce the end-to-end time taken to find the best source-target task pairs (trained on 1,000 source-task samples) by 40%. On the other hand, while we have demonstrated the value of using these metrics in performance estimation, there are a number of further directions worth investigating, namely: (1) examine the transferability across a wider range of domain and task types; (2) investigate more complex, higher-order measures such as those outlined by Ramesh Kashyap et al. (2021); (3) and to experiment with few-shot and other limited data settings. ## Limitations The most pronounced limitation in our work is the small variance in performance scores. As can be seen in Figure 2, the difference between the lowest and maximum performances is small. The difference between the minimum and maximum average performance is 0.0305 and 0.0320 for \(N_{S}=1000\) and \(N_{S}=25000\), respectively. Even at the individual, source-target model level, the standard deviation of performance scores at each source-task sample size setting is 0.0363 and 0.0311. As such, the benefits of zero-shot transfer are not as apparent between these domains as they would be where the domains are more textually distinct. Nevertheless, we believe it is notable that statistical measures of domain divergence and the other metrics were sufficiently capable of discerning between more effective source-task pairs, even when the domains were similar, illustrating the promise of this approach.
2308.10405
Complex Hessian measures with respect to a background Hermitian form
We develop potential theory for $m$-subharmonic functions with respect to a Hermitian metric on a Hermitian manifold. First, we show that the complex Hessian operator is well-defined for bounded functions in this class. This allows to define the $m$-capacity and then showing the quasi-continuity of $m$-subharmonic functions. Thanks to this we derive other results parallel to those in pluripotential theory such as the equivalence between polar sets and negligible sets. The theory is then used to study the complex Hessian equation on compact Hermitian manifold with boundary, with the right hand side of the equation admitting a bounded subsolution. This is an extension of a recent result of Collins and Picard dealing with classical solutions.
Slawomir Kolodziej, Ngoc Cuong Nguyen
2023-08-21T00:47:07Z
http://arxiv.org/abs/2308.10405v2
# Complex Hessian measures with respect to a background Hermitian form ###### Abstract. We develop potential theory for \(m\)-subharmonic functions with respect to a Hermitian metric on a Hermitian manifold. First, we show that the complex Hessian operator is well-defined for bounded functions in this class. This allows to define the \(m\)-capacity and then showing the quasi-continuity of \(m\)-subharmonic functions. Thanks to this we derive other results parallel to those in pluripotential theory such as the equivalence between polar sets and negligible sets. The theory is then used to study the complex Hessian equation on compact Hermitian manifold with boundary, with the right hand side of the equation admitting a bounded subsolution. This is an extension of a recent result of Collins and Picard dealing with classical solutions. _To the memory of Jean-Pierre Demailly_ ## 1. Introduction The \(m\)-Hessian operator is defined in terms of elementary symmetric polynomials of degree \(m\) of eigenvalues of the Hessian matrix of the given function. If the degree is equal to the dimension of the space then one deals with the most important case of the Monge-Ampere operator. One can also consider more general symmetric functions of eigenvalues. The nonlinear equations involving such operators will be called in this article _Hessian type_ equations. They do appear in geometry in problems involving curvatures, like the prescribed Gauss curvature equation or the Lagrangian mean curvature equation. The \(m\)-Hessian equations in \(\mathbb{R}^{n}\) were first solved by Caffarelli-Nirenberg-Spruck [10] for smooth, non-degenerate data. The study of weak solutions for measures on the right hand side was initiated by Trudinger and Wang [14, 15, 16] (see also [17]). Here we are interested in the complex setting and weak solutions. For smooth data the first solutions in complex variables were obtained by Vinacua [13] and S.Y. Li [18] who followed the method of [10]. Blocki [19] adopted the methods of pluripotential theory (initiated by Bedford and Taylor [16, 17] in relation to the complex Monge-Ampere equation) to define the action of the \(m\)-Hessian operator on non-smooth functions and study weak solutions of the associated equation. Let \(\Omega\subset\mathbb{C}^{n}\) be an open set and let \(\omega\) be a positive Hermitian \((1,1)\)-form on \(\Omega\). Let \(1\leq m\leq n\) be an integer and consider a function \(u\in C^{2}(\Omega,\mathbb{R})\). The complex Hessian operator with respect to \(\omega\) acts on \(u\) by \[H_{m}(u)=(dd^{c}u)^{m}\wedge\omega^{n-m}.\] The operator is elliptic if we restrict ourselves to functions \(u\) whose eigenvalues \(\lambda=(\lambda_{1},...,\lambda_{n})\) of the complex Hessian matrix \([u_{i\bar{j}}]_{1\leq i,j\leq n}\) with respect to \(\omega\) belong to the Garding cone \[\Gamma_{m}=\left\{\lambda\in\mathbb{R}^{n}:S_{1}(\lambda)>0,...,S_{m}(\lambda)>0 \right\},\] where \(S_{k}(\lambda)\) is the \(k\)-th elementary symmetric polynomial on \(\lambda\). Such a function is called \(m-\omega\)-subharmonic (or \(m-\omega\)-sh for short). For \(\omega=dd^{c}|z|^{2}\), the standard Kahler form on \(\mathbb{C}^{n}\), Blocki defined non-smooth \(m\)-subharmonic functions. He showed that the Hessian operator acting on a bounded \(m\)-subharmonic function is a well-defined positive Radon measure, that the operator is stable under decreasing sequences, and that the homogeneous Dirichlet problem is solvable. 
The non-homogeneous one with the right hand side in \(L^{p},\ p>n/m\), was solved by Dinew and the first author in [1]. On a compact Hermitian manifold \((X,\omega)\) the right \(m\)-Hessian operator to consider is \[H_{m}(u)=(dd^{c}u+\omega)^{m}\wedge\omega^{n-m},\] or more generally \[H_{m,\alpha}(u)=(dd^{c}u+\alpha)^{m}\wedge\omega^{n-m},\] where \(\alpha\) is another \((1,1)\) form. For \(\omega\) Kahler the counterpart of Calabi-Yau theorem was shown by Dinew and the first author in [1], with a use of earlier \(C^{2}\) estimates of Hou-Ma-Wu [10]. Having this result an analogue of pluripotential theory yields weak solutions (see [1]). We refer the readers to [1], [11, 12, 13] and [14] for results in potential theory for \(m-\omega\)-sh functions on a compact Kahler manifold. Our first goal here is to develop potential theory for \(m\)-subharmonic functions (with respect to a Hermitian metric) on a Hermitian manifold. The results often parallel those of pluripotential theory. Now we assume that \(\omega\) is a general Hermitian metric. The complex \(m\)-Hessian equation on compact manifolds was solved independently by Szekelyhidi [15] and Zhang [16]. The authors [11] obtained weak continuous solutions for the right hand side in \(L^{p},\ p>n/m.\) This partially motivates the development of potential theory for \(m-\omega\)-sh functions on Hermitian manifolds. Unlike in the Kahler case, we have to deal with the non-zero torsion terms \(dd^{c}\omega\) and \(d\omega\wedge d^{c}\omega\). A direct computation shows that for a smooth \(m-\omega\)-sh function \(u,\ 0\leq p\leq n-m-1\) and \(k\geq 1\), the form \((dd^{c}u)^{k}\wedge\omega^{p}\) may not be positive. Those terms appear when we perform integration by parts. This makes the proofs of basic potential estimates in the Hermitian setting substantially more difficult. For example, we need to fully exploit the properties of the positive cone \(\Gamma_{m}\), and show new inequalities on elementary symmetric polynomials to prove the Chern-Levine-Nirenberg (CLN) inequality [11] and a variant of Cauchy-Schwarz inequality in this paper. This coupled with the uniform convergence allows us to define the complex Hessian measure of a continuous \(m-\omega\)-sh function \(u\) as the weak limit of \[H_{m}(u):=\lim_{\delta\to 0}H_{m}(u^{\delta})=\lim_{\delta\to 0}(dd^{c}u^{ \delta})^{m}\wedge\omega^{n-m}, \tag{1.1}\] where \(\{u^{\delta}\}\) is a sequence of smooth \(m-\omega\)-sh functions converging uniformly to \(u\). However, if \(u\) is a bounded \(m-\omega\)-sh function up till now we have not been able to define its complex Hessian measure. No variant of Bedford-Taylor approach seems to work in this case. The first main result of the paper (Theorem 3.4) addresses this problem. We show that the weak limit (1.1) also exists for bounded \(m-\omega\)-subharmonic functions when \(u^{\delta}\downarrow u\) point-wise as \(\delta\to 0\). The proof is based on the CLN inequality in [16] and a measure theoretic lemma (Lemma 3.2). This is the starting point for proving analogues of Bedford-Taylor results presented in Chapter 1 of [14]. Thus we obtain (Theorem 4.4) the quasi-continuity of \(m-\omega\)-sh functions with respect to a suitable \(m\)-capacity: for a Borel set \(E\subset\Omega\), \[cap_{m}(E)=\sup\left\{\int_{E}H_{m}(u):u\text{ is }m-\omega\text{-sh in }\Omega,-1\leq u\leq 0 \right\}. \tag{1.2}\] To define this capacity we needed Theorem 3.4. 
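For orientation (a remark we add; the normalization constant is deliberately left unspecified), the two extreme cases of \(H_{m,\alpha}\) with \(\alpha=\omega\) recover familiar operators: \[H_{n}(u)=(dd^{c}u+\omega)^{n},\qquad H_{1}(u)=(dd^{c}u+\omega)\wedge\omega^{n-1}=dd^{c}u\wedge\omega^{n-1}+\omega^{n}.\] The first is the complex Monge-Ampere operator, while in the second the term \(dd^{c}u\wedge\omega^{n-1}\) equals, up to a dimensional constant depending on the normalization of \(d^{c}\), \((\Delta_{\omega}u)\,\omega^{n}\), with \(\Delta_{\omega}\) the Laplacian associated with \(\omega\); the intermediate values \(1<m<n\) interpolate between these two classical cases.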
Once quasi-continuity is proven, one obtains weak convergence of mixed wedge products of the forms \(dd^{c}u_{j}\) for \(m-\omega\)-sh functions \(u_{j}\) under monotone convergence (Lemma 5.1 and Lemma 5.5) following the classical arguments in [1, 1]. Next we study the polar sets and negligible sets of \(m-\omega\)-sh functions. In this setting it seems impossible to obtain nice formulae for the capacity of compact or open sets in terms of Hessian measures of extremal functions as is the case for Monge-Ampere measures. Exploiting further the properties of \(\Gamma_{m}\) in Section 2.4, especially Proposition 2.15, we are able to compare the outer capacity and the Hessian measures of relative extremal functions in Lemma 7.5. This suffices to give a characterization of a polar set by \(cap_{m}^{*}(E)=0\) in Proposition 7.7. Consequently, we conclude the equivalence of polar sets and negligible sets (Theorem 7.8). In the last sections we apply the above results to the complex \(m\)-Hessian equation. Recently, there has been a lot of active research on fully non-linear elliptic equations on compact Hermitian manifolds with or without boundary (cf. [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30]) in various geometric contexts. In particular Collins and Picard [14] solved the Dirichlet problem for the \(m\)-Hessian equation in an open subset of a Hermitian manifold under the hypothesis of existence of a subsolution and smooth data. We extend it in Sections 8 and 9, showing that the existence of a bounded subsolution implies the existence of a bounded solution for both bounded domains (Theorem 8.7) and compact complex manifolds with boundary (Theorem 9.1). In the proof the above equivalence of polar and negligible sets will play a crucial role (see Lemma 8.3). The homogeneous \(m\)-Hessian equation on a (Kahler) manifold with boundary was recently solved in a particular case in [20] in relation to the Wess-Zumino-Witten type equation proposed by Donaldson in [15]. _Acknowledgement._ The first author is partially supported by grant no. 2021/41/B/ST1/01632 from the National Science Center, Poland. The second author is partially supported by the National Research Foundation of Korea (NRF) grant no. 2021R1F1A1048185. This project was initiated during the visit of the first author to KAIST, and he wishes to express his gratitude for the great hospitality and perfect working conditions. ## 2. Generalized \(m\)-subharmonic functions ### Elementary symmetric positive cones In this section we prove important point-wise estimates for elementary symmetric polynomials. Let \(1\leq m\leq n\) be two integers. The positive cone \(\Gamma_{m}\) is given by \[\Gamma_{m}=\{\lambda=(\lambda_{1},...,\lambda_{n})\in\mathbb{R}^{n}:S_{1}(\lambda)>0,...,S_{m}(\lambda)>0\}, \tag{2.1}\] where \(S_{k}(\lambda)=\sum_{1\leq i_{1}<\ldots<i_{k}\leq n}\lambda_{i_{1}}\cdots\lambda_{i_{k}}\); and conventionally: \(S_{0}(\lambda)=1\) and \(S_{k}(\lambda)=0\) for \(k<0\) or \(k>n\). Let \(\lambda=(\lambda_{1},...,\lambda_{n})\in\Gamma_{m}\) be arranged in decreasing order, i.e., \[\lambda_{1}\geq\cdots\geq\lambda_{m}\geq\cdots\geq\lambda_{n}.\] Then, we know from [14, Lemma 8] that \(\lambda_{m}>0\), which is a consequence of a characterization of the cone \(\Gamma_{m}\). 
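This fact is easy to test numerically; the following sketch (our illustration, with an arbitrary sampling range) draws random vectors, keeps those lying in \(\Gamma_{m}\), and confirms that after sorting in decreasing order the \(m\)-th entry is positive, while the last entries may well be negative.

```python
import numpy as np
from itertools import combinations

def S(lam, k):
    """k-th elementary symmetric polynomial S_k(lambda)."""
    return sum(np.prod(c) for c in combinations(lam, k))

def in_gamma(lam, m):
    return all(S(lam, k) > 0 for k in range(1, m + 1))

rng = np.random.default_rng(0)
n, m = 4, 2
negative_tail_seen = False
for _ in range(10000):
    lam = np.sort(rng.uniform(-1.0, 2.0, size=n))[::-1]    # decreasing order
    if in_gamma(lam, m):
        assert lam[m - 1] > 0              # lambda_m > 0 on Gamma_m ([14, Lemma 8])
        negative_tail_seen |= lam[-1] < 0  # ...but lambda_n can be negative
print("negative last entry observed inside Gamma_m:", negative_tail_seen)
```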
Namely, for \(\{i_{1},...,i_{t}\}\subset\{1,...,n\}\) such that \(k+t\leq m\), one has \[S_{k;i_{1}\cdots i_{t}}(\lambda)>0, \tag{2.2}\] where \(S_{k;i_{1}\cdots i_{t}}(\lambda)=S_{k}|_{\lambda_{i_{1}}=\cdots=\lambda_{i_{t }}=0}.\) This implies also that for \(1\leq k\leq m\), \[S_{k-1}(\lambda)\geq\lambda_{1}\cdots\lambda_{k-1}. \tag{2.3}\] **Lemma 2.1**.: _There exists \(\theta=\theta(n,m)>0\) such that the following statements hold._ 1. _for_ \(1\leq j\leq m\)_,_ \(\lambda_{j}S_{m-1;j}\geq\theta S_{m};\)__ 2. _for_ \(1\leq i\leq m-1\)_,_ \(\lambda_{i}S_{m-2;im}\geq\theta S_{m-1;m}.\)__ Proof.: The item (a) follows from [14, Eq. (2.7)], while (b) follows from (a) if we replace \(n,m\) and \(S_{m}\) by \(n-1,m-1\) and \(S_{m-1;m}\), respectively. **Lemma 2.2**.: _There exists a uniform constant \(C\), depending on \(n,m\), such that the following inequalities are satisfied._ 1. _For_ \(1\leq i\leq m-1\) _and_ \(\lambda\in\Gamma_{m}\)__ \[\frac{\lambda_{1}\cdots\lambda_{m}}{\lambda_{i}}\leq C(S_{m-1;i}\,S_{m-1})^{ \frac{1}{2}}.\] 2. _Generally, for_ \(1\leq\ell\leq n\) _and increasing multi-indices_ \((i_{1},...,i_{m-1})\)_,_ \[\prod_{i_{s}\neq\ell;s=1}^{m-1}|\lambda_{i_{s}}|\leq C\left(S_{m-1;\ell}S_{m-1 }\right)^{\frac{1}{2}}.\] Proof.: (a) The inequality is equivalent to saying that there exist uniform constants \(c_{1},c_{2}>0\) such that for every positive number \(a\), \[a\frac{\lambda_{1}\cdots\lambda_{m}}{\lambda_{i}}\leq c_{1}a^{2}S_{m-1;i}+c_{2 }S_{m-1}, \tag{2.4}\] where \(1\leq i\leq m-1\). In fact as we will see in the proof \(c_{1},c_{2}\) are explicitly given constants. We observe that if \(a\leq 1\), then we can easily get the claim as \[\lambda_{1}\cdots\lambda_{m}/\lambda_{i}\leq\lambda_{1}\cdots\lambda_{m-1} \leq S_{m-1}.\] Now we consider \(a>1\). We prove (a) for the case \(i=1\), the other cases \(1\leq i\leq m-1\) follow in the same way. The basic identities/inequalities are \[S_{m;1m}+\lambda_{1}S_{m-1;1m}=S_{m;m} =S_{m}-\lambda_{m}S_{m-1;m}\] \[\geq-\lambda_{m}S_{m-1;m}, \tag{2.5}\] and \[S_{m-1;1m}+\lambda_{1}S_{m-2;1m}=S_{m-1;m}. \tag{2.6}\] Multiplying (2.5) by \(S_{m-2;1m}\) and (2.6) by \(S_{m-1;1m}\) to eliminate \(\lambda_{1}\) we get that \[S_{m-1;1m}^{2}-S_{m;1m}S_{m-2;1m} \leq S_{m-1;m}(S_{m-1;1m}+\lambda_{m}S_{m-2;1m})\] \[=S_{m-1;m}S_{m-1;1}. \tag{2.7}\] The Newton inequality holds for every \(\lambda\in\mathbb{R}^{n}\) and tells us \[S_{m;1m}S_{m-2;1m}\leq\frac{(m-1)(n-m+1)}{m(n-m+2)}[S_{m-1;1m}]^{2}=:c_{m}[S_{m-1; 1m}]^{2}.\] Notice that \(0<c_{m}<1\). Hence, we derive from the above and (2.7) that \[S_{m-1;1m}^{2}-c_{m}[S_{m-1;1m}]^{2}\leq S_{m-1;m}S_{m-1;1}.\] Therefore, \[S_{m-1;1m}^{2}\leq\frac{1}{1-c_{m}}S_{m-1;m}S_{m-1;1}. \tag{2.8}\] Using Cauchy-Schwarz' inequality, the inequality (2.8) and the formula \[S_{m-1;1m}=S_{m-1;1}-\lambda_{m}S_{m-2;1m},\] we get \[a^{2}S_{m-1;1}+\frac{1}{4(1-c_{m})}S_{m-1;m} \geq a\left[\frac{S_{m-1;1}S_{m-1;m}}{1-c_{m}}\right]^{\frac{1}{2}}\] \[\geq a|S_{m-1;1m}|\] \[\geq a(-S_{m-1;1}+\lambda_{m}S_{m-2;1m}).\] This implies the inequality \[(a^{2}+a)S_{m-1;1}+\frac{1}{4(1-c_{m})}S_{m-1;m}\geq a\lambda_{m}S_{m-2;1m}. \tag{2.9}\] Since \(a\geq 1\), it follows that \(2a^{2}S_{m-1;1}+CS_{m-1}\geq a\lambda_{m}S_{m-2;1m}.\) So, using Lemma 2.1 and (2.3) we have \[(2a^{2}S_{m-1;1}+CS_{m-1})\lambda_{1} \geq a\lambda_{m}\lambda_{1}S_{m-2;1m}\] \[\geq a\theta^{2}\lambda_{m}S_{m-1}\] \[\geq a\theta^{2}\lambda_{1}\cdots\lambda_{m}. 
\tag{2.10}\] The proof of the lemma is completed with \(c_{1}=2\) and \(c_{2}=\frac{1}{4(1-c_{m})}\) and \(C=\sqrt{2/(1-c_{m})}\). (b) The characterization (2.2) implies that a sum of any \((n-m+1)\) entries of \(\lambda\) is positive. Hence, we have for \(\lambda_{i_{s}}\leq 0\), \[|\lambda_{i_{s}}|\leq(n-m)\lambda_{m}.\] So, for \(1\leq\ell\leq m-1\), \[\prod_{i_{s}\neq\ell;s=1}^{m-1}|\lambda_{i_{s}}|\leq(n-m)^{m-1}\frac{\lambda_ {1}\cdots\lambda_{m}}{\lambda_{\ell}}\leq C\left[S_{m-1;\ell}S_{m-1}\right]^{ \frac{1}{2}}, \tag{2.11}\] where we used (a) for the second inequality. Now we treat the remaining range \(m\leq\ell\leq n\). By a result of Lin and Trudinger [14, Theorem 1.1], we know that \(S_{m-1;\ell}\geq\theta S_{m-1}\) for a constant \(\theta=\theta(n,m)\) depending only on \(n,m\). This implies \[S_{m-1}\leq(S_{m-1;\ell}S_{m-1})^{\frac{1}{2}}/\sqrt{\theta}.\] Thus, the desired inequality easily follows from this and the bound \[\prod_{i_{s}\neq\ell;s=1}^{m-1}|\lambda_{i_{s}}|\leq(n-m)^{m-1}\lambda_{1} \cdots\lambda_{m-1} \tag{2.12}\] for \(m\leq\ell\leq n\). The proof of (b) is completed. ### Cauchy-Schwarz's inequality Let \(\omega\) be a Hermitian metric on \(\mathbb{C}^{n}\) and let \(\Omega\) be a bounded open set in \(\mathbb{C}^{n}\). The positive cone \(\Gamma_{m}(\omega)\), associated to \(\omega\), of real \((1,1)\)-forms is defined as follows. A real \(\gamma\) is said to belong \(\Gamma_{m}(\omega)\) if at any point \(z\in\Omega\), \[\gamma^{k}\wedge\omega^{n-k}(z)>0\quad\text{for }k=1,...,m.\] Equivalently, in the normal coordinate with respect to \(\omega\) at \(z\), diagonalizing \(\gamma=\sqrt{-1}\sum_{i}\lambda_{i}dz_{i}\wedge d\bar{z}_{i}\), we have \(\lambda=(\lambda_{1},...,\lambda_{n})\in\Gamma_{m}\). Now we will translate the estimates in Section 2.1 into the integral forms. We can state the following versions of Cauchy-Schwarz's inequality in this setting. Let \(h\) be a smooth real-valued function and \(\phi,\psi\) be Borel functions. Let \(T\) be a positive current of bidegree \((n-2,n-2)\). **Lemma 2.3**.: _There exists a uniform constant \(C\) depending on \(\omega\) such that_ \[\left|\int\phi\psi\;dh\wedge d^{c}\omega\wedge T\right|^{2}\;\;\leq C\int| \phi|^{2}\;dh\wedge d^{c}h\wedge T\int|\psi|^{2}\;\omega\wedge T.\] Proof.: The proof of [16, Proposition 1.4] can be easily adapted. This lemma can be applied for the case \(T=\gamma^{s}\wedge\omega^{n-m+\ell}\), where \(\gamma\in\Gamma_{m}(\omega)\) and \(0\leq s,\ell\leq m-1\) and \(s+\ell=m-1\). We also need to deal with possible non-positive forms \(T^{\prime}=\gamma^{m-1}\wedge\omega^{n-m-1}\) where the classical Cauchy-Schwarz is not immediately applicable. However, we still have **Lemma 2.4**.: _There exists a uniform constant \(C\) depending on \(\omega,n,m\) such for every \(\gamma\in\Gamma_{m}(\omega)\),_ \[\left|\int\phi\psi\;dh\wedge d^{c}\omega\wedge\gamma^{m-1}\wedge \omega^{n-m-1}\right|^{2}\] \[\leq C\int|\phi|^{2}\;dh\wedge d^{c}h\wedge\gamma^{m-1}\wedge \omega^{n-m}\times\int|\psi|^{2}\;\gamma^{m-1}\wedge\omega^{n-m+1}.\] Proof.: We express the integrands of both sides as follows. \[dh\wedge d^{c}\omega\wedge\gamma^{m-1}\wedge\omega^{n-m-1}=f_{1}(z)\omega^{n},\] \[dh\wedge d^{c}h\wedge\gamma^{m-1}\wedge\omega^{n-m}=[f_{2}(z)]^{2}\omega^{n},\] \[\gamma^{m-1}\wedge\omega^{n-m+1}=[f_{3}(z)]^{2}\omega^{n}.\] Thus, the inequality will follow from the classical Cauchy-Schwarz inequality if we have point-wise \(|f_{1}(z)|\leq Cf_{2}(z)f_{3}(z)\) for every \(z\in\Omega\). 
This is proved by using the normal coordinate at a given point \(z\) with respect to \(\omega\) which diagonalizes also \(\gamma\), i.e., \[\omega=\sqrt{-1}\sum_{i=1}^{n}dz_{i}\wedge d\bar{z}_{i},\quad\gamma=\sqrt{-1} \sum_{i}\lambda_{i}dz_{i}\wedge d\bar{z}_{i},\] where \(\lambda=(\lambda_{1},...,\lambda_{n})\in\Gamma_{m}\). Denote \(h_{i}=\partial h/\partial z_{i}\). Then, at the point \(z\), \[\binom{n}{m-1}(f_{2})^{2}=\sum_{i=1}^{n}|h_{i}|^{2}S_{m-1;i},\quad\binom{n}{m -1}(f_{3})^{2}=S_{m-1}. \tag{2.13}\] Now, observe that \(\gamma^{m-1}\wedge\omega^{n-m-1}\) is a \((n-2,n-2)\) form, so after taking the wedge product with \(dh\wedge d^{c}\omega\) the non-zero contribution give only \(\sqrt{-1}\partial h\wedge\overline{\partial}\omega\) and \(\sqrt{-1}\ \overline{\partial}h\wedge\partial\omega\). As \(h\) is a real valued function, these two forms are mutually conjugate. Let us write \[\overline{\partial}\omega=\sum\omega_{i\bar{j}\bar{k}}d\bar{z}_{k}\wedge dz_{i} \wedge d\bar{z}_{j}.\] Denote \(dV=(\sqrt{-1})^{n^{2}}dz_{1}\wedge\cdots dz_{n}\wedge d\bar{z}_{1}\wedge\cdots d \bar{z}_{n}.\) Let \(J=(j_{1},...,j_{n-m-1})\) be an increasing multi-index. Then, \[\frac{1}{(m-1)!}\partial h\wedge\overline{\partial}\omega\wedge\gamma^{m-1} \wedge dz_{J}\wedge d\bar{z}_{J}/dV=\sum_{j,\ell\not\in J;j\neq\ell}c_{j\bar{j} \bar{\ell}}\,h_{\ell}\prod_{i_{s}\not\in J\cup\{j,\ell\}}\lambda_{i_{s}},\] where \(c_{i\bar{i}\bar{\ell}}\) is \(\omega_{i\bar{i}\ell}\) or \(\omega_{i\bar{\ell}\,\bar{i}}\), and \(i_{s}\in I=(i_{1},..,i_{m-1})\) which is an increasing multi-index satisfying \(I\cap J=\emptyset\). Then, at the point \(z\), \[|f_{1}(z)|\leq c_{0}\sum_{|I|=m-1}\sum_{\ell=1}^{n}|h_{\ell}|\prod_{i_{s}\neq \ell;s=1}^{m-1}|\lambda_{i_{s}}|, \tag{2.14}\] where \(c_{0}\) is a uniform constant depending only on \(\omega\). By (2.13) and (2.14) we have reduced the inequality \(|f_{1}|\leq Cf_{2}f_{3}\) to the one for symmetric polynomials. To show the latter, by Lemma 2.2-(b) for \(1\leq\ell\leq n\), we have \[|h_{\ell}|\prod_{i_{s}\neq\ell;s=1}^{m-1}|\lambda_{i_{s}}| \leq C\left[|h_{\ell}|^{2}S_{m-1;\ell}S_{m-1}\right]^{\frac{1}{2}}\] \[\leq C\left(\sum_{i=1}^{n}|h_{i}|^{2}S_{m-1;i}\right)^{\frac{1}{ 2}}[S_{m-1}]^{\frac{1}{2}}.\] Taking the sum of (finitely many) terms on the left hand side of (2.14) the proof of the theorem follows. We also need this inequality for wedge products of two forms and more. This is done by solving a linear system of inequalities as in [11, page 2226]. **Corollary 2.5**.: _There exists a uniform constant \(C\), depending on \(\omega,n,m\), such that the following inequalities hold._ * _For_ \(\eta,\gamma\in\Gamma_{m}(\omega)\)_,_ \[\left|\int\phi\psi dh\wedge d^{c}\omega\wedge\eta^{k}\wedge\gamma ^{m-k-1}\wedge\omega^{n-m-1}\right|^{2}\] \[\leq C\int|\phi|^{2}dh\wedge d^{c}h\wedge(\eta+\gamma)^{m-1} \wedge\omega^{n-m}\int|\psi|^{2}(\eta+\gamma)^{m-1}\wedge\omega^{n-m+1}.\] * _Generally, for_ \(\gamma_{1},...,\gamma_{m-1}\in\Gamma_{m}(\omega)\)_,_ \[\left|\int\phi\psi dh\wedge d^{c}\omega\wedge\gamma_{1}\wedge\cdots\wedge\gamma _{m-1}\wedge\omega^{n-m-1}\right|^{2}\] \[\leq C\int|\phi|^{2}\ dh\wedge d^{c}h\wedge(\sum_{i=1}^{m-1} \gamma_{i})^{m-1}\wedge\omega^{n-m}\times\] \[\qquad\times\int|\psi|^{2}(\sum_{i=1}^{m-1}\gamma_{i})^{m-1} \wedge\omega^{n-m+1}.\] ### \(m\)-subharmonic functions on Hermitian manifolds Let us recall the definition of generalized \(m\)-subharmonic function in the Hermitian setting (see [10], [11] and [12]). 
Let \(\Omega\) be a bounded open set in \(\mathbb{C}^{n}\) and let \(\omega\) be a Hermitian metric on \(\Omega\). **Definition 2.6**.: An upper semi-continuous function \(u:\Omega\to[-\infty,+\infty)\) is called \(m-\omega\)-subharmonic if \(u\in L^{1}_{\mathrm{loc}}(\Omega)\) and for any collection \(\gamma_{1},...,\gamma_{m-1}\in\Gamma_{m}(\omega)\), \[dd^{c}u\wedge\gamma_{1}\wedge\cdots\wedge\gamma_{m-1}\wedge\omega^{n-m}\geq 0\] in the sense of currents. **Remark 2.7**.: By Garding's [1] results a function \(u\in C^{2}(\Omega)\) is \(m-\omega\)-sh if and only if \(dd^{c}u\in\overline{\Gamma_{m}(\omega)}\), that is, \(dd^{c}u\wedge\gamma_{1}\wedge\cdots\wedge\gamma_{m-1}\geq 0\) point-wise in \(\Omega\). Thus, the estimates for forms in \(\Gamma_{m}(\omega)\) are applicable to \(dd^{c}u\) if \(u\) is a smooth \(m-\omega\)-sh function. It follows from Michelsohn [14] that for \(\gamma_{1},...,\gamma_{m-1}\in\Gamma_{m}(\omega)\) there is a unique positive \((1,1)\)-form \(\alpha\) such that \[\alpha^{n-1}=\gamma_{1}\wedge\cdots\wedge\gamma_{m-1}\wedge\omega^{n-m}.\] The above definition of \(m-\omega\)-sh function can be expressed more familiarly, in terms of potential theory, using the notion of \(\alpha\)-subharmonicity (see e.g., [12, Definition 2.1, Lemma 9.10]). Thanks to this many potential-theoretic properties of \(m-\omega\)-sh functions can be derived from those of \(\alpha\)-sh functions. For example, if two \(m-\omega\)-sh functions are equal almost everywhere (with respect to the Lebesgue measure), then they are equal everywhere [12, Corollary 9.7]. One also has the "gluing property" that allows one to modify the function outside a compact subset. This is an immediate consequence of [12, Lemma 9.5]. **Lemma 2.8**.: _Let \(U\subset\Omega\) be two open sets in \(\mathbb{C}^{n}\). Let \(u\) be \(m-\omega\)-sh in \(U\) and \(v\) be \(m-\omega\)-sh in \(\Omega\). Assume that \(\limsup_{z\to x}u(z)\leq v(x)\) for every \(x\in\partial U\cap\Omega\). Then, the function_ \[\widetilde{u}=\begin{cases}\max\{u,v\}&\text{on }U,\\ v&\text{on }\Omega\setminus U,\end{cases}\] _is \(m-\omega\)-sh in \(\Omega\)._ Because of this we have the following way of reducing a proof to a simpler case (see [1, page 7] for the proof). **Localization Principle**.: _If we are to prove the weak convergence or local estimate for a family of locally uniformly bounded \(m-\omega\)-sh functions, it is no loss of generality to assume that the functions are defined in a ball and are all equal on some neighborhood of the boundary._ For a bounded psh function its convolution with a radial smoothing kernel provides locally a smooth, decreasing approximation of this function. This is no longer the case for generalized \(m-\omega\)-sh functions. However, in a strictly \(m\)-pseudoconvex domain \(\Omega\), that is for \(\Omega=\{\rho<0\}\), where \(\rho\in C^{2}(\overline{\Omega})\) is strictly \(m-\omega\)-sh and \(d\rho\neq 0\) on \(\partial\Omega\), we still have a (non-explicit) way of approximation by smooth \(m-\omega\)-sh functions. **Proposition 2.9**.: _Let \(\Omega\subset\subset\mathbb{C}^{n}\) be a strictly \(m\)-pseudoconvex domain. Let \(u\) be a bounded \(m-\omega\)-sh function in a neighborhood of \(\overline{\Omega}\). 
Then, there exists a sequence of smooth \(m-\omega\)-sh functions \(u_{j}\in C^{\infty}(\overline{\Omega})\) that decreases to \(u\) point-wise in \(\overline{\Omega}\)._ Proof.: The proof is the same as the one of [1, Theorem 3.18] if we replace the ball by a strictly \(m\)-pseudoconvex domain and invoke [1, Theorem 1.1] for the smooth solution of the Dirichlet problem on a strictly \(m\)-pseudoconvex domain. ### Integral estimates for smooth functions Let \(\Omega\) be a bounded open set in \(\mathbb{C}^{n}\). Let \(-1\leq v\leq w\leq 0\) be smooth \(m-\omega\)-sh functions in \(\Omega\) such that \(v=w\) in a neighborhood of \(\partial\Omega\). Let \(\rho\) be a smooth \(m-\omega\)-sh function such that \(-1\leq\rho\leq 0\). Using the notation \(\gamma:=dd^{c}\rho\), \(h=w-v\) we consider \[e_{(q,k,s)}=\int h^{q+1}\gamma^{k}\wedge(dd^{c}v)^{s}\wedge\omega^{n-k-s}, \tag{2.15}\] where \(q\geq 0\), the integers \(0\leq k\leq m\) and \(0\leq s\leq m-k\). We wish to bound \[e_{(q,m,0)}=\int h^{q+1}\gamma^{m}\wedge\omega^{n-m},\] by the integrals \[e_{(r,0,i)}=\int h^{r+1}(dd^{c}v)^{i}\wedge\omega^{n-i},\] where \(i=0,...,m\) and \(1\leq r<q\). This is done via repeated use of the integration by parts to replace \(\gamma\) by \(dd^{c}v\). However, there are three different cases depending on the total degree \(k+s\) of the form \(\gamma^{k}\wedge(dd^{c}v)^{s}\) that we need to deal with separately. * Case 1: \(k+s=m\); * Case 2: \(k+s=m-1\); * Case 3: \(k+s\leq m-2\). Let us start with an auxiliary inequality that we use frequently bellow. **Lemma 2.10**.: _Let \(p\geq 1\) and \(0\leq k\leq m-1\). There exists a constant \(C\) depending on \(\omega,n,m\) such that_ * _for_ \(0\leq s\leq m-1-k\)_:_ \[\int h^{p-1}dh\wedge d^{c}h\wedge\gamma^{k}\wedge(dd^{c}v)^{s} \wedge\omega^{n-k-s-1}\] \[\leq\int h^{p}\gamma^{k}\wedge(dd^{c}v)^{s+1}\wedge\omega^{n-k-s-1 }+C\int h^{p+1}(\gamma+dd^{c}v)^{k+s}\wedge\omega^{n-k-s}.\] * _for_ \(0\leq s\leq m-3-k\)_:_ \[\int h^{p-1}dh\wedge d^{c}h\wedge\gamma^{k}\wedge(dd^{c}v)^{s} \wedge\omega^{n-k-s-1}\] \[\leq\int h^{p}\gamma^{k}\wedge(dd^{c}v)^{s+1}\wedge\omega^{n-k-s- 1}+C\int h^{p+1}\gamma^{k}\wedge(dd^{c}v)^{s}\wedge\omega^{n-k-s}.\] Proof.: (a) Note first that \(0\leq h\leq 1\) and also \(T:=\gamma^{k}\wedge(dd^{c}v)^{s}\wedge\omega^{n-k-s-1}\) and \(dd^{c}w\wedge T\) are positive forms for \(n-s-k-1\geq n-m\). Therefore, \[p(p+1)h^{p-1}dh\wedge d^{c}h\wedge T =[dd^{c}h^{p+1}-(p+1)h^{p}dd^{c}h]\wedge T\] \[\leq[dd^{c}h^{p+1}+(p+1)h^{p}dd^{c}v]\wedge T.\] Hence, \[\int h^{p-1}dh\wedge d^{c}h\wedge\gamma^{k}\wedge(dd^{c}v)^{s} \wedge\omega^{n-s-k-1}\] \[\leq\int(dd^{c}h^{p+1}+h^{p}dd^{c}v)\wedge\gamma^{k}\wedge(dd^{c} v)^{s}\wedge\omega^{n-s-k-1}. \tag{2.16}\] It remains to estimate the product involving the first term in the bracket. By integration by parts and [Lemma 2.3, KN16], \[\begin{split}&\int dd^{c}h^{p+1}\wedge\omega^{n-s-k-1}\wedge \gamma^{k}\wedge(dd^{c}v)^{s}\\ &=\int h^{p+1}dd^{c}(\omega^{n-s-k-1})\wedge\gamma^{k}\wedge(dd^ {c}v)^{s}\\ &\leq C\int h^{p+1}(\gamma+dd^{c}v)^{k+s}\wedge\omega^{n-m+1}. \end{split} \tag{2.17}\] Combining the last two inequalities the proof of the lemma follows. (b) The proof is very similar, we first have (2.16). Then, in the middle integral of (2.17) one can express \(dd^{c}(\omega^{n-s-k-1})=\eta\wedge\omega^{n-m}\) for a smooth \((m-s-k,m-s-k)\)-form \(\eta\). 
Then, since \(\gamma,dd^{c}v\in\Gamma_{m}(\omega)\), in this case the inequality has a better form \[\left|\int h^{p+1}\eta\wedge\gamma^{k}\wedge(dd^{c}v)^{s}\wedge\omega^{n-m} \right|\leq C\int h^{p+1}\gamma^{k}\wedge(dd^{c}v)^{s}\wedge\omega^{n-k-s}.\] The item (b) is proven. Let us start with the simplest situation in Case 1. We are going to show that \[e_{(q,m,0)}\leq C\left(e_{(q-1,m-1,1)}+e_{(q-1,m-1,0)}\right), \tag{2.18}\] where \(C\) is a uniform constant depending on \(\omega,m,n\). Equivalently, **Lemma 2.11**.: _Let \(q\geq 1\). Then,_ \[\begin{split}\int_{\Omega}(w-v)^{q+1}\gamma^{m}\wedge\omega^{n-m} &\leq C\int_{\Omega}(w-v)^{q}\gamma^{m-1}\wedge dd^{c}v \wedge\omega^{n-m}\\ &\quad+C\int_{\Omega}(w-v)^{q}\gamma^{m-1}\wedge\omega^{n-m+1}. \end{split}\] Proof.: Recall that \(h:=w-v\geq 0\) and \(h=0\) near \(\partial\Omega\). A direct computation gives \[\begin{split}& dd^{c}(h^{q+1}\omega^{n-m})\\ &=q(q+1)h^{q-1}dh\wedge d^{c}h\wedge\omega^{n-m}\\ &\quad+(q+1)h^{q}dd^{c}h\wedge\omega^{n-m}\\ &\quad+(q+1)(n-m)h^{q}d\omega\wedge d^{c}h\wedge\omega^{n-m-1}\\ &\quad+(q+1)(n-m)h^{q}dh\wedge d^{c}\omega\wedge\omega^{n-m-1}\\ &\quad+h^{q+1}dd^{c}\omega^{n-m}\\ &\quad=:T_{0}+T_{1}+T_{2}+T_{3}+T_{4}.\end{split} \tag{2.19}\] By integration by parts, \[\begin{split}\int h^{q+1}dd^{c}\rho\wedge\gamma^{m-1}\wedge\omega ^{n-m}&=\int\rho dd^{c}(h^{q+1}\omega^{n-m})\wedge\gamma^{m-1}\\ &=\int\rho(T_{0}+T_{1}+T_{2}+T_{3}+T_{4})\wedge\gamma^{m-1}.\end{split} \tag{2.20}\] **Case 1a: Estimate of \(T_{0}\).** Since \(-1\leq\rho\leq 0\) and \(T_{0}\) is a positive current, \[\rho T_{0}\wedge\gamma^{m-1}\leq 0 \tag{2.21}\] **Case 1b: Estimate of \(T_{1}\)**. Similarly, because \(v\) is a \(m-\omega\)-sh function, \[\rho T_{1}\wedge\gamma^{m-1} =(q+1)\rho h^{q}(dd^{c}w-dd^{c}v)\wedge\omega^{n-m}\wedge\gamma^{m-1}\] \[\leq(q+1)h^{q}dd^{c}v\wedge\gamma^{m-1}\wedge\omega^{n-m}. \tag{2.22}\] **Case 1c: Estimate of \(T_{4}\)**. Using the inequality [Lemma 2.3, KN16] \[\gamma^{m-1}\wedge dd^{c}\omega^{n-m}\leq C_{m,n}\gamma^{m-1}\wedge\omega^{n- m+1}, \tag{2.23}\] we can estimate the last term \(T_{4}\), \[\left|\int\rho h^{q+1}\gamma^{m-1}\wedge dd^{c}\omega^{n-m}\right| \leq C\int|\rho|h^{q+1}\gamma^{m-1}\wedge\omega^{n-m+1}\] \[\leq Ce_{(q,m-1,0)}, \tag{2.24}\] where we used again the fact that \(|\rho|\leq 1\). **Case 1d: Estimate of \(T_{2}\) and \(T_{3}\).** We use Cauchy-Schwarz' inequality in Lemma 2.4, where an extra uniform constant appears. Let us estimate \(T_{2}\) (for \(T_{3}\) the estimate is completely the same). By Cauchy-Schwarz' inequality \[\left|\int\rho h^{q}d\omega\wedge d^{c}h\wedge\omega^{n-m-1} \wedge\gamma^{m-1}\right|^{2}\] \[\leq C\int|\rho|h^{q-1}dh\wedge d^{c}h\wedge\omega^{n-m}\wedge \gamma^{m-1}\int|\rho|h^{q+1}\omega^{n-m+1}\wedge\gamma^{m-1}\] \[\leq C\left(\int h^{q-1}dh\wedge d^{c}h\wedge\gamma^{m-1}\wedge \omega^{n-m}+\int h^{q+1}\gamma^{m-1}\wedge\omega^{n-m+1}\right)^{2}\] \[\leq C\left(e_{(q-1,m-1,1)}+e_{(q,m-1,0)}\right)^{2} \tag{2.25}\] where we used Lemma 2.10 with \(s=0,k=m-1\) and \(p=q\) for the first integral in the last inequality, namely, \[\int h^{q-1}dh\wedge d^{c}h\wedge\omega^{n-m}\wedge\gamma^{m-1}\leq C(e_{(q-1, m-1,1)}+e_{(q,m-1,0)}).\] Combining the estimates (2.21), (2.22), (2.24) and (2.25) and the fact that \(e_{(q,\bullet,\bullet)}\leq e_{(q-1,\bullet,\bullet)}\leq e_{(q-2,\bullet, \bullet)}\) we conclude the proof of the lemma. We can now state the general inequality in Case 1. 
**Corollary 2.12**.: _For \(1\leq k\leq m\) and \(s=m-k\geq 0\) and \(q\geq 2\),_ \[e_{(q,k,s)}\leq c_{k}\sum_{i=0}^{k-1}e_{(q-1,i,m-i)}+C\sum_{i=0}^{m-1}e_{(q-2, i,m-1-i)}.\] Proof.: The proof is by induction in \(k\) but "downward". For \(k=m\) it is Lemma 2.11. Assume that it is true for every \(k+1\leq\ell\leq m\), i.e., we have \[e_{(q,\ell,m-\ell)}\leq c_{\ell}\sum_{i=0}^{k}e_{(q-1,i,m-i)}+c_{\ell}\sum_{i=0 }^{m-1}e_{(q-2,i,m-1-i)}. \tag{2.26}\] We proceed to prove the conclusion holds for \(k\). The strategy is the same as in the proof of the last lemma, however we need to estimate \(T_{2}\) and \(T_{3}\) more carefully. We repeat the steps of the proof of Lemma 2.11 replacing \(dd^{c}\rho\wedge\Gamma\) by \(dd^{c}\rho\wedge\Gamma^{(s)}\), where \[\Gamma=\gamma^{m-1}\wedge\omega^{n-m}\text{ and }\Gamma^{(s)}=\gamma^{m-1-s} \wedge(dd^{c}v)^{s}\wedge\omega^{n-m},\] in the integrand on the left hand side. The corresponding estimates for \(T_{0},T_{1}\) are similar. Namely, (2.21 \[\rho T_{0}\wedge\gamma^{m-1-s}\wedge(dd^{c}v)^{s}\leq 0,\] and (2.22 \[\rho T_{1}\wedge\gamma^{m-s-1}\leq(q+1)h^{q}(dd^{c}v)^{s+1}\wedge\gamma^{m-1-s }\wedge\omega^{n-m}\] The one for \(T_{4}\) is \[\gamma^{m-1-s}\wedge(dd^{c}v)^{s}\wedge dd^{c}\omega^{n-m}\] (2.23 \[{}^{\prime}\] ) \[\leq C_{m,n}[\gamma^{m-1}+\gamma^{m-2}\wedge dd^{c}v+\cdots+(dd^{ c}v)^{m-1}]\wedge\omega^{n-m+1}.\] Hence, integrating both sides and using \(0\leq h\leq 1\), leads to (2.24 \[{}^{\prime}\] ) \[\left|\int h^{q+1}\gamma^{m-1-s}\wedge(dd^{c}v)^{s}\wedge dd^{c}\omega^{n-m} \right|\leq C\sum_{i=0}^{m-1}e_{(q,i,m-1-i)}.\] Lastly for \(T_{2}\) and \(T_{3}\), we need to use Corollary 2.5, \[I^{2} :=\left|\int\rho h^{q}d\omega\wedge d^{c}h\wedge\omega^{n-m-1} \wedge\gamma^{m-1-s}\wedge(dd^{c}v)^{s}\right|^{2}\] \[\leq C\int h^{q-1}(\gamma+dd^{c}v)^{m-1}\wedge\omega^{n-m+1}\] \[\qquad\times\int h^{q+1}dh\wedge d^{c}h\wedge(\gamma+dd^{c}v)^{m -1}\wedge\omega^{n-m}\] The standard Cauchy-Schwarz inequality (Lemma 2.3) gives for \(\varepsilon>0\) to be determined later, \[I \leq\frac{C}{\varepsilon}\int h^{q-1}(\gamma+dd^{c}v)^{m-1}\wedge \omega^{n-m+1}\] \[\qquad+\varepsilon\int h^{q+1}dh\wedge d^{c}h\wedge(\gamma+dd^{ c}v)^{m-1}\wedge\omega^{n-m}.\] By the last inequality in (2.23 \({}^{\prime}\)) the first integral is bounded by \[\frac{C}{\varepsilon}\sum_{i=0}^{m-1}e_{(q-2,i,m-1-i)}.\] To bound the second integral we use \[(\gamma+dd^{c}v)^{m-1}\wedge\omega^{n-m}\leq C_{m,n}\sum_{i=0}^{m-1}\gamma^{i }\wedge(dd^{c}v)^{m-1-i}\wedge\omega^{n-m},\] and then Lemma 2.10-(a) for \(k=i,s=m-1-i\) and \(p=q+2\). This gives a bound for the second integral by \[C\varepsilon\sum_{i=0}^{m-1}e_{(q+1,i,m-i)}+C\varepsilon\sum_{i=0}^{m-1}e_{( q+2,i,m-1-i)}.\] Let us consider the first sum above: \[\varepsilon\sum_{i=0}^{m-1}e_{(q+1,i,m-i)}=\varepsilon\sum_{i\geq k+1}e_{(q+1,i,m-i)}+\varepsilon\sum_{i=0}^{k-1}e_{(q+1,i,m-i)}+\varepsilon e_{(q+1,k,m-k )}.\] Applying the induction hypothesis (2.26) to the first term on the right, we derive \[\varepsilon\sum_{i=0}^{m-1}e_{(q+1,i,m-i)}\leq\varepsilon b_{k}\left( e_{(q,k,m-k)}+\sum_{i=0}^{k-1}e_{(q,i,m-i)}\right)+\varepsilon b_{k}\sum_{i=0}^{m-1 }e_{(q-1,i,m-1-i)}\] \[\qquad\qquad\qquad\qquad\qquad+\varepsilon\sum_{i=0}^{k-1}e_{(q+1,i,m-i)}+\varepsilon e_{(q+1,k,m-k)},\] where \(b_{k}=c_{m}+\cdots+c_{k+1}\). 
Since \(e_{(q+2,\bullet,\bullet)}\leq e_{(q+1,\bullet,\bullet)}\leq e_{(q,\bullet, \bullet)}\leq e_{(q-1,\bullet,\bullet)}\), it follows from the above estimates that (2.25 \[{}^{\prime}\] ) \[{}^{\prime}\] Thus, combining (2.21\({}^{\prime}\)), (2.22\({}^{\prime}\)), (2.24\({}^{\prime}\)) and (2.25\({}^{\prime}\)) we have \[e_{(q,k,s)}\leq Ce_{(q-1,k-1,s+1)}+C\sum_{i=0}^{m-1}e_{(q,i,m-1-i)}\] \[{}^{\prime}+\varepsilon(1+b_{k})e_{(q,k,s)}+\varepsilon\sum_{i=0} ^{k-1}e_{(q-1,i,m-i)}\] \[{}^{\prime}+[\varepsilon(b_{k}+1)+C/\varepsilon+C\varepsilon]\sum _{i=0}^{m-1}e_{(q-2,i,m-1-i)}.\] Now we can choose \(\varepsilon\) so that \(\varepsilon(1+b_{k})=1/2\). Since \(s=m-k\), regrouping terms on the right hand side (decreasing the first parameter in \(e_{(q,i,m-1-i)}\) if necessary) we get for possibly larger \(C>0\) that \[e_{(q,k,m-k)}/2\leq(C+\varepsilon)\sum_{i=0}^{k-1}e_{(q-1,i,m-i)}+(C+C/ \varepsilon)\sum_{i=0}^{m-1}e_{(q-2,i,m-1-i)}.\] The induction step is proven and the lemma follows. Next, we consider Case 2. **Lemma 2.13**.: _For \(1\leq k\leq m-1\) and \(s=m-1-k\geq 0\) and \(q\geq 1\), we have_ \[e_{(q,k,s)}\leq C\left(e_{(q-1,k-1,s+1)}+\sum_{i=0}^{m-2}e_{(q-1,i,m-2-i)} \right).\] Proof.: The basic computation using integration by parts that corresponds to (2.19) starts with (2.19 \[{}^{\prime\prime}\] ) \[dd^{c}(h^{q+1}\omega^{n-m+1}):=T_{0}+T_{1}+T_{2}+T_{3}+T_{4},\] where each term has higher exponent of \(\omega\). The estimates for \(T_{0},T_{1}\) are the same as the ones in (2.21\({}^{\prime}\)) and (2.22\({}^{\prime}\)). Precisely, (2.21 \[{}^{\prime\prime}\] ) \[\rho T_{0}\wedge\gamma^{m-2-s}\wedge(dd^{c}v)^{s}\leq 0,\] and (2.22\({}^{\prime\prime}\)) \[\rho T_{1}\wedge\gamma^{m-2-s}\leq(q+1)h^{q}\gamma^{m-2-s}\wedge(dd^{c}v)^{s+1} \wedge\omega^{n-m+1}.\] The one for \(T_{4}\) is (2.23\({}^{\prime\prime}\)) \[\gamma^{m-2-s}\wedge(dd^{c}v)^{s}\wedge dd^{c}(\omega^{n-m+1})\] \[\leq C_{m,n}(\gamma+dd^{c}v)^{m-2}\wedge\omega^{n-m+2}\] \[\leq C_{m,n}[\gamma^{m-2}+\gamma^{m-3}\wedge dd^{c}v+\cdots+(dd^{ c}v)^{m-2}]\wedge\omega^{n-m+2}.\] Integrating both sides and using the fact that \(0\leq h\leq 1\) yield (2.24\({}^{\prime\prime}\)) \[\left|\int\rho h^{q+1}\gamma^{m-2-s}\wedge(dd^{c}v)^{s}\wedge dd^{c}(\omega^{ n-m+1})\right|\leq C\sum_{i=0}^{m-2}e_{(q,i,m-2-i)}.\] Next, the corresponding inequalities for \(T_{2}\) and \(T_{3}\) are easier. This is due to the fact that \[T_{2}\wedge\gamma^{m-2-s}\wedge(dd^{c}v)^{s}=C_{0}h^{q}d\omega\wedge d^{c}h \wedge\omega^{n-m}\wedge\gamma^{m-2-s}\wedge(dd^{c}v)^{s}.\] Therefore, the classical Cauchy-Schwarz inequality (Lemma 2.3) is sufficient. Namely, \[I^{2} :=\left|\int\rho h^{q}d\omega\wedge d^{c}h\wedge\omega^{n-m} \wedge\gamma^{m-2-s}\wedge(dd^{c}v)^{s}\right|^{2}\] \[\leq C\int h^{q}\gamma^{m-2-s}\wedge(dd^{c}v)^{s}\wedge\omega^{n- m+2}\] \[\qquad\times\int h^{q}dh\wedge d^{c}h\wedge\gamma^{m-2-s}\wedge( dd^{c}v)^{s}\wedge\omega^{n-m+1}.\] The Cauchy-Schwarz inequality gives \[I\leq C\int h^{q}\gamma^{m-2-s}\wedge(dd^{c}v)^{s}\wedge\omega^{n-m+2}\] \[\qquad+C\int h^{q}dh\wedge d^{c}h\wedge\gamma^{m-2-s}\wedge(dd^{ c}v)^{s}\wedge\omega^{n-m+1}.\] Here, the first integral in the sum is \(e_{(q-1,k-1,s)}\). 
By applying Lemma 2.10-(a) for \(k-1,s\) and \(p=q+1\) one gets a bound for the second integral by \[\int h^{q+1}\gamma^{m-2-s}\wedge(dd^{c}v)^{s+1}\wedge\omega^{n-m+ 1}+C\int h^{q+2}(\gamma+dd^{c}v)^{m-2}\wedge\omega^{n-m+2}\] \[\leq e_{(q,k-1,s+1)}+C\sum_{i=0}^{m-2}e_{(q+1,i,m-2-i)}.\] Combining this with the decreasing property in the first parameter of \(e_{(q,\bullet,\bullet)}\) we get (2.25\({}^{\prime\prime}\)) \[I \leq C[e_{(q-1,k-1,s)}+e_{(q,k-1,s+1)}+\sum_{i=0}^{m-2}e_{(q+1,i,m-2 -i)}]\] \[\leq C[e_{(q-1,k-1,s+1)}+\sum_{i=0}^{m-2}e_{(q-1,i,m-2-i)}].\] Finally, combining (2.21\({}^{\prime\prime}\)), (2.22\({}^{\prime\prime}\)), (2.24\({}^{\prime\prime}\)) and (2.25\({}^{\prime\prime}\)) one completes the proof of the lemma. Lastly, we consider Case 3. **Lemma 2.14**.: _For \(1\leq k\leq m-2\) and \(0\leq s\leq m-2-k\) and \(q\geq 1\) we have_ \[e_{(q,k,s)}\leq C\left[e_{(q-1,k-1,s+1)}+e_{(q,k-1,s)}\right].\] Proof.: We need to estimate \(dd^{c}\rho\wedge\gamma^{k-1}\wedge(dd^{c}v)^{s}\wedge\omega^{n-k-s}\), where \(n-k-s\geq n-m+2\). Then, there is a significant change in basic computation of (2.19 \[\prime\prime\prime\] ) \[dd^{c}(h^{q+1}\omega^{n-k-s})=T_{0}+T_{1}+T_{2}+T_{3}+T_{4},\] where all forms \(T_{i}\) contain powers of \(\omega\) with the exponent at least \(n-m\). The estimates for \(T_{0},T_{1}\) are the same as in Corollary 2.12 and improved estimates for \(T_{2},T_{3}\) are as in Lemma 2.13. Moreover the bound for \(T_{4}\) is easier. Namely, since \(\gamma^{k-1}\wedge(dd^{c}v)^{s}\wedge\omega^{n-m}\) is a positive form, one obtains (2.23 \[\prime\prime\] ) \[\gamma^{k-1}\wedge(dd^{c}v)^{s}\wedge dd^{c}(\omega^{n-k-s})\leq C\gamma^{k-1 }\wedge(dd^{c}v)^{s}\wedge\omega^{n-k-s+1}.\] Hence, multiplying both sides by \(\rho h^{q+1}\) and then integrating we get (2.24 \[\left|\int\rho h^{q+1}\gamma^{k-1}\wedge(dd^{c}v)^{s}\wedge dd^{c}(\omega^{n-k -s})\right|\leq Ce_{(q,k-1,s)},\] where we used the fact that \(-1\leq\rho\leq 0\). Next, the estimates for \(T_{2}\) and \(T_{3}\) are \[I :=\left|\int\rho h^{q}d\omega\wedge d^{c}h\wedge\omega^{n-k-s-1} \wedge\gamma^{k-1}\wedge(dd^{c}v)^{s}\right|\] \[\leq C\int h^{q+1}\gamma^{k-1}\wedge(dd^{c}v)^{s}\wedge\omega^{n- k-s}\] \[\quad+C\int h^{q-1}dh\wedge d^{c}h\wedge\gamma^{k-1}\wedge(dd^{c }v)^{s}\wedge\omega^{n-k-s}.\] Using Lemma 2.10-(b) for \(k-1,s\) and \(p=q-1\) in the last integral above yields (2.25 \[\prime\prime\] ) \[I \leq Ce_{(q,k-1,s)}+C[e_{(q-1,k-1,s+1)}+e_{(q,k-1,s)}]\] \[\leq C[e_{(q-1,k-1,s+1)}+e_{(q,k-1,s)}].\] Combining the estimates for \(T_{0},T_{1}\) in (2.19 \[\prime\prime\] ), (2.24 \[\prime\prime\] ) and (2.25 \[\prime\prime\prime\] ) we complete the proof of lemma. We are ready to state the main inequality. **Proposition 2.15**.: _Let \(e_{(q,k,s)}\) be the numbers defined by (2.15). Then, for \(q=3m\),_ \[e_{(q,m,0)}\leq C\sum_{s=0}^{m}e_{(0,0,s)},\] _where \(C=C(\omega,n,m)\) is a uniform constant._ Proof.: We start with Lemma 2.11 which gives \[e_{(q,m,0)}\leq C\left[e_{(q-1,m-1,1)}+e_{(q-1,m-1,0)}\right]. \tag{2.27}\] Then, the first term in the bracket is estimated via Corollary 2.12. Applying this corollary \((m-1)\)-times and using decreasing property of \(e_{(p,k,s)}\) in the first parameter, we get \[e_{(q-1,m-1,1)}\leq Ce_{(q-m,0,m)}+C\sum_{i=0}^{m-1}e_{(q-m-2,i,m-1-i)}. \tag{2.28}\] The second term in the bracket in (2.27) satisfies \(e_{(q-1,m-1,0)}\leq e_{(q-m-2,m-1,0)}\). Next, we use Lemma 2.13 for each term \(e_{(q^{\prime},\ell,m-1-\ell)}\) with \(q^{\prime}=q-m-2\) in the sum above. 
Namely, applying the lemma \(\ell\) times and using the decreasing property again, we get \[e_{(q^{\prime},\ell,m-1-\ell)}\leq Ce_{(q^{\prime}-\ell,0,m-1)}+C\sum_{i=0}^{m-2}e_{(q^{\prime}-\ell-1,i,m-2-i)}.\] Note that the smallest value of the first parameter in the last sum is \(q^{\prime}-m\) for \(\ell=m-1\). Hence, \[\sum_{i=0}^{m-1}e_{(q^{\prime},i,m-1-i)}\leq Ce_{(q^{\prime}-\ell,0,m-1)}+C\sum_{i=0}^{m-2}e_{(q^{\prime}-m,i,m-2-i)}. \tag{2.29}\] It remains to apply Lemma 2.14 for each term \(e_{(q^{\prime\prime},\ell,m-2-\ell)}\) in the sum on the right hand side, where \(q^{\prime\prime}=q^{\prime}-m=q-2m-2\). Again, we have \[e_{(q^{\prime\prime},\ell,m-2-\ell)} \leq Ce_{(q^{\prime\prime}-1,\ell-1,m-\ell-1)}+Ce_{(q^{\prime\prime},\ell-1,m-\ell-2)}\] \[\leq\cdots\] \[\leq Ce_{(q^{\prime\prime}-\ell,0,m-2)}+\sum_{i=0}^{\ell-1}e_{(q^{\prime\prime}-i,\ell-1-i,m-\ell-2+i)}.\] Therefore, an easy induction argument gives us \[e_{(q^{\prime\prime},\ell,m-2-\ell)}\leq C\sum_{s=0}^{m-2}e_{(q^{\prime\prime}-\ell,0,s)}. \tag{2.30}\] Combining (2.27), (2.28), (2.29) and (2.30) we arrive at \[e_{(q,m,0)}\leq C\sum_{s=0}^{m}e_{(q^{\prime\prime}-m+2,0,s)}=C\sum_{s=0}^{m}e_{(0,0,s)}\] as \(q^{\prime\prime}-m+2=q-3m=0\). So far all the functions considered were smooth; however, by [16, Proposition 2.11] we know that the integrands on both sides of the above statements (Lemmas 2.10-2.14, Corollary 2.12 and Proposition 2.15) are well-defined for continuous \(m-\omega\)-sh functions. Let us record the following observation. **Remark 2.16**.: Let \(\Omega\) be a strictly \(m\)-pseudoconvex domain. The statements above are still valid for continuous \(m-\omega\)-sh functions \(v,w\) and \(-1\leq\rho\leq 0\) satisfying \(-1\leq v\leq w\leq 0\) and \(v=w\) in a neighborhood of \(\partial\Omega\). In fact, there exist decreasing sequences of \(m-\omega\)-sh functions \(v_{j},w_{j}\) and \(\rho_{j}\) belonging to \(C^{\infty}(\overline{\Omega})\) such that \(v_{j}\downarrow v\), \(w_{j}\downarrow w\) and \(\rho_{j}\downarrow\rho\) (uniformly) in \(\overline{\Omega}\), and moreover, \[-1\leq v_{j}\leq w_{j}\leq 0,\quad-1\leq\rho_{j}\leq 0.\] For plurisubharmonic functions the usual convolution with standard kernels produces the approximating sequence and hence the property that \(v_{j}=w_{j}\) near the boundary is preserved. Then, the integration by parts is not affected and passing to the limit as \(j\to+\infty\) gives the desired inequalities. However, in this new setting we used a different way to obtain the approximating sequence. The property that \(v_{j}=w_{j}\) near the boundary of \(\Omega\) needs to be verified, which is possible via the stability estimates for complex Hessian equations. However, we can get around this by showing the uniform convergence to zero of the sequence \(h_{j}=w_{j}-v_{j}\) near the boundary. Let \(\Omega^{\prime}\subset\subset\Omega\) be a smooth domain such that \(v=w\) outside \(\Omega^{\prime}\). Let \(T_{j}=(dd^{c}\rho_{j})^{k}\wedge(dd^{c}v_{j})^{s}\), where \(k+s\leq m\). Then, it follows from the weak convergence [11, Proposition 2.11] and the CLN inequality [11, Proposition 2.9] that for \(p\geq 1\), \[\lim_{j\to\infty}\int_{\Omega^{\prime}}h_{j}^{p}T_{j}\wedge\omega^{n-k-s}=\int_{\Omega}h^{p}T\wedge\omega^{n-k-s},\] where \(T=(dd^{c}\rho)^{k}\wedge(dd^{c}v)^{s}\). Therefore, we reduce the required inequality to the case of smooth functions \(v_{j},w_{j}\) and \(\rho_{j}\). 
However the integration by parts in (2.20) will contain the extra boundary terms: \[\int_{\Omega^{\prime}}h_{j}^{p}dd^{c}\rho_{j}\wedge T_{j}\wedge\omega^{n-k-s}= \int_{\Omega^{\prime}}\rho_{j}dd^{c}(h_{j}^{p}\omega^{n-k-s})\wedge T_{j}+E_{ 1}+E_{2},\] where \[E_{1} =\int_{\partial\Omega^{\prime}}h_{j}^{p}d^{c}\rho_{j}\wedge \omega^{n-k-s}\wedge T_{j};\] \[E_{2} =-\int_{\partial\Omega^{\prime}}\rho_{j}d^{c}(h_{j}^{p}\omega^{n- k-s})\wedge T_{j}\] \[=-\int_{\partial\Omega^{\prime}}\rho_{j}h_{j}^{p-1}(pd^{c}h_{j} \wedge\omega^{n-k-s}+h_{j}d^{c}\omega^{n-k-s})\wedge T_{j}.\] By the CLN inequality and \(h_{j}\to 0\) uniformly on a neighborhood of \(\partial\Omega^{\prime}\) as \(j\to\infty\), the two boundary terms go to zero when we pass to the limit. **Remark 2.17**.: We will see later that the above statements also hold for bounded \(m-\omega\)-sh functions once we define the wedge product for currents related to such functions and prove the weak convergence under decreasing sequences. ## 3. Wedge product for bounded functions In this section we prove the existence of the wedge product of \(dd^{c}\) operator applied to bounded \(m-\omega\)-sh functions. Let \(\Omega\subset\mathbb{C}^{n}\) be a bounded open set. **Lemma 3.1**.: _Let \(\mu_{j}\) be a sequence of positive Radon measures with compact support in \(\Omega\). Assume that \(\mu_{j}\) converges weakly to \(\mu\). Let \(\Omega\supset\supset F_{1}\supset F_{2}\supset\cdots\) be a sequence of decreasing closed subsets in \(\Omega\) satisfying_ \[\lim_{j\to\infty}\mu(F_{j})=0.\] _Then, \(\lim_{j\to+\infty}\mu_{j}(F_{j})=0.\)_ Proof.: Fix \(\varepsilon>0\). By the assumption there exists \(j_{0}>0\) such that \(\mu(F_{j_{0}})<\varepsilon\). Using the inclusions we have \(\mu_{j}(F_{j})\leq\mu_{j}(F_{j_{0}}),\) for \(j>j_{0}.\) The weak convergence implies \[\limsup_{j\to\infty}\mu_{j}(F_{j})\leq\limsup_{j\to\infty}\mu_{j}(F_{j_{0}}) \leq\mu(F_{j_{0}})<\varepsilon.\] It follows that \[0\leq\liminf_{j\to\infty}\mu_{j}(F_{j})\leq\limsup_{j\to\infty}\mu_{j}(F_{j}) \leq\varepsilon.\] This holds for every \(\varepsilon>0\). Thus, the conclusion follows. **Lemma 3.2**.: _Let \(\mu_{j}\) be a sequence of positive Radon measures with compact support in \(\Omega\). Assume that \(\mu_{j}\) converges weakly to \(\mu\) whose support is also compact in \(\Omega\). Let \(\Omega\supset\supset U_{1}\supset U_{2}\supset\cdots\) be a decreasing sequence of open subsets in \(\Omega\) satisfying_ \[\bigcap_{j\geq 1}U_{j}=\emptyset.\] _Then, \(\lim_{j\to+\infty}\mu_{j}(U_{j})=0\)._ Proof.: Assume that \(\operatorname{supp}\,\mu_{j},\operatorname{supp}\,\mu\subset\Omega^{\prime \prime}\subset\subset\Omega^{\prime}\subset\subset\Omega\) for fixed domains \(\Omega^{\prime\prime}\) and \(\Omega^{\prime}\). Without loss of generality we may assume that \(\mu(\Omega)=1\). By the compact support assumption, \(\lim_{j\to+\infty}\mu_{j}(\Omega)=1\). Denote \(F_{j}=\overline{\Omega}^{\prime}\setminus U_{j}\). This is an increasing sequence and \[\bigcup_{j\geq 1}F_{j}=\overline{\Omega}^{\prime}.\] Fix \(\varepsilon>0\). By the assumptions \(\lim_{j}\mu(U_{j})=0\), hence there exists \(j_{0}\) such that \(\mu(U_{j_{0}})<\varepsilon\). Let \(0\leq\phi\leq 1\) be a continuous function in \(\Omega\) with \(\operatorname{supp}\,\phi\subset\Omega^{\prime}\) and \(\phi=1\) on the compact set \(\overline{\Omega}^{\prime\prime}\setminus U_{j_{0}}\) (using the Urysohn lemma). 
From the weak convergence and \(\operatorname{supp}\,\mu\subset\Omega^{\prime\prime}\subset\subset\Omega^{\prime}\), we have \[\lim_{j\to+\infty}\mu_{j}(\phi)=\mu(\phi)\geq\mu(\Omega^{\prime}\setminus U_{j_{0}})\geq 1-\varepsilon.\] Since \(\operatorname{supp}\,\phi\subset\Omega^{\prime}\) and \(F_{j}\) increase to \(\overline{\Omega}^{\prime}\), we have \(\mathbf{1}_{F_{j}}\geq\phi\) for every \(j>j_{0}\). Therefore, for large \(j\), \[\mu_{j}(U_{j})=\mu_{j}(1-\mathbf{1}_{F_{j}})\leq\mu_{j}(1-\phi)=\mu_{j}(\Omega)-\mu_{j}(\phi)\leq 2\varepsilon.\] This finishes the proof. We are in a position to prove the key technical result for defining the wedge product of currents related to bounded \(m-\omega\)-sh functions. **Lemma 3.3**.: _Let \(u\) be a bounded \(m-\omega\)-sh function. Suppose \(\{u_{j}\}_{j\geq 1}\) is a decreasing sequence of continuous \(m-\omega\)-sh functions with \(\|u_{j}\|_{L^{\infty}}\leq 1\) such that \(u_{j}\downarrow u\) pointwise. Assume also that all \(u_{j}=u\) on a neighborhood of \(\partial\Omega\). Let \(-1\leq\rho_{j}\leq 0\) be a sequence of smooth \(m-\omega\)-sh functions. Then, for every \(0\leq s\leq m\),_ \[\lim_{j\to+\infty}\int_{\Omega}(u_{j}-u)(dd^{c}\rho_{j})^{s}\wedge\omega^{n-s}=0. \tag{3.1}\] Proof.: Let \(\Omega^{\prime}\subset\subset\Omega\) be a relatively compact set in \(\Omega\) such that \(u_{j}=u\) on \(\Omega\setminus\overline{\Omega}^{\prime}\). Fix a cut-off function \(\chi\) with compact support in \(\Omega\) and equal \(1\) on \(\Omega^{\prime}\). Denote by \(\mu_{j}\) the measure \(\chi(dd^{c}\rho_{j})^{s}\wedge\omega^{n-s}\). Notice that by the Chern-Levine-Nirenberg inequality [11, Proposition 2.9, Proposition 2.11] the sequence \(\{\mu_{j}\}_{j\geq 1}\) is weakly compact in the weak topology of measures. By passing to a subsequence we may assume that \(\mu_{j}\) converges weakly to \(\mu\). Let \(\varepsilon>0\) be fixed. By the fact that \(\|u_{j}\|_{L^{\infty}}\leq 1\) and Markov's inequality, \[\begin{split}\int_{\Omega}(u_{j}-u)d\mu_{j}&\leq \mu_{j}(u_{j}-u>\varepsilon)+\varepsilon\mu_{j}(\Omega)\\ &\leq\mu_{j}(u_{j}-u>\varepsilon)+C\varepsilon.\end{split} \tag{3.2}\] Since \(u\) is upper semi-continuous, the sets \(U_{j}=\{u_{j}-u>\varepsilon\}\) are open. Further, as \(u_{j}\) decreases to \(u\), the monotone convergence theorem gives \[\lim_{j\to+\infty}\int_{\Omega}(u_{j}-u)d\mu=0.\] Again, Markov's inequality implies \(\lim_{j\to+\infty}\mu(U_{j})=0\). Hence, the assumptions of Lemma 3.2 are satisfied and we get that \(\mu_{j}(U_{j})\leq\varepsilon\) for \(j>0\) large. This combined with (3.2) yields \(\int_{\Omega}(u_{j}-u)d\mu_{j}\leq(C+1)\varepsilon\). The proof of the lemma follows. By Garding's inequality [1] we know that if \(u_{1},...,u_{p}\) are smooth \(m-\omega\)-sh functions, then for \(1\leq p\leq m\), \[\mathscr{L}(u_{1},...,u_{p}):=dd^{c}u_{1}\wedge\cdots\wedge dd^{c}u_{p}\wedge\omega^{n-m} \tag{3.3}\] is a positive form. Clearly this operator is symmetric with respect to the functions \(u_{1},...,u_{p}\). In the special case \(u_{1}=\cdots=u_{p}\), we denote \[\mathscr{L}_{p}(u)=\mathscr{L}_{p}(u,...,u)=\mathscr{L}(\underbrace{u,...,u}_{p\text{ times}}).\] Now we shall extend this operator to bounded \(m-\omega\)-sh functions. **Definition 3.4**.: (wedge product) Let \(u_{1},...,u_{p}\) be bounded \(m-\omega\)-sh functions, where \(1\leq p\leq m\). For \(s=1,...,p\), let \(\{u_{s}^{j}\}_{j\geq 1}\) be a sequence of smooth \(m-\omega\)-sh functions decreasing point-wise to \(u_{s}\). 
The wedge product \(\mathscr{L}(u_{p},...,u_{1})\) is given by the weak limit \[\mathscr{L}(u_{p},...,u_{1})=\lim_{j\to\infty}dd^{c}u_{p}^{j}\wedge\cdots\wedge dd^{c}u_{1}^{j}\wedge\omega^{n-m}. \tag{3.4}\] It is a positive current of bi-degree \((n-m+p,n-m+p)\). Proof.: Notice first that on the right hand side of (3.4) there are positive forms. So, if the limit exists, then it is a positive current. The existence of such a limit follows from the CLN inequality [11, Proposition 2.9]. It remains to show that the weak limit is uniquely defined, that is, it does not depend on the particular smooth decreasing sequences approximating \(u_{1},...,u_{p}\). Let \(\{v_{1}^{j}\}_{j\geq 1}\downarrow u_{1},...,\{v_{p}^{j}\}_{j\geq 1}\downarrow u_{p}\) be another collection of sequences of smooth \(m-\omega\)-sh functions. Since the property is local, we can apply the localization principle and assume that \(\Omega\) is a ball and that all the functions \(u_{s}^{j},v_{s}^{j}\) and \(u_{s}\) are equal to a fixed smooth psh function \(\psi\) on \(\Omega\setminus\Omega^{\prime}\) for some domain \(\Omega^{\prime}\subset\subset\Omega\). Moreover, by subtracting and then dividing by a large constant all functions in both sequences we may assume that for \(1\leq s\leq p\) and \(j\geq 1\), \[-1\leq v_{s}^{j},u_{s}^{j},u\leq 0\quad\text{ in }\Omega.\] Let \(\chi\) be a test form in \(\Omega^{\prime}\). We wish to show that \[\lim_{j\to+\infty}\int\chi[\mathscr{L}(v_{p}^{j},...,v_{1}^{j})-\mathscr{L}(u_{p}^{j},...,u_{1}^{j})]=0. \tag{3.5}\] (We skip writing the domain \(\Omega\) in integral formulas here and below.) In fact, \[[\mathscr{L}(v_{p}^{j},...,v_{1}^{j})-\mathscr{L}(u_{p}^{j},...,u_{1}^{j})]=\sum_{s=1}^{p}dd^{c}(v_{s}^{j}-u_{s}^{j})\wedge T_{s}\wedge\omega^{n-m}, \tag{3.6}\] where \[T_{s}=dd^{c}v_{1}^{j}\wedge\cdots\wedge dd^{c}v_{s-1}^{j}\wedge dd^{c}u_{s+1}^{j}\wedge\cdots\wedge dd^{c}u_{p}^{j}.\] (Here one should use the obvious modifications for \(s=1\) and \(s=p\).) We are going to show that each term in the sum goes to zero. Let us consider the case \(s=p\), for example (the other cases are completely the same). Notice that \(T_{p}=dd^{c}v_{1}^{j}\wedge\cdots\wedge dd^{c}v_{p-1}^{j}\) is a smooth closed \((p-1,p-1)\)-form. By integration by parts, \[\int\chi\omega^{n-m}\wedge dd^{c}(v_{p}^{j}-u_{p}^{j})\wedge T_{p}=\int(v_{p}^{j}-u_{p}^{j})dd^{c}(\chi\omega^{n-m})\wedge T_{p}.\] Since \(0\leq p-1\leq m-1\), it follows from [16, Corollary 2.4] that \[|dd^{c}(\chi\omega^{n-m})\wedge T_{p}|\leq 2^{m}C[dd^{c}(v_{1}^{j}+\cdots+v_{p-1}^{j})]^{p-1}\wedge\omega^{n-p+1}\] for a uniform constant \(C\) depending only on \(\omega\) and \(\chi\). Write \[\rho_{j}=\frac{\sum_{s=1}^{p-1}v_{s}^{j}}{p-1},\] then \[\left|\int(v_{p}^{j}-u_{p}^{j})dd^{c}(\chi\omega^{n-m})\wedge T_{p}\right| \leq C\int|v_{p}^{j}-u_{p}^{j}|(dd^{c}\rho_{j})^{p-1}\wedge\omega^{n-p+1}\] \[\leq C\int(v_{p}^{j}-u)(dd^{c}\rho_{j})^{p-1}\wedge\omega^{n-p+1}\] \[\quad+C\int(u_{p}^{j}-u)(dd^{c}\rho_{j})^{p-1}\wedge\omega^{n-p+1}.\] The right hand side goes to zero as \(j\to\infty\) by Lemma 3.3. The proof for \(s=p\) is completed. Hence the conclusion follows. **Definition 3.5**.: Let \(u\) be a bounded \(m-\omega\)-sh function. Then, the Hessian operator \(H_{m}(u)\) is defined by \[H_{m}(u):=(dd^{c}u)^{m}\wedge\omega^{n-m}=\mathscr{L}_{m}(u,...,u).\] Moreover, for \(1\leq s\leq m\), we also write \(H_{s}(u)=(dd^{c}u)^{s}\wedge\omega^{n-s}\). We now obtain the fundamental Chern-Levine-Nirenberg inequality for bounded functions. 
**Proposition 3.6** (CLN inequality).: _Let \(K\subset\subset U\subset\subset\Omega\), where \(K\) is compact and \(U\) is open. Let \(u,u_{1},...,u_{p}\) be bounded \(m-\omega\)-sh functions in \(\Omega\), where \(1\leq p\leq m\). Then, there exists a constant \(C\) depending on \(K,U,\omega\) such that_ * \(\int_{K}(dd^{c}u)^{p}\wedge\omega^{n-p}\leq C(1+\|u\|_{L^{\infty}(U)})^{p}\)_;_ * \(\int_{K}dd^{c}u_{1}\wedge\cdots\wedge dd^{c}u_{p}\wedge\omega^{n-p}\leq C\left(1+\sum_{s=1}^{p}\|u_{s}\|_{L^{\infty}(U)}\right)^{p}.\)__ Proof.: (a) Without loss of generality we may assume that \(\|u\|_{L^{\infty}(\Omega)}\leq 1\). We can cover \(K\) by finitely many small balls, hence we can assume that \(K\) and \(U\) are concentric balls. Let \(\{u^{\delta}\}_{\delta>0}\) be a sequence of smooth \(m-\omega\)-sh functions such that \(u^{\delta}\downarrow u\) as \(\delta\to 0\). Let \(0\leq\chi\leq 1\) be a cut-off function such that \(\chi\equiv 1\) on \(K\) and \(\operatorname{supp}\chi\subset U\). By the CLN inequality for smooth functions, \[\int\chi\mathscr{L}_{p}(u^{\delta})\wedge\omega^{m-p}\leq C\|u^{\delta}\|_{L^{\infty}(U)}^{p}.\] By Hartogs lemma for \(\omega\)-sh functions [16, Lemma 9.14] it follows that \[\lim_{\delta\to 0}\|1+u^{\delta}\|_{L^{\infty}(U)}=\lim_{\delta\to 0}\sup_{\overline{U}}(1+u^{\delta})=1+\sup_{\overline{U}}u\leq 1+\|u\|_{L^{\infty}(U)}.\] This combined with the monotone convergence theorem implies \[\int_{K}\mathscr{L}_{p}(u)\wedge\omega^{m-p} \leq\lim_{\delta\to 0}\int\chi\mathscr{L}_{p}(u^{\delta})\wedge\omega^{m-p}\] \[\leq C\lim_{\delta\to 0}\|u^{\delta}\|_{L^{\infty}}^{p}\] \[=C(1+\|u\|_{L^{\infty}(U)})^{p}.\] (b) Observe that for \(v:=u_{1}+\cdots+u_{p}\) we have \(\mathscr{L}_{p}(v)\geq\mathscr{L}(u_{1},...,u_{p})\) as positive currents. So, (b) is an immediate consequence of (a). **Remark 3.7**.: The smoothness assumption on the sequence \(\rho_{j}\) in Lemma 3.3 was only needed for the CLN inequality. Thanks to the above proposition we can now relax this assumption and admit just bounded \(m-\omega\)-sh functions in that lemma. ## 4. Quasi-continuity Having the Hessian measure defined for bounded \(m-\omega\)-sh functions, we can introduce the \(m\)-capacity (cf. [1]): for a Borel subset \(E\subset\Omega\), \[cap_{m}(E)=\sup\left\{\int_{E}(dd^{c}\rho)^{m}\wedge\omega^{n-m}:\rho\text{ is }m-\omega\text{-sh in }\Omega,-1\leq\rho\leq 0\right\}. \tag{4.1}\] Here in fact \(cap_{m}(E)=cap_{m}(E,\Omega)\) but we shall often suppress \(\Omega\) in the notation if the domain is fixed. Then, this is an inner capacity, namely, \[cap_{m}(E)=\sup\{cap_{m}(K):K\text{ is a compact subset of }E\}.\] **Proposition 4.1**.: _Let \(\Omega\) be an open set in \(\mathbb{C}^{n}\) and \(cap_{m}(E)=cap_{m}(E,\Omega)\). 
Then,_ * _If_ \(E_{1}\subset E_{2}\)_, then_ \(cap_{m}(E_{1})\leq cap_{m}(E_{2})\)_._ * _If_ \(E\subset\Omega_{1}\subset\Omega_{2}\)_, then_ \(cap_{m}(E,\Omega_{2})\leq cap_{m}(E,\Omega_{1})\)_._ * \(cap_{m}(\cup_{j}E_{j})\leq\sum_{j}cap_{m}(E_{j})\)_._ * _If_ \(E_{1}\subset E_{2}\subset\cdots\) _are Borel sets in_ \(\Omega\) _and_ \(E:=\cup_{j}E_{j}\)_, then_ \(cap_{m}(E)=\lim_{j}cap_{m}(E_{j})\)_._ **Definition 4.2** (Convergence in capacity).: A sequence of Borel functions \(u_{j}\) in \(\Omega\) is said to converge in capacity (or in \(cap_{m}(\bullet)\)) to \(u\) if for any \(\delta>0\) and \(K\subset\subset\Omega\), \[\lim_{j\to\infty}cap_{m}(K\cap\{|u_{j}-u|\geq\delta\})=0.\] **Proposition 4.3**.: _Let \(\{u_{j}\}_{j\geq 1}\) be a uniformly bounded sequence of continuous \(m-\omega\)-sh functions that decreases to a bounded \(m-\omega\)-sh function \(u\) in \(\Omega\). Then, \(u_{j}\) converges to \(u\) in \(cap_{m}(\bullet)\)._ Proof.: Because of (b) and (c) in Proposition 4.1 we can assume that \(\Omega\) is a ball and all functions are equal near the boundary. Let \(\delta>0\); we wish to show that \[\lim_{j\to\infty}cap_{m}(\{u_{j}-u>\delta\})=0.\] We argue by contradiction. Suppose that the statement were not true. Then, there would exist \(\varepsilon>0\), a subsequence \(\{u_{j_{s}}\}\subset\{u_{j}\}\) and a sequence of \(m-\omega\)-sh functions \(\rho_{j_{s}}\) with \(-1\leq\rho_{j_{s}}\leq 0\) such that \[\limsup_{j_{s}\to+\infty}\int_{\{u_{j_{s}}-u>\delta\}}H_{m}(\rho_{j_{s}})\geq\varepsilon.\] On the other hand, by Markov's inequality, \[\int_{\{u_{j_{s}}-u>\delta\}}H_{m}(\rho_{j_{s}})\leq\frac{1}{\delta}\int_{\Omega}(u_{j_{s}}-u)H_{m}(\rho_{j_{s}}).\] Thanks to Remark 3.7 we can apply Lemma 3.3 and derive that the right hand side converges to zero. This leads to a contradiction. Thus, the proof of the proposition is completed. **Theorem 4.4**.: _Let \(\Omega\) be a bounded open set in \(\mathbb{C}^{n}\). Let \(u\) be an \(m-\omega\)-sh function in \(\Omega\). Then, for every \(\varepsilon>0\), there exists an open subset \(U\subset\Omega\) with \(cap_{m}(U,\Omega)<\varepsilon\) such that \(u\) restricted to \(\Omega\setminus U\) is continuous._ Proof.: The result is local and, moreover, it can be reduced to the bounded case by the property (7.5) whose proof will use only bounded functions. We can use the classical argument in [1, Theorem 3.5] since there exists a sequence of smooth \(m-\omega\)-sh functions that decrease to \(u\) point-wise (Proposition 2.9) and our capacity is subadditive (Proposition 4.1). We obtain the convergence in capacity for monotone sequences of uniformly bounded functions (thus the continuity assumption in Proposition 4.3 can be relaxed). **Corollary 4.5**.: _Let \(\Omega\) be a bounded open set in \(\mathbb{C}^{n}\). Let \(\{u_{j}\}_{j\geq 1}\) be a uniformly bounded and monotone sequence of \(m-\omega\)-sh functions such that either \(u_{j}\downarrow u\) point-wise or \(u_{j}\uparrow u\) almost everywhere for a bounded \(m-\omega\)-sh function \(u\) in \(\Omega\). Then, \(u_{j}\) converges to \(u\) in capacity._ Proof.: The localization principle applies in both cases: we can assume that \(\Omega\) is a ball and \(u_{j}\)'s are equal to a fixed smooth psh function in a neighborhood of the boundary. Hence, \(u-u_{j}=0\) on \(\Omega\setminus K\) for a fixed compact subset \(K\). Let \(\delta>0\); we wish to show that \[\lim_{j\to\infty}cap_{m}(\{|u-u_{j}|\geq\delta\})=0.\] Arguing by contradiction, suppose that this were not true. 
Then, there would exist \(\varepsilon>0\) and a sequence of \(m-\omega\)-sh functions \(\rho_{j}\) with \(-1\leq\rho_{j}\leq 0\) such that \[\limsup_{j\to\infty}\int_{\{|u-u_{j}|\geq\delta\}}H_{m}(\rho_{j})\geq\varepsilon.\] Fix a cut-off function \(\chi\) with compact support in \(\Omega\) and equal \(1\) on \(K\). Again, thanks to Remark 3.7 we may assume that \(\mu_{j}:=\chi H_{m}(\rho_{j})\) converges weakly to a positive Radon measure \(\mu\). We use the quasi-continuity. Find an open set \(G\subset\Omega\) such that \(cap_{m}(G)\leq\varepsilon/2\) and the restrictions of \(u_{j},u\) to \(\Omega\setminus G\) are continuous functions. Since \(u_{j}\) is a monotone sequence, it follows from Dini's lemma that it converges uniformly on \(\Omega\setminus G\). In particular, the sets \(F_{j}:=\{|u-u_{j}|\geq\delta\}\setminus G\) are closed in \(\Omega\) and satisfy \(\lim_{j\to\infty}\mu(F_{j})=0\). Hence, \[\int_{\{|u-u_{j}|\geq\delta\}}H_{m}(\rho_{j})\leq\int_{F_{j}}\chi H_{m}(\rho_{ j})+cap_{m}(G)\leq\mu_{j}(F_{j})+\varepsilon/2.\] Letting \(j\to\infty\) and using Lemma 3.1 this leads to a contradiction. ## 5. Weak convergence ### Convergence theorems for decreasing sequences Let \(\Omega\) be a bounded open set in \(\mathbb{C}^{n}\). We have continuity of wedge products of \(m-\omega\)-sh functions under decreasing sequences. **Lemma 5.1**.: _Let \(v,u_{1},...,u_{p}\), \(1\leq p\leq m\), be a bounded \(m-\omega\)-sh functions in \(\Omega\). Let \(\{v^{j}\}_{j\geq 1}\) and \(\{u_{s}^{j}\}_{j\geq 1}\) be uniformly bounded sequences of \(m-\omega\)-sh such that \(v^{j}\downarrow v\) and \(u_{s}^{j}\downarrow u_{s}\) as \(j\to+\infty\), for each \(s=1,...,p\). Then,_ * \(\lim_{j\to\infty}\mathscr{L}(u_{1}^{j},...,u_{p}^{j})=\mathscr{L}(u_{1},...,u_ {p})\)_;_ * \(\lim_{j\to+\infty}v^{j}\mathscr{L}(u_{1}^{j},...,u_{p}^{j})=v\mathscr{L}(u_{1 },...,u_{p});\)__ _where the convergence is understood in the sense of currents._ Proof.: (a) By the localization principle we may assume that all functions are defined in a ball \(\Omega\) and they are equal to a fixed smooth psh function \(\psi\) outside \(\Omega^{\prime}\subset\subset\Omega\). Let \(\{u_{s}^{j,\delta}\}_{\delta>0}\) be decreasing sequences of smooth \(m-\omega\)-sh functions such that \(u_{s}^{j,\delta}\downarrow u_{s}^{j}\) as \(\delta\to 0\). Similarly, let \(\{u_{s}^{\delta}\}_{\delta>0}\) be approximating sequences for \(u_{s}\). We may assume that all involved functions are negative and of uniform norm less than one. Let \(\chi\) be a test form whose supp \(\chi=K\subset\subset\Omega\). We consider the difference \[M_{(j,\delta)}=\int\chi\left[\mathscr{L}(u_{1}^{j,\delta},...,u_{p}^{j,\delta })-\mathscr{L}(u_{1}^{\delta},...,u_{p}^{\delta})\right].\] Then, the proof is completed as soon as we show that \[\lim_{j\to\infty}\lim_{\delta\to 0}|M_{(j,\delta)}|=0.\] The argument justifying Definition 3.4 yields \[|M_{(j,\delta)}|\leq C\sum_{s=1}^{p}\int_{K}|u_{s}^{j,\delta}-u_{s}^{\delta}| (dd^{c}\rho^{j,\delta})^{p-1}\wedge\omega^{n-p+1},\] where \(\rho^{j,\delta}=\frac{1}{2p}\sum_{s=1}^{p}(u_{s}^{j,\delta}+u_{s}^{\delta})\). At this point we no longer have the continuity of \(u_{s}^{j}\) and \(u_{s}\), however, we can make use of the quasi-continuity. Let \(\varepsilon>0\). Find an open set \(G\) such that all \(u_{s}^{j,\delta}\), \(u_{s}^{j}\) and also \(u_{s}^{j},u_{s}\) are continuous on \(\Omega\setminus G\) and \(cap_{m}(G)<\varepsilon\). 
We know that \(\Omega\setminus G\) is compact in \(\Omega\) and by Dini's theorem for \(s=1,...,p\) we have \(u_{s}^{j,\delta}\to u_{s}^{j}\) and \(u_{s}^{\delta}\to u_{s}\) uniformly as \(\delta\to 0\) on that set. Therefore, \[\int_{K}|u_{s}^{j,\delta}-u_{s}^{\delta}|(dd^{c}\rho^{j,\delta})^ {p-1}\wedge\omega^{n-p+1}\leq\int_{\Omega\setminus G}|u_{s}^{j,\delta}-u_{s}^ {\delta}|(dd^{c}\rho^{j,\delta})^{p-1}\wedge\omega^{n-p+1}\\ +cap_{m}(G).\] To estimate the integral on the right hand side we use \[\int_{\Omega\setminus G}|u_{s}^{j,\delta}-u_{s}^{\delta}|(dd^{c}\rho ^{j,\delta})^{p-1}\wedge\omega^{n-p+1}\] \[\leq\int_{\Omega\setminus G}(u_{s}^{j,\delta}-u_{s}^{j})(dd^{c} \rho^{j,\delta})^{p-1}\wedge\omega^{n-p+1}\] \[\quad+\int_{\Omega\setminus G}(u_{s}^{\delta}-u_{s})(dd^{c}\rho ^{j,\delta})^{p-1}\wedge\omega^{n-p+1}\] \[\quad+\int_{\Omega\setminus G}(u_{s}^{j}-u_{s})(dd^{c}\rho^{j, \delta})^{p-1}\wedge\omega^{n-p+1}.\] By the uniform convergence and then the CLN inequality (Proposition 3.6) the first two terms go to zero as \(\delta\to 0\). Moreover, \(\rho^{j,\delta}\) is a sequence of smooth \(m-\omega\)-sh functions decreasing to \(\rho^{j}=\frac{1}{2p}\sum_{s=1}^{p}(u_{s}^{j}+u_{s})\) as \(\delta\to 0\). This implies \(\mathscr{L}_{p-1}(\rho^{j,\delta})\) converges weakly to \(\mathscr{L}_{p-1}(\rho^{j})\) as \(\delta\to 0\). Combining this with the continuity on \(\Omega\setminus G\) we get \[\lim_{\delta\to 0}\int_{\Omega\setminus G}(u_{s}^{j}-u_{s})\mathscr{L}( \rho^{j,\delta})=\int_{\Omega\setminus G}(u_{s}^{j}-u_{s})\mathscr{L}(\rho^{j}).\] It follows that \[\lim_{\delta\to 0}|M_{(j,\delta)}|\leq\int_{\Omega\setminus G}(u_{s}^{j}-u_{s} )\mathscr{L}(\rho^{j})+\varepsilon.\] Letting \(j\to+\infty\) we have by the uniform convergence, \[\lim_{j\to\infty}\lim_{\delta\to 0}|M_{(j,\delta)}|\leq\varepsilon.\] Since \(\varepsilon>0\) is arbitrary, the proof is completed. (b) For simplicity we assume that \(u_{1}=\cdots u_{p}=u\) and also \(\{u_{s}^{j}\}=\{u^{j}\}\). The general case follows in the same way. Then we write \[\mathscr{L}_{p}(u)=\mathscr{L}_{p}(u,...,u).\] Since \(v^{j}\) decreases to \(v\) and \(\mathscr{L}_{p}(u^{j})\) converges weakly to \(\mathscr{L}_{p}(u)\) thanks to (a), any weak limit \(\Theta\) of the sequence \(v^{j}\mathscr{L}_{p}(u^{j})\) satisfies \[\Theta\leq v\mathscr{L}_{p}(u).\] Hence, \(v\mathscr{L}_{p}(u)-\Theta\) is a positive current. In particular, \[v\mathscr{L}_{p}(u)\wedge\omega^{m-p}-\Theta\wedge\omega^{m-p}=vH_{p}(u)- \Theta\wedge\omega^{m-p}\] is a positive Radon measure. To show the converse let \(\chi\geq 0\) be a test function in \(\Omega\), we will show that \[\int\chi\Theta\wedge\omega^{m-p}\geq\int\chi vH_{p}(u).\] In fact, since \(v^{j}H_{p}(u^{j})\) converges weakly to \(\Theta\wedge\omega^{m-p}\), it is enough to show \[\lim_{j\to+\infty}\int\chi v^{j}H_{p}(u^{j})\geq\int\chi vH_{p}(u). \tag{5.1}\] Let \(\varepsilon>0\) and choose an open set \(G\subset\Omega\) such that \(cap_{m}(G,\Omega)\leq\varepsilon\) and \(v,v_{s}\) are all continuous on \(F=\Omega\setminus G\). Since \(v\) is continuous on \(F\), there is a continuous extension \(g\) to \(\Omega\) such that \(v=g\) on \(F\) with the same uniform norm. Without loss of generality we also assume that \(0\leq v,v_{s},g\leq 1\). 
Hence, \[\int\chi vH_{p}(u) \leq\int_{F}\chi vH_{p}(u)+cap_{m}(G)\] \[=\int_{F}\chi gH_{p}(u)+cap_{m}(G)\] \[\leq\int\chi gH_{p}(u)+cap_{m}(G)\] \[=\lim_{j\to\infty}\int\chi gH_{p}(u^{j})+cap_{m}(G)\] \[\leq\lim_{j\to\infty}\int_{F}\chi gH_{p}(u^{j})+2cap_{m}(G).\] By Dini's theorem \(v^{j}\) converges to \(v=g\) uniformly on \(F\), so the last integral does not exceed \[\lim_{j\to\infty}\int_{F}\chi v^{j}H_{p}(u^{j})+\varepsilon\leq\lim_{j\to \infty}\int\chi v^{j}H_{p}(u^{j})+\varepsilon.\] Therefore, we have proved that \[\int\chi vH_{p}(u)\leq\lim_{j\to\infty}\int\chi v^{j}H_{p}(u^{j})+2cap_{m}(G)+\varepsilon.\] Since \(cap_{m}(G)\leq\varepsilon\) and \(\varepsilon>0\) is arbitrary, the proof of the inequality (5.1) follows, so does the one of the lemma. **Corollary 5.2**.: _Let \(u,v\) be bounded \(m-\omega\)-sh functions and \(T=dd^{c}v_{1}\wedge\cdots dd^{c}v_{m-p}\wedge\omega^{n-m}\) for bounded \(m-\omega\)-sh functions \(v_{1},...,v_{m-p}\), where \(1\leq p\leq m\). Then,_ \[\mathbf{1}_{\{u<v\}}(dd^{c}\max\{u,v\})^{p}\wedge T=\mathbf{1}_{\{u<v\}}(dd^ {c}v)^{p}\wedge T.\] _Consequently,_ \[(dd^{c}\max\{u,v\})^{p}\wedge T\geq\mathbf{1}_{\{u\geq v\}}(dd^{c}u)^{p}\wedge T +\mathbf{1}_{\{u<v\}}(dd^{c}v)^{p}\wedge T.\] Proof.: Given the weak convergence results under decreasing sequences in the above lemma, the proof of [1, Theorem 3.27] can be easily adapted to the current case. **Corollary 5.3**.: _Let \(u_{1},....,u_{p}\) be bounded \(m-\omega\)-sh functions. Then, the associated wedge product can be obtained consecutively as follows._ \[\begin{split}\mathscr{L}(u_{p},u_{p-1},...,u_{1})&= \lim_{j\to\infty}dd^{c}u_{p}^{j}\wedge\mathscr{L}(u_{p-1},...,u_{1})\\ &=\cdots\\ &=\lim_{j_{p}\to\infty}dd^{c}u_{p}^{j_{p}}\wedge\cdots\wedge \lim_{j_{1}\to\infty}dd^{c}u_{1}^{j_{1}}\wedge\omega^{n-m},\end{split} \tag{5.2}\] _where \(\{u_{s}^{j_{s}}\}_{j_{s}\geq 1}\) are uniformly bounded sequences of smooth \(m-\omega\)-sh functions such that \(u_{s}^{j_{s}}\downarrow u_{s}\) point-wise for each \(s=1,...,p\)._ Proof.: We only need to show the first identity and the localization principle is applicable for the proof. It will be a consequence of Lemma 5.1. Let \(\{u_{s}^{\delta}\}_{\delta>0}\) for \(s=1,...,p-1\), be uniformly bounded families of smooth \(m-\omega\)-sh functions such that \(u_{s}^{\delta}\downarrow u_{s}\) as \(\delta\to 0^{+}\). Let \(\chi\) be a test form and let us use notation supp \(\chi=K\). Consider the following difference \[\sum_{s=1}^{p-1}\int\chi dd^{c}(u_{s}^{j}-u_{s}^{\delta})\wedge\mathscr{L}(u_{p} ^{j},u_{1}^{j},...,u_{s-1}^{j},\widehat{u}_{s},u_{s+1}^{\delta},...,u_{p-1}^{ \delta}),\] where the hat symbol means that the function is missing. For each \(s\), we denote \[M_{(j,\delta)}:=\int\chi dd^{c}(u_{s}^{j}-u_{s}^{\delta})\wedge\mathscr{L}(u_{p }^{j},u_{1}^{j},...,u_{s-1}^{j},\widehat{u}_{s},u_{s+1}^{\delta},...,u_{p-1}^{ \delta}).\] We need to show that \[\lim_{j\to\infty}\lim_{\delta\to 0}|M_{(j,\delta)}|=0. \tag{5.3}\] For simplicity let us consider only one case \(s=1\), the other cases are similar. Assume \(j\) is fixed for the moment and consider \(\delta>0\). Arguing as in the proof of Lemma 5.1-(a) we get that \[|M_{(j,\delta)}| \leq C(\omega,\chi)\int_{K}|u_{1}^{j}-u_{1}^{\delta}|\mathscr{L} _{p-1}(\rho^{j,\delta})\wedge\omega^{m-p+1}\] \[\leq C\int_{K}[(u_{1}^{j}-u_{1})+(u_{1}^{\delta}-u_{1})]\mathscr{ L}_{p-1}(\rho^{j,\delta})\wedge\omega^{m-p+1}\] where \(\rho^{j,\delta}=u_{p}^{j}+\sum_{s=1}^{p-1}u_{s}^{\delta}\) are smooth functions. 
Clearly, \(\rho^{j,\delta}\downarrow\rho^{j}:=u_{p}^{j}+\sum_{s=1}^{p-1}u_{s}\). By letting \(\delta\to 0\), Lemma 5.1-(b) yields \[\lim_{\delta\to 0}|M_{(j,\delta)}|\leq C\int(u_{1}^{j}-u_{1})\mathscr{L}_{p-1}( \rho^{j})\wedge\omega^{m-p+1}.\] Using Lemma 5.1-(b) once more and letting \(j\to\infty\), we get that the last integral converges to zero. The proof of (5.3) is completed. Hence, the proof of the corollary follows. We conclude this section by going back to the extensions of results in Section 2.4, noted in Remark 2.17. We give here only the most important statement that will be used later. **Corollary 5.4**.: _Let \(\Omega\subset\subset\mathbb{C}^{n}\) be strictly \(m\)-pseudoconvex. Let \(-1\leq v\leq w\leq 0\) be bounded \(m-\omega\)-sh functions such that \(\lim_{z\to\partial\Omega}(w-v)=0\). Let \(\rho\) be a bounded \(m-\omega\)-sh function such that \(-1\leq\rho\leq 0\). There is a constant \(C=C(\omega,n,m)\) such that_ \[\int_{\Omega}(w-v)^{3m}(dd^{c}\rho)^{m}\wedge\omega^{n-m}\leq C\sum_{s=0}^{m} \int_{\Omega}(w-v)(dd^{c}v)^{s}\wedge\omega^{n-s}.\] Proof.: Let us replace \(w\) by \(w_{\varepsilon}=\max\{w-\varepsilon,v\}\) for \(\varepsilon>0\), so that \(w_{\varepsilon}=v\) in a neighborhood of \(\partial\Omega\). If we could prove the inequality for \(w_{\varepsilon}\) and \(v\), then by letting \(\varepsilon\to 0\), the domination convergence theorem would imply the required inequality. Let \(\Omega^{\prime}\subset\subset\Omega\) be a smooth subdomain such that \(w=v\) on \(\Omega\setminus\Omega^{\prime}\). Then the integrals on both sides will not change if we modify \(v,w\) outside \(\Omega^{\prime}\). Hence, we may further assume that \(w=v=\psi\) on \(\Omega\setminus\Omega^{\prime}\) with \(\psi\) a smooth \(m-\omega\)-sh defining function for \(\Omega\). Using the quasi-continuity it is easy to see from Lemma 5.1-(b) that for smooth decreasing sequences \(w_{j}\downarrow w\), \(v_{j}\downarrow v\) and \(\rho_{j}\downarrow\rho\) we have \[\lim_{j\to\infty}\int_{\Omega}(w_{j}-v_{j})^{3m}H_{m}(\rho_{j})=\int_{\Omega}(w -v)^{3m}H_{m}(\rho),\] and for \(0\leq s\leq m\), \[\lim_{j\to\infty}\int_{\Omega}(w_{j}-v_{j})H_{s}(v_{j})=\int_{\Omega}(w-v)H_{s} (v).\] Therefore, it is enough to prove the inequality for smooth functions \(v_{j}\leq w_{j}\leq 0\) and \(-1\leq\rho\leq 0\). Notice that \(w_{j}\to w\) and \(v_{j}\to v\) uniformly on \(\Omega\setminus\Omega^{\prime}\) (this is the reason why we modify \(w,v\) near the boundary). Thus, we can follow the argument in Remark 2.16 and conclude that the extra terms will vanish after passing to the limit \(j\to+\infty\). Hence, the proof for the bounded functions case follows. ### Convergence theorems for increasing sequences With a similar proof as that of Lemma 5.1 we get **Lemma 5.5**.: _Let \(v,u_{1},...,u_{p}\) be bounded \(m-\omega\)-sh functions. Suppose that \(\{v^{j}\}_{j\geq 1}\), \(\{u^{j}_{s}\}_{j\geq 1}\) are uniformly bounded increasing sequences of \(m-\omega\)-sh functions such that \(v^{j}\uparrow v\) and \(u^{j}_{s}\uparrow u_{s}\) (almost everywhere) as \(j\to\infty\) for \(s=1,...,p\). Then,_ \[\lim_{j\to+\infty}v^{j}\mathscr{L}(u^{j}_{1},...u^{j}_{p})=v\mathscr{L}(u_{1},...,u_{p}) \tag{5.4}\] _in the sense of currents._ **Corollary 5.6**.: _Let \(\Omega\) be a bounded open set. Let \(\mathcal{U}_{m}\) be a uniformly bounded family of \(m-\omega\)-sh functions in \(\Omega\). Denote \(v(x)=\sup\{v_{\alpha}(x):v_{\alpha}\in\mathcal{U}_{m}\}\). 
Then, the set_ \[N:=\{v<v^{*}\}\] _has zero measure with respect to any measure \(\mathscr{L}(u_{1},...,u_{m})=dd^{c}u_{1}\wedge\cdots\wedge dd^{c}u_{m}\wedge \omega^{n-m}\), where \(u_{i}\)'s are bounded \(m-\omega\)-sh functions. In particular, \(cap_{m}(N,\Omega)=0\)._ Proof.: By Choquet's lemma we can reduce the argument to the case when \(\mathcal{U}_{m}\) is an increasing sequence \(\{v_{j}\}_{j\geq 1}\) with \(w=\sup_{j}v_{j}\) and \(N=\{w<v^{*}\}\). It follows from the proof of [13, Corollary 9.9] that \(w=v^{*}\) almost everywhere. Therefore, Lemma 5.5 implies \(v_{j}\mathscr{L}(u_{1},...,u_{m})\) converges weakly to \(v^{*}\mathscr{L}(u_{1},...,u_{m})\). Then, the positive currents \((v^{*}-v_{j})\mathscr{L}(u_{1},...,u_{m})\) converge weakly to zero and hence, for any compact set \(K\subset\Omega\), \[\lim_{j\to\infty}\int_{K}(v^{*}-v_{j})\mathscr{L}(u_{1},...,u_{m})=0.\] By monotone convergence theorem, \(\lim_{j\to\infty}\int_{K}(w-v_{j})\mathscr{L}(u_{1},...,u_{m})=0.\) Therefore, \(\int_{K}(v^{*}-w)\mathscr{L}(u_{1},...,u_{m})=0\). In other words, \(v^{*}=w\) a.e on \(K\) with respect to \(\mathscr{L}(u_{1},...,u_{m})\). The last conclusion follows from the inner regularity of capacity. ## 6. Comparison principle Let \(\Omega\) be a bounded open set which is relatively compact in a strictly \(m\)-pseudoconvex bounded domain \(D\) in \(\mathbb{C}^{n}\). Fix a constant \(\mathbf{B}\) such that on \(\overline{\Omega}\), \[-\mathbf{B}\;\omega^{2}\leq dd^{c}\omega\leq\mathbf{B}\;\omega^{2},\quad- \mathbf{B}\;\omega^{3}\leq d\omega\wedge d^{c}\omega\leq\mathbf{B}\;\omega^{ 3}.\] Let \(\rho\) be a strictly psh function sasifying \(\rho\leq 0\) and \(dd^{c}\rho\geq\omega\) in \(D\). In this section we assume all function are defined in \(D\) which means that they can be approximated by a decreasing sequence of smooth \(m-\omega\)-sh functions in a neighborhood of \(\overline{\Omega}\). **Theorem 6.1**.: _Let \(u,v\) be bounded \(m-\omega\)-sh functions in \(\Omega\) such that \(d=\sup_{\Omega}(v-u)>0,\) and \(\liminf_{z\to\partial\Omega}(u-v)(z)\geq 0\). Fix \(0<\varepsilon<\min\{\frac{1}{2},\frac{d}{2\left\lVert\rho\right\rVert_{\infty} }\}\). Let us denote for \(0<s<\varepsilon_{0}:=\varepsilon^{3}/16\mathbf{B}\),_ \[U(\varepsilon,s):=\{u<(v+\varepsilon\rho)+S(\varepsilon)+s\},\quad\text{where }S( \varepsilon)=\inf_{\Omega}[u-(v+\varepsilon\rho)].\] _Then,_ \[\int_{U(\varepsilon,s)}H_{m}(v+\varepsilon\rho)\leq\left(1+\frac{Cs}{ \varepsilon^{m}}\right)\int_{U(\varepsilon,s)}H_{m}(u),\] _where \(C\) is a uniform constant depending on \(m,n,\omega\)._ Proof.: If \(u,v\) are smooth, then the proof follows from [12, Lemmas 3.8, 3.9 and 3.10]. To pass from the smooth case to the bounded case we use the quasi-continuity of \(m-\omega\)-sh functions and the argument as the one in [1, Theorem 4.1] (see also [13, Theorem 1.16]). The proof is readily adaptable with obvious changes of notation. Here we only indicate the points of difference that we need to take care of. Firstly, replacing \(u\) by \(u+\delta\) with \(\delta>0\) and then letting \(\delta\downarrow 0\) we may assume that \(\{u<v\}\subset\subset\Omega^{\prime}\subset\subset\Omega\) and \(u\geq v+\delta\) on \(\Omega\setminus\Omega^{\prime}\). By restricting \(u,v\) to a smaller domain we may assume that \(u,v\) are defined in a neighborhood of \(\overline{\Omega}\). 
Let \(\{u_{k}\}_{k\geq 1},\{v_{j}\}_{j\geq 1}\) be sequences of smooth \(m-\omega\)-sh functions in a neighborhood of \(\overline{\Omega}\) (Proposition 2.9) such that \(u_{k}\downarrow u\) and \(v_{j}\downarrow v\) pointwise in \(\overline{\Omega}\). Denote \(d_{jk}=\sup_{\overline{\Omega}}(v_{j}-u_{k})\). Then, for \(j\geq k>0\) large we have \[d_{jk}\geq d/2>0.\] In fact, for small \(\epsilon>0\) there exits \(x\in\Omega\) such that \(d-\epsilon\leq v(x)-u(x)\). So, for \(k>k_{0}\) large enough, \[d-2\epsilon\leq v(x)-u_{k}(x)\leq v_{j}-u_{k}\leq d_{jk}.\] We get the desired inequality by letting \(j\to\infty\) and then \(\epsilon\to 0\). Next, since \(u\geq v+\delta\) on a compact set \(K=\overline{\Omega}\setminus\Omega^{\prime}\), we have \(u_{k}\geq v+\delta\) for every \(k\geq 1\). Since \(u_{k}\) is continuous, by Hartogs' lemma for \(\omega\)-sh functions [12, Lemma 9.14], there is \(j_{k}\geq k>0\) large enough such that for \(j\geq j_{k}\), \[v_{j}+\delta\leq u_{k}\quad\text{on }K.\] Thus, there exist subsequences of \(\{u_{k}\}\) and \(\{v_{j}\}\), which can be used in the argument from [1, Theorem 4.1]. **Corollary 6.2**.: _Let \(u,v\) be bounded \(m-\omega\)-sh functions in a neighborhood of \(\overline{\Omega}\) such that \(\liminf_{z\to\partial\Omega}(u-v)(z)\geq 0\). Assume that \(H_{m}(v)\geq H_{m}(u)\) in \(\Omega\). Then, \(u\geq v\) on \(\Omega\)._ Proof.: Arguing by contradiction, suppose that \(\sup_{\Omega}(v-u)=d>0\). Hence, there exist \(\delta,a>0\) so small that \(\sup_{\Omega}[(1+a)v-(u+\delta)]>d/2\) and \(\liminf_{z\to\partial\Omega}[(u+\delta)-(1+a)v](z)\geq 0\). Applying Theorem 6.1 for \(\widetilde{u}=u+\delta\) and \(\widetilde{v}=(1+a)v\), we have for \(0<s<\varepsilon_{0}\), \[\int_{U(\varepsilon,s)}H_{m}(\widetilde{v}+\varepsilon\rho)\leq\left(1+\frac {Cs}{\varepsilon^{m}}\right)\int_{U(\varepsilon,s)}H_{m}(u).\] Observe that \[H_{m}(\widetilde{v}+\varepsilon\rho)\geq(1+a)^{m}H_{m}(v)+\varepsilon^{m}H_{m}( \rho)\geq(1+a)^{m}H_{m}(u)+\varepsilon^{m}H_{m}(\rho).\] Hence, we derive from the above inequality that \[\varepsilon^{m}\int_{U(\varepsilon,s)}H_{m}(\rho)\leq 0\] for \(s>0\) so small that \((1+a)^{m}\geq 1+Cs/\varepsilon^{m}\). Therefore, the Lebesgue measure of \(U(\varepsilon,s)\) is zero. This is impossible as it is non-empty quasi-open set for \(0<s<\varepsilon_{0}\). The above argument also gives **Corollary 6.3** (domination principle).: _Let \(u,v\) be bounded \(m-\omega\)-sh such that \(\limsup_{z\to\partial\Omega}|u(z)-v(z)|=0\) and \(\int_{\{u<v\}}H_{m}(u)=0\). Then, \(u\geq v\) in \(\Omega\)._ ## 7. Polar sets and negligible sets In this section we study the polar sets and negligible sets of \(m-\omega\)-sh functions. We obtain here results analogous to those in pluripotential theory from [1]. Let us first give the definitions. **Definition 7.1** (\(m\)-polar sets).: A set \(E\) in \(\mathbb{C}^{n}\) is \(m\)-polar if for each \(z\in E\) there is an open set \(z\in U\) and a \(m-\omega\)-sh function \(u\) in \(U\) such that \(E\cap U\subset\{u=-\infty\}\). Let \(\{u_{\alpha}\}\) be a family of \(m-\omega\)-sh functions in \(\Omega\) which is locally bounded from above. Then, the function \[u(z)=\sup_{\alpha}u_{\alpha}(z)\] need not be \(m-\omega\)-sh, but its upper semicontinuous regularization \[u^{*}(z)=\limsup_{x\to z}u(x)\geq u(z)\] is \(m-\omega\)-sh (see [1, Proposition 2.6-(c)]). A set of the form \[N=\{z\in\Omega:u(z)<u^{*}(z)\} \tag{7.1}\] is called \(m\)_-negligible_. 
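To fix ideas, here is a minimal example of both notions; it uses only the standard fact that plurisubharmonic functions are \(m-\omega\)-sh for every \(1\leq m\leq n\). Let \(\Omega\) be the unit ball and \(u_{j}(z)=\frac{1}{j}\log|z|\) for \(j\geq 1\). Then \[u(z)=\sup_{j}u_{j}(z)=\begin{cases}0,&z\neq 0,\\ -\infty,&z=0,\end{cases}\qquad u^{*}\equiv 0,\] so \(N=\{u<u^{*}\}=\{0\}\) is \(m\)-negligible; it is also \(m\)-polar, being contained in \(\{\log|z|=-\infty\}\). This elementary configuration is the prototype for Theorem 7.8 below, which shows that every \(m\)-negligible set is \(m\)-polar.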
Notice that a \(n\)-polar/\(n\)-negligible set is pluripolar/negligible. Clearly, a pluripolar set is \(m\)-polar (and a negligible set is \(m\)-negligible) for every \(1\leq m\leq n\). More generally, a \(m\)-polar (resp. \(m\)-negligible) set is \((m-1)\)-polar (resp. \((m-1)\)-negligible). An effective way to study these sets is by extremal functions.

**Definition 7.2**.: Let \(E\) be a subset of a bounded open set \(\Omega\subset\mathbb{C}^{n}\). We define \[u_{E}=u_{E,\Omega}=\sup\{v(x):v\text{ is }m-\omega\text{-sh in }\Omega,\;v\leq 0,\;v\leq-1\text{ on }E\}.\] By Choquet's lemma \(u_{E}\) is the limit of an increasing sequence of \(m-\omega\)-sh functions. It follows from [1, Corollary 9.9] that \(u_{E}^{*}\) is \(m-\omega\)-sh and \(u_{E}=u_{E}^{*}\) almost everywhere. Moreover, \(u_{E}^{*}\equiv 0\) if and only if there exists an increasing sequence of \(m-\omega\)-sh functions \(\{v_{j}\}_{j\geq 1}\) satisfying \[v_{j}\leq 0,\quad v_{j}\leq-1\text{ on }E,\quad\int_{\Omega}|v_{j}|dV_{2n}\leq 2^{-j}. \tag{7.2}\]

**Lemma 7.3**.: _Let \(\Omega\) be a bounded open set in \(\mathbb{C}^{n}\). Then_
* _(i) If_ \(E_{1}\subset E_{2}\)_, then_ \(u_{E_{2}}\leq u_{E_{1}}\)_._
* _(ii) If_ \(E\subset\Omega_{1}\subset\Omega_{2}\)_, then_ \(u_{E,\Omega_{2}}\leq u_{E,\Omega_{1}}\)_._
* _(iii) Let_ \(K_{j}\) _be a non-increasing sequence of compact subsets of_ \(\Omega\) _and_ \(K=\cap_{j}K_{j}\)_. Then,_ \(u_{K_{j}}^{*}\) _increases almost everywhere to_ \(u_{K}^{*}\)_._
* _(iv) If_ \(u_{E_{j}}^{*}\equiv 0\) _and_ \(E=\cup_{j=1}^{\infty}E_{j}\)_, then_ \(u_{E}^{*}\equiv 0\)_._
_Suppose moreover that_ \(\Omega\) _is strictly_ \(m\)_-pseudoconvex. Then_
* _(v) If_ \(E\subset\subset\Omega\)_, then_ \(\lim_{z\to\partial\Omega}u_{E}^{*}=0\)_._
* _(vi) For every set_ \(E\subset\Omega\)_,_ \(H_{m}(u_{E}^{*})\equiv 0\) _on_ \(\Omega\setminus\overline{E}\)_._

Proof.: The properties (i) and (ii) are obvious from the definition, and also \(\lim_{j}u_{K_{j}}\leq u_{K}^{*}\) in (iii). To prove the reverse inequality let \(v\) be a \(m-\omega\)-sh function with \(v\leq 0\) and \(v\leq-1\) on \(K\). For \(\varepsilon>0\), the open set \(U_{\varepsilon}=\{v<-1+\varepsilon\}\) contains \(K\). Hence, \(K_{j}\subset U_{\varepsilon}\) for \(j\) large enough. So, \(v-\varepsilon\leq u_{K_{j}}^{*}\). Taking the supremum over all such functions \(v\) we get \(u_{K}-\varepsilon\leq u:=\lim_{j}u_{K_{j}}\). Letting \(\varepsilon\to 0\) we obtain the conclusion. Notice again that the statement that \(u=u^{*}\) almost everywhere follows from [1, Corollary 9.9]. (iv) Let \(\varepsilon>0\). By (7.2) we can choose a sequence \(v_{j}\leq 0\), \(v_{j}\leq-1\) on \(E_{j}\) and \(\int_{\Omega}|v_{j}|dV_{2n}\leq\varepsilon 2^{-j}\). Then, \(v=\sum_{j}v_{j}\) is a \(m-\omega\)-sh function satisfying \(v\leq 0\), \(v\leq-1\) on \(E\) and \(\int_{\Omega}|v|dV_{2n}\leq\varepsilon\). Hence, \(u_{E}^{*}\equiv 0\). (v) Let \(\psi\) be a strictly \(m-\omega\)-sh defining function of \(\Omega\). Then, for \(A>1\) large enough, \(A\psi\leq u_{E}^{*}\). This finishes the proof. (vi) Given the unique continuous solution of the Dirichlet problem for the homogeneous Hessian equation [1, Theorem 3.15] in small balls, the result follows from a classical balayage argument.

The outer capacity \(cap_{m}^{*}(\bullet)\) is defined as follows. \[cap_{m}^{*}(E)=\inf\left\{cap_{m}(U):E\subset U,\;U\subset\Omega\text{ is open}\right\}. \tag{7.3}\] Then, we have basic properties which follow easily from the corresponding ones of the capacity \(cap_{m}\).
**Proposition 7.4**.: _Let \(\Omega\) be a bounded open set in \(\mathbb{C}^{n}\). Then,_
1. \(cap_{m}^{*}(E_{1})\leq cap_{m}^{*}(E_{2})\) _if_ \(E_{1}\subset E_{2}\subset\Omega\)_;_
2. \(cap_{m}^{*}(E,\Omega_{1})\geq cap_{m}^{*}(E,\Omega_{2})\) _if_ \(E\subset\Omega_{1}\subset\Omega_{2}\)_;_
3. \(cap_{m}^{*}(\cup_{j}E_{j})\leq\sum_{j}cap_{m}^{*}(E_{j})\)_._

**Lemma 7.5**.: _Let \(\Omega\subset\subset\mathbb{C}^{n}\) be a strictly \(m\)-pseudoconvex domain. Let \(E\subset\subset\Omega\) be a Borel subset. Then_ \[\int_{\Omega}H_{m}(u_{E}^{*})\leq cap_{m}^{*}(E)\leq C\sum_{s=0}^{m}\int_{\Omega}(-u_{E}^{*})H_{s}(u_{E}^{*}).\]

Proof.: We prove first the left hand side inequality. Assume that \(E=\overline{E}=:K\) is compact. The property Lemma 7.3-(vi) implies \[\int_{\Omega}H_{m}(u_{K}^{*})=\int_{K}H_{m}(u_{K}^{*})\leq cap_{m}(K)\leq cap_{m}^{*}(K).\] Assume \(E=G\) is an open subset. We can find an increasing sequence of compact sets \(K_{j}\) such that \(\cup_{j}K_{j}=G\). It is easy to see that \(u_{K_{j}}^{*}\) decreases to \(u_{G}=u_{G}^{*}\) on \(\Omega\). Hence, by the weak convergence theorem for decreasing sequences, \(H_{m}(u_{K_{j}}^{*})\to H_{m}(u_{G})\) weakly. This implies \[\int_{\Omega}H_{m}(u_{G})=\lim_{j\to\infty}\int_{\Omega}H_{m}(u_{K_{j}}^{*})\leq\lim_{j\to\infty}cap_{m}(K_{j})\leq cap_{m}(G).\] Since \(cap_{m}(G)=cap_{m}^{*}(G)\), the conclusion follows. Now let \(E\) be a Borel subset. By definition there exists a sequence of open sets \(\{O_{j}\}\) in \(\Omega\) containing \(E\) such that \(cap_{m}^{*}(E)=\lim_{j}cap_{m}(O_{j})\). Replacing \(O_{j}\) by \(\cap_{1\leq s\leq j}O_{s}\) we may assume that \(\{O_{j}\}_{j\geq 1}\) is decreasing. Moreover, by Choquet's lemma there exists an increasing sequence \(\{v_{j}\}\) of negative \(m-\omega\)-sh functions in \(\Omega\) such that \(v_{j}\leq-1\) on \(E\) and \(\lim_{j}v_{j}=u_{E}\) almost everywhere on \(\Omega\). Set \(G_{j}=O_{j}\cap\{v_{j}<-1+1/j\}\). Then, \(E\subset G_{j}\subset O_{j}\) and \[v_{j}-1/j\leq u_{G_{j}}\leq u_{E}.\] So, \(\lim_{j\to\infty}cap_{m}(G_{j})=cap_{m}^{*}(E)\) and \(u_{G_{j}}\) increases to \(u_{E}\) almost everywhere on \(\Omega\). Therefore, by the weak convergence for increasing sequences (Lemma 5.5), \[\int_{\Omega}H_{m}(u_{E}^{*})=\lim_{j\to\infty}\int_{\Omega}H_{m}(u_{G_{j}})\leq\lim_{j\to\infty}cap_{m}(G_{j})=cap_{m}^{*}(E).\] Thus, the proof of the left hand side inequality is completed. Next we prove the other one. Let \(E\subset\subset\Omega\) be a Borel subset and consider the sets \(G_{j}\) defined as above. Then, \(\lim_{j}cap_{m}(G_{j})=cap_{m}^{*}(E)\). We also have for \(0\leq s\leq m\), \[\lim_{j\to\infty}\int_{\Omega}(-u_{G_{j}})H_{s}(u_{G_{j}})=\int_{\Omega}(-u_{E}^{*})H_{s}(u_{E}^{*})\] by the weak convergence for increasing sequences again. Thus, it is enough to prove the inequality for \(E=G\subset\subset\Omega\) an open subset. To this end let \(-1\leq\rho\leq 0\) be a \(m-\omega\)-sh function in \(\Omega\). Since \(G\subset\{u_{G}=-1\}\) and \(u_{G}=u_{G}^{*}\) it follows that for \(q\geq 1\), \[\int_{G}H_{m}(\rho)\leq\int(-u_{G})^{q}H_{m}(\rho).\] Applying Corollary 5.4 with \(w=0\) and \(v=u_{G}\) we get \[\int_{\Omega}(-u_{G})^{3m}H_{m}(\rho)\leq C\sum_{s=0}^{m}\int_{\Omega}(-u_{G})H_{s}(u_{G}).\] Taking \(q=3m\) above and then the supremum over all such functions \(\rho\), we get the desired inequality.

**Remark 7.6**.: For a compact set \(K\) in a strictly \(m\)-pseudoconvex domain \(\Omega\), \(cap_{m}(K,\Omega)=0\) if and only if \(cap_{m}^{*}(K,\Omega)=0\).
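For later use let us also record two elementary comparisons between the inner and the outer capacity, both immediate from the definitions (4.1) and (7.3) together with the monotonicity in Proposition 4.1: for every Borel set \(E\subset\Omega\) and every open set \(G\subset\Omega\), \[cap_{m}(E)\leq cap_{m}^{*}(E),\qquad cap_{m}(G)=cap_{m}^{*}(G).\] Indeed, every open set \(U\) with \(E\subset U\subset\Omega\) satisfies \(cap_{m}(E)\leq cap_{m}(U)\), while \(G\) itself competes in the infimum defining \(cap_{m}^{*}(G)\). These facts were already used in the proof of Lemma 7.5 and will be used again below.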
**Proposition 7.7**.: _In a strictly \(m\)-pseudoconvex domain \(\Omega\) the following are equivalent:_ * \(u_{E,\Omega}^{*}=0\)_._ * \(E\subset\{u=-\infty\}\) _for a_ \(m-\omega\)_-sh function_ \(u<0\) _in_ \(\Omega\)_._ * \(cap_{m}^{*}(E,\Omega)=0\)_._ Proof.: (a) \(\Rightarrow\) (b) follows from the property (7.2) by setting \(u=\sum_{j\geq 1}v_{j}\). Conversely, \(E\subset\{v=-\infty\}\), where \(v<0\) and \(m-\omega\)-sh, implies \(u_{E}\geq v/j\) for \(j=1,2...\). So \(u_{E}=0\) outside \(\{v=-\infty\}\) whose Lebesgue measure is zero. Hence, \(u_{E}^{*}=0\) by [18, Corollary 9.7]. The implication (c) \(\Rightarrow\) (a) follows from the fact that \(H_{m}(u_{E}^{*})\equiv 0\) and the domination principle (Corollary 6.2) if \(E\) is relatively compact in \(\Omega\). The general case follows from the countable subadditivity of \(cap_{m}^{*}\) and the corresponding property of \(u_{E}^{*}\) above. To prove \((b)\Rightarrow(c)\) let us fix an open subset \(V\subset\subset\Omega\) and denote \(\mathcal{O}_{j}=\{u<-j\}\cap V\). Let \(\varepsilon>0\). We wish to find an open subset \(E\subset G\subset\Omega\) with \(cap_{m}(G)<\varepsilon\). Indeed, we have \(0\geq u_{\mathcal{O}_{j}}\geq\max\{u/j,-1\}\), where \(u_{\mathcal{O}_{j}}\) is the relative extremal function. Then, \(u_{\mathcal{O}_{j}}\uparrow 0\) a.e on \(\Omega\) by using \(\omega\)-subharmonicity. Now the right hand side inequality in Lemma 7.5 gives \[cap_{m}(\mathcal{O}_{j})\leq C\sum_{s=0}^{m}e_{(0,0,s)},\] where \(e_{(0,0,s)}=\int_{\Omega}(-u_{\mathcal{O}_{j}})H_{s}(u_{\mathcal{O}_{j}})\). Applying the weak convergence theorem for increasing convergence sequences we get that \(H_{s}(u_{\mathcal{O}_{j}})\to 0\) weakly in \(\Omega\), \(1\leq s\leq m\). Furthermore, for \(s=m\) and \(s=0\), \[\lim_{j\to\infty}\int_{\Omega}(-u_{\mathcal{O}_{j}})H_{m}(u_{\mathcal{O}_{j}}) =0=\lim_{j\to\infty}\int_{\Omega}(-u_{\mathcal{O}_{j}})\omega^{n}.\] Now we claim that for \(1\leq s\leq m-1\), \[\lim_{j\to\infty}\int_{\Omega}(-u_{\mathcal{O}_{j}})H_{s}(u_{\mathcal{O}_{j}}) =0. \tag{7.4}\] Assume this is true for a moment and let us finish the proof. The above facts imply that \[\lim_{j\to\infty}cap_{m}(\mathcal{O}_{j})=0. \tag{7.5}\] Take a sequence of open sets \(V_{s}\) exhausting \(\Omega\). Choose \(\mathcal{O}_{j_{s}}=\{u<-j_{s}\}\cap V_{s}\) such that \(cap_{m}(\mathcal{O}_{j_{s}})<\varepsilon/2^{s}\). Define \(G=\cup_{s\geq 1}\mathcal{O}_{j_{s}}\) which is an open set containing \(E\) and which has capacity less than \(\varepsilon\). Finally, let us verify (7.4). Let \(\rho\) be strictly \(m-\omega\)-sh defining function for \(\Omega\). Since \(\mathcal{O}_{j}\subset V\subset\subset\Omega\), we have \(u_{\mathcal{O}_{j}}\geq u_{V}\geq A\rho\) for a constant \(A>0\) depending only on \(V,\Omega\) by the proof of Lemma 7.3-(v). Hence, \(u_{\mathcal{O}_{j}}\)'s can be extended to a neighborhood \(\widetilde{\Omega}\) of \(\overline{\Omega}\) by \(A\rho\) (see e.g., (8.9)). 
By the CLN inequality there is a uniform constant \(C=C(\Omega,\widetilde{\Omega})\) such that for every \(j\geq 1\), \[\int_{\Omega}H_{s}(u_{\mathcal{O}_{j}})\leq C.\] Then, for a fixed \(\varepsilon>0\), \[\int_{\Omega}(-u_{\mathcal{O}_{j}})H_{s}(u_{\mathcal{O}_{j}}) \leq A\int_{\Omega}|\rho|H_{s}(u_{\mathcal{O}_{j}})\] \[\leq A\varepsilon\int_{\{|\rho|<\varepsilon\}}H_{s}(u_{\mathcal{ O}_{j}})+A\int_{\{|\rho|\geq\varepsilon\}}H_{s}(u_{\mathcal{O}_{j}})\] \[\leq AC\varepsilon+A\int_{\{|\rho|\geq\varepsilon\}}H_{s}(u_{ \mathcal{O}_{j}}).\] Since \(H_{s}(u_{\mathcal{O}_{j}})\to 0\) weakly as \(j\to\infty\) and \(\{|\rho|\geq\varepsilon\}\subset\Omega\) is compact, letting \(j\to\infty\) we get \[\lim_{j\to\infty}\int_{\Omega}(-u_{\mathcal{O}_{j}})H_{s}(u_{\mathcal{O}_{j}} )\leq AC\varepsilon.\] This holds for arbitrary \(\varepsilon>0\), where \(A,C\) are uniform constants independent of \(\varepsilon\). Hence, the proof of (7.4) is completed. **Theorem 7.8**.: \(m\)_-negligible sets are \(m\)-polar._ Proof.: The result is local, so we may assume that all functions are defined on a bounded strictly \(m\)-pseudoconvex domain \(\Omega\). Thanks to the characterization in Proposition 7.7 it is enough to show that a negligible set \(E\) has outer capacity zero. Let \(\{u_{j}\}\) be the sequence in the definition of the negligible set and put \(u=\sup_{j}u_{j}\). By Choquet's lemma we may assume this is an increasing sequence. Let \(\varepsilon>0\). By quasi-continuity we can find an open set \(G\subset\Omega\) such that \(cap_{m}(G)<\varepsilon\) and \(u,u_{j}\)'s are continuous on \(F:=\Omega\setminus G\). Since \(cap_{m}^{*}(\bullet)\) is countably subadditive, it is enough to show that \[cap_{m}^{*}(E\cap K)=0\] for a fixed compact subset \(K\subset\Omega\). Observe that for all rational numbers \(r<t\), the sets \[K_{rt}=K\cap F\cap\{u\leq r<t\leq u^{*}\}\] are compact, because \(u\) is lower semi-continuous in \(K\cap F\) and \(u^{*}\) is upper-semi continuous. Also, \((K\cap E)\setminus G\) is contained in the countable union of such compact sets. Thus, by countable subadditivity, it remains to verify \(cap_{m}^{*}(K_{rt})=0\). Since \(u\) is lower semi-continuous on \(K\), there exists a constant \(c\) such that \(u\geq c\) on \(K\). Denote \(u_{c}=\max\{u,c\}\) and notice that \(K_{rt}\subset K_{rt}^{\prime}\), where \[K_{rt}^{\prime}=K\cap F\cap\{u_{c}\leq r<t\leq u_{c}^{*}\}\] are compact sets. Since \(\{u_{c}<u_{c}^{*}\}\) has inner capacity \(cap_{m}\) zero, we have \(cap_{m}(K_{rt}^{\prime})=0\). This implies that \(cap_{m}^{*}(K_{rt}^{\prime})=0\) by Remark 7.6. We have the following analogue of Josefson's theorem whose proof is the same as the one of [10, Theorem 1.23]. **Theorem 7.9**.: _For any \(m\)-polar subset \(E\) of \(\mathbb{C}^{n}\), there exists a \(m-\omega\)-sh function \(h\) on \(\mathbb{C}^{n}\) such that \(E\subset\{h=-\infty\}\)._ ## 8. Dirichlet problem in domains in \(\mathbb{C}^{n}\) Let \(\Omega\) be a bounded strictly \(m\)-pseudoconvex domain in \(\mathbb{C}^{n}\). The comparison principle in Corollary 6.2 coupled with the proof of [1, Lemma 3.13] gives the following stability estimate for the complex Hessian equation: **Proposition 8.1**.: _Let \(u,v\in C^{0}(\overline{\Omega})\) be \(m-\omega\)-sh in \(\Omega\) and satisfy_ \[H_{m}(u)=fdV_{2n},\quad H_{m}(v)=gdV_{2n}\] _with \(0\leq f,g\in L^{p}(\Omega)\) and \(p>n/m\). 
Then_ \[\|u-v\|_{L^{\infty}}\leq\sup_{\partial\Omega}|u-v|+C\|f-g\|_{L^{p}(\Omega)}^{ \frac{1}{m}},\] _where \(C=C(m,n,p,\Omega)\)._ Let \(\psi\in C^{\infty}(\partial\Omega)\). Given a smooth positive function \(f\in C^{\infty}(\overline{\Omega},\mathbb{R})\), there is always a smooth \(m-\omega\)-sh subsolution \(\underline{u}\in C^{\infty}(\overline{\Omega})\), that is \[H_{m}(\underline{u})\geq f(z),\quad\underline{u}=\psi\quad\text{on }\partial\Omega. \tag{8.1}\] The stability estimate and an easy approximation argument implies that we can solve the Dirichlet problem when the right hand side in \(L^{p}\), \(p>n/m\) after invoking the solution for the smooth data due to Collins and Picard [10]. **Theorem 8.2**.: _Let \(0\leq f\in L^{p}(\Omega)\) for some \(p>n/m\). Suppose \(\varphi\in C^{0}(\partial\Omega)\). Then there exists a unique continuous \(m-\omega\)-sh functions \(u\in C^{0}(\overline{\Omega})\) solving the Dirichlet problem_ \[H_{m}(u)=f\omega^{n},\quad u=\varphi\text{ on }\partial\Omega.\] Now we wish to solve the equation with the right hand side just being a positive Radon measure assuming the existence of a subsolution. We use recent ideas from [11], however several steps require very different proofs. Assume \(\widetilde{\Omega}\) is a neighborhood of \(\overline{\Omega}\). Let us define a slightly modified Cegrell class \[\widetilde{\mathcal{E}}_{0}(\Omega)=\left\{u\text{ is bounded and }\omega-m \text{-sh in }\widetilde{\Omega}:\lim_{z\to\partial\Omega}u(z)=0\right\}. \tag{8.2}\] The set \(\widetilde{\Omega}\) is suppressed in this notation. By the CLN inequality for \(u\in\widetilde{\mathcal{E}}_{0}(\Omega)\) we have \(\int_{\Omega}H_{m}(u)<+\infty\). We introduce this modified class to control the integrals of the wedge products of currents associated to bounded \(m-\omega\)-sh functions. Now we follow the steps in [11]. The first one corresponds to [11, Lemma 2.1] which in turn was inspired by [10, Lemma 5.2]. **Lemma 8.3**.: _Let \(\lambda\) be a finite positive Radon measure on \(\Omega\) which vanishes on \(m\)-polar sets. Let \(\{u_{j}\}_{j\geq 1}\subset\widetilde{\mathcal{E}}_{0}(\Omega)\) be a uniformly bounded in \(\widetilde{\Omega}\) sequence that converges \(dV\)-a.e to \(u\in\widetilde{\mathcal{E}}_{0}(\Omega)\). Then, there exists a subsequence \(u_{j_{*}}\) such that_ \[\lim_{j_{*}\to\infty}\int_{\Omega}u_{j_{*}}d\lambda=\int_{\Omega}ud\lambda.\] Proof.: By the comparison principle, all functions are negative on \(\Omega\). The proof of [11, Lemma 2.1] is applicable provided that the \(m\)-negligible sets are \(m\)-polar and this is the content of Theorem 7.8. Applying the lemma twice for the sequences \(\{u_{j}\},\max\{u_{j},u\}\) and combining with the identity \(2\max\{u_{j},u\}=u_{j}+u+|u_{j}-u|\) we easily get a corollary. **Corollary 8.4**.: _Let \(\lambda\) and \(\{u_{j}\}_{j\geq 1}\) be as in Lemma 8.3. Then, there exists a subsequence, still denoted by \(\{u_{j}\}\), such that \(\lim_{j\to\infty}\int_{\Omega}|u_{j}-u|d\lambda=0.\)_ The following result is crucial for proving the weak convergence later. **Lemma 8.5**.: _Let \(d\lambda\) and \(\{u_{j}\}_{j\geq 1}\) be as in Lemma 8.3. Let \(\{w_{j}\}_{j\geq 1}\subset\widetilde{\mathcal{E}}_{0}(\Omega)\) be uniformly bounded in \(\widetilde{\Omega}\). Suppose that \(w_{j}\) converges in capacity - \(cap_{m}(\bullet,\Omega)\) - to \(w\in\widetilde{\mathcal{E}}_{0}(\Omega)\). 
Then,_ \[\lim_{j\to\infty}\int_{\Omega}|u-u_{j}|H_{m}(w_{j})=0.\] **Remark 8.6**.: Since all functions are uniformly bounded on the fixed neighborhood \(\widetilde{\Omega}\) of \(\overline{\Omega}\), with \(\|u_{j}\|_{L^{\infty}},\|w_{j}\|_{L^{\infty}}\leq A\), by Proposition 3.6 there exist two positive constants \(C_{1},C_{2}\) depending only on sup-norm of \(u_{j}^{\prime}s\) and \(w_{j}\)'s (and the domains \(\Omega,\widetilde{\Omega}\)), such that \[\sup_{j}\int_{\Omega}H_{m}(u_{j})\leq C_{1},\quad\sup_{j}\int_{\Omega}H_{m}( w_{j})\leq C_{2}.\] Proof.: Note that \(|u-u_{j}|=(\max\{u,u_{j}\}-u_{j})+(\max\{u,u_{j}\}-u)\). Observe first that by the Hartogs lemma and quasi-continuity of \(u\) ([12, Lemma 9.14] and Theorem 4.4) \(\phi_{j}:=\max\{u,u_{j}\}\to u\) in capacity. Fix \(\varepsilon>0\). We have for \(j\) large, \[\int_{\Omega}(\max\{u,u_{j}\}-u)H_{m}(w_{j}) \leq\int_{\{|\phi_{j}-u|>\varepsilon\}}H_{m}(w_{j})+\varepsilon \int_{\Omega}H_{m}(w_{j})\] \[\leq A^{m}\ cap_{m}(|\phi_{j}-u|>\varepsilon)+C_{2}\varepsilon.\] Therefore, \(\lim_{j\to\infty}\int(\phi_{j}-u)H_{m}(w_{j})=0\). Here and in what follows we drop the domain \(\Omega\) in the integrals if no confusion arises. Next, we consider for \(j>k\), \[\int(\phi_{j}-u_{j})H_{m}(w_{j})-\int(\phi_{j}-u_{j})H_{m}(w_{k})=\int(\phi_{ j}-u_{j})dd^{c}(w_{j}-w_{k})\wedge T\wedge\omega^{n-m},\] where \(T=T(j,k)=\sum_{s=1}^{n-1}(dd^{c}w_{j})^{s}\wedge(dd^{c}w_{k})^{m-1-s}\). Let us write \(h_{j}=\phi_{j}-u_{j}\). Now the proof gets more complicated than the one in [12, Lemma 2.3] as the integration by parts produces more terms involving the torsion of \(\omega\). By the integration by parts \[\int h_{j}dd^{c}(w_{j}-w_{k})\wedge T\wedge\omega^{n-m}=\int(w_{j}-w_{k})dd^{ c}(h_{j}\omega^{n-m})\wedge T. \tag{8.3}\] This integration by parts formula is justified by an approximation argument as follows. All functions are continuous on the boundary \(\partial\Omega\) with zero value there, so the approximating sequences of smooth functions converge to zero uniformly on \(\partial\Omega\). Moreover, the functions are defined and uniformly bounded on a neighborhood \(\widetilde{\Omega}\) so the total masses of wedge products are uniformly bounded by the CLN inequality. Hence, the boundary terms vanish after passing to the limit (see Remark 2.16 and also [11, Proposition 3.7]). By a direct calculation, \[\begin{split} dd^{c}(h_{j}\omega^{n-m})&=dd^{c}h_{j }\wedge\omega^{n-m}+h_{j}dd^{c}\omega^{n-m}\\ &\quad+(n-m)[dh_{j}\wedge d^{c}\omega+d\omega\wedge d^{c}h_{j}] \wedge\omega^{n-m-1}.\end{split} \tag{8.4}\] For the first term we obtain a bound \[\int(w_{j}-w_{k})dd^{c}h_{j}\wedge\omega^{n-m}\wedge T\leq\int|w_{j}-w_{k}|dd^ {c}(\phi_{j}+u_{j})\wedge T\wedge\omega^{n-m}, \tag{8.5}\] and using inequality [12, Lemma 2.3] for the second term \[\begin{split}&\int(w_{j}-w_{k})h_{j}dd^{c}\omega^{n-m}\wedge T \\ &\leq C\int|w_{j}-w_{k}|h_{j}[dd^{c}(w_{j}+w_{k})]^{m-1}\wedge \omega^{n-m+1}.\end{split} \tag{8.6}\] Next, since two terms in the bracket of (8.4) are mutually conjugate, we only estimate the first one. To this end we will use Cauchy-Schwarz' inequality (Corollary 2.5): \[\left|\int(w_{j}-w_{k})dh_{j}\wedge d^{c}\omega\wedge\omega^{n-m-1} \wedge T\right|^{2}\] \[\leq C\int|w_{j}-w_{k}|dh_{j}\wedge d^{c}h_{j}\wedge[dd^{c}(w_{j} +w_{k})]^{m-1}\wedge\omega^{n-m}\] \[\qquad\times\int|w_{j}-w_{k}|[dd^{c}(w_{j}+w_{k})]^{m-1}\wedge \omega^{n-m+1}.\] Let us consider the second factor in the product. 
Since \(\|w_{j}\|_{L^{\infty}},\|u_{j}\|_{L^{\infty}}\leq A\) in \(\Omega\), it follows that \[\begin{split}&\int_{\Omega}|w_{j}-w_{k}|[dd^{c}(\phi_{j}+u_{j})]^{m- 1}\wedge\omega^{n-m+1}\\ &\leq A\int_{\{|w_{j}-w_{k}|>\varepsilon\}}[dd^{c}(\phi_{j}+u_{j} )]^{m-1}\wedge\omega^{n-m+1}\\ &\quad+\varepsilon\int_{\{|w_{j}-w_{k}|\leq\varepsilon\}}[dd^{c}( \phi_{j}+u_{j})]^{m-1}\wedge\omega^{n-m+1}\\ &\leq(2A)^{m}cap_{m}(|w_{j}-w_{k}|>\varepsilon)+C\varepsilon, \end{split} \tag{8.7}\] where the uniform bound for the integral in the third line follows from the CLN inequality as all functions are uniformly bounded in \(\widetilde{\Omega}\). It means that the left hand side of the inequality (8.7) is less than \(2C\varepsilon\) for some \(k_{0}\) and every \(j>k\geq k_{0}\). As for the first factor in the product we observe that \[dh_{j}\wedge d^{c}h_{j}\leq 2du_{j}\wedge d^{c}u_{j}+2d\phi_{j}\wedge d^{c} \phi_{j},\] and \(2d\phi_{j}\wedge d^{c}\phi_{j}=dd^{c}\phi_{j}^{2}-2\phi_{j}dd^{c}\phi_{j}\) and similarly for \(u_{j}\). Therefore, we can apply the estimate as in (8.7) for this integral. The same for the integrals on the right hand sides of (8.5) and (8.6). Thus, \[\begin{split}\int(\phi_{j}-u_{j})H_{m}(w_{j})&\leq \int(\phi_{j}-u_{j})H_{m}(w_{k})\\ &\quad+\left|\int(\phi_{j}-u_{j})H_{m}(w_{j})-\int(\phi_{j}-u_{j} )H_{m}(w_{k})\right|\\ &\leq\int(\phi_{j}-u_{j})H_{m}(w_{k})+8C\varepsilon\\ &\leq\int|u-u_{j}|H_{m}(w_{k})+8C\varepsilon.\end{split}\] Fix \(k=k_{0}\) and apply Corollary 8.4 for \(d\lambda=H_{m}(w_{k_{0}})\) to get that for \(j\geq k_{1}\geq k_{0}\) \[\int(\phi_{j}-u_{j})H_{m}(w_{j})\leq(8C+1)\varepsilon.\] Since \(\varepsilon>0\) was arbitrary, the proof of the lemma is completed. Let \(\mu\) be a positive Radon measure on a bounded strictly \(m\)-pseudoconvex domain \(\Omega\). Assume that there exists a bounded \(m-\omega\)-sh function \(\underline{u}\) in \(\Omega\) such that \[H_{m}(\underline{u})\geq\mu,\quad\lim_{x\to z\in\partial\Omega}\underline{u}(x )=0. \tag{8.8}\] This function \(\underline{u}\) is called a subsolution for \(d\mu\). Our goal is to prove the following. **Theorem 8.7**.: _Let \(\varphi\in C^{0}(\partial\Omega)\) and let \(\mu\) be a positive Radon measure in \(\Omega\). Assume that \(\mu\) admits a bounded subsolution \(\underline{u}\) as in (8.8). Then, there exists a unique bounded \(m-\omega\)-sh function \(u\) solving \(\lim_{z\to x}u(z)=\varphi(x)\) for \(x\in\partial\Omega\),_ \[H_{m}(u)=\mu\quad\text{in }\Omega.\] We first make some reduction steps. In fact it is enough to prove the statement under the following additional assumptions on the measure \(d\mu\), the boundary data \(\varphi\) and the subsolution \(\underline{u}\). **Lemma 8.8**.: _We may assume additionally that_ 1. \(\varphi\) _is smooth on_ \(\partial\Omega\)_;_ 2. \(\mu\) _has compact support in_ \(\Omega\)_;_ 3. _the support of_ \(\nu=(dd^{c}\underline{u})^{m}\wedge\omega^{n-m}\) _is compact in_ \(\Omega\)_;_ 4. \(\underline{u}\) _can be extended as a_ \(m-\omega\)_-sh function to a neighborhood_ \(\widetilde{\Omega}\) _of_ \(\overline{\Omega}\)_._ Proof.: _Step 1:_ **(c)\(\Rightarrow\)(d)**. Denote \(K:=\operatorname{supp}\,\nu\). Let \(\rho\) be a strictly \(m-\omega\)-sh defining function for \(\Omega\). In particular, \(\rho\) is defined in a neighborhood \(\widetilde{\Omega}\) of \(\overline{\Omega}\). 
For \(A>0\) (to be chosen), we define \[v=\begin{cases}\max\{\underline{u},A\rho\}\quad\text{in }\Omega,\\ A\rho\quad\text{on }\widetilde{\Omega}\setminus\Omega.\end{cases} \tag{8.9}\] Notice that \(\lim_{x\to\partial\Omega}(\underline{u}-A\rho)=0\), so the function \(v\) is well-defined. We claim that for a sufficiently large \(A\) we have \(\underline{u}\geq A\rho\) on \(\Omega\), and then \(v\) is a required extension. Indeed, let \(U\subset\subset\Omega\) be a neighborhood of \(K\). Since \(\sup_{\overline{U}}\rho\leq-\delta\) for some \(\delta>0\) and \(\underline{u}\) is bounded, we can choose \(A>0\) large enough so that \(v\geq A\rho\) on \(\overline{U}\). Furthermore, \((dd^{c}v)^{m}\wedge\omega^{n-m}\equiv 0\) on \(\Omega\setminus\overline{U}\) and \(v\geq A\rho\) on the boundary \(\partial(\Omega\setminus\overline{U})\). The domination principle implies that \(v\geq A\rho\) on \(\Omega\setminus\overline{U}\). **Remark 8.9**.: The constant \(A>0\) chosen in this argument depends only on the defining function \(\rho\), the support of \(\nu\) and the \(\sup\)-norm of \(\underline{u}\). _Step 2:_ **(b)\(\Rightarrow\)(c)**. This follows from the classical balayage argument. However, several ingredients are only available recently. We give a detailed argument. Assume that \(\operatorname{supp}\,\mu\subset U\) which is an open subset relatively compact in \(\Omega\). We define an envelope \[v=\sup\left\{w:w\text{ is }m-\omega\text{-sh in }\Omega,\,w\leq\underline{u} \text{ on }U,\,w\leq 0\right\}.\] It follows from [10, Proposition 2.6] that the upper semicontinuous regularization \(v^{*}\geq v\) is also \(m-\omega\)-sh function in \(\Omega\). Hence, \(v^{*}=v\) belongs to the family in the definition of the envelope. So \(\underline{u}\leq v\). Thus, \(\underline{u}=v\) on \(U\) containing \(\operatorname{supp}\,\mu\). Therefore, \[H_{m}(v)\geq\mu\quad\text{ in }\Omega.\] Now we verify that \[H_{m}(v)\equiv 0\quad\text{on }\Omega\setminus\overline{U}.\] Let \(B(a,r)\subset\subset\Omega\setminus\overline{U}\) be a small ball. By using the solution to the homogeneous Hessian equation (Theorem 8.2) one can find a continuous \(m-\omega\)-sh function \(h\geq v\) in \(\Omega\) which is maximal in \(B(a,r)\) in the sense that \(H_{m}(h)\equiv 0\) in \(B(a,r)\) and \(h=v\) on \(\Omega\setminus B(a,r)\). Observe also \(U\subset\Omega\setminus B(a,r)\) and the function \(h\) is a candidate in the envelope. So, \(h\leq v\) in \(\Omega\). Hence, \(h=v\) everywhere. We have \(H_{m}(v)\equiv 0\) on \(B(a,r)\). The ball is arbitrary so we get the desired property for \(v\). _Step 3:_ **(b).** If the problem is solvable for measures with compact support, then it is solvable for a general measure. In fact, let \(\eta_{j}\uparrow 1\) be a sequence of cut-off functions. Then, \(\eta_{j}\mu\) admits \(\underline{u}\) as a bounded subsolution. Solve the equation \(H_{m}(w)=\eta_{j}\mu\) to obtain a bounded \(m-\omega\)-sh function \(u_{j}\) with \(u_{j}=\varphi\) on \(\partial\Omega\). Denote by \(h\in C^{0}(\overline{\Omega})\) a unique \(m-\omega\)-sh solution to \(H_{m}(h)\equiv 0\) in \(\Omega\) with \(h=\varphi\) on \(\partial\Omega\). Then, \(H_{m}(\underline{u}+h)\geq H_{m}(u_{j})\). The domination principle gives \[\underline{u}+h\leq u_{j}\leq h,\] and the sequence \(u_{j}\) is decreasing. Define \(u=\lim_{j}u_{j}\). Then, it is a solution to \(H_{m}(u)=\mu\) with the boundary data \(\varphi\). 
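Let us indicate why the decreasing limit produced in Step 3 has the required properties; this uses only the convergence results of Section 5. The two-sided bound \(\underline{u}+h\leq u_{j}\leq h\) shows that the decreasing sequence \(\{u_{j}\}\) is uniformly bounded, so the convergence theorem for decreasing sequences (Lemma 5.1) gives, in the sense of measures, \[H_{m}(u)=\lim_{j\to\infty}H_{m}(u_{j})=\lim_{j\to\infty}\eta_{j}\mu=\mu,\] the last equality because \(\eta_{j}\uparrow 1\). The same two-sided bound, combined with \(\lim_{z\to\partial\Omega}\underline{u}(z)=0\) and \(h=\varphi\) on \(\partial\Omega\), gives \(\lim_{z\to x}u(z)=\varphi(x)\) for every \(x\in\partial\Omega\).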
_Step 4:_ **(a).** If we can solve the Dirichlet problem for smooth boundary data, then it is solvable for the continuous one. Indeed, let \(\varphi_{j}\) be a sequence of smooth functions decreasing to \(\varphi\). Find a bounded \(m-\omega\)-sh function \(u_{j}\) in \(\Omega\) such that \[H_{m}(u_{j})=\mu,\quad\lim_{z\to x}u_{j}(z)=\varphi_{j}(x)\text{ for every }x\in\partial\Omega.\] Let \(h_{j}\) be the solution the homogeneous equation \(H_{m}(h_{j})\equiv 0\) with the boundary data \(\varphi_{j}\). By the domination principle, \(u_{j}\geq h_{j}+\underline{u}\) and \(u_{j}\) is a decreasing sequence. Since \(\varphi_{j}\) is uniformly bounded, so is \(h_{j}\). Therefore, the function \(u=\lim_{j}u_{j}\) is a required solution to the equation. We proceed to prove the theorem under the assumptions (a)-(d). Proof.: Recall that we assumed that subsolution \(\underline{u}\) is defined in a neighborhood \(\widetilde{\Omega}\) of \(\overline{\Omega}\). Hence, \(\underline{u}\in\widetilde{\mathcal{E}}_{0}(\Omega)\) in the sense of (8.2). By Proposition 2.9 we can find a decreasing sequence of smooth \(m-\omega\)-sh functions \(v_{j}\) defined in \(\widetilde{\Omega}\) and such that \(v_{j}\downarrow\underline{u}\) point-wise. Since \(\underline{u}\) is bounded, \(\{v_{j}\}\) is uniformly bounded. Next, let us write \[H_{m}(v_{j})=f_{j}dV\quad\text{in }\widetilde{\Omega},\] where \(dV\) is the Euclidean volume form on \(\mathbb{C}^{n}\). Observe that \(v_{j}\) is no longer zero on the boundary of \(\Omega\), however we can modify it by solving the Dirichlet problem to find \(\tilde{v}_{j}\in C^{0}(\overline{\Omega})\), \(m-\omega\)-sh satisfying \[H_{m}(\tilde{v}_{j})=\chi f_{j}dV\quad\text{in }\Omega,\quad\tilde{v}_{j}=0 \text{ on }\partial\Omega,\] where \(0\leq\chi\leq 1\) is the cut-off function in \(\Omega\) such that \(\chi\equiv 1\) on a neighborhood of \(\text{supp }\nu\) and \(\text{supp }\chi\subset\subset\Omega\). Since \(\underline{u}\) continuous on \(\partial\Omega\), by Dini's theorem, \(v_{j}\) converges uniformly to \(\underline{u}\) on \(\partial\Omega\). As a consequence of stability estimate for the right hand side in \(L^{p}\), we have \[\|\tilde{v}_{j}-v_{j}\|_{L^{\infty}(\Omega)}\leq\sup_{\partial\Omega}|\tilde{ v}_{j}-v_{j}|=\sup_{\partial\Omega}|v_{j}|.\] Hence, \(\{\tilde{v}_{j}\}\) is also uniformly bounded on \(\overline{\Omega}\). Also by Dini's theorem \(v_{j}\to\underline{u}\) uniformly on compact sets where \(\underline{u}\) is continuous. By the stability estimate above \(\tilde{v}_{j}\to\underline{u}\) uniformly on such compact sets. Combining with the quasi-continuity of \(\underline{u}\), we get also that \[\tilde{v}_{j}\text{ converges in capacity to }\underline{u}\text{ as }j\to\infty. \tag{8.10}\] Thus \(\check{v}_{j}\) have zero boundary values, but in general those are only continuous functions. Note also that \(H_{m}(\check{v}_{j})\) converges weakly to \(\nu\) whose support is compact in \(\Omega\). Thus, \[\sup_{j}\int_{\Omega}H_{m}(\check{v}_{j})\leq C.\] Moreover all \(\check{v}_{j}\) can be extended so that they form a uniformly bounded sequence in \(\widetilde{\mathcal{E}}_{0}(\Omega)\). In fact, since the supports of \(H_{m}(\check{v}_{j})\) are contained in supp \(\chi\), we can extend \(\check{v}_{j}\) to the neighborhood \(\widetilde{\Omega}\) by \(A\rho\) as in (8.9). We are now ready to produce a sequence of functions whose limit point will be the desired solution. 
By Radon-Nikodym's theorem we can write \(\mu=g\nu\) for a Borel measurable function \(0\leq g\leq 1\). Assume first that \(g\) is continuous (we will relax the assumption on \(g\) at the end of the proof) and solve for \(u_{j}\in SH_{m}(\omega)\cap C^{0}(\overline{\Omega})\), the equation \[H_{m}(u_{j})=g\chi f_{j}dV,\quad u_{j}=\varphi\text{ on }\partial\Omega.\] Set \[u=(\limsup_{j\to\infty}u_{j})^{*}. \tag{8.11}\] Let us show first that \(\{u_{j}\}\) is uniformly bounded. Indeed, let \(\psi\) be a smooth \(m-\omega\)-sh solution to the equation \((dd^{c}\psi)^{m}\wedge\omega^{n-m}=1\) in \(\overline{\Omega}\) with \(\psi=\varphi\) on \(\partial\Omega\). Thus, we may assume that \(\psi\) is \(m-\omega\)-sh on \(\widetilde{\Omega}\). At this point we used the smoothness of \(\varphi\). Furthermore, let \(h\in C^{0}(\overline{\Omega})\) be a \(m-\omega\)-sh solution to \[H_{m}(h)=0\quad\text{in }\Omega,\quad h=\varphi\text{ on }\partial\Omega.\] It follows from the domination principle that \[\psi+\check{v}_{j}\leq u_{j}\leq h.\] Note that the above lower and upper bounds of \(u_{j}\) are continuous on the boundary \(\partial\Omega\) and equal to \(\varphi\) there. So \(u_{j}\) and \(u\) have the same property. Thus passing to a subsequence we may assume that \[u_{j}\to u\text{ in }L^{1}(\Omega),\quad u_{j}\to u\quad\text{a.e. in }dV. \tag{8.12}\] The next step is to prove that \(H_{m}(u)=\mu\). Observe that \(\psi+\check{v}_{j}\) is defined in the neighborhood \(\widetilde{\Omega}\) of \(\overline{\Omega}\). This combined with the inequality \(\psi+\check{v}_{j}\leq u_{j}\) allows to extend \(u_{j}\) to \(\widetilde{\Omega}\) by setting \[\widetilde{u}_{j}=\begin{cases}\max\{u_{j},\psi+\check{v}_{j}\}&\text{ on } \Omega,\\ \psi+\check{v}_{j}&\text{ on }\widetilde{\Omega}\setminus\Omega.\end{cases}\] Using again the smoothness of \(\varphi\), we can find \(\psi^{\prime}\) a strictly \(m\)-sh function on a (possibly smaller) neighborhood \(\widetilde{\Omega}\) of \(\overline{\Omega}\) satisfying \[\psi^{\prime}=-\varphi\text{ on }\partial\Omega.\] Then, we have clearly \[\widehat{u}_{j}:=u_{j}+\psi^{\prime}\in\widetilde{\mathcal{E}}_{0}(\Omega) \tag{8.13}\] for all \(j\geq 1\). Consequently, we get the important uniform bound for the total mass of mixed Hessian operators. **Lemma 8.10**.: _Let \(T_{j,k}=(dd^{c}\tilde{u}_{j})^{s}\wedge(dd^{c}\check{v}_{k})^{\ell}\wedge\omega^{n -s-\ell}\), where \(0\leq s+\ell\leq m\). There exists a uniform constant \(C\) independent of \(j,k\) such that_ \[\int_{\Omega}T_{j,k}\leq C.\] Proof.: All functions belong to \(\widetilde{\mathcal{E}}_{0}(\Omega)\) defined on the fixed neighborhood \(\widetilde{\Omega}\) of \(\overline{\Omega}\). Thanks to Remark 8.9 we know that the extensions of \(\check{v}_{i}\) are uniformly bounded on \(\widetilde{\Omega}\). Hence, the extensions of \(u_{j}\) above are uniformly bounded as well. The proof follows from the CLN inequality. The next result corresponds to [13, Lemma 3.5]. **Lemma 8.11**.: _There exists a subsequence \(\{u_{j_{s}}\}\) such that for_ \[w_{s}:=\max\{u_{j_{s}},u-1/s\}\] _the following claims hold_ 1. \(\lim_{s\to\infty}\int_{\Omega}|u_{j_{s}}-u|H_{m}(u)=0\)_;_ 2. \(\lim_{s\to\infty}\int_{\Omega}|u_{j_{s}}-u|H_{m}(w_{s})=0\)_;_ 3. \(\lim_{s\to\infty}\int_{\Omega}|u_{j_{s}}-u|H_{m}(u_{j_{s}})=0\)_._ Proof.: (a) Since \(u\) is bounded in \(\Omega\) by Proposition 7.7 the measure \(H_{m}(u)\) vanishes on \(m\)-polar sets. 
Denote by \(\widehat{u}=u+\psi^{\prime}\), then \(\widehat{u}\) is the limit (a.e-\(dV\)) of the sequence \(\{\widehat{u}_{j}\}_{j\geq 1}\subset\widetilde{\mathcal{E}}_{0}(\Omega)\) from (8.13). Thus, \(\widehat{u}\in\widetilde{\mathcal{E}}_{0}(\Omega)\). Notice that \(\widehat{u}_{j}-u=u_{j}-u\), so the assumptions of Corollary 8.4 are satisfied and it concludes the proof of (a). Let us prove (b). Clearly, \(\widehat{w}_{s}=w_{s}+\psi^{\prime}\in\widetilde{\mathcal{E}}_{0}(\Omega)\). Since \(w_{s}\to u\) in capacity as \(s\to\infty\), the same convergence holds for \(\widehat{w}_{s}\to\widehat{u}\). It follows from Lemma 8.5 that \[0=\lim_{s\to\infty}\int_{\Omega}|\widehat{w}_{s}-\widehat{u}|H_{m}(\widehat{ w}_{s})\geq\lim_{s\to\infty}\int_{\Omega}|u_{j_{s}}-u|H_{m}(w_{s}).\] The proof of (b) finished. For the last item (c), we use the equation \[H_{m}(u_{j_{s}})=g\chi f_{j_{s}}dV\leq H_{m}(\check{v}_{j_{s}}),\] where the last inequality followed from the fact \(0\leq g\leq 1\). Taking into account the convergence in capacity of \(\check{v}_{j_{s}}\) to \(\underline{u}\), as \(j_{s}\to\infty\), the proof of (c) follows again from Lemma 8.5. We are in the position to conclude that \(u\) from (8.11) is indeed the solution. The argument of [13, Lemma 3.6] is readily applicable to conclude that there exists a subsequence \(\{u_{j_{s}}\}_{s\geq 1}\) of \(\{u_{j}\}_{j\geq 1}\) such that \[H_{m}(u_{j_{s}})\to H_{m}(u)\quad\text{weakly}.\] Hence, if \(0\leq g\leq 1\) is a continuous function whose support is compact in \(\Omega\), then there exists a unique bounded \(m-\omega\)-sh function with \(u=\varphi\) on \(\partial\Omega\) and \(H_{m}(u)=gH_{m}(\underline{u})\). The general case of a Borel function \(0\leq g\leq 1\) follows from the argument in [13, page 11] at the end of the proof of Theorem 3.1. ## 9. Hessian equations on Hermitian manifolds with boundary Let \((\overline{M},\omega)\) be a smooth compact Hermitian manifold of dimension \(n\) with non-empty boundary \(\partial M\). Then, \(\overline{M}=M\cup\partial M\), where \(M\) is a complex manifold of dimension \(n\). Let \(1\leq m\leq n\) be an integer and \(\alpha\in\Gamma_{m}(\omega)\) be a real \((1,1)\)-form. Recently Collins and Picard [13] solved the Dirichlet problem in \(M\) for the Hessian equation \((\alpha+dd^{c}u)^{m}\wedge\omega^{n-m}=f\omega^{n}\), for smooth data, assuming the existence of a subsolution. The goal of this section is to extend this result to the case of bounded functions. The special case of the Monge-Ampere equation was treated in [16, Theorem 1.2]. The theorem below is also a significant improvement of [14, Theorem 1.3]. Recall from [14, Definition 2.4, Lemma 9.10] that a function \(u:M\to[-\infty,+\infty)\) is called \((\alpha,m)-\omega\)-subharmonic if it can be written locally as a sum of a smooth function and a \(\omega\)-sh function, and globally for any collection \(\gamma_{1},...,\gamma_{m-1}\in\Gamma_{m}(M,\omega)\), \[(\alpha+dd^{c}u)\wedge\gamma_{1}\wedge\cdots\wedge\gamma_{m-1}\wedge\omega^{n -m}\geq 0\quad\text{ on }M \tag{9.1}\] in the weak sense of currents. Denote \(SH_{\alpha,m}(M,\omega)\) or \(SH_{\alpha,m}(\omega)\) be the set of all \((\alpha,m)-\omega\)-sh function on \(M\). If \(\Omega\) is a local coordinate chart on \(M\) and \(\rho\) is a strictly psh function on \(\Omega\) such that \[dd^{c}\rho\geq\alpha\quad\text{on }\Omega,\] then \(u+\rho\) is a \(m-\omega\)-sh function on \(\Omega\). 
Using this fact, we can easily extend the definition of the wedge product for currents associated to bounded \((\alpha,m)-\omega\)-sh functions by using partition of unity and the local one (Definition 3.4). Namely, write \(\tau=\alpha-dd^{c}\rho\) which is a smooth \((1,1)\)-form. Then, \(\alpha+dd^{c}u=dd^{c}(u+\rho)+\tau\). We define \[(\alpha+dd^{c}u)^{m}\wedge\omega^{n-m} :=\sum_{k=0}^{m}\binom{m}{k}[dd^{c}(u+\rho)]^{k}\wedge\tau^{m-k} \wedge\omega^{n-m}\] \[=\sum_{k=0}^{m}\binom{m}{k}\mathcal{L}_{k}(u+\rho)\wedge\tau^{m-k}.\] This gives a positive Radon measure on \(M\) by the weak convergence theorem. Similarly, the wedge product for bounded \((\alpha,m)-\omega\)-sh functions \(u_{1},...,u_{m}\) \[(\alpha+dd^{c}u_{1})\wedge\cdots\wedge(\alpha+dd^{c}u_{m})\wedge\omega^{n-m}\] is a well-defined positive Radon measure. Since the definition is local, all local results for \(m-\omega\)-sh functions in a local coordinate chart transfer to \((\alpha,m)-\omega\)-sh functions on the manifold \(M\). For simplicity we denote \(\alpha_{u}:=\alpha+dd^{c}u\) and \[H_{m,\alpha}(u)=(\alpha+dd^{c}u)^{m}\wedge\omega^{n-m}.\] Now, given a positive Radon measure \(\mu\) on \(M\) and a continuous boundary data \(\varphi\in C^{0}(\partial M,\mathbb{R})\) we wish to solve the Dirichlet problem \[\begin{cases}u\in SH_{\alpha,m}(\omega)\cap L^{\infty}(\overline{M}),\\ H_{m,\alpha}(u)=\mu,\\ \lim_{z\to x}u(x)=\varphi(x)\quad\text{for }x\in\partial M.\end{cases} \tag{9.2}\] Let us state a general existence result. **Theorem 9.1**.: _Assume there exists a bounded \((\alpha,m)-\omega\)-sh function \(\underline{u}\) on \(M\) such that \(\lim_{z\to x}\underline{u}(z)=\varphi(x)\) for \(x\in\partial M\) and \(H_{m,\alpha}(\underline{u})\geq\mu\quad\text{on }M.\) Then, there is a solution to the Dirichlet problem (9.2)._ **Remark 9.2**.: In the general setting the uniqueness is not known, unlike in a bounded strictly \(m\)-pseudoconvex domain (Theorem 8.7). On the other hand, if we assume further that either the manifold \(M\) is Stein, or both \(\omega\) and \(\alpha\) are closed forms, then the solution will be unique. As we use the Perron envelope method to show the theorem the most important ingredient is the proof of the special case \(M\equiv\Omega\) a ball in \(\mathbb{C}^{n}\). **Lemma 9.3**.: _Let \(\varphi\in C^{0}(\partial\Omega,\mathbb{R})\). Suppose \(\mu\leq H_{m}(v)\) for some bounded \(m-\omega\)-sh function \(v\) in \(\Omega\) with \(\lim_{z\to x}v(z)=0\) for \(x\in\partial\Omega\). Then, there exists a unique \((\alpha,m)-\omega\)-sh function \(u\) in \(\Omega\) solving_ \[H_{m,\alpha}(u)=\mu\quad\text{in }\Omega,\quad\lim_{z\to x}u(z)=\varphi(x) \text{ for }x\in\partial\Omega. \tag{9.3}\] The proof of this lemma is a straightforward extension of Theorem 8.7 so we omit the proof. Proof of Theorem 9.1.: Let us proceed with the proof of the bounded subsolution theorem on \(\overline{M}\). Consider the following set of functions \[\mathcal{B}(\varphi,\mu):=\left\{w\in SH_{\alpha,m}(M,\omega)\cap L^{\infty}( \overline{M}):H_{m,\alpha}(w)\geq\mu,w^{*}_{|_{\partial M}}\leq\varphi\right\}, \tag{9.4}\] where \(w^{*}(x)=\limsup_{M\geq z\to x}w(z)\) for every \(x\in\partial M\). Clearly, \(\underline{u}\in\mathcal{B}(\varphi,\mu)\). Let us solve the linear PDE finding \(h_{1}\in C^{0}(\overline{M},\mathbb{R})\) such that \[(\alpha+dd^{c}h_{1})\wedge\omega^{n-1}=0,\] \[h_{1}=\varphi\quad\text{on }\partial M. 
\tag{9.5}\] Since \((\alpha+dd^{c}w)\wedge\omega^{n-1}\geq 0\) for \(w\in SH_{\alpha,m}(M,\omega)\), the maximum principle for the Laplace operator with respect to \(\omega\) gives \[w\leq h_{1}\quad\text{for all }w\in\mathcal{B}(\varphi,\mu).\] Set \[u(z)=\sup_{w\in\mathcal{B}(\varphi,\mu)}w(z)\quad\text{for every }z\in M. \tag{9.6}\] Then, by Choquet's lemma and the fact that \(\mathcal{B}(\varphi,\mu)\) satisfies the lattice property, \(u=u^{*}\in\mathcal{B}(\varphi,\mu)\). Again by the definition of \(u\), we have \(\underline{u}\leq u\leq h_{1}\). It follows that \[\lim_{z\to x}u(z)=\varphi(x)\quad\text{for every }x\in\partial M. \tag{9.7}\] **Lemma 9.4** (Lift).: _Let \(v\in\mathcal{B}(\varphi,\mu)\). Let \(B\subset\subset M\) be a small coordinate ball (a chart biholomorphic to a ball in \(\mathbb{C}^{n}\)). Then, there exists \(\widetilde{v}\in\mathcal{B}(\varphi,\mu)\) such that \(v\leq\widetilde{v}\) and \(H_{m,\alpha}(\widetilde{v})=\mu\) on \(B\)._ Proof.: Given the solution in a small coordinate ball in Lemma 9.3, the proof from [10, Lemma 3.7] is readily adaptable here. By (9.7) it remains to show that the function \(u\) above satisfies \(H_{m,\alpha}(u)=\mu\). Let \(B\subset\subset M\) be a small coordinate ball. It is enough to check \(H_{m,\alpha}(u)=\mu\) on \(B\). Let \(\widetilde{u}\) be the lift of \(u\) as in Lemma 9.4. It follows that \(\widetilde{u}\geq u\) and \(H_{m,\alpha}(\widetilde{u})=\mu\) on \(B\). However, by the definition \(\widetilde{u}\leq u\) on \(M\). Thus, \(\widetilde{u}=u\) on \(B\), in particular on \(B\) we have \(H_{m,\alpha}(\widetilde{u})=H_{m,\alpha}(u)=\mu\) **Remark 9.5**.: We can also study the continuity of the solution to the Dirichlet problem (9.2) for a measure that is well-dominated by capacity as in [14, Section 4] and the weak solution to the complex Hessian type equations such as a generalization of the Monge-Ampere equation in [14]. We leave these to the future projects.
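For concreteness, in the lowest nontrivial case \(m=2\) the local definition of \(H_{m,\alpha}\) given in Section 9 reduces, with \(\tau=\alpha-dd^{c}\rho\) as before, to a direct specialization of the binomial expansion above: \[(\alpha+dd^{c}u)^{2}\wedge\omega^{n-2}=[dd^{c}(u+\rho)]^{2}\wedge\omega^{n-2}+2\,dd^{c}(u+\rho)\wedge\tau\wedge\omega^{n-2}+\tau^{2}\wedge\omega^{n-2},\] where each term on the right-hand side is well defined for bounded \((\alpha,m)-\omega\)-sh functions by the local theory, since \(u+\rho\) is \(m-\omega\)-sh on the chart and \(\tau\) is a smooth \((1,1)\)-form.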
2308.04470
D-Score: A Synapse-Inspired Approach for Filter Pruning
This paper introduces a new aspect for determining the rank of the unimportant filters for filter pruning on convolutional neural networks (CNNs). In the human synaptic system, there are two important channels known as excitatory and inhibitory neurotransmitters that transmit a signal from a neuron to a cell. Adopting the neuroscientific perspective, we propose a synapse-inspired filter pruning method, namely Dynamic Score (D-Score). D-Score analyzes the independent importance of positive and negative weights in the filters and ranks the independent importance by assigning scores. Filters having low overall scores, and thus low impact on the accuracy of neural networks are pruned. The experimental results on CIFAR-10 and ImageNet datasets demonstrate the effectiveness of our proposed method by reducing notable amounts of FLOPs and Params without significant Acc. Drop.
Doyoung Park, Jinsoo Kim, Jina Nam, Jooyoung Chang, Sang Min Park
2023-08-08T08:45:08Z
http://arxiv.org/abs/2308.04470v1
# D-Score: A Synapse-inspired Approach for Filter Pruning ###### Abstract This paper introduces a new aspect for determining the rank of the unimportant filters for filter pruning on convolutional neural networks (CNNs). In the human synaptic system, there are two important channels known as excitatory and inhibitory neurotransmitters that transmit a signal from a neuron to a cell. Adopting the neuroscientific perspective, we propose a synapse-inspired filter pruning method, namely Dynamic Score (D-Score). D-Score analyzes the independent importance of positive and negative weights in the filters and ranks the independent importance by assigning scores. Filters having low overall scores, and thus low impact on the accuracy of neural networks are pruned. The experimental results on CIFAR-10 and ImageNet datasets demonstrate the effectiveness of our proposed method by reducing notable amounts of FLOPs and Params without significant Acc. Drop. ## 1 Introduction Convolutional neural networks (CNNs) have proven their usefulness in many computer vision fields: face recognition, image classification, and human pose estimation [1, 11, 29, 36, 37]. However, to fully utilize their superior performances, high computational power and large memory space are mandatory [6]. Such limitations refrain from deploying CNNs on resource-constrained devices like embedded systems and mobile phones [26, 38]. To solve the problems of CNNs with a large number of parameters and model size, numerous model compression methods which are mainly classified as low-rank factorization [31], knowledge distillation [5, 17], quantization [3, 12], and pruning [13, 22] have been widely studied over the past years. Among the compression methods, pruning is a technique to remove unimportant and redundant parameters from CNNs and has demonstrated its effectiveness by reducing the computational cost and increasing the inference speed [7]. Depending on the type of parameters for elimination, pruning can be renamed as weight pruning [13], neuron pruning [35], and filter pruning [22]. Due to the technical mechanism of weight pruning and neuron pruning, they remove specific weight connections and neurons, and this introduces sparsity in the network. On the contrary, filter pruning discards the entire unnecessary filters from the neural networks and this maintains the structured network that guarantees the compatibility of the filter-pruned networks with existing libraries and hardware [22, 28]. In the current literature on filter pruning, many studies [15, 22, 42] analyze the weight values in the filters to select less important filters for pruning. Recent study by [22] employed \(\ell_{1}\)-norm method to select the unimportant filters for filter pruning (Figure 1). However, to the best of our knowledge, no study has separately considered the positive and negative weights in the filters for determining unimportant filters for pruning. In the synaptic transmission system of a biological neural network, the basis of an artificial neural network, there coexists excitatory and inhibitory neurotransmitters. The excitatory neurotransmitter enhances or increases the activation in the postsynaptic membrane, while the inhibitory neurotransmitter decreases or prevents the activation [4, 18, 19]. Similarly, filters of CNN models are composed of positive weights and negative weights. 
Considering the neuroscientific perspective, we propose a new filter pruning approach that separately analyzes the positive and negative weights in the filters, namely Dynamic Score (D-Score). D-Score assigns scores to the filters for filter pruning based on their independent importance of positive and negative weights (Figure 1). In addition to D-Score, two variants called Dynamic Step (D-Step) and Dynamic Step with Geometric Median (D-Step GM) are also introduced as applied concepts of our proposed approach. Our contributions of this paper are summarized as below: * We propose a new filter pruning technique called D-Score, that independently processes and assigns scores to the positive and negative weights in the filters based on their independent importance. In addition to D-Score, two more applied concepts, namely D-Step and D-Step GM, are also introduced. All three approaches yield improved results as compared to other proposed filter pruning methods in terms of the amount of reduction in floating points operation per second (FLOPs) and the number of parameters (Params), and accuracy drop (Acc. Drop). * We prove that the correlation of positive and negative weights when determining the rank of the unimportant filters is crucial. ## 2 Related works Several studies have stated that one of the essential directions of filter pruning is to develop innovative methods for selecting redundant filters for pruning [7, 24]. As discussed in the introduction, [22] employed \(\ell_{1}\)-norm method to select and prune filters having a small effect on the accuracy of the neural network. [40] proposed an iterative filter pruning method that used Taylor expansion as the estimation for determining the rank of the unimportant filters. [15] introduced Figure 1: Difference in the concept of \(\ell_{1}\)-norm approach and D-Score approach for determining the rank of the filters. Different filters are pruned as the rank of the filters varies depending on the analyzing approaches. For \(\ell_{1}\)-norm approach, the rank of the filters is determined by calculating the importance using \(\ell_{1}\)-norm of each filter. For D-Score approach, the rank of the filters is determined by calculating the independent importance of positive and negative weights and assigning scores to them. a soft pruning method which used \(\ell_{p}\)-norm method to select the unimportant filters and prune them by adjusting weights as zero. For practical purpose, [15] employed \(\ell_{2}\)-norm to select the unimportant filters. [39] introduced a stochastic training method that froze certain channels to be constant for pruning. [16] adopted geometric median to calculate the Euclidean spaces [9] to determine filters for pruning based on their redundancy, not importance. [24] analyzed the corresponding feature maps of the filters and pruned the filters having low-rank feature maps. [42] employed a clustering algorithm called spectral clustering to select a group of filters and evaluate the importance by \(\ell_{p}\)-norm for pruning. [33] proposed an adaptive filter pruning method that iteratively pruned the filters when the accuracy drop was within the acceptable range. [28] detected weak channels first and pruned their associated filters in the previous layer. 
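For later comparison, the \(\ell_{1}\)-norm criterion of [22] amounts to ranking filters by the sum of the absolute values of their weights; the NumPy sketch below is a schematic rendering of that baseline under our own conventions, not the authors' code:

```python
import numpy as np

def l1_ranking(filters):
    """PFEC-style baseline [22]: filters with the smallest l1-norm are considered
    least important and are pruned first.  filters has shape (num_filters, C, K, K)."""
    scores = np.abs(filters.reshape(len(filters), -1)).sum(axis=1)
    return np.argsort(scores)  # ascending: pruning candidates come first
```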
## 3 The Proposed Methods ### Independent ranking of positive and negative filters Let \(\mathcal{F}^{i}_{j}\) denotes the \(j\)th filter in \(i\)th layer, and each filter \(\mathcal{F}^{i}_{j}\) comprises of \(\mathds{R}^{C_{i}\times K_{i}\times K_{i}}\), where \(C_{i}\) stands for channel, and \(K_{i}\) for width and height of a kernel. \(\mathcal{F}^{i}_{j}\) is composed of positive and negative weights (Eq. 1), \[\mathcal{F}^{i}_{j}=\mathcal{F}^{i}_{j,+}+\mathcal{F}^{i}_{j,-} \tag{1}\] and the independent sum of the positive weights and negative weights for each filter are denoted as (Eq. 2). \[\mathcal{F}^{i}_{j,+}=\sum_{n=1}^{N}\mathcal{W}_{n,+}\leavevmode\nobreak\ \leavevmode\nobreak\ and \leavevmode\nobreak\ \leavevmode\nobreak\ \mathcal{F}^{i}_{j,-}=\sum_{n=1}^{N}\mathcal{W}_{n,-} \tag{2}\] Dynamic ScoreThe detailed procedure of D-Score method is as follows: 1. Using the values calculated from Eq. 2, sort the positive filters in ascending order and the negative filters in descending order. 2. Assign scores of \([1,j]\) to the positive and negative filters independently according to their sorted orders. 3. Find the overall score of the filter \(\mathcal{F}^{i}_{j}\) using the Eq. 1, and sort the filters \(\mathcal{F}^{i}_{j}\) in ascending order of their overall scores. 4. Prune the filters with small scores based on the sensitivity analysis (Section 3.2). Dynamic StepThe detailed procedure of D-Step method is as follows: 1. Using the values calculated from Eq. 2, sort the positive filters in ascending order and the negative filters in descending order. 2. Set a buffer size equivalent to the pruning threshold (Section 3.3) for step-wise comparison of sorted positive and negative filters. 3. Fill the buffer with filters \(\mathcal{F}^{i}_{j}\) in which the values of their positive and negative filters are simultaneously positioned close to 0. 4. Prune the filters positioned front in the buffer based on the sensitivity analysis (Section 3.2). Figure 2: Sensitivity analysis of VGG-16 trained on CIFAR-10 dataset using D-Score, D-Step, and D-Step GM. Different methods produce different sensitivity patterns. Dynamic Step with Geometric MedianThe detailed procedure of D-Step GM method is as follows: 1. Applying the idea discussed in [9, 16], calculate the independent Euclidean distances of positive and negative filters and sort them in the ascending order of their distances. 2. Set a buffer size equivalent to the pruning threshold (Section 3.3) for step-wise comparison of sorted positive and negative filters. 3. Fill the buffer with filters \(\mathcal{F}_{j}^{i}\) in which the values of their positive and negative filters are simultaneously positioned close to the shortest distance. 4. Prune the filters positioned close to the shortest distance in the buffer based on the sensitivity analysis (Section 3.2). ### Pruning Sensitivity Analysis To determine the sensitivity of individual layers to pruning, we iteratively pruned each layer and evaluated the accuracy of the pruned network in every step [22]. This procedure was repeated for all the proposed methods as shown in Figure 2, 3. Figure 2 shows that different pruning methods produced different sensitivities to pruning in the same layer. Figure 3 shows that the accuracy of layers that are sensitive to pruning decreased significantly as more filters were pruned, and this pattern is clearly noticeable in the first convolutional layer of ResNets. 
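To make the ranking step concrete, below is a minimal NumPy sketch of one possible reading of the D-Score procedure (Steps 1-4 above). The tensor layout, the convention that score 1 goes to the first element of each sorted order, and the toy layer are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def d_score_ranking(filters):
    """Rank the filters of one convolutional layer by D-Score.

    filters: array of shape (num_filters, C, K, K).
    Returns filter indices sorted from lowest to highest overall score,
    so the leading entries are the pruning candidates.
    """
    w = filters.reshape(len(filters), -1)
    pos_sum = np.where(w > 0, w, 0.0).sum(axis=1)   # F_{j,+} of Eq. (2)
    neg_sum = np.where(w < 0, w, 0.0).sum(axis=1)   # F_{j,-} of Eq. (2)

    # Step 1: positive sums sorted ascending, negative sums sorted descending.
    # Step 2: assign scores 1..j independently according to the sorted orders.
    n = len(w)
    pos_score = np.empty(n, dtype=int)
    pos_score[np.argsort(pos_sum)] = np.arange(1, n + 1)
    neg_score = np.empty(n, dtype=int)
    neg_score[np.argsort(-neg_sum)] = np.arange(1, n + 1)

    # Step 3: overall score per filter, sorted in ascending order.
    return np.argsort(pos_score + neg_score)

# Toy usage: select 30% of the filters of a random layer for pruning (Step 4
# would instead use the ratio given by the sensitivity analysis of Section 3.2).
rng = np.random.default_rng(0)
layer = rng.normal(size=(64, 32, 3, 3))
to_prune = d_score_ranking(layer)[: int(0.3 * 64)]
```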
### Parallel Pruning and Retraining Pruned Models Several studies have demonstrated that pruning convolutional layers decreases computational load and pruning fully connected layers decreases sizes of neural networks [2, 22, 35]. For neural networks comprised of convolutional and fully connected layers such as VGGNet [32], all relevant layers were pruned away based on the pruning sensitivity analysis. For neural networks containing residual blocks such as ResNet [14], we omitted to prune the last convolutional layer in each block due to the specificity of their architectures. Based on the pruning sensitivity analysis, we set a pruning threshold accuracy (acc) that was used to calculate the number of filters to be eliminated in all applicable layers for parallel pruning. The parallel pruning is a time-saving technique as it prunes the different number of filters in all applicable layers at once. For example, D-Score in Figure 2 Figure 3: Sensitivity analysis of ResNet18, 34, 50 trained on ImageNet dataset using D-Score. shows that setting the pruning threshold accuracy as 0.9 (acc. 90%) pruned 30%, 45%, 90% of filters at once in layer 1, 7, 11 respectively. Upon completion of parallel pruning of a neural network, the pruned network has poor performance compared to the original model. The performance of the pruned network can be restored close to the original network by retraining for fewer epochs than the one used for training the original model. ## 4 Experiments We evaluated the performance of the proposed techniques based on the sensitivity analysis with two representative datasets and various network structures: CIFAR-10 dataset [20] with VGG-16 [32], and ImageNet ILSVRC-2012 dataset [30] with ResNet18, 34, and 50 [14]. Since D-Step and D-Step GM are the applied concepts of D-Score, the performance of D-Step and D-Step GM were only experimented with CIFAR-10 dataset. We calculated top-1 accuracy for CIFAR-10 dataset, and top-1 and top-5 accuracy for ImageNet dataset respectively. All three techniques yielded outperforming results. ### Experimental Setup Initial model for CIFAR-10Since CIFAR-10 is a small dataset, we trained VGG-16 from scratch for 450 epochs with a batch size of 64. The initial learning rate was set to 0.01 and decayed by 10% in every 20 epochs of training. During training the baseline model, the data augmentation with horizontal flip, random width and height shifts, and rotation of 15 degrees was used. Initial model for ImageNetSince ImageNet is a large dataset, we adopted pre-trained models for ResNet18, 34, and 50. Before pruning ResNet models, the pre-trained ResNet18, 34, and 50 were retrained with ImageNet in TFRecord format for 10 epochs with batch sizes of 512, 256, and 128 respectively. During this process, the upper and lower bounds of the learning rates for respective models were derived by the learning rate finder [34], and the data augmentation with horizontal flip, random cropping, random color distortion, and rotation of 15 degrees was used. ### Experiment Results Cifar-10We compared the performance of our proposed methods with other pruning methods in terms of the Acc. Drop, reduction in Params and FLOPs. Note that we only calculated the top-1 accuracy for CIFAR-10 dataset as it only contains 10 classes, and the negative accuracy drop indicates that the accuracy of the pruned and retrained model is higher than the accuracy of the original model. According to Table 1, all of our proposed methods significantly outperformed other methods on VGG-16. 
In comparison with [22], another filter importance ranking-based approach, the performance of D-Step GM was substantially better as D-Step GM yielded similar Acc. Drop with significantly higher reduction in both Params and FLOPs. Among our proposed methods, D-Step GM yielded the highest reduction in Params by 87.16% with an increase in the final accuracy by 0.1%. For VGG-16 with CIFAR-10 dataset, the experimental result implies that the independent ranking of the positive and negative weights to select the unimportant filters for pruning led to a high reduction in Params and FLOPs without significant Acc. Drop. \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Approach} & \multirow{2}{*}{Acc. Drop(\%)} & Params & FLOPs \\ & & & Reduction(\%) & Reduction(\%) \\ \hline \multirow{8}{*}{VGG-16} & PFEC [22] & **-0.15** & 64.0 & 34.2 \\ & FPGM [16] & 0.04 & **–** & 34.2 \\ \cline{1-1} & NSP [41] & -0.04 & **–** & 54.0 \\ \cline{1-1} & HRank [24] & 0.53 & 82.9 & 53.5 \\ \cline{1-1} & NS [27] & -0.14 & **88.52** & 51.0 \\ \cline{1-1} & Ours (D-Score) & 0.16 & 87.03 & 64.81 \\ \cline{1-1} & Ours (D-Step) & 0.12 & 86.70 & **65.40** \\ \cline{1-1} & Ours (D-Step GM) & -0.10 & 87.16 & 64.37 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of our methods and other pruning methods for VGG-16 trained on CIFAR-10 dataset. For CIFAR-10, top-1 Acc. Drop, Params reduction, and FLOPs reduction are compared. The best performance is in bold. ImageNetWe compared the performance of D-Score with other pruning methods in terms of Acc. Drop and reduction in FLOPs. In contrast to CIFAR-10, we calculated the top-1 and top-5 accuracy for ImageNet. According to Table 2, when D-Score yielded the highest reduction in FLOPs for ResNet18 (49.24%) and ResNet34 (43.01%), the top-1 Acc. Drop by D-Score in each model was smaller compared to that of the other pruning methods with a similar reduction in FLOPs. When D-Score yielded the highest reduction in FLOPs for ResNet50 (53.64%), it also had higher top-1 and top-5 Acc. Drop than several other methods. This result can be due to the tradeoff between the reduction in FLOPs and recovery for accuracy. For ResNet18, D-Score showed remarkably better performance with the smallest top-1 Acc. Drop (0.16%) and top-5 Acc. Drop (0.03%) with a higher reduction in FLOPs than [23]. For ResNet34, D-Score was substantially better than [22], another filter importance ranking based-approach, by resulting in small Acc. Drop with higher reduction in FLOPs. This result implies that the D-Score was an efficient method to maintain a fine balance between the Acc. Drop and reduction in FLOPs. ### Analysis In this section, we compared one of our proposed methods, D-Score, with another filter importance ranking based-approach [22], that employed \(\ell_{1}\)-norm for calculating the filter importance. Visualization of Filters and Feature mapsBased on the pruning sensitivity analysis (Figure 2), we pruned 20% of filters using D-Score. Figure 4 reveals that depending on the methods for ranking the importance of filters, their overall ranking varied. Different ranking of filters pruned different combinations of filters and this notably influenced the reduction in FLOPs and Params. According to Table 1, while showing similar Acc. Drop, D-Score yielded 87.03% reduction in Params and 64.81% reduction in FLOPs while \(\ell_{1}\)-norm-based approach [22] resulted in 64% reduction in Params and 34.3% reduction in FLOPs. 
Figure 4 shows that certain corresponding feature maps of pruned filters were replaceable by the remaining feature maps. The feature maps of pruned filters 0 and 14 by D-Score were similar to the feature maps of the remaining filters 7 and 51. On the contrary, \(\ell_{1}\)-norm removed filters 4 and 17 which produced unique feature maps. Weight Distribution after PruningFor comparison, we pruned the same number of filters, 20% of filters as discussed in the previous subsection, using both D-Score and \(\ell_{1}\)-norm. Figure 5 shows that a neural network pruned by D-Score contained filters composed of either more positive or more negative weights. As discussed in the above subsection, retaining positive-prone and negative-prone filters in CNNs facilitated a higher reduction in Params and FLOPs while \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Approach} & Top-1 & Top-5 & FLOPs \\ & & Acc. Drop(\%) & Acc. Drop(\%) & Reduction(\%) \\ \hline \multirow{8}{*}{ResNet18} & PFP [23] & 1.08 & 0.50 & 19.99 \\ & Ours (D-Score) & **0.16** & **0.03** & 23.93 \\ & FPGM [16] & 1.87 & 1.15 & 41.8 \\ & FBS [10] & 2.54 & 1.46 & – \\ & LCCN [8] & 3.65 & 2.30 & 34.6 \\ & SFP [15] & 3.18 & 1.85 & 41.8 \\ & Ours (D-Score) & 1.76 & 0.96 & **49.24** \\ \hline \multirow{8}{*}{ResNet34} & PFEC [22] & 1.06 & – & 24.2 \\ & Ours (D-Score) & 0.78 & 0.36 & 30.25 \\ & FPGM[16] & 1.29 & 0.54 & 41.1 \\ & LCCN [8] & **0.43** & **0.17** & 24.8 \\ & SFP [15] & 2.09 & 1.29 & 41.1 \\ & Ours (D-Score) & 1.72 & 0.87 & **43.01** \\ \hline \multirow{8}{*}{ResNet50} & PFP [23] & **0.22** & **0.06** & 10.82 \\ & Ours (D-Score) & 1.23 & 1.01 & 14.78 \\ \cline{1-1} & ThiNet [28] & 0.84 & 0.47 & 36.7 \\ \cline{1-1} & SFP [15] & 14.01 & 8.27 & 41.8 \\ \cline{1-1} & GDFP [25] & 2.52 & 1.25 & 41.97 \\ \cline{1-1} & FPGM [16] & 1.32 & 0.55 & 53.5 \\ \cline{1-1} & Ours (D-Score) & 1.99 & 1.25 & **53.64** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of our method and other filter pruning methods for ResNet18, 34, 50 trained on ImageNet dataset. For ImageNet, top-1 and top-5 Acc. Drop, and FLOPs reduction are compared. The best performance is in bold. Figure 4: Visualization of filters and the corresponding feature maps of the second convolutional layer of VGG-16 ranked by D-Score and \(\ell_{1}\)-norm. The filters and feature maps are displayed horizontally in the ascending order of their ranks. For filters of D-Score, we color the positive-prone and negative-prone weights red and blue respectively. Filters and feature maps in the green boxes are to be pruned. For D-Score, 20% of filters are to be pruned based on the sensitivity analysis. For comparison, the same number of filters and feature maps to be pruned using \(\ell_{1}\)-norm are marked with the green boxes. Figure 5: Ratio of positive and negative weights in the remaining filters after pruning by D-Score and \(\ell_{1}\)-norm. For comparison, only the filters with the different indices among the remaining filters are visualized. maintaining similar Acc. Drop compared to \(\ell_{1}\)-norm-based approach. Higher reduction in Params and FLOPs leads to higher compression rate for model size and faster inference speed. Therefore, it can be deduced that preserving positive-prone or negative-prone filters in CNNs played an important role in reducing more Params and FLOPs without significant Acc. Drop, similar to the importance of the excitatory and inhibitory neurons in the human synaptic system. 
## 5 Conclusion In this paper, we propose a synapse-inspired innovative method for determining the rank of the unimportant filters in CNNs for filter pruning. For neurotransmission in the synapse, both excitatory and inhibitory neurotransmitters, responsible for increasing and decreasing the activation respectively, play a decisive role in signal transmission. Similarly, we selected the unimportant filters for pruning by measuring the independent importance of positive and negative weights in the filter. To the best of our knowledge, this is the first study to consider the correlation of positive and negative weights in the filters when determining the rank of the filters for filter pruning. We showed that neural networks pruned by our method preserved positive-prone or negative-prone filters and this resulted in reducing more Params and FLOPs without significant Acc. Drop. However, our study includes several limitations. First, our study was only conducted with two types of models, VGGNet and ResNet. Second, the applied concepts of D-Score, namely D-Step and D-Step GM were not experimented with ResNet. Therefore, validation of the performance of D-Score and the applied concepts, D-Step and D-Step GM, with other types of models such as MobileNet remains as our future work. Through this study, we demonstrated that the correlation of positive and negative weights in the filters is crucial when determining the unimportant filters for pruning.
2306.12966
Whitham modulation theory for the Zakharov-Kuznetsov equation and transverse instability of its periodic traveling wave solutions
We derive the Whitham modulation equations for the Zakharov-Kuznetsov equation via a multiple scales expansion and averaging two conservation laws over one oscillation period of its periodic traveling wave solutions. We then use the Whitham modulation equations to study the transverse stability of the periodic traveling wave solutions. We find that all such solutions are linearly unstable, and we obtain an explicit expression for the growth rate of the most unstable wave numbers. We validate these predictions by linearizing the equation around its periodic solutions and solving the resulting eigenvalue problem numerically. Finally, we calculate the growth rate of the solitary waves analytically. The predictions of Whitham modulation theory are in excellent agreement with both of these approaches.
Gino Biondini, Alexander Chernyavsky
2023-06-22T15:24:57Z
http://arxiv.org/abs/2306.12966v1
Whitham modulation theory for the Zakharov-Kuznetsov equation and transverse instability of its periodic traveling wave solutions ###### Abstract We derive the Whitham modulation equations for the Zakharov-Kuznetsov equation via a multiple scales expansion and averaging two conservation laws over one oscillation period of its periodic traveling wave solutions. We then use the Whitham modulation equations to study the transverse stability of the periodic traveling wave solutions. We find that all such solutions are linearly unstable, and we obtain an explicit expression for the growth rate of the most unstable wave numbers. We validate these predictions by linearizing the equation around its periodic solutions and solving the resulting eigenvalue problem numerically. Finally, we calculate the growth rate of the solitary waves analytically. The predictions of Whitham modulation theory are in excellent agreement with both of these approaches. Dedicated to Thanksis Fokas on the occasion of his seventieth birthday. ## 1 Introduction One of the most striking effects that can arise from the combination of dispersion and nonlinearity is the formation of dispersive shock waves (DSW), which are coherent, non-stationary oscillatory structures which typically arise in the context of small dispersion problems, and which provide a dispersive counterpart to classical shock waves [46] (e.g., see the review [20] and references therein). Dispersive shock waves are known to form in surface water waves (where they are known as undular bores), internal waves, nonlinear optics, the atmosphere, Bose-Einstein condensates, and beyond. Because of their ubiquity in nature, the study of DSWs continues to attract considerable interest worldwide. A powerful tool to study small dispersion problems is Whitham modulation theory [48, 49] (or Whitham theory for brevity). Looking at a DSW as a slow modulation of the periodic traveling wave solutions of the underlying partial differential equation (PDE), Whitham theory allows one to derive the so-called Whitham modulation equations (or Whitham equations for brevity), that govern the evolution of these periodic traveling wave solutions over longer spatial and temporal scales. The Whitham equations are a system of first-order, quasi-linear PDEs. For integrable equations in one spatial dimension, the inverse scattering transform (IST) [5, 7, 38] can also be used to study small dispersion limits (e.g., see [33, 8, 13, 14] and references therein). However, Whitham theory is more broadly applicable compared to IST, because the former does not require integrability of the original PDE, and therefore it can also be applied to non-integrable PDEs. Moreover, even if original PDE is integrable, in many cases Whitham theory is still useful because it allows one to obtain a leading-order approximation of the solutions more easily. Because of this, Whitham theory has been applied with great success to many nonlinear wave equations in one spatial dimension (again, see [20] and references therein). Until recently, however, small dispersion limits in more than one spatial dimension had been much less studied. Recently, one of us derived the Whitham modulation equations for the Kadomtsev-Petviashvili (KP) equation [3], the Benjamin-Ono equation [4] and a class of equations of KP type [2]. He then studied the properties of the resulting system of equations [11, 10] and used it to study a variety of initial value problems of physical interest [42, 43, 44]. 
The Whitham modulation equations for the nonlinear Schrodinger (NLS) equation in two [6] and three [1] spatial dimensions were also recently derived. In this work we continue this program of study, aimed at generalizing and applying Whitham modulation theory to nonlinear wave equations in two and three spatial dimensions. Specifically, we derive the Whitham modulation equations for another physically relevant model, namely, the Zakharov-Kuznetsov equation, and we use the resulting system of equations to study the transverse stability of its periodic traveling wave solutions. The Zakharov-Kuznetsov (ZK) equation [51] is a physical model arising in many different contexts, including fusion plasmas and geophysical fluids [25], magnetized plasmas [32; 51], vortex soliton theory [39] and wave turbulence [37]. In \(N\) spatial dimensions and in the semiclassical scaling, the ZK equation is written as \[u_{t}+uu_{x_{1}}+\epsilon^{2}(\Delta u)_{x_{1}}=0, \tag{1.1}\] where \(\mathbf{x}=(x_{1},\ldots,x_{N})\) are the spatial coordinates, \(\Delta=\partial_{x_{1}x_{1}}+\cdots+\partial_{x_{N}x_{N}}\) is the Laplacian operator, and \(0<\epsilon\ll 1\) is a small parameter that quantifies the relative magnitude of dispersive effects compared to nonlinear ones. Note that the first spatial coordinate plays a special role compared to the other ones. Accordingly, for brevity we will simply write \(x=x_{1}\) below. When solutions are independent of \(x_{2},\ldots,x_{N}\), the ZK equation (1.1) reduces to the celebrated Korteweg-de Vries (KdV) equation. Therefore, the ZK equation is, like the Kadomtsev-Petviashvili (KP) equation, a multi-dimensional generalization of the KdV equation. Unlike the KdV equation and the KP equation, however, the ZK equation does not appear to be integrable. (To avoid confusion, we mention that [37] refers to (1.1) as the Petviashvili equation.) The well-posedness of certain initial value problems and initial-boundary value problem for (1.1) was studied in [22; 24; 29; 34; 45], and the decay rate of localized solutions was studied in [35; 36]. The stability of its solitary wave solutions was studied with various methods by several authors [9; 15; 17; 21; 30; 31; 41; 50], and that of its periodic solutions was studied in [26]. Finally, a wave kinetic equation for (1.1) was derived using formal methods in [37] and rigorously in [47] for a stochastic perturbation of (1.1) on a lattice. Despite its similarities with the KP equation, the ZK equation (1.1) is not of KP type in the sense of [2], because (1.1) is fully evolutionary, i.e., no auxiliary field is present. Therefore the methodology presented in [2] does not apply. Specifically, the ZK equation (1.1) differs from the KP equation in two important respects: (i) the terms involving derivatives with respect to the transverse variables \(x_{2},\ldots,x_{N}\) contain third-order derivative, not second-order ones, and (ii) these terms involve mixed derivatives. We will see that, as a result, the parametrization of the traveling wave solutions of the ZK equation is quite different from that of the solutions of the KP equation, and in fact it has some similarities with the periodic solutions of two-dimensional NLS equation. Indeed we will see that the Whitham modulation system for the ZK equation contains a mix of the features of the systems for the KP and NLS equations. 
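As a concrete illustration of the model, the right-hand side of (1.1) in two spatial dimensions can be evaluated pseudospectrally in a few lines; the NumPy sketch below is only illustrative (the grid, domain size and value of \(\epsilon\) are arbitrary choices, unrelated to the computations reported later).

```python
import numpy as np

def zk_rhs(u, dx, dy, eps):
    """du/dt = -( u**2/2 + eps**2 * Laplacian(u) )_x for the 2D ZK equation (1.1),
    evaluated with FFTs on a doubly periodic grid; u has shape (ny, nx)."""
    ny, nx = u.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)   # angular wavenumbers in x
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)   # angular wavenumbers in y
    KX, KY = np.meshgrid(kx, ky)
    lap_u = np.fft.ifft2(-(KX**2 + KY**2) * np.fft.fft2(u)).real
    flux = 0.5 * u**2 + eps**2 * lap_u
    return -np.fft.ifft2(1j * KX * np.fft.fft2(flux)).real

# Toy usage: a single evaluation for a Gaussian bump with eps = 0.1.
x = np.linspace(0, 20, 128, endpoint=False)
y = np.linspace(0, 20, 128, endpoint=False)
X, Y = np.meshgrid(x, y)
u0 = np.exp(-((X - 10)**2 + (Y - 10)**2))
dudt = zk_rhs(u0, x[1] - x[0], y[1] - y[0], eps=0.1)
```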
The main result of this work is the ZK-Whitham system (ZKWS) of modulation equations (2.21), or equivalently (2.27), as well as a transverse stability analysis of the periodic traveling wave solutions of the ZK equation (1.1). In section 2 we present the derivation of the ZKWS. In particular, in 2.1 we introduce the periodic traveling wave solutions and relevant conservation laws of (1.1), in 2.2 we present the multiple scales expansion used for the derivation, in 2.3 we present the relevant period averages, in 2.4 we present the calculations needed to obtain the ZKWS in final form, and in 2.5 we discuss some basic symmetries and reductions of the ZKWS. In section 3 we then use the ZKWS to study the transverse stability of the periodic traveling wave solutions, and we validate the predictions of Whitham theory by comparing them with two alternative approaches. Section 4 offers some concluding remarks. ## 2 Whitham modulation theory for the ZK equation ### Periodic traveling wave solutions and conservation laws of the ZK equation Recall that the Whitham equations describe modulations of periodic solutions of a nonlinear PDE. Therefore, the first step in formulating Whitham modulation theory is to write down the periodic solutions of the PDE. The ZK equation (1.1) admits periodic traveling wave solutions, which are most conveniently expressed by introducing Riemann-type variables \(r_{1}\leqslant r_{2}\leqslant r_{3}\), similarly to what is done for the KdV, KP and NLS equations. The derivation of these solutions is similar to that for the periodic solutions of those equations, so we omit the details for brevity. However, one can easily verify that (1) admits the following "cnoidal wave" solutions: \[u({\bf x},t)=(1+q^{2})\left[(r_{1}-r_{2}+r_{3})+2(r_{2}-r_{1})\,{\rm cn}^{2}(2K _{m}Z,m)\right],\] where \({\rm cn}(z,m)\) is the Jacobi elliptic cosine [40], \(K_{m}=K(m)\) the complete elliptic integral of the first kind, \[m={r_{2}-r_{1}\over r_{3}-r_{1}}\] is the elliptic parameter, \[Z=({\bf k}\cdot{\bf x}-\omega t)/\epsilon,\qquad\quad{\bf k}=(k_{1},\ldots,k_ {N})\,,\qquad\quad{\bf q}=(k_{2},\ldots,k_{N})/k_{1}\,,\] \[k_{1}={\sqrt{r_{3}-r_{1}}\over 2\sqrt{6}K_{m}},\quad\omega={1\over 3}(1+q^{2} )(r_{1}+r_{2}+r_{3})\,k_{1}\,,\] and \(q^{2}={\bf q}\cdot{\bf q}=q_{1}^{2}+\cdots+q_{N-1}^{2}\). The solution (1) is uniquely determined by \(N+2\) independent parameters, \(r_{1},\ldots,r_{3}\) and \(q_{1},\ldots,q_{N-1}\), and it describes wave fronts localized along the lines \({\bf k}\cdot{\bf x}-\omega t=2n\pi\), with unit period with respect to the variable \(Z\) and period \(2K_{m}\) with respect to the variable \(x\). Note the appearance of the factor \(1+q^{2}\) in (1) and (2c), unlike the KP equation [3], and similarly to the NLS equation in \(N\) spatial dimensions [1]. In keeping with the notation for the first spatial coordinate, we will simply write \(k_{1}=k\). Also, when there are only two spatial dimensions (i.e., \(N=2\)), we will simply write \(y=x_{2}\), \(l=k_{2}\) and \(q=q_{1}\). The above solutions admit two nontrivial limits: the harmonic limit, obtained when \(m=0\), corresponding to \(r_{2}=r_{1}\), and the soliton limit, obtained when \(m=1\), corresponding to \(r_{2}=r_{3}\). 
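For orientation, the cnoidal waves (1) are straightforward to evaluate numerically; a short SciPy sketch (the parameter values in the example are arbitrary illustrations):

```python
import numpy as np
from scipy.special import ellipk, ellipj

def zk_cnoidal(x, y, t, r1, r2, r3, q, eps=1.0):
    """Periodic traveling wave (1) of the ZK equation in two spatial dimensions,
    parametrized by the Riemann-type variables r1 <= r2 <= r3 and by q."""
    m = (r2 - r1) / (r3 - r1)                     # elliptic parameter m
    Km = ellipk(m)
    k = np.sqrt(r3 - r1) / (2 * np.sqrt(6) * Km)   # k_1 of (2c)
    omega = (1 + q**2) * (r1 + r2 + r3) * k / 3    # omega of (2c)
    Z = (k * x + q * k * y - omega * t) / eps
    sn, cn, dn, ph = ellipj(2 * Km * Z, m)
    return (1 + q**2) * ((r1 - r2 + r3) + 2 * (r2 - r1) * cn**2)

# Example: one slice along x at t = 0 for (r1, r2, r3) = (0, 0.5, 1) and q = 0.3.
x = np.linspace(0, 5, 400)
u = zk_cnoidal(x, 0.0, 0.0, r1=0.0, r2=0.5, r3=1.0, q=0.3)
```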
Specifically, recalling that \({\rm cn}(z,m)=\cos z+O(m)\) as \(m\to 0\) and \({\rm cn}(z,m)={\rm sech}\,z+O(1-m)\) as \(m\to 1^{-}\), it is trivial to see that, as \(m\to 0\), the solution (1) describes vanishing-amplitude harmonic oscillations on a non-zero background, whereas, as \(m\to 1\), the solution limits to the line soliton solutions of the ZK equation. Explicitly, in two spatial dimensions, \[u_{s}({\bf x},t)=(1+q^{2})\left[\tilde{u}+12c\,{\rm sech}^{2}\big{(}\sqrt{c}(x+qy-Vt)/\epsilon\big{)}\right],\] where \(\tilde{u}=r_{1}\), \(c=(r_{3}-r_{1})/6\) and \(V=(1+q^{2})\,(\tilde{u}+4c)\). However, we emphasize that the modulation theory presented below applies to all of the periodic solutions (1). Recall that several methods can be used to derive the Whitham equations: multiple scales perturbation theory (as in [3]), averaged Lagrangians [48], and averaged conservation laws (as in [1]). Here we will employ the latter. Accordingly, we need the conservation laws of the ZK equation (1.1). The ZK equation itself can be written as a conservation law in differential form: \[u_{t}+\left({1\over 2}u^{2}+\epsilon^{2}\Delta u\right)_{x}=0\,.\] Note that in this case there is no flux along the coordinates \(x_{2},\ldots,x_{N}\). Moreover, the ZK equation admits an additional differential conservation law related to conservation of mass: \[(u^{2})_{t}+\left[{2\over 3}u^{3}+\epsilon^{2}\left(2u\Delta u-u_{x}^{2}+(\hbox{\boldmath$\nabla$}_{\perp}u)^{2}\right)\right]_{x}-2\epsilon^{2}\,\hbox{\boldmath$\nabla$}_{\perp}\cdot(u_{x}\hbox{\boldmath$\nabla$}_{\perp}u)=0\,,\] where \(\hbox{\boldmath$\nabla$}_{\perp}=(\partial_{x_{2}},\ldots,\partial_{x_{N}})\) is the gradient with respect to the transverse variables. As mentioned earlier, the ZK equation is not completely integrable, unlike the KdV and KP equations, so only a limited number of conservation laws are available. Nonetheless, below we will show that the above conservation laws are sufficient for the derivation of the Whitham modulation equations. ### Multiple scales expansion As usual in Whitham theory, we now look for modulations of the above periodic solutions. Specifically, we introduce the fast variable \(Z\) defined by \[\boldsymbol{\nabla}_{\mathbf{x}}Z=\mathbf{k}/\epsilon\,,\qquad Z_{t}=-\,\omega/\epsilon\,, \tag{5}\]
In particular, \(\mathbf{k}\) and \(\omega\) are now the local wavevector and the local frequency. Recall that in two spatial dimensions we have 4 independent parameters: \(r_{1},r_{2},r_{3}\) and \(q=q_{1}\). With the above multiple scales ansatz, one has \[\boldsymbol{\nabla}_{\mathbf{x}}=\frac{\mathbf{k}}{\epsilon}\partial_{Z}+ \boldsymbol{\nabla}_{\mathbf{X}},\qquad\partial_{t}=-\frac{\omega}{\epsilon} \partial_{Z}+\partial_{T}. \tag{7}\] Or, in two spatial dimensions, simply \(\partial_{x}=(k/\epsilon)\partial_{Z}+\partial_{X}\), \(\partial_{y}=(l/\epsilon)\partial_{Z}+\partial_{Y}\) and \(\partial_{t}=-(\omega/\epsilon)\partial_{Z}+\partial_{T}\), with \(X=X_{1}\) and \(Y=X_{2}\). Inserting the above ansatz into (1), to leading order one recovers the periodic solutions in section 2.1, but where the parameters \(r_{1},r_{2},r_{3}\) and \(\mathbf{q}\) are now functions of \(\mathbf{X}\) and \(T\). The Whitham modulation equations that we are seeking are precisely the PDEs that govern the spatio-temporal dynamics of these solution parameters. It is clear from the above discussion that one needs \(N+2\) equations to obtain a closed system. The first few Whitham modulation equations, referred to as "conservation of waves", are simply a consequence of the above ansatz and cross-differentiability of \(Z\): \[\mathbf{k}_{T}+\boldsymbol{\nabla}_{\mathbf{X}}\omega=0, \tag{8a}\] \[\boldsymbol{\nabla}_{\mathbf{X}}\wedge\mathbf{k}=0, \tag{8b}\] where \(\mathbf{v}\wedge\mathbf{w}\) denotes the \(N\)-dimensional wedge product, which in two and three spatial dimensions can be replaced by the standard cross product [23]. In two spatial dimensions, recalling that \(l=qk\), (8a) becomes \[k_{T}+\omega_{X}=0, \tag{8a}\] \[(kq)_{T}+\omega_{Y}=0, \tag{8b}\] while (8b) becomes \[k_{Y}=(kq)_{X}. \tag{8c}\] Equation (8a) above provides \(N\) evolution equations, whereas, similarly to [3, 1], (8b) provides constraints on the initial values of the dependent variables (whose role will be discussed more fully below). Since we need \(N+2\) modulation equations, one must therefore supplement (8a) by obtaining two additional modulation equations. The simplest way to do that is to average the first and second conservation laws over one spatial period, obtaining \[\overline{u_{T}}+\overline{u\,u_{X}}+\epsilon^{2}\overline{\, \Delta u_{X}}=0, \tag{10a}\] \[\overline{(u^{2})_{T}}+\overline{\left[\frac{2}{3}u^{3}+\epsilon ^{2}\big{(}2u\Delta u-u_{X}^{2}+(\boldsymbol{\nabla}_{\perp}u)^{2}\big{)} \right]_{X}}-2\epsilon^{2}\,\overline{\boldsymbol{\nabla}_{\perp}\cdot(u_{X} \boldsymbol{\nabla}_{\perp}u)}=0, \tag{10b}\] where \(\boldsymbol{\nabla}_{\perp}=(\partial_{X_{2}},\ldots,\partial_{X_{N}})\) is the transverse gradient in the slow variables, and where throughout this work the overbar will denote the integral of a quantity with respect to \(Z\) over the unit period. The next step in the derivation of the modulation equations is therefore to compute the above period averages. ### Period averages Inserting the ansatz (6), the leading-order solution (1) and using (5), to leading order the averaged conservation laws (10) yield \[(\overline{u})_{T}+\left(\frac{1}{2}\overline{u^{2}}\right)_{X}=0, \tag{11a}\] \[(\overline{u^{2}})_{T}+\left(\frac{2}{3}\overline{u^{3}}-k^{2}(3 +q^{2})\overline{(u_{Z})^{2}}\right)_{X}-2\boldsymbol{\nabla}_{\perp}\cdot \left(k^{2}\overline{\,\mathbf{q}(u_{Z})^{2}}\right)=0\,. 
\tag{11b}\] All of the integrals appearing in the above averages can be computed exactly, yielding [16] \[\overline{u}=(1+q^{2})\left[r_{1}-r_{2}+r_{3}+2(r_{2}-r_{1})G_{1} \right], \tag{11a}\] \[\overline{u^{2}}=(1+q^{2})^{2}\left[(r_{1}-r_{2}+r_{3})^{2}+4(r_{1 }-r_{2}+r_{3})(r_{2}-r_{1})G_{1}+4(r_{2}-r_{1})^{2}G_{2}\right], \tag{11b}\] \[\overline{u^{3}}=(1+q^{2})^{3}\big{[}(r_{1}-r_{2}+r_{3})^{3}+6(r_{1}- r_{2}+r_{3})^{2}(r_{2}-r_{1})G_{1}\] \[+12(r_{1}-r_{2}+r_{3})(r_{2}-r_{1})^{2}G_{2}+(r_{2}-r_{1})^{3}G_{3} \big{]},\] (2.12 \[c\] ) \[\overline{(u_{Z})^{2}}=(1+q^{2})^{2}4(r_{2}-r_{1})^{2}G_{4}\,,\] (2.12 \[d\] ) where \[G_{1}(m)= \int\limits_{0}^{1}\text{cn}^{2}(2K_{m}z,m)\,\text{d}z=\frac{E_{m }-(1-m)K_{m}}{mK_{m}}\,,\] (2.13 \[a\] ) \[G_{2}(m)= \int\limits_{0}^{1}\text{cn}^{4}(2K_{m}z,m)\,\text{d}z=\frac{-2(1 -2m)E_{m}+(2-5m+3m^{2})K_{m}}{3m^{2}K_{m}}\,,\] (2.13 \[b\] ) \[G_{3}(m)= \int\limits_{0}^{1}\text{cn}^{6}(2K_{m}z,m)\,\text{d}z=\frac{(8- 23m(1-m))E_{m}-(1-m)(8-19m+15m^{2})K_{m}}{15m^{3}K_{m}}\,,\] (2.13 \[c\] ) \[G_{4}(m)= 16K_{m}\int\limits_{0}^{1}\text{cn}^{2}(2K_{m}z,m)\text{d}m^{2}(2 K_{m}z,m)\,\text{sn}^{2}(2K_{m}z,m)\,\text{d}z\] \[=16K_{m}\frac{2(1-m(1-m))E_{m}-(1-m)(2-m)K_{m}}{15m^{2}}\,,\] (2.13 \[d\] ) and \(E_{m}=E(m)\) is the complete elliptic integral of the second kind. The behavior of these quantities as a function of \(m\) is shown in Fig. 1. Their limiting values as \(m\to 0\) are \[G_{1}(0)=1/2,\quad G_{2}(0)=3/8,\quad G_{3}(0)=5/16,\quad G_{4}(0)=\pi^{2}/2\,, \tag{2.14}\] while their asymptotic behavior as \(m\to 1\) is \[G_{1}(m)= -\frac{2+o(1)}{\log(1-m)}\,,\,\,G_{2}(m)=-\frac{4+o(1)}{3\log(1-m)},\,\,G_{3}(m)=-\frac{16+o(1)}{15\log(1-m)},\] (2.15 \[a\] ) \[G_{1}^{\prime}(m)= -\frac{1+o(1)}{2(1-m)K_{m}^{2}},\,\,G_{2}^{\prime}(m)=-\frac{1+o(1) }{3(1-m)K_{m}^{2}},\,\,G_{3}^{\prime}(m)=-\frac{4+o(1)}{15\,(1-m)K_{m}^{2}},\] (2.15 \[b\] ) \[G_{4}(m)= -\frac{32+o(1)}{15K_{m}},\quad G_{4}^{\prime}(m)=-\frac{16+o(1)}{ 15(1-m)},\] (2.15 \[c\] ) as \(m\to 1\). Also recall that \(K_{m}=-\frac{1}{2}(\log(1-m)-4\log 2)+O(1-m)\) and \(K_{m}^{\prime}=(E_{m}-(1-m)K_{m})/(2m(1-m))=1/(2(1-m))+\frac{1}{8}(\log(1-m)-4 \log 2+3)+O(1-m)\) as \(m\to 1\)[40]. These singular behaviors as \(m\to 1\) imply that certain rescalings are needed in order to write the modulation equations in a convenient form, as discussed below. ### The ZK-Whitham system For brevity we will only write down explicitly the modulation equations in detail in two spatial dimensions, but we emphasize that the calculations below are trivially generalized to any number of transverse Figure 1: The quantities \(G_{1}(m),\ldots,G_{4}(m)\) in (2.13) (in green, orange, blue and red, respectively) as a function of \(m\). dimensions, in a similar manner as in [1]. Also, for simplicity from now on we will write derivatives with respect to \(X,Y\) and \(T\) simply as derivatives with respect to \(x,y\) and \(t\). Using the averages (12), recalling the definition of \(k\) and \(\omega\) in (2c), and collecting all terms, equations (9a), (9b), (11a) and (11b) yield a system of four modulation equations. As usual, however, some manipulations are needed in order to write the resulting system in the most convenient form. We turn to this issue next. We begin with the first conservation of waves equation, namely (9a). Recalling (2c), multiplying (9a) by \((1-m)K_{m}/k\) one then obtains an expression that remains finite both as \(m\to 0\) and \(m\to 1\). 
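The elliptic-integral building blocks (2.13) and their limiting values (2.14) are easy to check numerically, which is a useful sanity test before manipulating the modulation equations; a short SciPy sketch (the sample value of \(m\) is arbitrary):

```python
import numpy as np
from scipy.special import ellipk, ellipe

def G(m):
    """The period averages G_1(m), ..., G_4(m) of Eq. (2.13)."""
    K, E = ellipk(m), ellipe(m)
    G1 = (E - (1 - m) * K) / (m * K)
    G2 = (-2 * (1 - 2 * m) * E + (2 - 5 * m + 3 * m**2) * K) / (3 * m**2 * K)
    G3 = ((8 - 23 * m * (1 - m)) * E
          - (1 - m) * (8 - 19 * m + 15 * m**2) * K) / (15 * m**3 * K)
    G4 = 16 * K * (2 * (1 - m * (1 - m)) * E - (1 - m) * (2 - m) * K) / (15 * m**2)
    return np.array([G1, G2, G3, G4])

print(G(1e-3))                                          # close to the m -> 0 values below
print(np.array([1 / 2, 3 / 8, 5 / 16, np.pi**2 / 2]))   # Eq. (2.14)
```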
The second conservation of waves equation requires some additional treatment. In this case, one can first use (9a) to replace \(k_{t}\), obtaining, as in [1, 2, 3], the universal transversal modulation equation \[q_{t}+(D_{y}\omega)/k=0\,, \tag{16}\] where \[D_{y}=\partial_{y}-q\partial_{x}\,, \tag{17}\] is the convective derivative, which will appear prominently in all modulation equations below, similarly to other modulation systems in two spatial dimensions [1, 2, 3]. Unlike the first conservation of waves equation, however, in this case in order to obtain a non-trivial equation in the limit \(m\to 1\) it is necessary to use the constraint (9c), which we can rewrite so that it remains finite as \(m\to 0\) and \(m\to 1\) as \[c_{1}D_{y}r_{1}+c_{2}D_{y}r_{2}+c_{3}D_{y}r_{3}+c_{4}q_{x}=0\,, \tag{18}\] with \[c_{1}=(1-m)(K_{m}-E_{m}),\ c_{2}=E_{m}-(1-m)K_{m},\ c_{3}=-mE_{m},\ c_{4}=2(r_ {2}-r_{1})(1-m)K_{m}\,. \tag{19}\] Then, subtracting \(\omega/(kK_{m})\) times (18) from (16) we finally obtain the desired modulation equation. The averaged conservation laws (11) are the most complicated, as can be seen from (12) and (13). The only manipulation needed to regularize the resulting equations, however, is just multiplication by \((1-m)K_{m}\). In light of (12) it is also convenient to divide (11a) by \((1+q^{2})\) and (11b) by \((1+q^{2})^{2}\), respectively. (One could also subtract a linear combination of the first conservation law and the first conservation of waves from the second conservation law to try to simplify it, but this is unnecessary for the present purposes.) The collection of the resulting four modulation equations can be written in matrix form as \[\tilde{C}\mathbf{r}_{t}+\tilde{A}\mathbf{r}_{x}+\tilde{B}\mathbf{r}_{y}= \mathbf{0}\,, \tag{20}\] where \(\mathbf{r}=\mathbf{r}(x,y,t)=(r_{1},r_{2},r_{3},q)^{T}\) collects the four dependent variables. Specifically, we write the first row of (20) from (9a), the second and third row from (11a) and (11b), respectively, and the fourth row from (9b). Hereafter, \(\mathbf{0}_{m\times n}\) and \(\mathbf{1}_{m\times n}\) denote matrices of size \(m\times n\) with all entries equal to \(0\) or \(1\), respectively, and for brevity we will drop the size notation when it should be clear from the context. All entries of the coefficient matrices \(\tilde{C}\), \(\tilde{A}\) and \(\tilde{B}\) in (20) are finite for all \(0<m<1\) as well as in the limits \(m\to 0\) and \(m\to 1\). On the other hand, their explicit expressions are fairly complicated, and are therefore omitted for brevity, since they are just an intermediate step in the derivation. At the same time, we next show how one can considerably simplify the system by suitably diagonalizing the coefficients of the temporal derivatives. Owing to (16), the last row of \(\tilde{C}\) is simply \((0,0,0,1)\). Writing \(\tilde{C}\) in block diagonal form, it is therefore convenient to introduce a partial inverse of \(\tilde{C}\) as \(C^{-1}=(\tilde{C}_{3\times 3}^{-1},1)\), where \(\tilde{C}_{3\times 3}\) denotes the upper-left \(3\times 3\) block of \(\tilde{C}\). 
Multiplying (20) from the left by \(C^{-1}\), one can then solve the above system of modulation equations for the temporal derivatives, which yields the final _ZK-Whitham system_ (ZKWS) in matrix form as \[\mathbf{r}_{t}+A\mathbf{r}_{x}+B\mathbf{r}_{y}=\mathbf{0}, \tag{21}\] where the coefficient matrices \(A=C^{-1}\hat{A}\) and \(B=C^{-1}\hat{B}\) are \[A=\left(\begin{array}{cc}(1+q^{2})\,V_{\rm diag}&\frac{2}{45}\,q\,A_{3\times 1 }\\ \mathbf{0}_{1\times 3}&(1+q^{2})\,V\end{array}\right)-q\,B,\qquad\quad B=\left( \begin{array}{cc}B_{3\times 3}&B_{3\times 1}\\ \frac{1}{3}(1+q^{2})\mathbf{1}_{1\times 3}&2q\,V\end{array}\right), \tag{2.22}\] with \[V_{\rm diag}={\rm diag}(V_{1},\ldots,V_{3}),\] (2.23 \[a\] ) where \(V_{1},\ldots,V_{3}\) are velocities of the KdV-Whitham system, namely \[V_{1}=V-2b\frac{K_{m}}{K_{m}-E_{m}},\,\,\,V_{2}=V-2b\frac{(1-m)K_{m}}{E_{m}-(1 -m)K_{m}},\,\,\,V_{3}=V+2b\frac{(1-m)K_{m}}{mE_{m}},\] (2.23 \[b\] ) where \(V=\frac{1}{3}(r_{1}+r_{2}+r_{3})\), \(b=2(r_{2}-r_{1})\) is the amplitude of oscillations in (2.1), and with \[A_{3\times 1} =D_{3\times 3}^{-1}{\bf a},\] (2.24 \[a\] ) \[B_{3\times 1} =\frac{4}{45}\frac{1+5q^{2}}{1+q^{2}}(r_{3}-r_{1})^{2}\,b_{o}\,D_ {3\times 3}^{-1}{\bf e},\] (2.24 \[b\] ) \[B_{3\times 3} =\frac{4q}{45mK_{m}}\,(r_{3}-r_{1})\,D_{3\times 3}^{-1}{\bf e}^{T} \otimes{\bf b},\] (2.24 \[c\] ) \[D_{3\times 3} ={\rm diag}({\bf d}),\] (2.24 \[d\] ) where \({\bf v}^{T}\otimes{\bf w}\) denotes the outer product of two vectors, namely \(({\bf v}^{T}\otimes{\bf w})_{i,j}=v_{i}w_{j}\), with \[{\bf a}=(a_{1},a_{2},a_{3})^{T}\,,\quad{\bf b}=(b_{1},b_{2},b_{3})^{T},\quad {\bf d}=(d_{1},d_{2},d_{3})^{T},\quad{\bf e}=(-1,1,1)^{T}\,, \tag{2.25}\] and, finally, with \[a_{1} =((1+m(14+m))E_{m}-K_{m}(1-m)(1+7m)K_{m})(r_{3}-r_{1})^{2}+45d_{ 1}r_{1}^{2}\,,\] (2.26 \[a\] ) \[a_{2} =-((1-m(16+29m))E_{m}-(1-m)(1-m(8+45m))K_{m})(r_{3}-r_{1})^{2}+45d _{2}r_{1}(2r_{2}-r_{1})\,,\] (2.26 \[b\] ) \[a_{3} =(8(2-m)(1-m)K_{m}+(29+m(16-m))E_{m})(r_{3}-r_{1})^{2}+45d_{3}r_{ 1}(2r_{3}-r_{1})\,,\] (2.26 \[c\] ) \[b_{o} =(2-m)(1-m)K_{m}-2(1-m(1-m))E_{m}\,,\] (2.26 \[d\] ) \[b_{1} =2(1+2m^{2})E_{m}K_{m}-(1-m)(1+2m)K_{m}^{2}-(1-m+m^{2})\,E_{m}^{2},\] (2.26 \[e\] ) \[b_{2} =(1-m)(1-3m)K_{m}^{2}-2(1-m(2-3m))E_{m}K_{m}+\frac{m^{2}-m+1}{1- m}\,E_{m}^{2},\] (2.26 \[f\] ) \[b_{3} =m\left(5(1-m)K_{m}^{2}-2(2-m)E_{m}K_{m}-\left(\frac{1}{1-m}-m \right)E_{m}^{2}\right),\] (2.26 \[g\] ) \[d_{1} =K_{m}-E_{m},\,\,\,\,d_{2}=E_{m}-(1-m)\,K_{m},\,\,\,\,d_{3}=E_{m}\,.\] (2.26 \[h\] ) Equivalently, in component form, the ZKWS (2.21) comprises the following four PDEs: \[r_{j,t}+(1+q^{2})\,V_{j}\,r_{j,x}+b_{4}\,D_{y}r_{j}+h_{j}\,q_{x}+ \nu_{j}\,D_{y}\,q=0,\qquad\,\,j=1,2,3,\] (2.27 \[a\] ) \[q_{t}+(1+q^{2})\,V\,q_{x}+(1+q^{2})\,V_{x}+2q\,V\,D_{y}\,q=0\,,\] (2.27 \[b\] ) where \(D_{y}\) is the convective derivative introduced in (2.17), and \[b_{4}=\frac{4q(r_{3}-r_{1})e_{j}}{45mK_{m}d_{j}}\,{\bf b}\cdot{\bf r},\quad h_ {j}=\frac{2q}{45}\frac{a_{j}}{d_{j}},\quad\nu_{j}=\frac{4(1+5q^{2})(r_{3}-r_{1} )^{2}b_{o}e_{j}}{45d_{j}(1+q^{2})},\quad j=1,2,3\,. \tag{2.28}\] We should point out that while the algebraic manipulations described above are rather tedious, they are nonetheless straightforward, and are easily carried out with any computer algebra software. We also point out that the ZK-Whitham system (2.21) (ZKWS) is considerably simpler than what one would obtain by multiplying (2.20) by the full inverse of \(\tilde{C}\). 
More importantly, note how the above ZKWS is purely in evolution form (i.e., all four equations contains a temporal derivative), like those for the two- and three-dimensional NLS equations [1, 6], and unlike those for the KP equation [3], two-dimensional Benjamin-Ono equation [4] and modified KP equation [2]. This is of course a direct consequence of the fact that the ZK equation (1.1) does not comprise a spatial constraint like the KP equation and the two-dimensional Benjamin-Ono equation. ### Symmetries, reductions and distinguished limits of the ZKWS Like the Whitham modulation systems for the KdV, KP and NLS equations, the ZK-Whitham system (2.21) admits a number of symmetries and reductions. Symmetries.The ZK-Whitham system preserves some of the physical symmetries of the ZK equation, specifically, the symmetries under space-time translations and scaling: \[u({\bf x},t) \mapsto u({\bf x}-{\bf x}_{0},t-t_{0}), \tag{2.29a}\] \[u({\bf x},t) \to a^{2}u({\bf ax},a^{3}\,t), \tag{2.29b}\] respectively, where \(a,t_{0}\) are arbitrary real constants, \({\bf x}_{0}\) is an arbitrary \(N\)-component real vector. The ZK equation (1.1) is invariant under each of these transformations. Moreover, each of these transformations induces a corresponding transformation for the dependent variables \(r_{1},\ldots,r_{3},q\), namely: \[r_{j}({\bf x},t) \mapsto r_{j}({\bf x}-{\bf x}_{0},t-t_{0}),\quad q({\bf x},t)\to q ({\bf x}-{\bf x}_{0},t-t_{0}), \tag{2.30a}\] \[r_{j}({\bf x},t) \mapsto a^{2}r_{j}(a{\bf ax},a^{3}\,t),\qquad q({\bf x},t)\mapsto q (a{\bf ax},a^{3}\,t) \tag{2.30b}\] for \(j=1,2,3\). It is straightforward to verify that all these transformations also leave the ZKWS (2.21) invariant. For brevity we omit the details. KdV reduction.It is straightforward to see that, when \(q=0\) and all quantities are independent of \(y\), the ZK-Whitham system reduces to the Whitham modulation system for the KdV equation, namely \[r_{j,t}+V_{j}\,r_{j,x}=0\,,\qquad\quad j=1,2,3\,, \tag{2.31}\] where \(V_{1},\ldots,V_{3}\) are the characteristic velocities of the KdV-Whitham system, as above. Harmonic limit.The ZKWS system admits a self-consistent reduction in the harmonic limit \(m\to 0\) (i.e., \(r_{2}\to r_{1}\)). In this case, the PDEs for \(r_{1}\) and \(r_{2}\) coincide, and we obtain the reduced \(3\times 3\) system \[{\bf w}_{t}+A_{o}\,{\bf w}_{x}+B_{o}\,{\bf w}_{y}={\bf 0}, \tag{2.32}\] for the three-component dependent variable \({\bf w}(x,y,t)=(r_{1},r_{3},q)\), with \[A_{o} =\frac{1}{3}\left(\begin{array}{ccc}3(2r_{1}-r_{3})+q^{2}(4r_{1 }-r_{3})&0&2q(4r_{1}^{2}-2r_{3}r_{1}+r_{3}^{2})\\ 0&3(1+q^{2})r_{3}&6qr_{3}^{2}\\ -q(1+q^{2})&-q(1+q^{2})&(1-q^{2})(2r_{1}+r_{3})\end{array}\right), \tag{2.33a}\] \[B_{o} =\frac{1}{3}\left(\begin{array}{ccc}2\,q(r_{1}-r_{3})&0&0\\ 0&0&0\\ 1+q^{2}&1+q^{2}&2\,q(2r_{1}+r_{3})\end{array}\right). \tag{2.33b}\] Soliton limit.The ZKWS system also admits a self-consistent reduction in the soliton limit \(m\to 1\) (i.e., \(r_{2}\to r_{3}\)). The calculations are a bit trickier in this case, since the entries in the second and third columns of \(A\) and \(B\) diverge. As we show next, however, this is not an issue. Recalling (2.2a), let \(\tilde{m}=1-m=(r_{3}-r_{2})/(r_{3}-r_{1})\), and write \(r_{2}=r_{3}-\tilde{m}(r_{3}-r_{1})\). The limit \(m\to 1\) corresponds to \(\tilde{m}\to 0\) together with \(\tilde{m}_{x}\), \(\tilde{m}_{y}\) and \(\tilde{m}_{t}\). We then look at the second and third columns of \(A\) and \(B\) multiplied by \((r_{2},r_{3})\). 
For the former we have \(a_{i,2}\,r_{2,x}+a_{i,3}\,r_{3,x}=(a_{i,2}+a_{i,3})\,r_{3,x}+a_{i,2}((r_{3}-r_{ 1})\,\tilde{m})_{x}\), for \(i=1,\ldots,4\). with a similar expression for the \(y\) derivatives. Since the singular parts of \(a_{i,2}\) and \(a_{i,3}\) are exactly equal and opposite, it is straightforward to verify that one obtains a finite expression in the limit as \(\tilde{m}\to 0\). The result is the soliton modulation system \[{\bf w}_{t}+A_{1}\,{\bf w}_{x}+B_{1}\,{\bf w}_{y}={\bf 0}, \tag{2.34}\] for the same dependent variables \({\bf w}=(r_{1},r_{3},q)\) as above, but where the coefficient matrices are now \[A_{1}=\left(\begin{array}{cc}(1+q^{2})r_{1}&0\\ \frac{8}{15}q^{2}(r_{1}-r_{3})&\frac{1}{15}(5(r_{1}+2r_{3})+3q^{2}(6r_{3}-r_{1 }))&2q\frac{3r_{1}^{2}+48r_{3}^{2}-6r_{1}r_{3}+q^{2}(19r_{1}^{2}-38r_{1}r_{3}+ 64r_{3}^{2})}{45(1+q^{2})}\\ -\frac{1}{3}q(1+q^{2})&-\frac{2}{3}q(1+q^{2})&\frac{1}{3}(1-q^{2})(r_{1}+2r_{3} )\end{array}\right),\] \[B_{1}=\left(\begin{array}{cc}0&0\\ -\frac{8}{15}q(r_{1}-r_{3})&\frac{8}{15}q(r_{1}-r_{3})&-\frac{8(51+q^{2})(r_{1} -r_{3})^{2}}{45(1+q^{2})}\\ \frac{1}{3}(1+q^{2})&\frac{2}{3}(1+q^{2})&\frac{2}{3}q(r_{1}+2r_{3})\end{array} \right).\] ## 3 Transverse instability of the periodic traveling wave solutions of the ZK equation We now show how the ZK-Whitham modulation system derived in section 2 can be applied to study the stability of the periodic traveling wave solutions of the ZK equation for all \(0\leqslant m\leqslant 1\). We will then compare the predictions of Whitham theory with a numerical evaluation of the instability growth rate, as well as with an explicit, analytical calculation of the growth rate in the soliton limit. ### Stability analysis via Whitham theory Recall that, when \(r_{1},r_{2},r_{3}\) and \(q\) are independent of \(x,y\) and \(t\), (2.1) is an exact periodic traveling wave solution of (1.1). In order to study the stability of such solutions, we therefore look for solutions of the ZK-Whitham system (2.21) in the form of a constant solution \({\bf r}^{(0)}=(r_{1}^{(0)},r_{2}^{(0)},r_{3}^{(0)},q^{(0)})\) plus a small perturbation, namely, \[r_{j}(x,y,t)=r_{j}^{(0)}+\delta r_{j}^{(1)}(x,y,t),\quad j=1,2,3,\qquad\quad q (x,y,t)=q^{(0)}+\delta q^{(1)}(x,y,t)\,,\] with \(0<\delta\ll 1\). Substituting this ansatz into (2.21) and neglecting terms \(O(\delta^{2})\) and smaller, we then obtain the linearized ZK-Whitham system \[{\bf r}_{t}^{(1)}+A^{(0)}{\bf r}_{x}^{(1)}+B^{(0)}{\bf r}_{y}^{(1)}=0,\] since \({\bf r}^{(0)}\) in constant with respect to \(x\), \(y\) and \(t\). Here \({\bf r}^{(1)}=(r_{1}^{(1)},r_{2}^{(1)},r_{3}^{(1)},q^{(1)})\), while \(A^{(0)}\) and \(B^{(0)}\) denote the \(4\times 4\) matrices \(A\) and \(B\) above evaluated at \({\bf r}={\bf r}^{(0)}\). Since (3.2) is a linear system of PDEs with constant coefficients, it is sufficient to study its plane wave solutions. We therefore look for solutions in the form \[{\bf r}^{(1)}(x,y,t)={\bf R}{\bf e}^{i(Kx+Ly-Wt)},\] where \({\bf R}\) is a constant vector, and \(K\), \(L\) and \(W\) are respectively the perturbation wavenumbers in the \(x\) and \(y\) directions and the perturbation's angular frequency. Substituting this expression into (3.2), the problem above is then transformed into the homogeneous linear system of equations \((-W\,I_{4}+K\,A^{(0)}+LB^{(0)})\,{\bf R}={\bf 0}\), which is equivalent to the eigenvalue problem \[(K\,A^{(0)}+LB^{(0)})\,{\bf R}=W{\bf R}.\] Here \(I_{4}\) is the \(4\times 4\) identity matrix. 
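In practice, the stability of such constant states can also be probed numerically, by scanning the eigenvalues of \(K\,A^{(0)}+L\,B^{(0)}\) over directions \((K,L)\): any non-real eigenvalue signals complex characteristics and hence linear instability. The following is a minimal sketch (an illustrative addition, assuming NumPy; for concreteness it uses the fully explicit harmonic-limit matrices (2.33) with an arbitrarily chosen constant state, but the same check applies verbatim to the \(4\times 4\) matrices in (2.22)):

```python
import numpy as np

def harmonic_limit_matrices(r1, r3, q):
    """Coefficient matrices A_o, B_o of the harmonic-limit system (2.32)-(2.33)."""
    A = np.array([
        [3*(2*r1 - r3) + q**2*(4*r1 - r3), 0.0, 2*q*(4*r1**2 - 2*r3*r1 + r3**2)],
        [0.0, 3*(1 + q**2)*r3, 6*q*r3**2],
        [-q*(1 + q**2), -q*(1 + q**2), (1 - q**2)*(2*r1 + r3)],
    ]) / 3.0
    B = np.array([
        [2*q*(r1 - r3), 0.0, 0.0],
        [0.0, 0.0, 0.0],
        [1 + q**2, 1 + q**2, 2*q*(2*r1 + r3)],
    ]) / 3.0
    return A, B

def max_imag_part(A, B, n_dirs=721):
    """Largest |Im W| of K*A + L*B over directions (K, L) on the unit circle.

    A nonzero value means some plane-wave perturbation grows, i.e. the
    constant state is linearly unstable (the system is not hyperbolic there).
    """
    worst = 0.0
    for theta in np.linspace(0.0, np.pi, n_dirs):
        K, L = np.cos(theta), np.sin(theta)
        W = np.linalg.eigvals(K * A + L * B)
        worst = max(worst, np.abs(W.imag).max())
    return worst

if __name__ == "__main__":
    A, B = harmonic_limit_matrices(r1=0.0, r3=1.0, q=0.5)   # illustrative state
    print("max |Im W| over directions:", max_imag_part(A, B))
```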
The eigenvalues (corresponding to nontrivial solutions for \({\bf R}\)) are the roots of the characteristic polynomial \(p(K,L,W)=\det(KA^{(0)}+LB^{(0)}-W\,I_{4})\). In turn, the condition \(p(K,L,W)=0\) determines the linearized dispersion relation \(W=W(K,L)\). If \(W\in\mathbb{R}\), then the constant solution \({\bf r}^{(0)}\) is linearly stable, otherwise it is unstable. By virtue of the scaling invariance of the ZKWS (2.21), we can set \(r_{1}=0\) and \(r_{3}=1\) without loss of generality, in which case we simply have \(r_{2}=m\). Still, for general values of \(q\), \(K\) and \(L\), finding the linearized dispersion \(W(K,L)\) involves computing the roots a highly complicated quartic polynomial. On the other hand, a particularly simple scenario is obtained when \(q=0\) (vertical cnoidal waves) and \(K=0\) (purely transversal perturbations). In this case, we simply have \((W/L)^{2}=f(m)\), with \[f(m)=\frac{4}{135}\frac{\big{(}2(1-m(1-m))E_{m}-(1-m)(2-m)K_{m}\big{)}\big{(}(1 -m)K_{m}^{2}-2(2-m)E_{m}K_{m}+3E_{m}^{2}\big{)}}{E_{m}(K_{m}-E_{m})(E_{m}-(1-m)K _{m})}\,.\] It is straightforward to see that \(f(m)<0\) for all \(0<m<1\). Therefore, periodic traveling wave solutions of the ZK equation are linearly unstable with respect to transverse perturbations. More precisely, the above calculations yield the growth rate of the most unstable perturbation as \(g(m)=\sqrt{-f(m)}\). The behavior of \(g(m)\) as a function of \(m\) is shown in Fig. 2. Note that \(g(0)=0\) (indicating that the constant solutions are linearly stable), and \(g(m)\) increases monotonically in \(m\), limiting to the value \(g(1)=4/(3\sqrt{15})\simeq 0.344265\), which is the growth rate of unstable perturbations of the soliton solutions of the ZK equation. The above prediction that the periodic solutions are unstable is consistent with the results of [26], but now we have a fully explicit expression for the instability growth rate, similar to [2, 3, 4]. As we show next, these predictions are in excellent agreement with a numerical calculation of the growth rate (in section 3.2) as well as with a direct perturbation theory for the soliton solutions (in section 3.3). ### Stability analysis via linearization of the ZK equation and Floquet-Hill's method We can validate the predictions of Whitham theory by studying numerically the linear stability of the periodic solutions of the ZK equation (1.1) and comparing the findings with those obtained via Whitham theory in section 3.1. In this case, by analogy with section 3.1, we look for solutions of the ZK equation (1.1) in two spatial dimensions in the form \(u(x,y,t)=u_{o}(x,y,t)+\delta v(x,y,t)\), where \(0<\delta\ll 1\), and where \(u_{o}(x,y,t)\) is an exact periodic traveling wave solution, namely \[u_{o}(x,y,t)=(1+q^{2})((r_{1}-r_{2}+r_{3})+2(r_{2}-r_{1})\,{\rm cn }^{2}(2K_{m}Z,m))\] \[\qquad\qquad\qquad=(1+q^{2})\left[r_{1}-r_{2}+r_{3}+2(r_{2}-r_{1} )\,{\rm cn}^{2}\big{(}\sqrt{(r_{3}-r_{1})/6}(x+qy-Vt)/\epsilon\big{)}\right], \tag{3.6}\] \(Z=k(x+qy-Vt)/\epsilon\) is the fast variable defined in (2.5), \(V=\omega/k\) and \(k\) and \(\omega\) are as in (2.2c). Substituting this ansatz in (1.1), to leading order in \(\delta\) we obtain a linearized ZK equation: \[v_{t}+(u_{0}v)_{x}+\epsilon^{2}(\Delta v)_{x}=0\,. 
\tag{3.7}\] To obtain the correct balance of terms in \(\epsilon\) and \(\delta\), we use the following ansatz for \(v(x,y,t)\): \[v(x,y,t)=w(2K_{m}Z)e^{(i\zeta y+\lambda t)/\epsilon} \tag{3.8}\] which implies \[v_{x}=\frac{2K_{m}k}{\epsilon}\,v_{Z},\quad v_{y}=\frac{2K_{m}l }{\epsilon}\,v_{Z}+\frac{i\zeta}{\epsilon}\,v,\quad v_{t}=-\frac{2K_{m}\omega} {\epsilon}\,v_{Z}+\frac{\lambda}{\epsilon}\,v, \tag{3.9a}\] \[v_{yy}=\frac{4K_{m}^{2}l^{2}}{\epsilon^{2}}\,v_{ZZ}+\frac{4iK_{ m}\zeta l}{\epsilon^{2}}\,v_{Z}-\frac{\zeta^{2}}{\epsilon^{2}}\,v+O\left( \frac{1}{\epsilon}\right),\quad v_{xx}=\frac{4K_{m}^{2}k^{2}}{\epsilon^{2}}\, v_{ZZ}, \tag{3.9b}\] with \(l=qk\) as before. Then, to leading order in \(\epsilon\), (3.7) yields \[-2K_{m}\omega v_{Z}+\lambda v+2K_{m}k(u_{o}v)_{Z}+8K_{m}^{3}k^{3}(1+q^{2})\,v _{ZZZ}+8iK_{m}^{2}\zeta klv_{ZZ}-2K_{m}k\zeta^{2}v_{Z}=0\,, \tag{3.10}\] Figure 2: The growth rate of the most unstable transverse perturbation as a function of \(m\) (see text for details). which can be written as the linear eigenvalue problem \[{\cal L}_{o}v=\lambda v\,,\] with \[{\cal L}_{o}=2K_{m}v\partial_{Z}-2K_{m}k\partial_{Z}u_{0}-8K_{m}^{3}k^{3}(1+q^{2} )\partial_{Z}^{3}-\delta iK_{m}^{2}\zeta k^{2}\,q\partial_{Z}^{2}+k\zeta^{2} \partial_{Z}\,.\] Explicitly, using the definition of \(k\), (3.10) is \[-\sqrt{r_{3}-r_{1}}V\,v_{Z}+\tilde{\lambda}v+\sqrt{r_{3}-r_{1}}( u_{o}v)_{Z}+(r_{3}-r_{1})^{3/2}[(1+q^{2})/6]\,v_{ZZZ}\] \[+2iq\,[(r_{3}-r_{1})/\sqrt{6}]\,\zeta\,v_{ZZ}-\sqrt{r_{3}-r_{1}} \,\zeta^{2}\,v_{Z}=0\,,\] where \(\tilde{\lambda}=\sqrt{6}\,\lambda\). To compare the results of this perturbation expansion with the predictions of Whitham theory, we set \(r_{1}=0\) and \(r_{3}=1\), implying \(r_{2}=m\), and we take \(q=0\). Then (3.12) yields \[-V\,v_{Z}+\tilde{\lambda}v+(u_{o}v)_{Z}+(1/6)\,v_{ZZZ}-\zeta^{2}\,v_{Z}=0\,.\] Equivalently, the eigenvalue problem (3.14a) becomes \[{\cal L}v=\tilde{\lambda}\,v\,,\] where \[{\cal L}=V\,\partial_{Z}-\partial_{Z}\,u_{o}-(1/6)\partial_{Z}^{3}+\zeta^{2} \partial_{Z}\,.\] We compute the eigenvalues \(\tilde{\lambda}\) of \({\cal L}\) numerically for each \(0\leqslant m<1\) using Floquet-Hill's method [18]. The difference between the resulting values and those obtained via Whitham theory shown in the inset of Fig. 2, which demonstrates excellent agreement between the two approaches. (Note that the numerical values of the discrepancy between the two approaches depend somewhat on the value of \(\zeta\) chosen, since the latter affects the accuracy of the numerical scheme. The values in Fig. 2 were obtained with \(\zeta=0.0005\).) Note however that, unlike the present approach, Whitham theory yields an analytical expression for the instability growth rate. ### Analytical stability theory for soliton solutions As a final test for the predictions of Whitham theory, we now calculate the instability growth rate for the soliton solutions analytically. That is, we look for perturbed solution in the following form: \[u(x,y,t)=u_{c}(\xi)+U(\xi)e^{i\zeta y+\lambda t},\] where \(u_{c}(\xi)\) is the solitary wave solution [i.e., the limit \(m\to 1\) of (3.6)], and the second term in (3.15) describes purely transversal perturbations. For concreteness, we choose \(r_{1}=0\) and \(r_{2}=r_{3}=6c\) (with the specific parametrization chosen so as to simplify the calculations that follow, similarly to [41]), and \(q=0\). 
We then have \(2K_{m}Z=\sqrt{c}(x+qy-4c\,t)=\sqrt{c}\xi\), where \(\xi=x-4ct\), and, as per (2.3), \[u_{c}(\xi)=12c\,{\rm sech}^{2}(\sqrt{c}\xi)\,.\] We write the ZK equation (1.1) in the soliton comoving reference frame \((\xi,y,t)\), which reduces the problem to the analysis of ordinary differential equations. We then look for a formal asymptotic expansions in \(\zeta\) for \(\lambda\) and \(U\) near \(\zeta=0\), namely: \[\lambda=\lambda_{1}\zeta+\lambda_{2}\zeta^{2}+O(\zeta^{3}),\] \[U(\xi)=U_{0}(\xi)+\lambda_{1}\zeta U_{1}(\xi)+\lambda_{2}\zeta^{2}U_{1}(\xi)+ \zeta^{2}U_{2}(\xi)+O(\zeta^{3}).\] We should point out the similarities and the differences between the present approach and that of [41]. The perturbation expansion above is similar in spirit to that in [41]. However, [41] studied the stability of solitary waves with speed close to the critical speed of propagation, whereas in this case we are studying the stability near zero transverse wavenumbers. Substituting this ansatz into the ZK equation written in the comoving reference frame, at leading order we obviously simply recover an ordinary differential equation that yields the soliton solution: \[u_{c}^{\prime\prime}+\tfrac{1}{2}\,u_{c}^{2}-4c\,u_{c}=0\,,\] where primes denote differentiation with respect to \(\xi\). Then the eigenvalue problem for \(\lambda\) can be written as \[\partial_{\xi}(M+\zeta^{2})U=\lambda U, \tag{3.19}\] where \[M=-\partial_{\xi}^{2}+4c-12c\operatorname{sech}^{2}(\sqrt{c}\xi). \tag{3.20}\] We can write \[\lambda U=\zeta(\lambda_{1}U_{0})+\zeta^{2}(\lambda_{2}U_{0}+ \lambda_{1}^{2}U_{1})+O(\zeta^{3}) \tag{3.21}\] and \[\partial_{\xi}MU=\partial_{\xi}MU_{0}+\lambda_{1}\zeta\partial_{ \xi}MU_{1}+\lambda_{2}\zeta^{2}\partial_{\xi}MU_{1}+\zeta^{2}\partial_{\xi}MU _{2}+O(\zeta^{3}). \tag{3.22}\] At \(O(1)\) in \(\zeta\) of the eigenvalue problem (3.19) we have \[\partial_{\xi}(MU_{0})=0, \tag{3.23}\] which yields \(U_{0}=u_{c}^{\prime}(\xi)\). At \(O(\zeta)\) we have \[\partial_{\xi}MU_{1}=U_{0}\,, \tag{3.24}\] i.e., \[\left[-\partial_{\xi}^{2}+4c-12c\operatorname{sech}^{2}(\sqrt{c} \xi)\right]U_{1}=12c\operatorname{sech}^{2}(\sqrt{c}\xi)\,. \tag{3.25}\] It is straightforward to see that the above ODE admits the solution \[U_{1}(\xi)=\frac{3}{4}\operatorname{sech}^{2}(\sqrt{c}\xi)\left( -4+(5+4\sqrt{c}\xi)\tanh(\sqrt{c}\xi)\right). \tag{3.26}\] Then, and finally, at \(O(\zeta^{2})\) we have \[\partial_{\xi}MU_{2}+\lambda_{2}U_{0}+\partial_{\xi}U_{0}=\lambda _{2}U_{0}+\lambda_{1}^{2}U_{1}, \tag{3.27}\] or equivalently \[\partial_{\xi}MU_{2}=\lambda_{1}^{2}U_{1}-\partial_{\xi}^{2}u_{c}\,. \tag{3.28}\] The Fredholm solvability condition requires the right-hand side of (3.28) to be orthogonal to the kernel of the adjoint of the operator in the left-hand side in order for (3.28) to admit solutions. Since \(M\) is self-adjoint, the adjoint of \(\partial_{\xi}M\) is simply \(M\partial_{\xi}\). The kernel in question is thus spanned by \(u_{c}\). Therefore the resulting constraint is \[\lambda_{1}^{2}\int\limits_{\mathbb{R}}u_{1}u_{c}\,\mathrm{d}\xi =\int\limits_{\mathbb{R}}u_{c}u_{c}^{\prime\prime}\,\mathrm{d}\xi\,, \tag{3.29}\] and this condition determines \(\lambda_{1}\). The integrals in the above conditions are given by, respectively, \[\int\limits_{\mathbb{R}}u_{1}u_{c}\,\mathrm{d}\xi=-36\sqrt{c}, \quad\int\limits_{\mathbb{R}}u_{c}u_{c}^{\prime\prime}\,\mathrm{d}\xi=-\int (u_{c}^{\prime})^{2}d\xi=-\frac{768}{5}\,c^{5/2}. 
\tag{3.30}\] Their ratio then gives \(\lambda_{1}\) as \[\lambda_{1}=\frac{8}{\sqrt{15}}\,c, \tag{3.31}\] In order to compare this result with Whitham theory, note that in that case we took \(r_{3}=1\), implying \(c=1/6\), which then yields \(\lambda_{1}=4/(3\sqrt{15})\), which is in perfect agreement with the results of section 3.1. We should note that the above formalism can be generalized in a relatively straightforward way to compute the instability growth rate for all periodic solutions of the ZK equation. However, the corresponding calculations are somewhat more involved, and at the moment they have not yet led to a closed-form result similar to (3.31). For brevity, they are therefore deferred to a future publication. ## 4 Concluding remarks In summary, we have derived the ZK-Whitham system (ZKWS), i.e., the system of Whitham modulation equations for the periodic solutions of the Zakharov-Kuznetsov equation. The ZKWS shares some similarities with the KP-Whitham system, i.e., the system of modulation equations for the KP equation. Both are first-order systems of PDEs of hydrodynamic type, and both systems involve three time evolution equations for the Riemann-type variables \(r_{1},\ldots,r_{3}\) plus a fourth time evolution equation for the local slope parameter \(q=k_{2}/k_{1}\). At the same time, there are some important differences between the two modulation systems. Most importantly, the fact that the ZKWS comprises only four PDEs, whereas the KP-Whitham system contains an additional PDE (which does not contain time derivatives) for an auxiliary field. (As mentioned in [3], the presence this fifth PDE is essential for the system to correctly capture the dynamics of solutions of the KP equation.) We also studied the harmonic and soliton limits of the ZKWS, and we used the ZKWS to study the transverse stability of the periodic traveling wave solutions, showing that all such solutions are unstable to transverse perturbations. The instability of such solutions raises the interesting question of whether the ZK equation admits any exact solutions describing stable two-dimensional wave patterns. Another interesting question is whether the ZKWS can be used to study time evolution problems similarly to what was done in [42, 43, 44] for the KP equation. The situation for the ZK equation is different because its periodic solutions are unstable. Still, it is well known that Whitham modulation equations can be very useful even when the underlying solutions of the PDE are unstable and the system is not hyperbolic (e.g., as in the case of the modulational instability of constant solutions of the focusing one-dimensional NLS equation [12, 19, 28]). A natural question is therefore where special solutions of the ZKWS could be useful to capture certain features of the time evolution of solutions of the ZK equation. Obviously it would also be interesting to study the ZKWS as a (2+1)-dimensional hydrodynamic system on its own, independently of its connection with the ZK equation. On that note, we point out that, similarly to what happens with the KP equation [10], solutions of the ZKWS describe the modulation of solutions of the ZK equation only when the initial conditions for the ZKWS are consistent with the third conservation of waves equation, i.e., the constraint \(k_{y}=(qk)_{x}\). As with the KP equation [3], it is straightforward to show that if this condition is satisfied at time zero, the ZKWS ensures that it is preserved by the time evolution. 
A related question concerns the possible integrability of the ZKWS. Since the ZK equation is not integrable, one would not expect the ZKWS to be integrable. Nonetheless, it is possible that certain reductions such as the harmonic limit and the soliton limit, could nonetheless be integrable. All of these questions are left for future investigation, and it is hoped that the results of this work and the above remarks will stimulate further study on these topics. ## Acknowledgments We are indebted to Dmitry Pelinovsky for his help in calculating the soliton instability growth rate in section 3.3. We also thank Gigliola Staffilani for many interesting conversations. This work was partially supported by the National Science Foundation under grant number DMS-209487.
2303.06115
Gravitomagnetism and galaxy rotation curves: a cautionary tale
We investigate recent claims that gravitomagnetic effects in linearised general relativity can explain flat and rising rotation curves, such as those observed in galaxies, without the need for dark matter. If one models a galaxy as an axisymmetric, stationary, rotating, non-relativistic and pressureless 'dust' of stars in the gravitoelectromagnetic (GEM) formalism, we show that GEM effects on the circular velocity $v$ of a star are $O(10^{-6})$ smaller than the standard Newtonian (gravitoelectric) effects. Moreover, we find that gravitomagnetic effects are $O(10^{-6})$ too small to provide the vertical support necessary to maintain the dynamical equilibrium assumed. These issues are obscured if one constructs a single equation for $v$, as considered previously. We nevertheless solve this equation for a galaxy having a Miyamoto--Nagai density profile. We show that for the values of the mass, $M$, and semi-major and semi-minor axes, $a$ and $b$, typical for a dwarf galaxy, the rotation curve depends only very weakly on $M$. Moreover, for aspect ratios $a/b > 2$, the rotation curves are concave over their entire range, which does not match observations in any galaxy. Most importantly, we show that for the poloidal gravitomagnetic flux $\psi$ to provide the necessary vertical support, it must become singular at the origin. This originates from the unwitting, but forbidden, inclusion of free-space solutions of the Poisson-like equation that determines $\psi$, hence ruling out the methodology as a means of explaining flat galaxy rotation curves. We further show that recent deliberate attempts to leverage such free-space solutions against the rotation curve problem yield no deterministic modification outside the thin disk approximation, and that, in any case, the homogeneous contributions to $\psi$ are ruled out by the boundary value problem posed by any physical axisymmetric galaxy.
A. N. Lasenby, M. P. Hobson, W. E. V. Barker
2023-03-10T18:04:36Z
http://arxiv.org/abs/2303.06115v2
# Gravitomagnetism and galaxy rotation curves: a cautionary tale ###### Abstract We investigate recent claims that gravitomagnetic effects in linearised general relativity can explain flat and rising rotation curves, such as those observed in galaxies, without the need for dark matter. If one models a galaxy as an axisymmetric, stationary, rotating, non-relativistic and pressureless 'dust' of stars in the gravitoelectromagnetic (GEM) formalism, we show that gravitomagnetic effects on the circular velocity \(v\) of a star are \(O(10^{-6})\) smaller than the standard Newtonian (gravitoelectric) effects and thus any modification of galaxy rotation curves must be negligible, as might be expected. Moreover, we find that gravitomagnetic effects are \(O(10^{-6})\) too small to provide the vertical support necessary to maintain the dynamical equilibrium assumed in such a model. These issues are obscured if one constructs a single equation for \(v\), as considered previously. We nevertheless solve this equation for a galaxy having a Miyamoto-Nagai density profile since this allows for both an exact numerical integration and an accurate analytic approximation. We show that for the values of the mass, \(M\), and semi-major and semi-minor axes, \(a\) and \(b\), typical for a dwarf galaxy, the rotation curve depends only very weakly on \(M\), and becomes independent of it for larger \(M\) values. Moreover, for aspect ratios \(a/b>2\), the rotation curves are concave over their entire range, which does not match observations in any galaxy. Most importantly, we show that for the poloidal gravitomagnetic flux \(\psi\) to provide the necessary vertical support, it must become singular at the origin and have extremely large values near to it. This originates from the unwitting, but forbidden, inclusion of free-space solutions of the Poisson-like equation that determines \(\psi\) and also clearly contradicts the linearised treatment implicit in the GEM formalism, hence ruling out the methodology in the form used as a means of explaining flat galaxy rotation curves. We further show that recent deliberate attempts to leverage such free-space solutions against the rotation curve problem yield no deterministic modification outside the thin disk approximation, and that, in any case, the homogeneous contributions to \(\psi\) are ruled out by the boundary value problem posed by any physical axisymmetric galaxy. pacs: 04.50.Kd, 04.60.-m, 04.20.Fy, 98.80.-k ## I Introduction It is widely accepted that the modelling of galaxy rotation curves in general relativity (GR) requires the inclusion of a dark matter halo in order to reproduce observations [1; 2; 3; 4]. In particular, the modelling of the approximately flat rotation curves observed in the outskirts of large spiral galaxies and, to a lesser extent, the rising rotation curves observed in smaller dwarf galaxies [5; 6; 7; 8; 9] is considered to pose a significant challenge to GR without such a component. The absence of any direct experimental evidence for dark matter [10] has thus led to the consideration of various modified gravity theories to attempt to explain the astrophysical data. There are a number of claims in the literature, however, that such modifications are unnecessary since hitherto neglected effects in GR itself are capable of explaining rotation curves without dark matter. 
These include gravitoelectric flux confinement arising from graviton self-interaction [11; 12; 13; 14; 15; 16; 17; 18], nonlinear GR effects arising even in the weak-gravity regime [19] and, most recently, gravitomagnetic effects in linearised GR [20]. Certain elements of [20] are further developed in [21], where (although the dark matter paradigm is not directly challenged) significant gravitomagnetic corrections to the rotation curve of a toy-model galactic baryon profile are suggested. An immediate question regarding such claims is how such significant behaviours can have been consistently missed in the long history of numerical relativity [22; 23], or in the well-developed post-Newtonian formalism [24; 25]. Perhaps unsurprisingly therefore, the claims in [11; 12; 13; 14; 15; 16; 17; 18] and [19] have been subsequently shown to be non-viable in [26] and [27], respectively. The purpose of this paper is to perform the same function for the claims in [20; 21], by showing that gravitomagnetism in the form used therein cannot be a significant factor in explaining flat or rising galaxy rotation curves without dark matter. Our findings concur with the recent results reported in [28], where the gravitoelectromagnetic (GEM) formulation of linearised GR was used to predict galaxy rotation curves that at all radii differ from those of Newtonian theory at the order of only \(v^{2}/c^{2}\approx 10^{-6}\), as one might expect. The main focus of the present paper, however, is to clarify _why_ the approach adopted in [20] leads to such different, unexpected and incorrect results, which is not addressed in [28]. We will observe in particular the accidental involvement in [20] of homogeneous solutions to the GEM field equations: this leads us naturally to [21], where such solutions are actively employed. We will show however that such solutions do not yield any deterministic phenomenology according to the suggested approximation in [21], and that they are moreover absolutely ruled out by the absence of suitable GEM boundary conditions in the galactic environment. The remainder of this paper is arranged as follows. In Section II, we briefly outline linearised GR, focussing on stationary non-relativistic matter sources, and discuss its expression in the GEM formalism in Section III. We then summarise in Section IV the application of the GEM formalism to the modelling of galaxy rotation curves, as proposed in [20]. We lay out the problems with this modelling approach in Section V. Our analysis reveals the unwitting use of homogeneous solutions to the GEM field equations. In Section VI we finally address the deliberate use in [21] of such solutions. Conclusions follow in Section VII.

## II Linearised general relativity

In the weak gravitational field limit appropriate for modelling galaxy rotation curves, there exist quasi-Minkowskian coordinate systems \(x^{\mu}=(ct,x^{i})\) in which the spacetime metric takes the form \(g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}\), where \(|h_{\mu\nu}|\ll 1\) and the first and higher partial derivatives of \(h_{\mu\nu}\) are also small\({}^{1}\). One can conveniently reinterpret \(h_{\mu\nu}\) simply as a special-relativistic symmetric rank-2 tensor field that represents the weak gravitational field on a Minkowski background spacetime and possesses the gauge freedom \(h_{\mu\nu}\to h_{\mu\nu}-\partial_{\mu}s_{\nu}-\partial_{\nu}s_{\mu}\).
Imposing the Lorenz gauge condition \(\partial_{\rho}\bar{h}^{\mu\rho}=0\) on the trace-reverse \(\bar{h}_{\mu\nu}\equiv h_{\mu\nu}-\frac{1}{2}\eta_{\mu\nu}h\), where \(h=\eta_{\mu\nu}h^{\mu\nu}\), the linearised GR field equations reduce to the simple form \[\square^{2}\bar{h}^{\mu\nu}=-2\kappa T^{\mu\nu}, \tag{1}\] where \(\square^{2}\equiv\eta^{\mu\nu}\partial_{\mu}\partial_{\nu}\) is the d'Alembertian operator, \(\kappa=8\pi G/c^{4}\) is Einstein's gravitational constant and \(T^{\mu\nu}\) is the matter energy-momentum tensor. Footnote 1: We adopt the following sign conventions: \((+,-,-,-)\) metric signature, \(R^{\rho}{}_{\sigma\mu\nu}=2\big(\partial_{[\mu}\Gamma^{\rho}{}_{\nu]\sigma}+\Gamma^{\rho}{}_{[\mu|\lambda|}\Gamma^{\lambda}{}_{\nu]\sigma}\big)\), where the metric (Christoffel) connection is \(\Gamma^{\lambda}{}_{\mu\nu}=\frac{1}{2}g^{\lambda\sigma}(\partial_{\mu}g_{\sigma\nu}+\partial_{\nu}g_{\sigma\mu}-\partial_{\sigma}g_{\mu\nu})\), and \(R_{\sigma\nu}=R^{\mu}{}_{\sigma\mu\nu}\). For modelling galaxy rotation curves, it is sufficient to a very good approximation to limit one's considerations to stationary, non-relativistic, perfect fluid matter sources. In this case, \(\partial_{0}T^{\mu\nu}=0\) and the coordinate 3-speed \(u\) of any constituent particle is small enough compared with \(c\) that one may neglect terms of order \(u^{2}/c^{2}\) and higher in \(T^{\mu\nu}\); in particular one may take \(\gamma_{u}=(1-u^{2}/c^{2})^{-1/2}\approx 1\). Moreover, the fluid pressure \(p\) is everywhere much smaller than the energy density and may thus be neglected as a source for the gravitational field. Finally, we note that \(|T^{ij}|/|T^{00}|\sim u^{2}/c^{2}\) and so one should take \(T^{ij}\approx 0\) to the order of our approximation. Thus, for a stationary, non-relativistic source, one approximates its energy-momentum tensor as \[T^{00}\approx\rho c^{2},\qquad T^{0i}\approx c\rho u^{i},\qquad T^{ij}\approx 0, \tag{2}\] where \(\rho(\mathbf{x})\) is the proper-density distribution of the source and \(\mathbf{x}\) denotes a spatial 3-vector. As an immediate consequence, the particular integral of (1) yields \(\bar{h}^{ij}\approx 0\). Indeed, this is consistent with the Lorenz gauge condition, which implies that \(\partial_{j}\bar{h}^{ij}=-\partial_{0}\bar{h}^{i0}\), where the right-hand side vanishes for stationary systems. Thus, only the \(\bar{h}^{00}\) and \(\bar{h}^{0i}=\bar{h}^{i0}\) components of the gravitational field tensor are non-zero in this approximation. In linearised GR, there is an inconsistency between the field equations (1) and the equations of motion for matter in a gravitational field. From (1), one quickly finds that \(\partial_{\mu}T^{\mu\nu}=0\), which should be contrasted with the requirement from the full GR field equations that the covariant divergence should vanish, \(\nabla_{\mu}T^{\mu\nu}=0\). The latter requirement leads directly to the geodesic equation of motion for the worldline \(x^{\mu}(\tau)\) of a test particle, namely \[\ddot{x}^{\mu}+\Gamma^{\mu}{}_{\nu\sigma}\dot{x}^{\nu}\dot{x}^{\sigma}=0, \tag{3}\] where the dots denote differentiation with respect to the proper time \(\tau\), whereas the former requirement leads to the equation of motion \(\ddot{x}^{\mu}=0\). This means that the gravitational field has _no effect_ on the motion of the particle and so clearly contradicts the geodesic postulate.
Despite this inconsistency, one may show that the effect of weak gravitational fields on test particles may still be computed by inserting the linearised connection coefficients into the geodesic equations (3). ## III Gravitoelectromagnetism Gravitoelectromagnetism (GEM) provides a useful and notionally-familiar formalism for linearised GR by drawing a close analogy with classical electromagnetism (EM). Indeed, GEM is ideally suited to modelling galaxy rotation curves, since the assumption of a stationary, non-relativistic matter source leads to GEM field equations and a GEM 'Lorentz' force law (derived below) that are fully consistent and have forms analogous to their counterparts in EM; this is not possible for more general time-dependent scenarios. The GEM formalism for linear GR with a stationary, non-relativistic source is based on the simple ansatz of relabelling2 the four independent non-zero components of \(\bar{h}^{\mu\nu}\) as \(\bar{h}^{00}\equiv 4\Phi/c^{2}\) and \(\bar{h}^{0t}\equiv A^{i}/c\), where we have defined the gravitational scalar potential \(\Phi\) and spatial gravitomagnetic vector potential \(A^{i}\). On lowering indices, the corresponding components of \(h_{\mu\nu}\) are \(h_{00}=h_{11}=h_{22}=h_{33}=2\Phi/c^{2}\) and \(h_{0i}=A_{i}/c\). It should be remembered that raising or lowering a spatial (Roman) index introduces a minus sign with our adopted metric signature. Thus the numerical value of \(A_{i}\) is minus that of \(A^{i}\), the latter being the \(i\)th component of the spatial vector \(\mathbf{A}\). It is also worth noting that both \(\Phi/c^{2}\) and \(A_{i}/c\) are dimensionless, thereby yielding dimensionless components \(h_{\mu\nu}\), which is consistent with our choice of coordinates \(x^{\mu}=(ct,x^{t})\) having dimensions of length. Footnote 2: Conventions in the literature vary up to a multiplicative constant for the definition of the gravitomagnetic vector potential \(A^{i}\). These factors variously modify the analogues of the EM field equations and the Lorentz force law, with no scaling choice allowing all the GEM and EM equations to be perfectly analogous. Here, we follow the convention used in [29]. With the above identifications, the linearised field equations (1) with energy-momentum tensor (2) may be written in the scalar/vector form \[\nabla^{2}\Phi=4\pi G\rho,\qquad\nabla^{2}\mathbf{A}=\frac{16\pi G}{c^{2}}\mathbf{J}, \tag{4}\] where we have defined the momentum density (or matter current density) \(\mathbf{j}\equiv\rho\mathbf{u}\), and the Lorenz gauge condition \(\partial_{\rho}\bar{h}^{\mu\rho}=0\) itself becomes \(\mathbf{\nabla}\cdot\mathbf{A}=0\). Clearly, the first equation in (4) recovers the Poisson equation for the gravitational potential, familiar from Newtonian gravity, whereas the second equation determines the gravitomagnetic vector potential that describes the 'extra' (weak) gravitational field predicted in linearised GR, which is produced by the motion of the fluid elements in a stationary, non-relativistic source. Indeed, the general solutions to the equations (4) are given immediately by \[\Phi(\mathbf{x}) =-G\int\frac{\rho(\mathbf{x}^{\prime})}{|\mathbf{x}-\mathbf{x}^{\prime}|} \,\mathrm{d}^{3}\mathbf{x}^{\prime}, \tag{5a}\] \[\mathbf{A}(\mathbf{x}) =-\frac{4G}{c^{2}}\int\frac{\mathbf{J}(\mathbf{x}^{\prime})}{|\mathbf{x}-\bm {x}^{\prime}|}\,\mathrm{d}^{3}\mathbf{x}^{\prime}. 
\tag{5b}\] One may take the analogy between linearised GR and EM further by defining the gravitoelectric and gravitomagnetic fields \(\mathbf{E}=-\mathbf{\nabla}\Phi\) and \(\mathbf{B}=\mathbf{\nabla}\times\mathbf{A}\), which are easily found to satisfy the gravitational Maxwell equations \[\mathbf{\nabla}\cdot\mathbf{E} =-4\pi G\rho,\qquad\mathbf{\nabla}\cdot\mathbf{B} =0, \tag{6}\] \[\mathbf{\nabla}\times\mathbf{E} =\mathbf{0},\qquad\qquad\mathbf{\nabla}\times\mathbf{B} =-\frac{16\pi G}{c^{2}}\mathbf{j}.\] The gravitoelectric field \(\mathbf{E}\) describes the standard (Newtonian) gravitational field produced by a static matter distribution, whereas the gravitomagnetic field \(\mathbf{B}\) is the 'extra' gravitational field produced by moving fluid elements in the stationary, non-relativistic source. The equation of motion for a test particle in the presence of the GEM fields is merely the geodesic equation (3) for the metric \(g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}\), from which one may determine the trajectories of either massive particles, irrespective of their speed, or massless particles, by considering timelike or null geodesics, respectively. We will assume here, however, that the test particle is massive and slowly-moving, i.e. its coordinate 3-speed \(\upsilon\) is sufficiently small that we may neglect terms in \(\upsilon^{2}/c^{2}\) and higher. Hence we may take \(\gamma_{\nu}=(1-\upsilon^{2}/c^{2})^{-1/2}\approx 1\), so that the 4-velocity of the particle may be written \(\upsilon^{\mu}=\gamma_{\nu}(c,\mathbf{v})\approx(c,\mathbf{v})\). This immediately implies that \(\tilde{x}^{0}=0\) and, moreover, that \(\mathrm{d}t/\mathrm{d}\tau=1\), so one may consider only the spatial components of (3) and replace dots with derivatives with respect to \(t\). Expanding the summation in (3) into terms containing, respectively, two time components, one time and one spatial component, and two spatial components, neglecting the purely spatial terms since their ratio with respect to the purely temporal term is of order \(\upsilon^{2}/c^{2}\), expanding the connection coefficients to first-order in \(h_{\mu\nu}\) and remembering that for a stationary field \(\partial_{0}h_{\mu\nu}=0\) and that one inherits a minus sign on raising or lower a spatial (Roman) index, one finally obtains the gravitational Lorentz force law \[\frac{\mathrm{d}\mathbf{v}}{\mathrm{d}t}=-\mathbf{\nabla}\Phi+\mathbf{v}\times(\mathbf{\nabla }\times\mathbf{A})=\mathbf{E}+\mathbf{v}\times\mathbf{B}. \tag{7}\] The first term on the right-hand side gives the standard Newtonian result for the motion of a test particle in the field of a static, non-relativistic source, whereas the second term gives the 'extra' force felt by a moving test particle in the presence of the 'extra' field produced by moving fluid elements in the stationary, non-relativistic source. ## IV Gravitoelectromagnetic modelling of galaxy rotation curves The GEM formalism is applied to the modelling of galaxy rotation curves in [20], where the galactic density and velocity distribution is assumed to act as a stationary, non-relativistic matter source. Thus, somewhat unusually, the fluid pressure is assumed to vanish and the galaxy is instead modelled as consisting of a 'dust' of stars. 
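Before developing this model in detail, it is worth anticipating the relative size of the two terms in (7). A rough numerical sketch (an illustrative addition: the scaling \(|\mathbf{B}|\sim 4(u/c^{2})|\mathbf{E}|\) is read off from the extra factor of \(4/c^{2}\) and single power of the source velocity in (5b) relative to (5a), and the speeds used are representative values, not fitted quantities):

```python
# Order-of-magnitude comparison of the two terms in the GEM force law (7).
c = 2.998e8   # speed of light [m/s]
u = 2.0e5     # assumed typical speed of the source stars [m/s] (~200 km/s)
v = 2.0e5     # assumed typical speed of the test star   [m/s] (~200 km/s)

# |v x B| / |E|  ~  4 u v / c^2, using |B| ~ 4 (u/c^2) |E| suggested by (5a)-(5b)
ratio = 4.0 * u * v / c**2
print(f"gravitomagnetic / Newtonian force ratio ~ {ratio:.1e}")   # ~ 2e-6
```

This is the \(\mathcal{O}(10^{-6})\) suppression quoted in the abstract, and it reappears in the order-of-magnitude analysis of Section V.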
This approach therefore uses the field equations (4) and the equation of motion (7), where the velocity distribution \(\mathbf{u}\) of the galaxy in the former is identified with the velocity \(\mathbf{v}\) of test particles in the latter, thereby leading to a potentially self-consistent pressureless model. The central result in [20] can be derived straightforwardly as follows. First, one adopts cylindrical polar coordinates \((R,\phi,z)\) and assumes azimuthal symmetry, such that \(\rho=\rho(R,z)\) and \(\mathbf{v}=\upsilon(R,z)\hat{\mathbf{\phi}}\), which from (5) implies that \(\Phi=\Phi(R,z)\) and \(\mathbf{A}=A(R,z)\hat{\mathbf{\phi}}\). In this case, \[\mathbf{\nabla}\times\mathbf{A} =\frac{1}{R}\left(-\frac{\partial\psi}{\partial z}\hat{\mathbf{R}}+ \frac{\partial\psi}{\partial R}\hat{\mathbf{z}}\right), \tag{8a}\] \[\mathbf{v}\times(\mathbf{\nabla}\times\mathbf{A}) =\frac{\upsilon}{R}\left(\frac{\partial\psi}{\partial R}\hat{\mathbf{R} }+\frac{\partial\psi}{\partial z}\hat{\mathbf{z}}\right), \tag{8b}\] where we have defined the poloidal gravitomagnetic flux \(\psi\equiv\mathbf{R}A\). Also, in light of the Lorenz (or Coulomb) gauge condition \(\mathbf{\nabla}\cdot\mathbf{A}=0\) (which is easily confirmed by direct calculation), one has \[\nabla^{2}\mathbf{A}=-\mathbf{\nabla}\times(\mathbf{\nabla}\times\mathbf{A})=\left[\frac{ \partial}{\partial R}\left(\frac{1}{R}\frac{\partial\psi}{\partial R}\right)+ \frac{1}{R}\frac{\partial^{2}\psi}{\partial z^{2}}\right]\hat{\mathbf{\phi}}. \tag{9}\] The field equations (4) and the radial and vertical components of the fluid equation of motion (7) may therefore be written as \[\frac{1}{R}\frac{\partial}{\partial R}\left(R\frac{\partial\Phi}{ \partial R}\right)+\frac{\partial^{2}\Phi}{\partial z^{2}} =4\pi G\rho, \tag{10a}\] \[\frac{\partial}{\partial R}\left(\frac{1}{R}\frac{\partial\psi}{ \partial R}\right)+\frac{1}{R}\frac{\partial^{2}\psi}{\partial z^{2}} =\frac{16\pi G}{c^{2}}\rho v,\] (10b) \[\frac{\partial\Phi}{\partial R}-\frac{\upsilon}{R}\frac{\partial \psi}{\partial R} =\frac{\upsilon^{2}}{R},\] (10c) \[-\frac{\partial\Phi}{\partial z}+\frac{\upsilon}{R}\frac{\partial \psi}{\partial z} =0. \tag{10d}\] Using (10c) and (10d) to eliminate \(\partial\psi/\partial R\) and \(\partial\psi/\partial z\) from (10b), then using (10a) to eliminate the resulting term containing \(\partial^{2}\Phi/\partial z^{2}\), the field equation (10b) yields \[\left(\upsilon+R\frac{\partial\upsilon}{\partial R}\right)\frac{ \partial\Phi}{\partial R}+R\frac{\partial\upsilon}{\partial z}\frac{\partial \Phi}{\partial z} =\] \[\frac{\upsilon}{R}\left[\upsilon\left(\upsilon-R\frac{\partial \upsilon}{\partial R}\right)+4\pi G\rho R^{2}\left(1-\frac{4\upsilon^{2}}{c^{2}} \right)\right]. \tag{11}\] The non-linear first-order partial differential equation (11) for the galactic velocity field \(v(R,z)\) is the key expression in [20]3, and depends only on the galactic density distribution \(\rho\) and on the derivatives \(\partial\Phi/\partial R\) and \(\partial\Phi/\partial z\) of the Newtonian gravitational potential, which are themselves also determined by specifying \(\rho\). Indeed, \(\Phi\) is given by (5a), which in cylindrical polar coordinates with azimuthal symmetry reads4 Footnote 3: Equation (11) does, in fact, differ slightly from equation (4.1) in [20], since the latter lacks the factor of 4 multiplying \(v^{2}/c^{2}\) in the final term on the RHS. 
We believe the expression in [20] to be in error as a consequence of the choice of scaling used in the definition therein of the gravitomagnetic vector potential \(\mathbf{A}\). \[\Phi(R,z) =-G\!\int_{0}^{\infty}\!\!\mathrm{d}R^{\prime}\!\int_{0}^{2\pi}\! \!\mathrm{d}\mathbf{\phi}^{\prime}\!\int_{-\infty}^{\infty}\!\!\mathrm{d}z^{\prime }\,\frac{R^{\prime}\rho\left(R^{\prime},z^{\prime}\right)}{|\mathbf{x}-\mathbf{x}^{ \prime}|} \tag{12}\] \[=-2G\!\int_{0}^{\infty}\!\!\mathrm{d}R^{\prime}\!\int_{-\infty}^ {\infty}\!\!\mathrm{d}z^{\prime}\,\rho\left(R^{\prime},z^{\prime}\right)\,R^ {\prime}\sqrt{\frac{m}{RR^{\prime}}}K(m),\] where \(K(m)\) is a complete elliptic integral function of the first kind and \(m=4RR^{\prime}/[(R+R^{\prime})^{2}+(z-z^{\prime})^{2}]\). Moreover, the derivatives \(\partial\Phi/\partial R\) and \(\partial\Phi/\partial z\) may also be expressed analytically as \[\frac{\partial\Phi}{\partial R} =G\int_{0}^{\infty}\mathrm{d}R^{\prime}\int_{-\infty}^{\infty} \mathrm{d}z^{\prime}\,\rho\left(R^{\prime},z^{\prime}\right)\,\frac{R^{\prime }}{R}\sqrt{\frac{m}{RR^{\prime}}}\left[K(m)+\tfrac{1}{2}\left(\,\frac{R}{R^{ \prime}}-\frac{2-m}{m}\right)\frac{mE(m)}{1-m}\right], \tag{13a}\] \[\frac{\partial\Phi}{\partial z} =\frac{G}{2}\int_{0}^{\infty}\mathrm{d}R^{\prime}\int_{-\infty}^ {\infty}\mathrm{d}z^{\prime}\,\rho\left(R^{\prime},z^{\prime}\right)\left( \frac{z-z^{\prime}}{R}\right)\sqrt{\frac{m}{RR^{\prime}}}\frac{mE(m)}{1-m}, \tag{13b}\] where \(E(m)\) denotes a complete elliptic integral of the second kind. Before considering further the application of equation (11) to modelling galaxy rotation curves, we note that, if one neglects the mass currents on the RHS of (10b) (by letting \(c\to\infty\)), then one may consistently set \(\psi=0\) (although other solutions to the resulting homogeneous equation (10b) do exist). The radial and vertical components of the fluid equation of motion (10c)-(10d) then immediately yield \(\partial\Phi/\partial z=0\) and thus \(v^{2}(R)=R\,\partial\Phi/\partial R\), where the latter is the usual Newtonian equation assumed in the modelling of galaxy rotation curves. In applying the full equation (11) to the modelling of galaxy rotation curves, it is noted in [20] that observations of the rotation velocity are typically made along the galactic equatorial plane, so one may take \(z=0\). Assuming further a galactic density distribution that is symmetric about this mid-plane, (11) then reduces to \[\left(\beta+R\frac{\partial\beta}{\partial R}\right)\,\frac{ \partial\Phi(R,0)}{\partial R}=\\ \frac{c^{2}\beta}{R}\left[\beta\left(\beta-R\frac{\partial\beta}{ \partial R}\right)+\frac{4\pi G}{c^{2}}\rho(R,0)R^{2}(1-4\beta^{2})\right], \tag{14}\] where we have defined \(\beta(R)\equiv v(R,0)/c\). Equation (14) is applied in [20] to two different models of the galactic density distribution. The first model considered uses the density and gravitational potential given by the analytical Miyamoto-Nagai (MN) solution to Poisson's equation [30]. In this approach, one begins by assuming the fairly simple potential form \[\Phi(R,z)=-\frac{GM}{\sqrt{R^{2}+(a+\sqrt{b^{2}+z^{2}})^{2}}}, \tag{15}\] where \(M\) is the total galactic mass and \(a\) and \(b\) are free positive parameters. 
The density distribution implied by Poisson's equation is then given by \[\rho(R,z)=\\ \frac{Mb^{2}}{4\pi}\times\frac{aR^{2}+(a+3\sqrt{b^{2}+z^{2}})(a+ \sqrt{b^{2}+z^{2}})^{2}}{\left[R^{2}+(a+\sqrt{b^{2}+z^{2}})^{2}\right]^{5/2}(b ^{2}+z^{2})^{3/2}}, \tag{16}\] which extends to infinity in both \(R\) and \(z\). The constant density contours have the form of spheroids of revolution with semi-axes proportional to \(a\) and \(b\). It is straightforward to verify that, when integrated over all space, this density distribution yields the total mass \(M\). In [20], this model is fitted to the observed rising rotation curve of NGC 1560 out to \(8.3\,\mathrm{kpc}\) by varying the parameters \(M\), \(a\) and \(b\). The derived parameter values are \(M=7.3\times 10^{10}\,\mathrm{M}_{\odot}\), \(a=0.373\,\mathrm{kpc}\) and \(b=0.300\,\mathrm{kpc}\), which yield a reasonable fit to the rotation curve, but does not reproduce the luminosity profile of NGC 1560. This occurs because the infinite spheroidal solution does not describe the equilibrium of a finite disk-like object, and thus fails to reproduce its mass distribution and total mass. Consequently, in the second model, the galaxy is instead considered as an axisymmetric thin disk of finite radius, which is again symmetric about its mid-plane \(z=0\). The density distribution is assumed to have the functional form \[\rho(R,z)=\rho(R,0)\exp\left(-\frac{z^{2}}{2\Delta^{2}(R)}\right), \tag{17}\] where \(\Delta(R)\) is a characteristic disk width with some assumed radial dependence. For small values of \(\Delta(R)\), one can estimate the integral over \(z^{\prime}\) in (13a) analytically using the Laplace approximation, which boils down to setting \(z^{\prime}=0\) in the integrand and multiplying by the volume \(\sqrt{2\pi}\Delta(R)\) of the Gaussian factor in (17); this yields \[\frac{\partial\Phi(R,0)}{\partial R}\approx 2\sqrt{2\pi}G\int_{0}^{\infty}\, \frac{R^{\prime}\rho(R^{\prime},0)\Delta(R^{\prime})}{R(R+R^{\prime})}\,\left[ K\left(\frac{4RR^{\prime}}{(R+R^{\prime})^{2}}\right)+\frac{R+R^{\prime}}{R-R^{ \prime}}\,E\left(\frac{4RR^{\prime}}{(R+R^{\prime})^{2}}\right)\right]\, \mathrm{d}R^{\prime}. \tag{18}\] To evaluate the above integral (numerically), the density distribution \(\rho(R,0)\) is taken from the luminosity profile of the galaxy under consideration, which is therefore reproduced _automatically_, but one still requires a model for the radially-dependent characteristic vertical width \(\Delta(R)\) of the galaxy. In [20], this is taken to coincide with a given constant density contour of the analytical MN solution (16). 
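As a quick cross-check, the MN pair (15)-(16) can be verified numerically to satisfy the Poisson equation (10a). A minimal sketch (an illustrative addition, assuming NumPy, simple central differences, and units in which \(G=M=1\); the sample points and step size are arbitrary):

```python
import numpy as np

def phi_mn(R, z, M=1.0, a=0.373, b=0.300, G=1.0):
    """Miyamoto-Nagai potential, equation (15)."""
    return -G * M / np.sqrt(R**2 + (a + np.sqrt(b**2 + z**2))**2)

def rho_mn(R, z, M=1.0, a=0.373, b=0.300):
    """Miyamoto-Nagai density, equation (16)."""
    zb = np.sqrt(b**2 + z**2)
    num = a * R**2 + (a + 3.0*zb) * (a + zb)**2
    den = (R**2 + (a + zb)**2)**2.5 * zb**3
    return M * b**2 / (4.0*np.pi) * num / den

def laplacian_phi(R, z, h=1e-4):
    """Cylindrical Laplacian (1/R) d/dR (R dPhi/dR) + d^2 Phi/dz^2 by central differences."""
    dR  = (phi_mn(R+h, z) - phi_mn(R-h, z)) / (2*h)
    d2R = (phi_mn(R+h, z) - 2*phi_mn(R, z) + phi_mn(R-h, z)) / h**2
    d2z = (phi_mn(R, z+h) - 2*phi_mn(R, z) + phi_mn(R, z-h)) / h**2
    return d2R + dR / R + d2z

if __name__ == "__main__":
    for R, z in [(0.5, 0.1), (1.0, 0.3), (3.0, 1.0)]:
        print(f"R={R}, z={z}:  Lap(Phi) = {laplacian_phi(R, z):.8f}",
              f"   4*pi*G*rho = {4*np.pi*rho_mn(R, z):.8f}")
```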
In particular, one defines \(\Delta(R)\) such that the ratio \(\rho_{MN}(R,\Delta(R))/\rho_{MN}(0,0)\) is held at a fixed constant value.

[...] in our units.
For a broad selection of galaxy types, one may therefore take typical densities in our units to lie in the range \(\mathcal{O}(10^{-8})\) to \(\mathcal{O}(10^{-6})\); we will take the upper of these as indicative, since this maximises the magnitude of gravitomagnetic effects, although in reality they will usually be somewhat smaller. From the Poisson equation (10a), or its more succinct form in (4), one sees that \(|\Phi|\) is also \(\mathcal{O}(10^{-6})\), and hence the velocity \(v\sim\mathcal{O}(10^{-3})\) (where to convert velocities in SI units to our units, one needs merely to divide by \(c\)). Then, from equation (10b), which one can also write more usefully as \(\nabla^{2}\psi=16\pi R\rho v-(2/R)\,\partial\psi/\partial R\), one sees that \(|\psi|\sim\mathcal{O}(10^{-9})\), modulo any multiplicative effects from \(R\), which are limited to a factor of \(\sim 10\) for a typical galaxy. Now considering either the radial equation of motion (10c) or its vertical counterpart (10d), one sees that any effects arising from \(\psi\), which always appears multiplied by \(v\), must be \(\mathcal{O}(10^{-6})\) _smaller_ than those arising from \(\Phi\). Consequently, any gravitomagnetic effects will have a negligible effect on the circular velocity of a test particle, which will be very well approximated simply by the strictly Newtonian expression \(\sqrt{R\,\partial\Phi/\partial R}\). This result is at least allowable (notwithstanding the usual clash, if no dark matter is assumed, with the flat or rising rotation curves observed in many galaxies), if disappointing, but one sees from the vertical equation of motion (10d) that there is a much more serious problem. In this case, one requires the \(\mathcal{O}(10^{-6})\) term in \(\Phi\) to be balanced by the \(\mathcal{O}(10^{-12})\) term in \(\psi\); this is simply impossible and indicates that the set of equations (10) has no physically meaningful solution. As mentioned above, this problem arises because one has insisted that all the vertical support force arises from gravitomagnetic effects, which is impossible for ordinary matter.

### Rotation curves for the MN density profile

In eliminating various quantities between the equations (10) to arrive at the 'master' equation (11) in [20], one can no longer identify the issues discussed above. Indeed, one can go on to find solutions of (11), although these cannot be physically meaningful, as our analysis above shows. We now illustrate this directly by considering a galaxy having the MN density profile (16) and gravitational potential (15), which was the first model used in [20] to fit the observed rotation curve data of NGC 1560 (although it fails to reproduce its luminosity profile). As discussed above, the resulting derived parameters are \(M=7.3\times 10^{10}\) M\({}_{\odot}\), \(a=0.373\,\)kpc and \(b=0.300\,\)kpc, so the fitted MN density profile is moderately oblate. The resulting gravitational potential and density contours are shown in Figure 1. Inserting the forms for the MN potential (15) and density (16) into the 'master' equation (11) yields a very complicated expression, but one can make progress analytically if one restricts attention to the equatorial plane \(z=0\), as in (14). This is permissible since, although (11) contains the \(z\)-derivative of the potential \(\Phi\), one can see that for the MN form of the potential this vanishes on the equatorial plane.
The resulting equation then reads \(A+B=0\), with \[A =-R^{2}M\left[-bR\left(R^{2}+a^{2}+2ba+b^{2}\right)\frac{dv}{dR}+\left(2b^{3}+5b^{2}a+\left(4a^{2}-R^{2}\right)b+R^{2}a+a^{3}\right)v\right] \tag{21a}\] \[B =4\left[\tfrac{1}{4}b\left(R^{2}+a^{2}+2ba+b^{2}\right)^{5/2}\left(\frac{dv}{dR}R-v\right)+M\,R^{2}v\left(5a^{2}b+a^{3}+R^{2}a+3b^{3}+7b^{2}a\right)\right]v^{2}, \tag{21b}\] where we have split the left-hand side into these two terms since it is possible to obtain a simple analytic result for \(v\) by just setting \(A=0\). It is not immediately obvious that this is a valid procedure, even as an approximation, since \(v\sim\mathcal{O}(10^{-3})\) and \(\rho\), and by extension its volume integral \(M\), are likely \(\mathcal{O}(10^{-6})\). Thus, both the expression \(A\) and the first half of the terms in \(B\) are likely \(\mathcal{O}(10^{-9})\), and hence it is not clear that one can preferentially drop the first half of \(B\). Numerically, however, it transpires that the value of \(M=7.3\times 10^{10}\) M\({}_{\odot}\) derived for NGC 1560 is sufficiently large that one can consider just \(A=0\), and we note that this yields an expression for \(v\) that is in fact _independent_ of \(M\). We may illustrate this approach explicitly by comparing the exact and approximate solutions for \(v\) in this case. Setting just \(A=0\) and solving for \(v\) gives \[v=\frac{CR^{2+\frac{a}{b}}}{[R^{2}+(a+b)^{2}]^{3/2}}, \tag{22}\] where \(C\) is an arbitrary constant. In Figure 2, we show the rotation curve resulting from the analytical approximation (22) as the red curve and an exact numerical integration of the full equation (21) as the black curve. For the analytic approximation, although there is no dependence on mass, one must provide an overall scaling \(C\), and a value of \(C=1/6400\) was used in the plot, which gives reasonably good agreement with the exact result in this case. The latter was calculated by numerical integration starting at the outermost rotation curve data point for NGC 1560, for which \(v=2.67\times 10^{-4}\) (in units of \(c\)) at \(R=8.29\,\)kpc, and moving inwards towards the origin, in the same way as performed in [20]. Similarly, one could instead fix the scaling \(C\) of the analytical result by ensuring that it passes through the outermost data point, which moves the red curve up slightly. In any case, it is important to note that, while the fit to the NGC 1560 rotation curve data in [20] yields the derived mass \(M=7.3\times 10^{10}\) M\({}_{\odot}\), the only information about \(M\) is in quite small changes in the _shape_ of the curve that occur as \(M\) drops below this best-fit value. For _larger_ values of \(M\), the shape of the curve is invariant, and corresponds to that given in the analytical approximation (22), which does not depend on \(M\). This suggests that there may be a large uncertainty on the mass \(M\) derived from the rotation curve data, although no errors on the fitted value are provided in [20]. Nonetheless, let us assume the best-fit value of \(M\) to calculate also the rotation curve that one would obtain in the absence of gravitomagnetic effects, i.e. \(\psi=0\), with the galaxy completely static and supported just by the usual pressure forces. In this case, the rotational velocity of a test particle is merely \(\sqrt{R\,\partial\Phi/\partial R}\) and one obtains the red curve in Figure 3, which we plot alongside the exact rotation curve (in black) from Figure 2, which includes gravitomagnetic effects.
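Both curves in Figure 3 can be reproduced to good accuracy directly from the closed-form expressions above. A minimal sketch (an illustrative addition, assuming NumPy, with velocities in units of \(c\) and lengths in kpc; the solar mass-to-length conversion \(GM_{\odot}/c^{2}\approx 4.79\times 10^{-17}\,\)kpc is the only external constant used):

```python
import numpy as np

# NGC 1560 best-fit MN parameters quoted in the text
GMsun_kpc = 4.79e-17            # GM_sun/c^2 expressed in kpc
M = 7.3e10 * GMsun_kpc          # GM/c^2 for the fitted mass, in kpc
a, b = 0.373, 0.300             # kpc
C = 1.0 / 6400.0                # overall scaling used for (22)

R = np.linspace(0.05, 8.29, 400)                       # kpc

# Newtonian circular speed, v^2 = R dPhi/dR, with Phi from (15) evaluated at z = 0
v_newton = np.sqrt(M * R**2 / (R**2 + (a + b)**2)**1.5)

# Analytic gravitomagnetic approximation (22); note it is independent of M
v_approx = C * R**(2.0 + a/b) / (R**2 + (a + b)**2)**1.5

print("peak Newtonian speed    :", v_newton.max(), "c   (about 420 km/s)")
print("v_approx at R = 8.29 kpc:", v_approx[-1], "c   (outermost data point: 2.67e-4 c)")
```

The printed values reproduce the \(\sim 420\,\mathrm{km\,s^{-1}}\) Newtonian peak and the level of the outermost NGC 1560 data point discussed in the text.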
Figure 3 matches very well with Figure 2 in [20], but is worthy of further comment. First, we note that the conventional rotation curve peaks at velocities around \(420\,\mathrm{km\,s^{-1}}\) (readopting SI units for the moment); this is much higher than one would expect for what is meant to be a dwarf galaxy. Second, and more important, we see that the effects of gravitomagnetism here are to _suppress_ the rotational velocity of test particles, not _enhance_ them. Thus one requires a great deal more matter present in the case with gravitomagnetic effects than that without, in order to explain a given rotation curve level. Gravitomagnetic effects serve here to explain only aspects of the _shape_ of rotation curves (here a gradually rising one), but absolutely not whether one requires more matter than appears visible; in other words, it makes the missing matter problem _worse_. Before moving on to discuss the issue of gravitomagnetic vertical support (or the lack thereof) in the next subsection, it is worth noting some further aspects of the shape of the rotation curves derived above. Although the rotation curves obtained using either (21) or the analytic approximation (22) appear to fit the rotation curve data for NGC 1560 shown in Figure 1 of [20] in a pleasing way, this disguises the problem that the shape of these rotation curves changes considerably with just small changes in the \(a\) and \(b\) parameters.

Figure 1: Gravitational density (left) and potential (right) contours for a MN profile with parameters \(a\) and \(b\) derived from fitting the rotation curve of NGC 1560 in [20].

Figure 2: Rotation velocity \(v\) (in units of \(c\)) versus \(R\) in kpc for a MN profile with parameters derived from NGC 1560. The red curve is obtained using the analytical approximation (22) with \(C=1/6400\) and the black curve is an exact numerical integration using equation (21).

Figure 3: The conventional Newtonian rotation curve (red) for NGC 1560 assuming a MN profile with the best-fit values of the parameters \(a\), \(b\) and \(M\) from [20], together with the exact rotation curve including gravitomagnetic effects (black), already shown in Figure 2.

Observations of NGC 1560 in the visible show it to be considerably more 'elliptical' than the ratio \(a\,:\,b=0.373\,:\,0.300\) indicates, with a ratio of \(\sim 0.7\,:\,0.3\) seeming much more appropriate. From the analytical expression (22), however, one can see that this will cause a problem, since the shape of the predicted rotation curve will scale as \(v\propto R^{1.33}\) at large \(R\), and so it will be concave rather than convex towards the \(R\) axis. Indeed, this will clearly occur for any ratio \(a\,:\,b>2\,:\,1\). No known rotation curves have this shape (concave rather than convex over their whole range), and so this model will be incapable of accommodating galaxies with ellipticities beyond this ratio. That this is not an artefact of our analytical approximation is illustrated in Figure 4, which is the equivalent of the rotation curves plot in Fig. 2, but for \(a\) and \(b\) values of \(0.7\) and \(0.3\,\mathrm{kpc}\), and using the same mass \(M\). One sees that the red curve (analytical approximation) closely follows the black curve (exact numerical integration), and hence the insights that the analytic approximation (22) provides for what occurs at higher \(a\,:\,b\) ratios are indeed borne out in the exact integration.
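The change of shape with \(a/b\) described above can also be confirmed directly from (22). A short sketch (an illustrative addition, again assuming NumPy; the radial grid is arbitrary and the overall scale is irrelevant to the shape):

```python
import numpy as np

def v22(R, a, b, C=1.0):
    """Analytic approximation (22); the constant C does not affect the shape."""
    return C * R**(2.0 + a/b) / (R**2 + (a + b)**2)**1.5

R = np.linspace(0.2, 20.0, 2000)
for a, b in [(0.373, 0.300), (0.7, 0.3)]:
    v = v22(R, a, b)
    log_slope = np.gradient(np.log(v), np.log(R))    # tends to a/b - 1 at large R
    d2v = np.gradient(np.gradient(v, R), R)          # sign of the curvature
    print(f"a/b = {a/b:.2f}:  large-R log-slope ~ {log_slope[-1]:.2f},",
          f"d2v/dR2 > 0 at all sampled R: {bool(np.all(d2v[5:-5] > 0))}")
```

For the NGC 1560 ratio \(a/b\approx 1.24\) the curvature changes sign along the curve, whereas for \(a/b\approx 2.33\) it does not, and the large-\(R\) slope approaches \(a/b-1\approx 1.33\), in line with the discussion above.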
### Gravitomagnetic vertical support As our final point in this section we now discuss further the assumption that all vertical support for dynamical equilibrium is provided by gravitomagnetic rotational effects, which in our opinion is the key issue with the modelling approach outlined in Section IV, and applies irrespective of the assumed density profile of the galaxy. As above, however, we will illustrate our findings for the MN profile, since it can again be treated almost entirely analytically. In particular, we will show that in order to provide the necessary vertical support, \(\psi\) has to become infinite at the origin, and have extremely large values near to it. To substantiate this, and to gain some insight into what is happening analytically, we again take a 'dual track' approach in which we carry out exact numerical integrations, as well as develop an analytical approximation. To this end, one can construct an exact ODE in \(R\) for \(\psi\), applicable in the equatorial plane, by using the radial equation of motion (10c) together with our analytical approximation for the circular velocity \(v\) in (22). One can then form an approximation to \(\psi\) based on the smallness of the coefficient \(C\), which yields the very simple approximate solution \[\psi=\frac{MbR}{C(b-a)R^{\frac{a}{b}}}. \tag{23}\] Using the values of the parameters derived for NGC 1560 in [20], this approximation is in fact even better than that for the rotation curve in (22), as we demonstrate in Fig. 5. The curves for the exact numerical integration (black) and the analytic approximation from (23) (red) are virtually indistinguishable. One sees that \(\psi\) itself diverges towards the origin, whereas \(R\psi\) converges at the origin; this is consistent with the ratio \(a/b=0.373/0.3\) lying between \(1\) and \(2\), for which, according to (23), \(R\psi\) should go to zero at \(R=0\) whereas \(\psi\) diverges. By comparison, in Fig. 6 we show \(R\psi\) for the higher ellipticity case considered above, i.e. \(a/b=0.7/0.3\). We have plotted only \(R\psi\) here since even this diverges, as is to be expected from (23) with \(a/b>2\). Figure 4: Same as Figure 2, but for a higher ellipticity case, with \(a=0.7\,\mathrm{kpc}\) and \(b=0.3\,\mathrm{kpc}\). Figure 5: Top: the function \(-\psi\) versus \(R\) in \(\mathrm{kpc}\) using the parameters derived for NGC 1560 in [20]. Bottom: the function \(-R\psi\), to indicate better the behaviour near the origin. In each case the black curve is the result of an exact numerical integration, and the red curve shows the analytic approximation (23). We also note that in all of these plots of \(\psi\) the values involved are \(\mathcal{O}(1)\) or perhaps \(\mathcal{O}(10^{-1})\), which is roughly \(10^{8}\)–\(10^{9}\) times larger than expected to be generated by GEM effects, according to the order-of-magnitude analysis given earlier. This effect must originate from the unwitting inclusion of _free-space_ solutions of the Poisson-like equation (10b) that determines \(\psi\), i.e. solutions for which the source term on the RHS, which would normally generate \(\psi\), is set to zero. One can introduce arbitrary amounts of such homogeneous solutions to any solution of the inhomogeneous equation. However, the penalty is of course that any such solution has to add in singularities at either infinity or the origin.
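The orders of magnitude quoted here can be read off directly from (23). The following is a rough numerical sketch only: it uses the same assumed unit conversion as the earlier rotation-curve sketch, the scaling \(C=1/6400\) adopted for (22), and the NGC 1560 parameters from [20].

```python
import numpy as np

a, b = 0.373, 0.300            # kpc
M = 7.3e10 * 4.79e-17          # GM/c^2 in kpc (assumed conversion, as before)
C = 1.0 / 6400.0               # scaling adopted for the approximation (22)

def psi_approx(R):
    """Approximate poloidal gravitomagnetic flux from Eq. (23)."""
    return M * b * R / (C * (b - a) * R**(a / b))

for R in (0.01, 0.1, 1.0, 8.0):
    print(f"R = {R:5.2f} kpc :  -psi ~ {-psi_approx(R):.3f}")
# -psi is O(0.1) over the body of the galaxy and grows without bound as R -> 0,
# some 1e8-1e9 times larger than the naive GEM order-of-magnitude estimate:
# the signature of an admixed free-space (homogeneous) solution with its attendant singularity.
```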
If this were not the case, one would be free to add homogeneous solutions of arbitrary amplitude to, for example, the Poisson equation for the gravitational field around the Sun or Earth, meaning one would lose the ability to predict the force of gravity based on the mass of an object. Such a procedure is forbidden by the need to exclude singularities. Thus, having demonstrated that a singularity exists (at the origin in this case) with the GEM approach outlined in Section IV, this should definitively rule out the methodology as a means of explaining flat galaxy rotation curves without dark matter. It might be argued that a 'get-out' might exist since most galaxies already contain a singularity near their centres in the form of supermassive black holes. However, such a model would require separate computations that we have not seen carried out as yet to establish it, and a priori seems contrived. Finally, although we have not gone into it here, one finds further that a singularity can exist even if \(\psi\) does not diverge, since it turns out that to have the spacetime metric obey 'elementary flatness' [31], one requires not only that \(\psi\) is not divergent as \(R\) approaches zero, but must behave as \(\psi\propto R\) for small \(R\). The \(\psi\) functions discussed here are far from having this property, and indeed violate this requirement all the way up the \(z\)-axis, posing a further problem for this line of approach. ## VI Homogeneous poloidal solutions In Section V.3 we alluded to the unwitting inclusion in [20] of homogeneous solutions to the Poisson-like equation Eq. (10), which can seemingly facilitate large and interesting departures from the Newtonian rotation formula. Even more recently in fact, an attempt has been made in [21] to capitalise directly on these solutions in an effort to bring about the same effect. In this final section, we demonstrate that the homogeneous solution approach is not viable. ### No prospects without thin disks We will prefer still to consider an extended, axisymmetric source, such as that of the MN density profile in Eq. (15). In contrast, the authors of [21] consider only an infinitesimal, equatorial thin disk with finite surface density. In the thin disk case, the poloidal gravitomagnetic flux \(\psi=\psi\left(R,z\right)\) may be completely described by a Hankel-transformed function \(\tilde{\psi}=\tilde{\psi}\left(\lambda,z\right)\), where \[\psi\left(R,z\right)=\int_{0}^{\infty}\mathrm{d}\lambda^{\prime}Re^{-z^{ \prime}|z|}\tilde{\psi}\left(\lambda^{\prime}\right)\,J_{1}\left(\lambda^{ \prime}R\right)\,. \tag{24}\] If (24) holds as presented in [21] (and we will find in Section VI.2 that it does not), then in the case of an extended density profile the linearity of the vector Poisson equation Eq. (5) implies that the poloidal flux at a point may be associated with a distribution of thin disks \[\psi\left(R,z\right) =\int_{0}^{\infty}\mathrm{d}\lambda^{\prime}R\tilde{\Psi}\left( \lambda^{\prime},z\right)\,J_{1}\left(\lambda^{\prime}R\right)\,, \tag{25a}\] \[\tilde{\Psi}\left(\lambda,z\right) \equiv\int_{-\infty}^{\infty}\mathrm{d}z^{\prime}e^{-\lambda|z- z^{\prime}|}\tilde{\psi}\left(\lambda,z^{\prime}\right)\,. \tag{25b}\] By substituting Eq. (25a) into Eq. 
(10) and taking an inverse Hankel transform we then find \[\begin{split}\frac{1}{\lambda}\frac{\partial\tilde{\Psi}\left( \lambda,z\right)}{\partial z}=\int_{0}^{\infty}\mathrm{d}R^{\prime}& \frac{R^{\prime}}{v\left(R^{\prime},z\right)}\\ &\times\frac{\partial\Phi\left(R^{\prime},z\right)}{\partial z}J_ {1}\left(\lambda R^{\prime}\right)\,,\end{split} \tag{26}\] while applying the same steps to Eq. (10), in combination with the recurrence relation for Bessel functions, yields \[\begin{split}\tilde{\Psi}\left(\lambda,z\right)&= \int_{0}^{\infty}\mathrm{d}R^{\prime}\left[v\left(R^{\prime},z \right)\right.\\ &\left.+\frac{R^{\prime}}{v\left(R^{\prime},z\right)}\frac{ \partial\Phi\left(R^{\prime},z\right)}{\partial R^{\prime}}\right]J_{0}\left( \lambda R^{\prime}\right)\,.\end{split} \tag{27}\] In the (anyway unphysical) limit of a thin disk, inspection of Eq. (24) suggests we may be justified in using the relation \[\frac{\partial\tilde{\Psi}\left(\lambda,z\right)}{\partial z}\to-\mathrm{sgn} (z)\lambda\tilde{\Psi}\left(\lambda,z\right). \tag{28}\] Precisely Eq. (28) is used in [21] to relate the integrals in Eqs. (26) and (27), and this is done effectively in the singular environment of the disk itself, at \(z=0\). Given this relation of integrals, the authors then take the curious step of equating the Figure 6: Same as the Figure 5 (bottom), but for a higher ellipticity case, with \(a/b=0.7/0.3\). integrands_, arriving at an apparently deterministic expression for the rotational velocity at all radii \[\begin{split} v\left(R^{\prime},0\right)^{2}&=-R^{ \prime}\frac{\partial\Phi\left(R^{\prime},0\right)}{\partial R^{\prime}}\\ &\quad-\frac{R^{\prime}J_{1}\left(\lambda R^{\prime}\right)}{J_{0 }\left(\lambda R^{\prime}\right)}\left(\frac{\partial\Phi\left(R^{\prime},z \right)}{\partial z}\right)_{z=0}.\end{split} \tag{29}\] In Eq. (29) we retain the prime on \(R^{\prime}\) to remind ourselves that a dummy variable has somehow ended up on the _outside_ of a putatively physical equation. In Eq. (29) the second term on the right hand side constitutes a correction to the Newtonian rotation curve. This correction looks appealing because it is also sourced by the gravitational potential in a strict manner: the axial gravitoelectric field strength close in to the singular plane will approach the surface density of matter in the thin disk, according to the Gaussian 'pill-box' construction. This correction is tunable by a ratio of Bessel functions, in which the conjugate Hankel radius appears as a single free parameter. By tuning this parameter, the poles introduced by the Bessel coefficient can be driven off to some distant extragalactic scale. The _intragalactic_ rotation curve on the other hand, which then looks as though it is being computed deterministically from the surface density profile, may indeed depart from the Newtonian and become flat or rising. Whether or not Eq. (29) has any physical meaning, we can at least conclude that the mathematical steps which produced it cannot be replicated without Eq. (28), i.e. the construction of [21]_requires_ a singular disk. In the physical case of an extended profile, Eqs. (26) and (27) can only be related by differentiating under the integral sign of Eq. (27). If we then repeat the remarkable step of equating the integrands, the closest we can get to Eq. 
(29) is the following \[\begin{split}\frac{\lambda R^{\prime}J_{1}\left(\lambda R^{ \prime}\right)}{v\left(R^{\prime},z\right)J_{0}\left(\lambda R^{\prime}\right) }\frac{\partial\Phi\left(R^{\prime},z\right)}{\partial z}&=\frac{ \partial}{\partial z}\Bigg{[}v\left(R^{\prime},z\right)\\ &+\frac{R^{\prime}}{v\left(R^{\prime},z\right)}\frac{\partial \Phi\left(R^{\prime},z\right)}{\partial R^{\prime}}\Bigg{]}.\end{split} \tag{30}\] In common with Eq. (29), the true relation Eq. (30) contains a deterministic correction, relative the Newtonian rotational velocity prediction, which is somewhere singular and freely tuned by the conjugate Hankel radius. However the implications of the new relation are fundamentally different: at every (dummy) radius \(R^{\prime}\) the velocity is determined by an ODE in the axial \(z\) direction. This ODE requires some initial data for each (dummy) \(R^{\prime}\), which might as well be provided by some _user-defined_ rotation curve \(v(R^{\prime},0)\) in the (dummy) equatorial plane. Thus, Eq. (30) requires the equatorial rotation curve as an input, and does not supply it as an output. If some initial data \(v(R^{\prime},0)\) is chosen, Eq. (30) can propagate the rotational velocity axially above and below the equatorial plane, depending strictly on the gravitoelectric potential \(\Phi(R^{\prime},z)\) and the tunable Hankel radius \(\lambda\). Because Eq. (30) modifies the axial derivative of the Newtonian expression, we can still depart from the Newtonian rotational velocity above and below the equatorial plane even if we use the Newtonian expression for \(v(R^{\prime},0)\). This is illustrated in Fig. 7, where the MN profile associated with NGC 1560 is used to propagate Eq. (30) using precisely the Newtonian rotational velocity of that profile as initial data. With other initial data, doubtless even more interesting effects may be produced by Eq. (30): there are apparently as many possibilities as there are functions on the positive real line, and this is not the kind of situation we expect to encounter in a well posed theory of gravity such as GR. We will now clarify in Section VI.2 why the construction underpinning [21] and our corollary in Eq. (30) do not -- and can never -- arise in nature. ### No prospects without sources The authors of [21] attempt to make a distinction between what happens in determining the potential from the matter density distribution, and how the poloidal gravitomagnetic field is determined (or not) by the matter flows. In particular for the first case (density) they correctly say that our equation Eq. (10) 'completely fixes the value of the Newtonian potential everywhere', whereas for the poloidal gravitomagnetic field \(\psi\) in Eq. (24), there is meant to be a freedom in adding in homogeneous solutions of the equation which determines it. Since we do not agree with this distinction, we will start with the case of how the potential is uniquely determined by \(\rho\) and then show how exactly the same procedure applied to \(\psi\) again leads to unique solutions, to which we cannot add in extra homogeneous components. Despite these problems, the Hankel transform approach used by [21] is useful since it enables us to explicitly find the homogeneous solutions in question explicitly, and thereby show they are inadmissible. 
In order to make sure that we do not introduce unnecessary singularities, we will work not with the 'thin disk' approximation used by [21], but continue from Section VI.1 with a continuous and differentiable distribution of matter, which we can call a thick disk. The MN profile used earlier would be a good example of what we have in mind here. After obtaining the results we will look at the thin disk limit, and show -- unlike in the previous analysis in Section VI.1 -- that it behaves in exactly the same way as found here for the thick disk. We take this as an indicator that we are finally connecting with the correct physics, and that our results are equally applicable to the case treated in [21]. We thus start with the equivalent of Eq. (25a), but for the potential rather than the poloidal field, and write \[-\Phi(R,z)=\int_{0}^{\infty}f\left(\lambda^{\prime},z\right)J_{0}\left(\lambda^{\prime}R\right)\lambda^{\prime}\mathrm{d}\lambda^{\prime}. \tag{31}\] The function \(f(\lambda,z)\) is thus the Hankel transform, in the \(R\) direction, of minus the potential. Note particularly that, contrary to what is done in Eq. (25b), at this stage we are _not_ going to assume a particular form for \(f(\lambda,z)\). This is because we will be able to _deduce_ the equivalent form for \(f\) from the equations themselves, which is an interesting feature of the approach here. We now insert (31) into the Poisson equation for \(\Phi\), obtaining \[\int_{0}^{\infty}\left(\frac{\partial^{2}f}{\partial z^{2}}-\lambda^{\prime 2}f\right)J_{0}\left(\lambda^{\prime}R\right)\,\lambda^{\prime}\mathrm{d}\lambda^{\prime}=-4\pi\rho(R,z). \tag{32}\] Taking the inverse Hankel transform of each side then yields \[\frac{\partial^{2}f}{\partial z^{2}}-\lambda^{2}f=-4\pi\int_{0}^{\infty}\rho\left(R^{\prime},z\right)J_{0}\left(\lambda R^{\prime}\right)R^{\prime}\mathrm{d}R^{\prime}. \tag{33}\] This is a linear equation for \(f\) which we can solve by the method of _variation of parameters_. In this technique, if we know solutions of the homogeneous equation for \(f\) we can use them in constructing solutions of the inhomogeneous equation via integrations involving their product with the inhomogeneous part of the equation. Figure 7: Non-equatorial enhancement of the rotational velocity of the MN profile obtained by generalising the approach of [21] to extended sources. The Newtonian velocity is shown in the top frame for the MN potential in Eq. (15), as plotted in Fig. 1 for parameters associated with NGC 1560 in [20]. The middle frame shows the enhancement in the case where the equatorial rotational velocity (initial data) is identical to the Newtonian, and Eq. (30) allows this to be propagated axially using the potential. In the lower frame, the difference between enhanced and Newtonian velocities is shown, indicating a substantial velocity increase in annular zones above and below the galactic plane. The inverse Hankel radius in this case is \(\lambda=1\times 10^{-10}\,\mathrm{kpc}^{-1}\), so that the pole introduced by the first zero of the Bessel function is expelled from the observable Universe. We show that these effects result from the misuse of homogeneous solutions for the poloidal gravitomagnetic flux.
In the current case this yields the following full solution for \(f\): \[\begin{split} f(\lambda,z)&=F_{1}(\lambda)e^{- \lambda z}+F_{2}(\lambda)e^{\lambda z}\\ &\quad-\frac{2\pi}{\lambda}\Bigg{(}e^{\lambda z}\int_{a}^{z}e^{- \lambda z^{\prime}}\bar{\rho}\left(\lambda,z^{\prime}\right)\mathrm{d}z^{ \prime}\\ &\qquad\qquad\qquad-e^{-\lambda z}\int_{b}^{z}e^{\lambda z^{ \prime}}\bar{\rho}\left(\lambda,z^{\prime}\right)\mathrm{d}z^{\prime}\Bigg{)}. \end{split} \tag{34}\] In this equation \(\bar{\rho}(\lambda,z)\) is the Hankel transform of \(\rho\), i.e. \[\bar{\rho}(\lambda,z)=\int_{0}^{\infty}\rho(R^{\prime},z)J_{0}\left(\lambda R ^{\prime}\right)R^{\prime}\mathrm{d}R^{\prime}, \tag{35}\] while the integration lower limits \(a\) and \(b\) are constants, and \(F_{1}(\lambda)\) and \(F_{2}(\lambda)\) are arbitrary functions of \(\lambda\). One can verify explicitly, by substituting (34) into (33), that this \(f\) does indeed solve the intended equation. However, it now looks as though we have got a problem, since the solution involves naked factors of \(e^{\lambda z}\) and \(e^{-\lambda z}\). These appear multiplying \(F_{1}\) and \(F_{2}\), and also multiplying the integrals in \(z^{\prime}\). Considering e.g. \(F_{1}(\lambda)e^{-\lambda z}\), this blows up as \(z\to-\infty\) for any non-zero value of \(F_{1}\). (Note the range of \(\lambda\) is from \(0\) to \(\infty\).) Thereafter there is an integration over \(\lambda\) which occurs in equation (31) but no subsequent integration over \(z\), and hence the singularity will persist into the final answer for \(\Phi\). The only way out of this is if \(F_{1}(\lambda)\) is strictly zero, and of course the same considerations apply to for \(F_{2}(\lambda)\). This then looks bad for the \(e^{\lambda z}\) multiplying the integrals, except in this case there is a 'get out'. This is that the integrals are functions of \(z\) as well as \(\lambda\), via the upper limit of integration. In particular if we choose the lower limit of integration \(b\) to be \(-\infty\) then the integral will tend to zero as \(z\to-\infty\), thus potentially (depending on respective rates of convergence of the integral and the outside \(e^{-\lambda z}\) factor) leading to a finite answer. Similarly, in the first integral we should let \(a=+\infty\), since then as \(z\to\infty\) it is possible that a finite answer can be obtained here as well. With these values of \(a\) and \(b\), and setting \(F_{1}(\lambda)\) and \(F_{2}(\lambda)\) to zero, we get \[\begin{split} f(\lambda,z)&=\frac{2\pi}{\lambda} \Bigg{(}\int_{z}^{\infty}e^{-\lambda(z^{\prime}-z)}\bar{\rho}(\lambda,z^{ \prime})\mathrm{d}z^{\prime}\\ &\qquad+\int_{-\infty}^{z}e^{\lambda(z^{\prime}-z)}\bar{\rho}( \lambda,z^{\prime})\mathrm{d}z^{\prime}\Bigg{)},\end{split} \tag{36}\] which assembles to give \[f(\lambda,z)=\frac{2\pi}{\lambda}\int_{-\infty}^{\infty}e^{-\lambda|z-z^{ \prime}|}\bar{\rho}(\lambda,z^{\prime})\mathrm{d}z^{\prime}, \tag{37}\] for which convergence is assured if \(\bar{\rho}\), and therefore \(\rho\) itself, behaves reasonably. This is excellent for our purposes. We have now achieved the analogue of equation Eq. (25b), but with the bonus that we know it is only the _inhomogeneous_ part of the Poisson equation, i.e. the density itself, that does the'sourcing'. All possible homogeneous contributions have been killed off by the requirement that there should not be explicit \(e^{\pm\lambda z}\) type factors left in the final answer. 
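As a quick check that (37) really is the bounded solution of (33), one can verify the ODE numerically for a simple artificial test transform. The sketch below uses a Gaussian \(\bar{\rho}(\lambda,z)\) at a single fixed \(\lambda\), purely for illustration (it is not the MN profile), and checks the residual of (33) by finite differences.

```python
import numpy as np
from scipy.integrate import quad

lam = 0.7                                    # a representative value of lambda
rho_bar = lambda z: np.exp(-z**2)            # artificial test transform rho_bar(lam, z)

def f(z):
    """Bounded solution (37): f = (2*pi/lam) * integral of exp(-lam|z-z'|) rho_bar(z') dz'."""
    integrand = lambda zp: np.exp(-lam * abs(z - zp)) * rho_bar(zp)
    below, _ = quad(integrand, -np.inf, z, epsabs=1e-12)   # split at the kink z' = z
    above, _ = quad(integrand, z, np.inf, epsabs=1e-12)
    return 2.0 * np.pi * (below + above) / lam

# Check the ODE (33): d^2 f/dz^2 - lam^2 f = -4*pi*rho_bar, via central differences.
z0, h = 0.4, 1e-2
f_pp = (f(z0 + h) - 2.0 * f(z0) + f(z0 - h)) / h**2
residual = f_pp - lam**2 * f(z0) + 4.0 * np.pi * rho_bar(z0)
print(f"residual of (33) at z = {z0}: {residual:.1e}  (small compared with the O(10) terms)")
```

Adding any admixture \(F_{1}(\lambda)e^{-\lambda z}\) or \(F_{2}(\lambda)e^{\lambda z}\) would still satisfy the ODE, but grows without bound as \(z\to-\infty\) or \(z\to+\infty\) respectively, which is exactly why such terms must be discarded.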
Note that if we wanted to move towards an explicit solution for \(\Phi\) from this point, we could write the solution so far as the triple integral \[-\Phi=\int_{0}^{\infty}\lambda^{\prime}\mathrm{d}\lambda^{\prime}\int_{-\infty}^{\infty}\mathrm{d}z^{\prime}\int_{0}^{\infty}R^{\prime}\mathrm{d}R^{\prime}\,\frac{2\pi}{\lambda^{\prime}}e^{-\lambda^{\prime}|z-z^{\prime}|}\rho(R^{\prime},z^{\prime})J_{0}(\lambda^{\prime}R^{\prime})J_{0}(\lambda^{\prime}R). \tag{38}\] This looks forbidding, but in fact we can explicitly carry out the \(\lambda^{\prime}\) integral by using the Bessel function identity drawn attention to in the paper [32] by Cohl & Tohline, specifically their equation (14), which reads, using the current variables, \[\int_{0}^{\infty}\mathrm{d}\lambda^{\prime}e^{-\lambda^{\prime}|z-z^{\prime}|}J_{0}(\lambda^{\prime}R^{\prime})J_{0}(\lambda^{\prime}R)=\frac{Q_{-1/2}\left(\chi\right)}{\pi\sqrt{RR^{\prime}}}. \tag{39}\] Here \(Q_{-1/2}(\chi)\) is a Legendre function of the second kind and \[\chi=\frac{R^{2}+R^{\prime 2}+(z-z^{\prime})^{2}}{2RR^{\prime}}. \tag{40}\] Cohl & Tohline further say that this Legendre function is related to the complete elliptic integral of the first kind, \(K\), via \[Q_{-1/2}(\chi)=\mu K(\mu), \tag{41}\] where \[\mu\equiv\sqrt{\frac{2}{1+\chi}}=\sqrt{\frac{4RR^{\prime}}{\left(R+R^{\prime}\right)^{2}+(z-z^{\prime})^{2}}}. \tag{42}\] At this point, inserting these results into (38), we see we have recovered (12), with all factors agreeing exactly, hence we can declare this method of approach to be successful. This is of course not surprising as regards determining the potential from the density, where we are perfectly happy with the idea that adding in extra homogeneous solutions is prohibited by the boundary conditions, but we now show in Section VI.3 that exactly the same analysis leads to the same conclusion for the poloidal gravitomagnetic field. ### Repeating the analysis for the poloidal field So we pick up from equation (31), but this time in a version for the poloidal field \(\psi\). We will, however, re-use \(f\) for the Hankel transform of this field, since then many of the above relations will look almost identical. The particular version of Hankel transform which works best in terms of substituting into the gravitomagnetic equations is \[\psi(R,z)=\int_{0}^{\infty}f(\lambda^{\prime},z)RJ_{1}(\lambda^{\prime}R)\lambda^{\prime}\mathrm{d}\lambda^{\prime}, \tag{43}\] where we can see the function \(Rf\) is being transformed by a \(J_{1}\). The equation we are substituting into is \[\frac{1}{R}\frac{\partial^{2}\psi}{\partial R^{2}}-\frac{1}{R^{2}}\frac{\partial\psi}{\partial R}+\frac{1}{R}\frac{\partial^{2}\psi}{\partial z^{2}}=-16\pi\rho v. \tag{44}\] We now insert (43) into this, obtaining \[\int_{0}^{\infty}\left(\frac{\partial^{2}f}{\partial z^{2}}-\lambda^{\prime 2}f\right)J_{1}(\lambda^{\prime}R)\lambda^{\prime}\mathrm{d}\lambda^{\prime}=-16\pi\rho(R,z)v(R,z). \tag{45}\] Taking an inverse Hankel transform of each side then yields \[\frac{\partial^{2}f}{\partial z^{2}}-\lambda^{2}f=-16\pi\int_{0}^{\infty}\rho\left(R^{\prime},z\right)v\left(R^{\prime},z\right)J_{1}\left(\lambda R^{\prime}\right)R^{\prime}\mathrm{d}R^{\prime}. \tag{46}\] Again this is a linear equation for \(f\) which we can solve by the method of variation of parameters.
The full solution this time is \[\begin{split} f(\lambda,z)&=F_{1}(\lambda)e^{- \lambda z}+F_{2}(\lambda)e^{\lambda z}\\ &-\frac{8\pi}{\lambda}\Bigg{(}e^{\lambda z}\int_{a}^{z}e^{- \lambda z^{\prime}}\tilde{J}\left(\lambda,z^{\prime}\right)\mathrm{d}z^{ \prime}\\ &\qquad\qquad-e^{-\lambda z}\int_{b}^{z}e^{\lambda z^{\prime}} \tilde{J}\left(\lambda,z^{\prime}\right)\mathrm{d}z^{\prime}\Bigg{)},\end{split} \tag{47}\] where we have defined a'matter current' \(j=\rho v\) and \(\tilde{J}\) is its Hankel transform (using a \(J_{1}\)) \[\tilde{J}(\lambda,z)=\int_{0}^{\infty}\rho\left(R^{\prime},z\right)v\left(R^ {\prime},z\right)J_{1}\left(\lambda R^{\prime}\right)R^{\prime}\mathrm{d}R^{\prime} \tag{48}\] The arguments given before about what happens as \(z\to\pm\infty\) go through in exactly the same way, and we can jump straight to the final answer for \(f\) which is now \[f(\lambda,z)=\frac{8\pi}{\lambda}\int_{-\infty}^{\infty}e^{-\lambda|z-z^{ \prime}|}\tilde{J}\left(\lambda,z^{\prime}\right)\mathrm{d}z^{\prime}. \tag{49}\] We thus recover Eq. (25b), except now we know that the \(\tilde{\psi}\) in this _has_ to be the transform of the inhomogeneous source \(j\), and cannot contain a free homogeneous component. In the units used above, which have \(1\,\mathrm{kpc}\) as the unit of length, then clearly the \(j\) or \(\tilde{J}\) terms will be of order \(10^{-9}\) and hence far too small to give the GEM effects claimed in the approach of [20], or indeed the possible substantial modifications to rotation curves claimed to be allowable in [21]. If we wish to progress in the same way as in the potential case to getting an explicit integral expression for \(\psi\), then this will need the analogue of (39) for \(\boldsymbol{J}_{1}\)'s. This reads \[\begin{split}\int_{0}^{\infty}\mathrm{d}\lambda^{\prime}e^{- \lambda^{\prime}|z-z^{\prime}|}& J_{1}\left(\lambda^{\prime}R^{ \prime}\right)J_{1}\left(\lambda^{\prime}R\right)\\ &=\frac{1}{\pi\sqrt{RR^{\prime}}}Q_{1/2}\left(\chi\right),\end{split} \tag{50}\] and according to equation (23) in [32] we have \[Q_{1/2}(\chi)=\chi\mu K(\mu)-(1+\chi)\mu E(\mu), \tag{51}\] where \(\chi\) and \(\mu\) are as defined earlier in equations (40) and (42) and \(E\) is the complete elliptic integral of the second kind. Thus overall we will obtain \[\begin{split}\psi(R,z)=& 8\int_{-\infty}^{\infty} \mathrm{d}z^{\prime}\int_{0}^{\infty}R^{\prime}\mathrm{d}R^{\prime}\sqrt{ \frac{R}{R^{\prime}}}\rho\left(R^{\prime},z\right)\\ &\times\upsilon\left(R^{\prime},z\right)\left(\chi\mu K(\mu)-( 1+\chi)\mu E(\mu)\right).\end{split} \tag{52}\] ### Thin disks Finally, we should comment on the relation to the 'thin disk' approach used by [21]. If we assume that \[\rho(R,z)=\sigma(R)\delta(z), \tag{53}\] where \(\sigma(R)\) is the surface density, and adopt the definition for the spectral function for the potential given in equation (18) of [21], i.e. 
\[\tilde{\Phi}(\lambda)=2\pi\int_{0}^{\infty}R^{\prime}\sigma\left(R^{\prime} \right)J_{0}\left(R^{\prime}\lambda\right)\mathrm{d}R^{\prime}, \tag{54}\] then our \(\tilde{\rho}\) is given by (see equation (35) above): \[\begin{split}\tilde{\rho}(\lambda,z)&=\delta(z)\int_ {0}^{\infty}\sigma\left(R^{\prime}\right)J_{0}\left(\lambda R^{\prime} \right)R^{\prime}\mathrm{d}R^{\prime}\\ &=\frac{1}{2\pi}\delta(z)\tilde{\Phi}(\lambda).\end{split} \tag{55}\] Hence our \(f(\lambda,z)\) as given by equation (37) is \[f(\lambda,z)=\frac{1}{\lambda}\tilde{\Phi}(\lambda)e^{-\lambda|z|}, \tag{56}\] and so our expression for (minus) the potential in this case is \[\begin{split}-\Phi(R,z)&=\int_{0}^{\infty}f\left( \lambda^{\prime},z\right)J_{0}\left(\lambda^{\prime}R\right)\lambda^{\prime} \mathrm{d}\lambda^{\prime}\\ &=\int_{0}^{\infty}\tilde{\Phi}\left(\lambda^{\prime}\right)e^{- \lambda^{\prime}|z|}J_{0}\left(\lambda^{\prime}R\right)\mathrm{d}\lambda^{ \prime},\end{split} \tag{57}\] which agrees with equation (16) of [21] up to an overall sign. This shows that, unsurprisingly, we can reach the thin disk results of [21] starting from a non-singular distribution in the case of the potential and exactly the same will go through for the poloidal field, in the sense that the thin disk results, when done correctly, must show the same behaviour as the thick-disk ones, i.e. the behaviour is sourced only by the'matter current' and extra homogeneous solutions are not allowed. ## VII Conclusions We have investigated the recent claims in [20] that one need not consider modified gravity theories to explain flat rotation curves, such as those observed in galaxies, without the need for dark matter, since such curves can be explained by gravitomagnetic effects in standard linearised GR. We have also considered the related effects obtained in [21], specifically substantial gravitomagnetic corrections to the rotation curve of a galactic toy-model which are put forward as possibly being impactful in galactic dynamics. In [20] the convenient GEM formalism is adopted and, somewhat unusually, a galaxy is modeled as an axisymmetric, stationary, rotating, non-relativistic and pressureless 'dust' of stars, all of which follow circular orbits. This approach therefore identifies the bulk velocity distribution of the galaxy with the velocity of stars, thereby aiming to define a self-consistent pressureless model. The resulting system of GEM field equations for the gravitational (gravitoelectric) potential \(\Phi\) and the poloidal gravitomagnetic flux \(\psi\), together with the radial and vertical equations of motion, are amenable to an order of magnitude analysis. Indeed, it is straightforward to show that gravitomagnetic effects on the circular velocity \(\upsilon\) of a star are \(\mathcal{O}(10^{-6})\) smaller than the standard Newtonian (gravitoelectric) effects. Thus, as one might have expected, any modification of Newtonian galaxy rotation curves must be negligible. More importantly, we find that the assumption in the [20] model that all the vertical support necessary to maintain dynamical equilibrium arises from gravitomagnetic effects is impossible to satisfy; if one assumes the presence only of ordinary matter, the gravitomagnetic effects are \(\mathcal{O}(10^{-6})\) too small to provide this support. The above issues are obscured when various quantities are eliminated between the system of equations to arrive at the single key equation for \(\upsilon\) used by [20]. 
Nevertheless, to understand how [20] appears to arrive at a self-consistent pressureless model for a galaxy, we solve this key equation for \(\upsilon\) in the case of a galaxy having a MN density profile. This allows us to establish an intuition for the results by adopting a 'dual track' approach by performing an exact numerical integration and by developing an accurate anaylic approximation. Adopting the derived values of the mass, \(M\), and semi-major and semi-minor axes, \(a\) and \(b\), obtained by [20] in fitting rotation curve data for NGC 1560, we find that the resulting rotation curve depends only very weakly on the mass \(M\). Moreover, we show that for larger values of \(M\), the rotation curve becomes independent of \(M\). In any case, if one compares the rotation curve for the fitted parameters with the corresponding standard Newtonian rotation curve, one finds that the effects of gravitomagnetism are to suppress the rotational velocity of test particles, not enhance them. Thus, although the rotation curve including gravitomagnetic effects has a shape closer to that observed, it requires more matter to be present than in the Newtonian case in order to explain a given rotation curve level, which exacerbates the missing matter problem. Although the predicted rotation curve for the fitted aspect ratio \(a/b=0.373/0.3\) matches the observed one reasonably well, this aspect ratio is somewhat smaller than what would be inferred from observations of NGC 1560 in the visible, which is close to \(a/b=0.7/0.3\approx 2.33\). We show, however, that for aspect ratios \(a/b>2\), the predicted rotation curves are concave over their entire range, which does not match observations in any galaxy. The most problematic issue, however, is that in order to provide the necessary vertical support to maintain dynamical equilibrium, the poloidal gravitomagnetic flux \(\psi\) must become singular at the origin and have extremely large values near to it. In particular, we show that \(\psi\) must be at least \(\mathcal{O}(10^{8})\) larger than expected from gravitomagnetic effects. This must occur because free-space solutions of the Poisson-like equation that determines \(\psi\) are being unwittingly included, but this is forbidden if one wishes to avoid the presence of singularities. Moreover, the large values of \(\psi\) contradict the linearised treatment implicit in the GEM formalism. Consequently, one may rule out the GEM model proposed by [20] as a means of explaining flat or rising galaxy rotation curves without the need for dark matter. The involvement in [20] of free-space solutions to the Poisson-like equation that determines \(\psi\) then leads us naturally to consider [21] where (although the authors emphasise that more detailed analysis is needed) such solutions are deliberately employed. The fact that the methods of [21] lead to a dummy integration variable appearing on the outside of a putatively physical expression is already quite suggestive. In fact, when we try to faithfully generalise the proposed approach in [21] from the infinitesimal thin disk limit to an extended density profile, we find that the implications for galactic rotation curves are qualitatively different from those proposed in [21]. The orbital velocity above and below the equatorial plane is indeed determined by an ODE in the axial direction, but this ODE requires initial data which may as well be taken as the rotation curve in the plane itself. 
Thus, the formulation is entirely non-predictive outside the thin disk limit. Far more seriously, we show conclusively in both the thin and thick disk cases that the free-space solutions on which [21] relies _necessarily violate the gravitomagnetic boundary value problem at the equatorial plane_: they are inadmissible without a matter current there. We note that this objection is independent from the guaranteed existence of divergent regions in the solutions (which [21] notes may be tuned to large radii away from the galaxy). We conclude that (i) only the inhomogeneous parts of the GEM solutions may contribute to the rotation curve, and that (ii) they do so in a predictive manner, depending on the matter source currents. In the context of GEM, derived from GR without any infrared modification, we further conclude that these matter currents must after all include a substantial 'dark' component to be consistent with the observed phenomena. ###### Acknowledgements. WEVB is grateful for the kind hospitality of Leiden University and the Lorentz Institute, and is supported by Girton College, Cambridge.
2305.14847
Drafting Event Schemas using Language Models
Past work has studied event prediction and event language modeling, sometimes mediated through structured representations of knowledge in the form of event schemas. Such schemas can lead to explainable predictions and forecasting of unseen events given incomplete information. In this work, we look at the process of creating such schemas to describe complex events. We use large language models (LLMs) to draft schemas directly in natural language, which can be further refined by human curators as necessary. Our focus is on whether we can achieve sufficient diversity and recall of key events and whether we can produce the schemas in a sufficiently descriptive style. We show that large language models are able to achieve moderate recall against schemas taken from two different datasets, with even better results when multiple prompts and multiple samples are combined. Moreover, we show that textual entailment methods can be used for both matching schemas to instances of events as well as evaluating overlap between gold and predicted schemas. Our method paves the way for easier distillation of event knowledge from large language model into schemas.
Anisha Gunjal, Greg Durrett
2023-05-24T07:57:04Z
http://arxiv.org/abs/2305.14847v1
# Drafting Event Schemas using Language Models ###### Abstract Past work has studied event prediction and event language modeling, sometimes mediated through structured representations of knowledge in the form of event schemas. Such schemas can lead to explainable predictions and forecasting of unseen events given incomplete information. In this work, we look at the process of creating such schemas to describe complex events. We use large language models (LLMs) to draft schemas directly in natural language, which can be further refined by human curators as necessary. Our focus is on whether we can achieve sufficient diversity and recall of key events and whether we can produce the schemas in a sufficiently descriptive style. We show that large language models are able to achieve moderate recall against schemas taken from two different datasets, with even better results when multiple prompts and multiple samples are combined. Moreover, we show that textual entailment methods can be used for both matching schemas to instances of events as well as evaluating overlap between gold and predicted schemas. Our method paves the way for easier distillation of event knowledge from large language model into schemas. ## 1 Introduction Predicting and modeling sequences of events has become more sophisticated over the past decade. Early work mined narrative schemas that were limited in representational power: initially sequences of predicate-role pairs Chambers and Jurafsky (2008), then generalized to predicate-argument structures Chambers and Jurafsky (2009), which continue to be used in neural approaches Weber et al. (2018); Koupaee et al. (2021). Recently, language modeling provides a very flexible interface for predicting tokens given context and has been applied to event prediction Rudinger et al. (2015); Pichotta and Mooney (2016); Koupaee et al. (2021) and cloze tasks Paperno et al. (2016). Event schemas can only compete with language modeling approaches if they are high-quality and specific enough to provide strong predictions. Ultimately, event schemas might enable explainable forecasting Zou et al. (2022) grounded in an expert-curated knowledge structure. This paper attempts to bridge this gap by constructing natural language represents of event knowledge from language models. Past efforts like ATOMIC Sap et al. (2019) and COMET Bosselut et al. (2019) show that structured repositories of knowledge and the ability to extend them can help enable predictions about the world. We follow in their vein and construct collections of events we call _light schemas_. These are less structured than graph-based schemas anchored in event ontologies Li et al. (2021). Our chief aim is to have high recall over a set of events in a domain to serve as a draft for curation of a more structured schema. We generate these schemas using language models like GPT-3.5 Brown et al. (2020); Ouyang et al. ( Figure 1: Overview of our system. A large language model can generate lightly structured lists of events, which themselves may have complex predicate-argument structure. 2022) and Flan-T5 (Chung et al., 2022). As shown in Figure 1, these models have strong abilities to surface events characteristic to a particular domain (e.g., _international conflict_), including typical arguments for those events. Although our schemas are ontology-free, they implicitly have a certain "style" associated with the natural language expressions of their events. 
We explore both zero-shot and few-shot (specifically one-shot) prediction of schemas. Understanding the event coverage of our schemas requires comparing them to schemas built by human curators. Evaluation of schematic knowledge and what it predicts have typically been restricted to cloze tasks (Granroth-Wilding and Clark, 2016; Modi et al., 2017; Weber et al., 2018) or events in certain coarse ontologies (Li et al., 2021), but these do not directly evaluate schema representations themselves. Recent past work uses measures very tied to lexical expression of schema predicates (Dror et al., 2022; Zhang et al., 2023), but these are most appropriate for schemas in closed ontologies. Instead, we evaluate our schema generation using textual entailment methods (Dagan et al., 2005; Williams et al., 2018), following a similar application of these methods to evaluate groundedness of summaries (Falke et al., 2019; Zhang and Bansal, 2021; Laban et al., 2022). We use entailment to compare our drafted schemas to two sources of ground-truth schemas annotated by human annotators. Specifically, we investigate whether an event we generate entails an event in the ground-truth schema as a measure of recall; we also explore bidirectional entailment (is there mutual entailment between the events?) for a more precise measure. Through human study, we validate that our entailment-based evaluation is reliable. Our results show that large language models can generate schemas that have substantial overlap with ground-truth schemas written by curators. One-shot prediction allows us to emulate the stylistic features of target schemas and attain varying levels of specificity with respect to arguments of predicates. We compare different methods and find that drawing multiple samples from these models can further improve recall. Our main contributions are (1) We analyze the performance of current text generation models (GPT-3.5 and Flan-T5) for the task of generating lightly organized event schemas in a completely training-free regime. (2) We show promising results of using textual entailment as a metric to automatically measure event coverage by generated schemas. (3) We show that one-shot prediction can be used to achieve stylistic control over schema generation, suggesting a way to adapt approaches for different desired output formats. ## 2 Methods ### Preliminaries Our schemas are anchored to domains \(d\). An example of \(d\) shown in Figure 1 is _international conflict_; these may be broad topics or more specific scenarios like _roadside bombing attack_. In this work, they are not anchored to specific entity participants. We define a schema \(S=(\mathbf{s}_{1},\dots,\mathbf{s}_{n})\) as an ordered1 collection of sentences expressing events. The \(\mathbf{s}_{i}\) are sentences expressing events at a moderate level of generality; they are typically short descriptions and do not involve specific named entities. However, we do not structurally constrain their form. We refer to the **style** of the schema as a collection of surface-level factors including the average length in words of the \(\mathbf{s}_{i}\) and the specificity of the events. Footnote 1: We preserve the ordering of \(S\) because we find that it often corresponds to a partial temporal ordering. We do not evaluate this aspect extensively in this work. We explore two classes of models in this work. First, **zero-shot models** have the form \(P(S\mid v_{c}(d))\); they condition on a verbalization \(v\) of domain \(d\), parameterized by a strategy \(c\). 
For example, the prompt in Figure 1 has \(d=\) international conflict and the verbalizer _List 10 things that each happen (1) before; (2) during; and (3) after [d]...Before an [d], there are several things that can happen: 1._. This verbalizer is designed to produce a certain pattern of temporal information; in this case, the answer from the model separates into events occurring before, during, and after the conflict. Other verbalizers we explore look at aspects like cause and effect; a full list of verbalizers is included in the Appendix A. The verbalizers give us control over attributes \(c\); however, they do not necessarily allow us to specify a target style for the schema. We find that each model has certain styles it tends to generate in across a range of verbalizers. We also explore **one-shot models**\(P(S\mid v(d),S_{\mathrm{demo}})\) that condition on a schema demonstration as well as a verbalizer of the domain. Note that \(S_{\mathrm{demo}}\) is a hand-authored schema (or post edited output of the model) coming from a separate domain \(d^{\prime}\). We give examples of the prompts we use in Appendix A. ### Models Considered Gpt-3.5 text-davinci-003We experiment with the most capable of the OpenAI GPT-3.5 models (Brown et al., 2020). According to (OpenAI, 2022), text-davinci-003 is an instruction-tuned model (Ouyang et al., 2022) using reinforcement learning on models of human preference judgments. Flan-T5We also experiment with (Chung et al., 2022). We use the XXL variant, which is 11B parameters. This allows us to see what is achievable with a smaller instruction-tuned model that can be more easily and cheaply run. We qualitatively observed that flan-t5-xxl does not perform well on temporally-aided complex prompt as described in A. Hence we simplify the prompt into three independent prompts: 1. List events that occur _before_... 2. List events that occur _during_... 3. List events that occur _after_... The outputs generated are minimally post-processed if necessary to extract the events generated. Older GPT-3 variantsWe also tried using older variants of GPT-3 model (Brown et al., 2020) such as text-davinci-base, however the generations consisted of a lot of redundancies and required a lot of human curation to extract relevant information from the output, refer appendix D. For this reason, we exclude the base GPT-3 model text-davinci-base from our main results. Inference hyperparametersFor all models, we decode using nucleus sampling (Holtzman et al., 2020) with hyperparameters top-p=1.0 and temperature set to 0.7. We do not perform any model training and use off-the-shelf models for our analysis. The GPT-3.5 variants are accessed through OpenAI's API with estimate compute cost amounting to less than $100. To run inference for flan-t5-xxl we host the model on a p3.16xlarge AWS instance. ## 3 Evaluation via Textual Entailment Inspection of our schemas (see Figure 1, Table 6) shows that they are very high quality. As we are using state-of-the-art autoregressive Transformer models, the fluency, coherence of the sequence of events, and linguistic quality of each individual event are very high and do not need to be the focus of our evaluation. Instead, the main question is to what extent the events we have cover the important events in the target domain; they may fail to do so as a result of reporting biases in text. We can compare these to human-written schemas; however, because our schemas are in natural language, we will need a sophisticated comparison function in order to do so. 
Here, we turn to textual entailment. Our evaluation focuses on comparing a predicted schema \(\hat{S}\) with a ground-truth, human-annotated schema \(S^{*}\). \(S^{*}\) is considered to contain events that we want to see represented in predicted schemas. Note that \(S^{*}\) may not be exhaustive; that is, an event not in \(S^{*}\) may still be considered of high quality and relevant. Therefore, our evaluation will focus on recall. We use textual entailment models of the form \(E:(\mathbf{s}_{1},\mathbf{s}_{2})\rightarrow\mathbb{R}\) to judge whether two sentences are matching events. An entailment model computes a distribution over three classes {_entailment_, _neutral_, _contradiction_}. We set \(E\) to return the probability of \(\mathbf{s}_{1}\) entailing \(\mathbf{s}_{2}\), ignoring the distinction between neutral and contradiction. Intuitively, a sentence like _protests break out in the city_ should entail (and be entailed by) a sufficiently similar event like _civil unrest in the capital_. While there may be minor variation in the argument (e.g., _city_ vs. _capital_), the notion of entailment still approximately captures the appropriate similarity for this task. Our goal is to compare an event \(\mathbf{s}\) to an entire schema \(S\). The recall score \(r\) for an event \(s\in S^{*}\) is then given by \[r(s,\hat{S})=\max_{\hat{s}\in\hat{S}}(\max(E(s,\hat{s}),E(\hat{s},s))) \tag{1}\] maximizing over the events in the predicted schema. As the level of specificity between the gold events and predicted events can differ in either direction, we run the entailment model in both directions (e.g. gold event _entails_ predicted event or vice versa). We consider two variants of this procedure: **any-directional entailment**, where we use the entailment model as described above, and **bidirectional entailment**, where we modify the score to be \(\min\{E(s,\hat{s}),E(\hat{s},s)\}\). This places a stronger requirement that the two statements be equivalent. Entailment Models Used We test our generated schemas using the textual entailment model roberta-large-wanli by Liu et al. (2022), trained on the WANLI dataset. This model uses the RoBERTa-large (Liu et al., 2019) architecture and has 345M parameters. ## 4 Experimental Setup ### Gold Schema We conduct experiments on the gold schemas from two datasets: RESIN-11 Du et al. (2022) and CuratedSchemas, described below. The domains included in our dataset are _international conflict, natural disaster, IED attacks, disease outbreak, mass shooting, and kidnapping_. We sample these domains as they are available in both datasets and give us the opportunity to test various interesting aspects of schema datasets, such as varying coverage and style of event descriptions. More details on both datasets can be found in Appendix E. For the published RESIN-11 Schema, we modify the event structure into a natural language sentence as described in Appendix E.2. We also use a separate set of schemas we call the CuratedSchemas set. These schemas were annotated by ourselves and our collaborators independently of the RESIN schemas. Appendix E.1 describes these. ### Language Model Schema Generation We predominantly test the schema drafting performance of the GPT-3 variant text-davinci-003 and the Flan-T5 variant flan-t5-xxl. Each prompt is used to generate 3 generations. We report statistics and event recall averaged over 3 generations in Section 5.
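A minimal sketch of the any- and bi-directional matching of Eq. (1) is given below. The Hugging Face checkpoint name, the presence of an "entailment" entry in the model config, and the 0.5 match threshold are all assumptions of this sketch rather than details taken from the text, and the example events are purely illustrative.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "alisawuffles/roberta-large-wanli"   # assumed hub id for roberta-large-wanli
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL).eval()
ENT_ID = {lab.lower(): i for i, lab in model.config.id2label.items()}["entailment"]

def entail_prob(premise: str, hypothesis: str) -> float:
    """Probability that `premise` entails `hypothesis` under the NLI model."""
    batch = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**batch).logits.softmax(dim=-1)
    return probs[0, ENT_ID].item()

def event_recall(gold_events, pred_events, threshold=0.5, bidirectional=False):
    """Fraction of gold events matched by some predicted event, as in Eq. (1)."""
    agg = min if bidirectional else max      # bidirectional uses min{E(s, s_hat), E(s_hat, s)}
    hits = 0
    for s in gold_events:
        score = max(agg(entail_prob(s, p), entail_prob(p, s)) for p in pred_events)
        hits += score >= threshold           # assumed cut-off for counting a match
    return hits / len(gold_events)

# Illustrative (made-up) events, not taken from RESIN-11 or CuratedSchemas.
gold = ["protests break out in the city", "people are vaccinated against the disease"]
pred = ["civil unrest in the capital", "implementation of preventative measures"]
print(event_recall(gold, pred), event_recall(gold, pred, bidirectional=True))
```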
We can also over-generate predictions using diverse prompts and achieve a higher event recall with the possibility of generating incorrect or redundant events. To experiment this, we craft 3 different prompts and sample 3 generations from each using the text-davinci-003. We call this approach **prompt union**. More details on prompt union can be found in the appendix A. A key aspect of natural language event schemas is stylistic variations across datasets and language models that generate it. As discussed in 2, we use one-shot prompts to guide the models to generate outputs similar to the target dataset style. ## 5 Results ### Schema Generation Performance Table 1 shows the results of several schema generation approaches measured against the gold schemas from RESIN-11 and CuratedSchemas. The metric used to measure the recall is the _any-directional entailment_ as described in Section 3. We report the mean and standard deviation of event recall across three sampled generations for each prompt. Along with the event recall, we also report the number of events predicted by each model, which gives signal about the precision of their event generation. Zero-shot generation performance is high.Table 1 highlights that both text-davinci-003 and flan-t5-xxl show an average of 0.39 and 0.3958 event coverage with respect to the gold schemas. Discussions on human agreement of the entailment model judgments and influence of generation style are deferred to Sections 6 and 5.2. We also report the average number of event generated by both models for each prompt which are in the range of 15-30, indicating that we do not over-generate events for each domain to increase the recall performance. Finally, the overlap of generated events with both the human curated gold schemas (RESIN-11 and CuratedSchemas) is substantial and reflects on the potential of language models in drafting complex schemas with sufficient coverage. Drawing more samples from this model can increase recall further.Complex event schemas of domains like disease outbreak or natural disasters can have varying actors and topics that cannot be exhaustively sampled from a single prompt. For instance, _What happens after a disease outbreak?_ can have various responses talking about either _legal proceedings against organisations who are held accountable for a disease outbreak_ OR _research on preventing future outbreaks of the disease_ - both responses are valid but cover various aspects of the complex event. This result can be emulated by using diversity in prompts generation to generate events affecting different participants from the event. In Table 2 we compare event coverage results from a single prompt versus taking a union across a larger number of prompts and samples. We see that taking a union of generations from various prompts leads to a substantial boost to the event recall with the caveat that we generate larger number of events. Prompting models with complex prompts leads to significantly higher performance than past workDror et al. (2022) explore using language models to generate documents which can be used to construct complex schemas for events. Their work studies generation of direct step-by-step schemas using prompts such as _What are the steps involved in topic? 1._. We generate responses from this prompt template to extract natural language events. Our work is not a direct comparison to Dror et al. (2022) as their focus is predominantly using language models to generate documents for downstream event schema induction pipelines. 
However, we only adopt their direct step-by-step schema generation prompt and argue that using complex prompts can lead to better event coverage in comparison to simpler prompts such as listing steps in an event. This result is highlighted by comparing the results of text-davinci-003 and Dror et al. in Table 1. Overall, we see promising results on event schema drafting performance with language models with minimal human intervention and ability to automatically evaluate against gold schemas. ### Stylistic and Coverage Differences In this section, we investigate the various differences that can occur in natural language schemas derived from different sources. There are stylistic differences between event schema datasets and generations.Our gold datasets are derived from two independent sources and have stylistic differences in the method of representing natural language events. In Table 3 we show the average length of these prompts. We see that the mean length of sentences measured by word count varies between 3.57 to 6.29 among the datasets and LM generated schemas. Some stylistic influence can be achieved by one-shot prompting as noted in the case of word count difference between zero-shot and one-shot outputs of text-davinci-003. We also show this qualitatively in Section 7. One-shot prompts for style-matching with gold schemasWe can much better match the style of schemas by providing them as demonstrations in one-shot prompts. Specifically, for generating a schema for domain \(d\), we formulate one-shot prompts as shown in appendix A from three domains \(x\), \(\forall x\notin[d]\). Inter-dataset AgreementTo further confirm that the schemas we have differ, Table 8 shows that the average event recall measured between the gold schemas of RESIN-11 and CuratedSchemas. 
This \begin{table} \begin{tabular}{c|c|c|c||c||c} \hline \hline \multirow{2}{*}{**Domain**} & **Gold Schema** & **davinci-003** & **fhan-t5-xd** & **davinci-003** & **Dror et al.** \\ & & zero-shot & zero-shot & one-shot & one-shot \\ \hline \multirow{2}{*}{**Natural Disaster**} & \# Events & 24.33 & 21.67 & 39.22 & 4.67 \\ & **RESIN** & 0.33\(\pm\)0.24 & 0.4\(\pm\)0.13 & 0.56\(\pm\)0.19 & 0.11\(\pm\)0.1 \\ & **CuratedSchemas** & 0.41\(\pm\)0.29 & 0.29\(\pm\)0.09 & 0.40\(\pm\)0.12 & 0.14\(\pm\)0.06 \\ \hline \multirow{2}{*}{**International Conflict**} & \# Events & 29.67 & 25.67 & 44.67 & 5.33 \\ & **RESIN** & 0.44\(\pm\)0.08 & 0.6\(\pm\)0.12 & 0.46\(\pm\)0.15 & 0.07\(\pm\)0.07 \\ & **CuratedSchemas** & 0.73\(\pm\)0.06 & 0.45\(\pm\)0.1 & 0.55\(\pm\)0.16 & 0.09\(\pm\)0.04 \\ \hline \multirow{2}{*}{**Mass Shooting**} & \# Events & 15.67 & 22.66 & 29 & 4.67 \\ & **RESIN** & 0.23\(\pm\)0.07 & 0.45\(\pm\)0.47 & 0.57\(\pm\)0.19 & 0.21\(\pm\)0.07 \\ & **CuratedSchemas** & 0.27\(\pm\)0.04 & 0.41\(\pm\)0.14 & 0.53\(\pm\)0.16 & 0.25\(\pm\)0.06 \\ \hline \multirow{2}{*}{**Disease Outbreak**} & \# Events & 27 & 23.67 & 23.33 & 5.33 \\ & **RESIN** & 0.46\(\pm\)0.15 & 0.4\(\pm\)0.13 & 0.38\(\pm\)0.07 & 0.15\(\pm\)0.03 \\ & **CuratedSchemas** & 0.37\(\pm\)1.4 & 0.24\(\pm\)0.05 & 0.29\(\pm\)0.06 & 0.07\(\pm\)0.04 \\ \hline \multirow{2}{*}{**Kidnapping**} & \# Events & 15 & 21 & 23.56 & 6.33 \\ & **RESIN** & 0.52\(\pm\)0.17 & 0.33\(\pm\)0.2 & 0.42\(\pm\)0.09 & 0.26\(\pm\)0.06 \\ & **CuratedSchemas** & 0.52\(\pm\)0.12 & 0.37\(\pm\)0.08 & 0.54\(\pm\)0.1 & 0.44\(\pm\)0.1 \\ \hline \multirow{2}{*}{**IED**} & \# Events & 18 & 23.33 & 32.11 & 6 \\ & **RESIN** & 0.23\(\pm\)0.04 & 0.44\(\pm\)0.07 & 0.53\(\pm\)0.13 &.11\(\pm\)0.02 \\ & **CuratedSchemas** & 0.17\(\pm\)0.03 & 0.37\(\pm\)0.05 & 0.42\(\pm\)0.13 & 0.15\(\pm\)0.05 \\ \hline \hline \multicolumn{2}{l|}{**Average Across Domains**} & RESIN \& CuratedSchemas & **0.39** & **0.3958** & **0.47** & **0.1708** \\ \hline \hline \end{tabular} \end{table} Table 1: Event recall of zero-shot and one-shot performance performance of different language models measured against human curated gold schemas from two datasets. We use any-directional entailment. One-shot results are substantially better for certain domains and lead to generation of more events. However, all systems are able to generate a substantial number of matching events across the domains of interest. result conflates two things: the performance of the entailment model (discussed more in Section 6) and the meaningful differences in events between the two schemas. However, on inspection, stylistic attributes are responsible for both, as certain more specific events in RESIN have no analogue in CuratedSchemas due to the different styles. Entailment reflects this even though it is not reliable on every case. ## 6 Human Evaluation of Entailment Our recall values in Table 2 are high enough to establish the utility of our approach. Most events can theoretically be matched to some other event in our generated dataset. To confirm whether the entailment systems are making correct decisions, we conduct a precision-focused human evaluation of the automatic entailment model. \begin{table} \begin{tabular}{c c} \hline \hline **Gold Schema** & **Avg. Event Recall** \\ \hline RESIN & 0.62 \\ CuratedSchemas & 0.46 \\ \hline \hline \end{tabular} \end{table} Table 4: Measuring the overlap between events in the gold schemas: RESIN and CuratedSchemas. 
We use any-directional entailment to get an estimate of the overlap between two distinct human-curated schemas. \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline **Domain** & **Gold** & **Single Prompt** & **Prompt Union** \\ \hline **Natural Disaster** & \begin{tabular}{c} \# Events \\ **RESIN** \\ **CuratedSchemas** \\ \end{tabular} & \begin{tabular}{c} 24.33 \\ 0.33 \\ 0.41 \\ \end{tabular} & \begin{tabular}{c} 187 \\ 0.86 \\ 0.80 \\ \end{tabular} \\ \hline **International Conflict** & \begin{tabular}{c} \# Events \\ **RESIN** \\ **CuratedSchemas** \\ \end{tabular} & \begin{tabular}{c} 29.67 \\ 0.44 \\ 0.73 \\ \end{tabular} & \begin{tabular}{c} 215 \\ 0.78 \\ \end{tabular} \\ \hline **Mass Shooting** & \begin{tabular}{c} \# Events \\ **RESIN** \\ **CuratedSchemas** \\ \end{tabular} & \begin{tabular}{c} 15.67 \\ 0.23 \\ 0.27 \\ \end{tabular} & \begin{tabular}{c} 154 \\ 0.62 \\ 0.76 \\ \end{tabular} \\ \hline **Disease Outbreak** & \begin{tabular}{c} \# Events \\ **RESIN** \\ **CuratedSchemas** \\ \end{tabular} & \begin{tabular}{c} 27 \\ 0.46 \\ 0.37 \\ \end{tabular} & \begin{tabular}{c} 203 \\ 0.76 \\ 0.52 \\ \end{tabular} \\ \hline **Kidnapping** & \begin{tabular}{c} \# Events \\ **RESIN** \\ **CuratedSchemas** \\ \end{tabular} & \begin{tabular}{c} 15 \\ 0.52 \\ 0.52 \\ \end{tabular} & \begin{tabular}{c} 127 \\ 0.91 \\ 0.83 \\ \end{tabular} \\ \hline **IED** & \begin{tabular}{c} \# Events \\ **RESIN** \\ **CuratedSchemas** \\ \end{tabular} & \begin{tabular}{c} 18 \\ 0.17 \\ 0.23 \\ 0.63 \\ \end{tabular} & \begin{tabular}{c} 134 \\ 0.70 \\ 0.63 \\ \end{tabular} \\ \hline \hline \end{tabular} \end{table} Table 2: Diverse and instructive prompts improve the coverage of schema generation. We compare the single prompt version used in 1 against using a **prompt union** method which uses three prompts to over-generate events and improve recall. This result shows that the method can be potentially used to increase event coverage with gold schemas while compromising precision. \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline **Domain** & \begin{tabular}{c} RESIN-11 \\ \end{tabular} & CuratedSchemas & \begin{tabular}{c} text-davinci-003 \\ zero-shot \\ \end{tabular} & \begin{tabular}{c} flan-t5-xxi \\ zero-shot \\ \end{tabular} & \begin{tabular}{c} text-davinci-003 \\ one-shot \\ \end{tabular} \\ \hline Natural Disaster & 4.47 & 3.36 & 5.68 & 4.75 & 6.68 \\ International Conflict & 5.27 & 2.55 & 2.86 & 4.81 & 4.89 \\ Mass Shooting & 5.24 & 3.32 & 8.6 & 6.62 & 6.02 \\ Disease Outbreak & 7.57 & 4.45 & 5.08 & 7.1 & 5.95 \\ Kidnapping & 7.08 & 4.5 & 5.07 & 6.67 & 6.11 \\ IED & 4.87 & 3.24 & 5.48 & 6.87 & 8.11 \\ \hline Mean & 5.75 & 3.57 & 5.46 & 6.14 & 6.29 \\ \hline \hline \end{tabular} \end{table} Table 3: Average length in words of the gold/generated events in each schema. We argue that word length is a proxy for a schema’s style (_political unrest_ vs. _protestors cause civil unrest in the capital_ differ in specificity in a way that length reveals). \begin{table} \begin{tabular}{c c} \hline \hline **Majority Vote** & **Atleast One Vote** & **Krippendorff’s Alpha** \\ \hline 0.55 & 0.75 & 0.43 \\ \hline \hline \end{tabular} \end{table} Table 5: Human Agreement Study on the union of RESIN+CuratedSchemas (Zero/One-Shot). 75% of examples judged equivalent by the entailment model are judged equivalent by at least one Turker. Turker ratings are in moderate agreement according to Krippendorff’s \(\alpha\). ment decisions. 
The objective was to assess how reliably the entailment models aligned with our actual judgments regarding the equivalence between events. To gather annotations for this evaluation, we used Amazon Mechanical Turk (AMT) and enlisted the participation of randomly selected human annotators. We presented them with 216 sampled event pairs from all domains, consisting of gold and predicted events that are matched by the any-directional entailment model as described in Section 3. The annotators were then asked to indicate their agreement with each match. Each task is annotated by three unique annotators to measure overall consensus. Further details on the task setup can be found in Appendix B. The results of our human agreement study are shown in Table 5. The event match performance of entailment models across two datasets and all event domains achieves a majority vote agreement of 0.55 with the entailment judgments. However, at least one annotator agrees with the event match 75% of the time. We also compute Krippendorff's alpha to measure inter-annotator agreement. The alpha score for our task is 0.43, which is considered moderate agreement, but does reflect the subjectivity of the task. We argue that not all of the entailment mistakes labeled as such truly represent errors. For instance, the any-directional entailment model matches the prediction "_Implementation of preventative measures_" to two gold events: "_people maintain physical distancing to prevent disease spread_" and "_people are vaccinated against the disease_." Although the level of specificity differs between the two, we argue that _any-directional entailment_ can be a reasonable candidate for an automatic metric while serving the purpose of assigning soft matches between gold and predicted events. Cases like this are often marked as not equivalent by Turkers, but we argue that the entailment judgment is still a reliable method for assessing recall. For a highly precise evaluation protocol, _bi-directional entailment_ can be a suitable candidate; however, as this is a very strict metric, the recall achieved by this evaluation protocol is significantly lower (see Table 7 in the Appendix). We also conduct an internal human evaluation of the entailment metric at a granular level in Appendix F. **The performance of entailment depends on stylistic matching.** Table 8 highlights that human agreement with any-directional entailment improves across all domains for davinci-003 when the schemas are generated with one-shot prompts compared to zero-shot. This signifies that one-shot prompts are beneficial in guiding the language models to generate schemas of a specific style. ## 7 Qualitative Analysis While the length analysis in Table 3 shows differences between various domains and schema sources, the stylistic differences go beyond length in ways that are hard to precisely quantify. We show examples from the **disease-outbreak** domain in Table 6 to highlight these differences and qualitatively depict the variation in the writing style of events across human-curated datasets (RESIN-11 and CuratedSchemas) and generations from language models (text-davinci-003 and flan-t5-xxl) in zero-shot and one-shot settings. We see that event samples from CuratedSchemas are more formal and shorter as compared to RESIN-11, which has higher length variance and reads more like natural language. This style also differs from the zero-shot generations from davinci-003 and flan-t5-xxl. 
A controlled generation using one-shot prompts derived from the RESIN-11 schema can be used to attempt to match the event description style of the gold schemas. ## 8 Related Work **Event-centric modeling and schema induction.** Methods performing schema induction can be categorized into simple and complex schema induction. Simple schema induction methods rely on identifying event triggers and participants and do not incorporate the relationships between events Chambers (2013); Cheung et al. (2013); Nguyen et al. (2015); Sha et al. (2016); Yuan et al. (2018). Recent work Li et al. (2021); Du et al. (2022) focuses on generating complex schemas that incorporate temporal as well as event argument relationships, but assumes the availability of large amounts of event-relevant corpora. Existing event datasets such as MAVEN Wang et al. (2020) and event-centric knowledge bases such as EventWiki Ge et al. (2018) are also available, but working with these datasets naturally restricts a system designer to a fixed ontology. Closest to our work, Zhang et al. (2023) also generate schemas with a GPT prompting stage. However, they follow this with a stage of grounding to an ontology, sidestepping the challenges with evaluation we tackle in this work and losing the ability to homogenize between two different sources of schemas. Dror et al. (2022) use language models to generate a large number of source documents about a topic that can be used to extract events and relations to build schemas in a zero-shot manner. However, their method uses language models to generate documents containing relevant information, which are further used to extract events using event extraction methods. In this work, we provide a way to both generate and automatically evaluate light event schemas in natural language, making the process less dependent on traditional event extraction and schema matching pipelines. **Textual Entailment.** Natural Language Inference research focuses on establishing entailment between a premise and a hypothesis pair. Although most of the previous work focuses on sentence-level hypothesis and premise pairs, recent datasets such as DocNLI Yin et al. (2021) and ContractNLI Koreeda and Manning (2021) push the boundaries to extend NLI models to longer multi-sentence inputs and real-world datasets. Schuster et al. (2022) explore the utility of NLI models on longer inputs using a "stretching" form of aggregation, namely maxing over possible alignments to a document. It is common to see similarity metrics such as ROUGE, BLEU, and BERTScore Zhang et al. (2020) being used to judge the similarity between two sentences. However, recent works recommend the usage of NLI models as evaluation metrics for abstractive summarization Maynez et al. (2020), as they capture the faithfulness and factuality of summaries better than standard metrics. Zhang and Bansal (2021) explore the usage of NLI models to automate evaluation of summarization tasks, which can also benefit automated best model checkpointing. In this work, we explore using NLI as a metric for schema coverage matching directly in natural language. ## 9 Conclusion In this paper, we explored the ability of language models to draft light event schemas. In both zero- and one-shot settings, we showed that large language models can generate coherent, varied sequences of events in natural language that overlap substantially with human-curated events across several domains of interest. We show that textual entailment can be used to evaluate these matches. 
We believe our work can pave the way for future efforts looking at how explicit knowledge like schemas can be used in tandem with large language models to make predictions. Streamlining the ability to generate schemas and then curate them with human intervention will be an important step toward scaling this method to work across many domains. ## Limitations The schemas we produce in this work are, by choice, lighter weight than representations used in some prior work. Past work Li et al. (2021) has explored schemas with graph-structured ordering. \begin{table} \begin{tabular}{c c l} \hline \hline **LM/Gold Dataset** & **Prompt** & **Example** \\ \hline \multirow{3}{*}{RESIN-11} & \multirow{3}{*}{-} & medical treatment is attempted on infected people; people donate to help fight the disease outbreak; officials are assigned to monitor, prevent, contain, and mitigate the disease outbreak \\ \hline \multirow{3}{*}{CuratedSchemas} & \multirow{3}{*}{-} & disease control agency investigates outbreak; infected group reports to disease control agency; scientists invent drug \\ \hline \multirow{3}{*}{text-davinci-003} & \multirow{3}{*}{zero-shot} & ongoing monitoring of the disease; collaboration between healthcare providers and public health agencies; stockpiling of necessary medical supplies \\ \hline \multirow{3}{*}{flan-t5-xxl} & \multirow{3}{*}{zero-shot} & people living in that country become infected with the pathogen; vaccines are developed and distributed; the laboratories informs the public about the disease outbreak \\ \hline \multirow{3}{*}{text-davinci-003} & \multirow{3}{*}{one-shot} & medical teams conduct research on the disease; affected area is monitored for further outbreaks; people are vaccinated against the virus \\ \hline \multirow{3}{*}{flan-t5-xxl} & \multirow{3}{*}{one-shot} & government issues a public health advisory; people are quarantined; disease is transmitted from animal to human \\ \hline \hline \end{tabular} \end{table} Table 6: Examples of output events from the different gold annotations we use and large language models. While these schemas can express a richer set of partial ordering and mutual exclusion relationships between events, they are both cumbersome to produce and relatively little work has shown the ability to use them to perform complex inferences. Our view is that more complex structural relationships should also be specified in natural language for maximal compatibility with prediction based on large language models; we leave this for future work. Human curation can also be used to impart these features for use in downstream applications. A second limitation is that the robustness of event recall evaluation using textual entailment is dependent on the stylistic similarities between generated and gold schemas. While we analyze this in the paper, stronger textual entailment systems down the road can potentially be useful to improve the precision of our performance estimates further. Finally, we note that schema-mediated prediction with neural models is an emerging and ongoing area of research. Therefore, there are no standard systems we can plug our schemas into for downstream evaluation. Nevertheless, we believe that these knowledge structures can be intrinsically evaluated, and high quality representations will pave the way for future work in this area.
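To make the evaluation protocol used throughout this paper concrete, the following is a minimal sketch of the any-directional event-recall computation. The NLI model shown (roberta-large-mnli) and the 0.5 decision threshold are stand-ins for illustration only; the excerpt does not pin down the exact entailment model or threshold used.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "roberta-large-mnli"  # stand-in NLI model; the paper's exact entailment model is not specified here
tok = AutoTokenizer.from_pretrained(MODEL)
nli = AutoModelForSequenceClassification.from_pretrained(MODEL).eval()

def entails(premise: str, hypothesis: str, threshold: float = 0.5) -> bool:
    """Return True if the NLI model assigns the 'entailment' label probability above threshold."""
    with torch.no_grad():
        logits = nli(**tok(premise, hypothesis, return_tensors="pt")).logits
    probs = logits.softmax(dim=-1)[0]
    # roberta-large-mnli label order: 0 = contradiction, 1 = neutral, 2 = entailment
    return probs[2].item() > threshold

def any_directional_recall(gold_events, predicted_events) -> float:
    """Fraction of gold events matched by at least one predicted event in either entailment direction."""
    matched = sum(
        1 for g in gold_events
        if any(entails(g, p) or entails(p, g) for p in predicted_events)
    )
    return matched / len(gold_events)
```

Bi-directional matching, the stricter variant discussed in Section 6, would replace the `or` over directions with an `and`.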
2305.05792
Testing for Overfitting
High complexity models are notorious in machine learning for overfitting, a phenomenon in which models well represent data but fail to generalize an underlying data generating process. A typical procedure for circumventing overfitting computes empirical risk on a holdout set and halts once (or flags that/when) it begins to increase. Such practice often helps in outputting a well-generalizing model, but justification for why it works is primarily heuristic. We discuss the overfitting problem and explain why standard asymptotic and concentration results do not hold for evaluation with training data. We then proceed to introduce and argue for a hypothesis test by means of which both model performance may be evaluated using training data, and overfitting quantitatively defined and detected. We rely on said concentration bounds which guarantee that empirical means should, with high probability, approximate their true mean to conclude that they should approximate each other. We stipulate conditions under which this test is valid, describe how the test may be used for identifying overfitting, articulate a further nuance according to which distributional shift may be flagged, and highlight an alternative notion of learning which usefully captures generalization in the absence of uniform PAC guarantees.
James Schmidt
2023-05-09T22:49:55Z
http://arxiv.org/abs/2305.05792v1
# Testing for Overfitting ###### Abstract High complexity models are notorious in machine learning for overfitting, a phenomenon in which models well represent data but fail to generalize an underlying data generating process. A typical procedure for circumventing overfitting computes empirical risk on a holdout set and halts once (or flags that/when) it begins to increase. Such practice often helps in outputting a well-generalizing model, but justification for why it works is primarily heuristic. We discuss the overfitting problem and explain why standard asymptotic and concentration results do not hold for evaluation with training data. We then proceed to introduce and argue for a hypothesis test by means of which both model performance may be evaluated using training data, and overfitting quantitatively defined and detected. We rely on said concentration bounds which guarantee that empirical means should, with high probability, approximate their true mean to conclude that they should approximate each other. We stipulate conditions under which this test is valid, describe how the test may be used for identifying overfitting, articulate a further nuance according to which distributional shift may be flagged, and highlight an alternative notion of learning which usefully captures generalization in the absence of uniform PAC guarantees. ## 1 Introduction Supervised machine learning is severely underdetermined: a finite labeled data set is used to search a function space for an appropriate model fitting both the data and "from where the data comes." While the full function space is often at least two infinite orders of magnitude greater than the data, practitioners usually restrict search to a hypothesis class that is parametrized as a finite dimensional space. If this hypothesis class is too restricted, the search may output a model which fails to represent or approximate the data well enough; if, on the other hand, the class is too rich, the output model may represent the data _too_ well, in that the model fails to represent the underlying distribution from which data is drawn. Generally, this tradeoff between _underfitting_ and _overfitting_, respectively, is asymmetric: a model which fits data may (and hopefully does) still generalize to the underlying distribution, while a model which underfits data usually does not fit the distribution. Stated differently, underfitting is _detectable_ in the course of performance evaluation while overfitting cannot be identified by performance on the training data alone ([3]). To mitigate the aspect blindness of training data performance to overfitting, standard practice sets aside a holdout set disjoint from training and computes performance separately. Thus, training a model ordinarily incorporates two distinct steps: 1. optimization with training data to fit (model to) data and 2. verification of generalization by evaluating performance on holdout data. While vague heuristics motivating this two-step procedure abound in the literature and research community, rigorous statistical rationale less ubiquitously accompany justification of its use. Moreover, this two-step process facially treats training data and holdout data as altogether different kinds of things, with different tasks and different intended uses. 
As such, separating the conclusions we draw from training data and holdout data threatens to undermine the original impetus according to which training data is used for training in the first place, namely _that_ optimization with respect to training data should _thereby_ optimize an expectation (generalization). We explain the reasons for this paradox, and propose a solution that translates into a statistical test which may be deployed for both defining and identifying overfitting, using modified Law of Large Numbers (LLN) intuition that empirical means should approximate their expectation. In section 2, we review requisite background for the supervised learning problem, discuss the problem with training data, how it relates to overfitting, and why we would still like to use model performance on training data to contribute to assessing generalization. In section 3, we detail the statistical test for achieving this end, and give commentary on how this test clarifies the meaning of overfitting. We point out how the test validates generalization even absent strong but restrictive (e.g. PAC) learnability guarantees; we also introduce a weaker but still rich notion of learnability. We end with some plots in section 4 illustrating the use of the test in simulation. ## 2 Technical Background ### Supervised Machine Learning The setting for a supervised machine learning problem starts with the following data: 1. a joint probability space \((\mathcal{X}\times\mathcal{Y},\mathbb{P}_{\mathcal{X}\times\mathcal{Y}})\),1 Footnote 1: We leave implicit the \(\sigma\)-algebra of measurable sets and suppose that anything we try measuring is indeed \(\mathbb{P}\)-measurable. 2. labeled data \(\mathsf{S}=\big{(}(x_{1},y_{1}),\ldots,(x_{m},y_{m})\big{)}\in(\mathcal{X} \times\mathcal{Y})^{\omega}\coloneqq\bigcup_{m\in\mathsf{N}}(\mathcal{X}\times \mathcal{Y})^{m}\), 3. a hypothesis class \(\mathcal{H}\subset\mathcal{Y}^{\mathcal{X}}\) of functions \(\mathfrak{g}:\mathcal{X}\to\mathcal{Y}\),2 usually finite dimensional, elements \(\mathfrak{g}\in\mathcal{H}\) of which are called _models_, and Footnote 2: The notation \(\mathcal{Y}^{\mathcal{X}}\) denotes the _set_\(\big{\{}\mathfrak{g}:\mathcal{X}\to\mathcal{Y}\big{\}}\) of unstructured functions with domain \(\mathcal{X}\) and codomain \(\mathcal{Y}\). Of course, we require \(\mathcal{H}\) to consist only of measurable such functions. 4. a cost function generator \(\mathsf{c}:\mathcal{H}\to\mathbb{R}^{\mathcal{X}\times\mathcal{Y}}\) mapping a model \(\mathfrak{g}\) to random variable \(\mathsf{c}_{\mathfrak{g}}:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}\), whose output \(\mathsf{c}_{\mathfrak{g}}(\mathsf{x},\mathsf{y})\) on input \((\mathsf{x},\mathsf{y})\) is a measure of fit between prediction \(\mathfrak{g}(\mathsf{x})\) and label \(\mathsf{y}\). The goal is to concoct an _algorithm_\(\mathfrak{g}_{(\cdot)}:(\mathcal{X}\times\mathcal{Y})^{\omega}\to\mathcal{H}\) which outputs a model \(\mathfrak{g}_{\mathsf{S}}\) with small expected cost \[\mathbb{E}(\mathsf{c}_{\mathfrak{g}_{\mathsf{S}}})\approx\inf_{\mathfrak{g} \in\mathcal{H}}\mathbb{E}(\mathsf{c}_{\mathfrak{g}}),\] having some guarantees of approximation performance in probability. 
The measure \(\mathbb{P}_{\mathcal{X}\times\mathcal{Y}}\) generating data \((\mathsf{x}_{i},\mathsf{y}_{i})\) is usually unknown, and data \(\mathsf{S}\) is used to proxy approximate expectation and to optimize the expected risk function \[\mathbb{E}(\mathsf{c}_{(\cdot)}):\begin{array}{rcl}\mathcal{H}&\to&\mathbb{R }\\ \mathfrak{g}&\mapsto&\mathbb{E}(\mathsf{c}_{\mathfrak{g}}).\end{array} \tag{1}\] The standard algorithm for this optimization is empirical risk minimization, namely \[\mathfrak{g}_{\mathsf{S}}\in\arg\min_{\mathfrak{g}\in\mathcal{H}}\mathsf{e} _{\mathsf{S}}(\mathfrak{g}), \tag{2}\] where empirical risk is defined as \[\mathsf{e}_{\mathsf{S}}(\mathfrak{g})\coloneqq\frac{1}{|\mathsf{S}|}\sum_{( \mathsf{x},\mathsf{y})\in\mathsf{S}}\mathsf{c}_{\mathfrak{g}}(\mathsf{x}, \mathsf{y}). \tag{3}\] Law of Large Numbers intuition suggests that \[\operatorname{e_{\mathsf{S}}}(\hat{\mathfrak{y}})\approx\mathbb{E}(\operatorname{ c_{\mathfrak{g}}}) \tag{4}\] when \(|\mathsf{S}|\) is large, so supposing as much, an output \(\hat{\mathfrak{y}}_{\mathsf{S}}\) of eq. (2) may be hoped to be a close approximation of the true goal, in the sense that \[\operatorname{e_{\mathsf{S}}}(\hat{\mathfrak{y}}_{\mathsf{S}})\approx\inf_{ \mathfrak{G}\in\mathcal{H}}\operatorname{E}(\operatorname{c_{\mathfrak{g}}}). \tag{5}\] To the extent that a model \(\hat{\mathfrak{y}}\in\mathcal{H}\) (approximately) satisfies approximation (4), we say that the model _generalizes_ (\(\varepsilon\)-generalizes if the error in approximation is bounded by \(\varepsilon\)), and to the extent that models in \(\mathcal{H}\) can be guaranteed to generalize optimality \(\inf_{\hat{\mathfrak{y}}}\operatorname{E}(\operatorname{c_{\mathfrak{g}}})\) (5), we say that \(\mathcal{H}\) is some kind of _learnable_. The familiar and formal notion of _probably approximately correct_ (PAC) learnability, for example, extends guarantees of concentration bounds to an optimization (over \(\mathcal{H}\)) context, and defines \(\mathcal{H}\) to be PAC learnable if there is a sample complexity \(\mu:(0,1)^{2}\to\mathbb{N}\) for which \(\hat{\mathfrak{y}}_{\mathsf{S}}\) may be guaranteed to \(\varepsilon\)-generalize with at least \(1-\delta\) probability as long as \(|\mathsf{S}|>\mu(\varepsilon,\delta)\) ([9], [4]).3 Properly quantifying the character and richness of \(\mathcal{H}\) (as captured, e.g., by VC dimension) demarcates learnability conditions, and various theoretical results exist providing such guarantees. Footnote 3: Explicitly, if \(\mathfrak{m}>\mu(\varepsilon,\delta)\) then \(\operatorname{P}_{(\mathcal{X}\times\mathcal{Y})^{\mathsf{m}}}\left(\left| \operatorname{E}(\operatorname{c_{\mathfrak{g}_{(\cdot)}}})-\inf_{\mathfrak{g }\in\mathcal{H}}\operatorname{E}(\operatorname{c_{\mathfrak{g}}})\right|> \varepsilon\right)<\delta\). Strictly speaking, PAC learnability only requires the existence of an algorithm \(\hat{\mathfrak{y}}:(\mathcal{X}\times\mathcal{Y})^{\mathsf{av}}\to\mathcal{H}\) satisfying this bound, not necessarily that empirical risk minimization is it. ### Overfitting and Generalization Absent formal learnability guarantees, it turns out that LLN reasoning is not sufficient for ensuring generalization. The reasons are multifarious but substantively turn around _currying_ ([6, SS2.3]) of the cost function generator \(\operatorname{c}:\mathcal{H}\to\mathbb{R}^{(\mathcal{X}\times\mathcal{Y})}\). 
The notion of currying reflects pre-fixing arguments of a multivariable function to generate a function of fewer variables, and casting the learning objective in this formalism is helpful for understanding the overfitting problem. For a _fixed_ model \(\hat{\mathfrak{y}}\in\mathcal{H}\), the map \(\operatorname{c_{\mathfrak{g}}}:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}\) is a random variable, and therefore defines a measure \(\operatorname{\mathbb{P}_{\mathbb{R}}}\) on \(\mathbb{R}\) by \(\operatorname{\mathbb{P}_{\mathbb{R}}}([\operatorname{\operatorname{\mathfrak{a }}},\operatorname{b}])\coloneqq\operatorname{P}_{\mathcal{X}\times\mathcal{Y}} \bigl{(}\operatorname{c_{\mathfrak{g}}}^{-1}([\operatorname{\operatorname{ \mathfrak{a}}},\operatorname{b}])\bigr{)}\). This means, among other things, given data \(\mathsf{S}=\bigl{(}(\operatorname{x_{1}},\operatorname{y_{1}}),\ldots,( \operatorname{x_{m}},\operatorname{y_{m}})\bigr{)}\righm{\operatorname{iid}} \operatorname{\mathbb{P}_{\mathcal{X}\times\mathcal{Y}}}\), that \(\operatorname{c_{\mathfrak{g}}}(\mathsf{S})\coloneqq\bigl{(}\operatorname{c_{ \mathfrak{g}}}(\operatorname{x_{1}},\operatorname{y_{1}}),\ldots,\operatorname{ c_{\mathfrak{g}}}(\operatorname{x_{m}},\operatorname{y_{m}})\bigr{)}\righm{ \operatorname{iid}}\operatorname{\mathbb{P}_{\mathbb{R}}}\). And independence invites valid conclusions of various concentration results. Searching over a function space, in the supervised machine learning setting, adds complications to otherwise innocuous independence conclusions. For the learning algorithm \(\hat{\mathfrak{y}}:(\mathcal{X}\times\mathcal{Y})^{\mathsf{av}}\to\mathcal{H}\) first takes \(\operatorname{\textit{data}}\mathsf{S}\in(\mathcal{X}\times\mathcal{Y})^{ \mathsf{av}}\) in search of a certain minimum with respect to _this_ data. Given different data, the algorithm outputs a different model. The curried cost generator, by contrast, \(\operatorname{c_{(\cdot)}}:\mathcal{H}\to\mathbb{R}^{\mathcal{X}\times\mathcal{Y}}\) defines an empirical risk generator \(\operatorname{e}(\cdot):\mathcal{H}\to\mathbb{R}^{(\mathcal{X}\times\mathcal{Y}) ^{\mathsf{av}}}\) defined by sending \(\hat{\mathfrak{y}}\mapsto\operatorname{e}_{(\cdot)}(\hat{\mathfrak{y}})\), the latter of which is defined by mapping data \(\mathsf{S}\in(\mathcal{X}\times\mathcal{Y})^{\mathsf{av}}\) to \(\operatorname{e_{\mathsf{S}}}(\hat{\mathfrak{y}})\) (eq. (3)), and with respect to which LLN reasoning and the like may properly apply. 
The learning optimization procedure, however, flips the currying around: fixing data \((\operatorname{x},\operatorname{y})\in\mathcal{X}\times\mathcal{Y}\), we have a cost on models \(\operatorname{c_{(\cdot)}}(\operatorname{x},\operatorname{y}):\mathcal{H}\to \mathbb{R}\) defined by \(\hat{\mathfrak{y}}\mapsto\operatorname{c_{\mathfrak{g}}}(\operatorname{x}, \operatorname{y})\), which extends to empirical risk \(\operatorname{e_{\mathsf{S}}}(\cdot):\mathcal{H}\to\mathbb{R}\) mapping model \(\hat{\mathfrak{y}}\mapsto\operatorname{e_{\mathsf{S}}}(\hat{\mathfrak{y}})\), instantiating the curried function \(\operatorname{e_{(\cdot)}}:\bigl{(}\mathcal{X}\times\mathcal{Y}\bigr{)}^{ \mathsf{av}}\to\mathbb{R}^{\mathcal{H}}\).4 Footnote 4: The reversal of roles in subscripts between \(\operatorname{c}\) and \(\operatorname{e}\) is unfortunate, but otherwise reflective of the primary purpose of each function, namely that \(\operatorname{c_{\hat{\mathfrak{g}}}}\) measures performance of model \(\hat{\mathfrak{y}}\) on a datapoint \((\operatorname{x},\operatorname{y})\in\mathcal{X}\times\mathcal{Y}\) while \(\operatorname{e_{\mathsf{S}}}\) measures empirical risk of fixed data on a model \(\hat{\mathfrak{y}}\in\mathcal{H}\). Order of operations matter. Consider uncurried versions \(\mathcal{H}\times(\mathcal{X}\times\mathcal{Y})^{\mathsf{av}}\xrightarrow{ \operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{\operatorname{ \operatorname{\operatorname{\operatorname{\operatorname{\operatornameoperatorname{\operatornameoperatornameoperatornameoperatornameoperatornameoperatorname\operatornameoperatornameoperatornameoperatorname training data is "aspect blind" to overfitting: Law of Large Numbers reasoning does not apply in this regime. 
Consider that a sequence of datasets \[\mathsf{S}_{1} = (\mathsf{x}_{1},\mathsf{y}_{1})\in(\mathcal{X}\times\mathcal{Y})^{ 1},\] \[\mathsf{S}_{2} = \big{(}\mathsf{S}_{1},(\mathsf{x}_{2},\mathsf{y}_{2})\big{)}\in( \mathcal{X}\times\mathcal{Y})^{2},\] \[\vdots\] \[\mathsf{S}_{\mathsf{m}} = \big{(}\mathsf{S}_{\mathsf{m}-1},(\mathsf{x}_{\mathsf{m}},\mathsf{ y}_{\mathsf{m}})\big{)}\in(\mathcal{X}\times\mathcal{Y})^{\mathsf{m}},\] \[\vdots\] with each \(\mathsf{S}_{j}\sim_{\mathsf{iid}}\mathbb{P}_{\mathcal{X}\times\mathcal{Y}}\), induces a sequence of models \[\hat{\mathfrak{g}}_{\mathsf{S}_{1}},\hat{\mathfrak{g}}_{\mathsf{S}_{2}}, \ldots,\hat{\mathfrak{g}}_{\mathsf{S}_{\mathsf{m}}},\ldots\in\mathcal{H}.\] The sequence of models consequently induces a sequence of finite sequences of costs \[\mathsf{c}_{\hat{\mathfrak{g}}_{\mathsf{S}_{1}}}(\mathsf{S}_{1}) = \mathsf{c}_{\hat{\mathfrak{g}}_{\mathsf{S}_{1}}}(\mathsf{x}_{1}, \mathsf{y}_{1})\in\mathbb{R},\] \[\mathsf{c}_{\hat{\mathfrak{g}}_{\mathsf{S}_{2}}}(\mathsf{S}_{2}) = \big{(}\mathsf{c}_{\hat{\mathfrak{g}}_{\mathsf{S}_{2}}}(\mathsf{ x}_{1},\mathsf{y}_{1}),\mathsf{c}_{\hat{\mathfrak{g}}_{\mathsf{S}_{2}}}(\mathsf{ x}_{2},\mathsf{y}_{2})\big{)}\in\mathbb{R}^{2},\] \[\vdots\] \[\mathsf{c}_{\hat{\mathfrak{g}}_{\mathsf{S}_{\mathsf{m}}}}( \mathsf{S}_{\mathsf{m}}) = \big{(}\mathsf{c}_{\hat{\mathfrak{g}}_{\mathsf{S}_{\mathsf{m}}}}( \mathsf{x}_{1},\mathsf{y}_{1}),\ldots,\mathsf{c}_{\hat{\mathfrak{g}}_{ \mathsf{S}_{\mathsf{m}}}}(\mathsf{x}_{\mathsf{m}},\mathsf{y}_{\mathsf{m}}) \big{)}\in\mathbb{R}^{\mathsf{m}},\] \[\vdots\] which clearly is not guaranteed to be iid, unless miraculously the cost functions \[\mathsf{c}_{\hat{\mathfrak{g}}_{\mathsf{S}_{1}}},\mathsf{c}_{\hat{\mathfrak{g }}_{\mathsf{S}_{2}}},\ldots,\mathsf{c}_{\hat{\mathfrak{g}}_{\mathsf{S}_{\mathsf{ m}}}},\ldots\] all induce the same measure \(\mathbb{P}_{\mathfrak{C}_{\mathfrak{G}}(\mathcal{X}\times\mathcal{Y})}\) on \(\mathbb{R}\), for which there is no apriori reason to suppose. For each such \(\mathfrak{g}_{\mathbb{S}_{j}}\), LLN still holds in the sense that there is, for \(\varepsilon,\delta>0\), a number \(m_{j}>0\) for which \[\mathbb{P}_{(\mathcal{X}\times\mathcal{Y})^{m}}\left(\left|\mathfrak{e}_{( \cdot)}(\mathfrak{g}_{\mathbb{S}_{j}})-\mathbb{E}(\mathfrak{c}_{\mathfrak{g}_{ \mathbb{S}_{j}}})\right|>\varepsilon\right)<\delta\] whenever \(m>m_{j}\). Still, there is no reason to suppose that \(m_{j}<j\), and even if it were, small probability events still exist: the search over \(\mathcal{H}\) in the supervised learning setting incentivizes discovery of such events, c.f. section 3.2. One may place an appropriate measure \(\mathbb{P}_{\Gamma}\) on \(\Gamma(\mathfrak{G})\)--in fact, this pullback naturally inherits from measures on \((\mathcal{X}\times\mathcal{Y})^{\omega}\)--and certainly iid samples \(\mathbb{S}\in(\mathcal{X}\times\mathcal{Y})^{\omega}\) induce iid samples \(\mathfrak{g}_{\mathbb{S}}\sim_{\mathrm{iid}}\mathbb{P}_{\Gamma}\), but statements of events on this set do not extend to iid conditions on sequences of costs. For such, we must fix and isolate our attention to slices \(\{\mathfrak{g}_{\mathbb{S}}\}\times(\mathcal{X}\times\mathcal{Y})^{\omega} \subset\Gamma(\mathfrak{g})\) (see fig. 
2), for which sample \(\mathcal{S}^{\prime}=\left((\mathfrak{x}^{\prime}_{1},\mathfrak{y}^{\prime}_{ 1}),\ldots,(\mathfrak{x}_{k},\mathfrak{y}^{\prime}_{k})\right)\sim_{\mathrm{ iid}}\mathbb{P}_{\mathcal{X}\times\mathcal{Y}}\) induces truly independent and identically distributed sample \(\mathfrak{c}_{\mathfrak{g}_{\mathbb{S}}}(\mathcal{S}^{\prime})\coloneqq\left( \mathfrak{c}_{\mathfrak{g}_{\mathbb{S}}}(\mathfrak{x}^{\prime}_{1},\mathfrak{y }^{\prime}_{1}),\ldots,\mathfrak{c}_{\mathfrak{g}_{\mathbb{S}}}(\mathfrak{x}^ {\prime}_{k},\mathfrak{y}^{\prime}_{k})\right)\sim_{\mathrm{iid}}\mathbb{P}_{ \mathfrak{c}_{\mathfrak{g}_{\mathbb{S}}}(\mathcal{X}\times\mathcal{Y})}\). While we therefore cannot rely on empirical risk \(\mathfrak{c}_{\mathbb{S}}(\mathfrak{g}_{\mathbb{S}})\) by itself to reflect generalization performance, we _may_ in concert with \(\mathfrak{c}_{\mathbb{S}^{\prime}}(\mathfrak{g}_{\mathbb{S}})\) for some _other_ data \(\mathcal{S}^{\prime}\in(\mathcal{X}\times\mathcal{Y})^{\omega}\), usually called a holdout or validation set. Typically performance at each training stage is evaluated on the holdout set, and early stopping conditions verify that validation performance continues to improve [5]. An onset of validation performance degradation can be interpreted as indication of overfitting. Illustrations of overfitting in the literature (e.g. [1, 3, 10]) display performance on training data compared with performance on holdout data, often parameterized by model complexity or training step ([8, 7]). While discussions of overfitting many times consider generalization against model complexity, and therefore present performance across _models_, we introduce a test, for a fixed model, based on classic concentration inequalities, with respect to which overfitting may be _quantitatively_ defined, relying on comparison of validation performance to training set performance. We reason that because model construction uses and depends on (minimization with respect to \(\mathfrak{g}\) of) \(\mathfrak{c}_{\mathbb{S}}(\mathfrak{g})\), we ought to be able to conclude performance with \(it\). In fact, comparison against empirical risk \(\mathsf{e}_{\mathsf{S}}(\mathfrak{H}_{\mathsf{S}})\) provides an anchor against which we may draw rigorous statistical conclusions. The test we provide amounts to much of the same as common stopping criteria for training, though the grounds we give are both grounded in the math and provide threads for distinguishing causes of error. ## 3 Detecting Overfitting ### The Test We consider only the case where \(\mathrm{cost}\ \mathsf{c}_{(\cdot)}:\mathcal{H}\times(\mathcal{X}\times \mathcal{Y})\to\mathbb{R}\) is bounded as \(\mathsf{c}_{\mathfrak{H}_{\mathsf{S}}}\subset[0,1]\), such as most classification problems or restricted classes of regression problems. In this case, Hoeffding-like bounds abound and we expect that \[\mathbb{P}_{(\mathcal{X}\times\mathcal{Y})^{k}}\left(\left|\mathbb{E}( \mathsf{c}_{\mathfrak{H}_{\mathsf{S}}})-\mathsf{e}_{(\cdot)}(\mathfrak{H}_{ \mathsf{S}})\right|>\varepsilon\right)<2\mathsf{e}^{-2\varepsilon^{2}\mathsf{ k}}. 
\tag{6}\] In other words, for independently and identically distributed sampled data \(\mathsf{S}^{\prime}\in(\mathcal{X}\times\mathcal{Y})^{\mathsf{k}}\), \(\mathsf{e}_{\mathsf{S}^{\prime}}(\mathfrak{H}_{\mathsf{S}})\approx\mathbb{E}( \mathsf{c}_{\mathfrak{H}_{\mathsf{S}}})\pm\varepsilon\) with probability at least \(1-\mathsf{e}^{-2\varepsilon^{2}\mathsf{k}}\).5 While \(\mathsf{S}\in(\mathcal{X}\times\mathcal{Y})^{\mathsf{m}}\) is also drawn independently, by assumption, we cannot quite conclude the same of \(\mathsf{e}_{\mathsf{S}}(\mathsf{c}_{\mathfrak{H}_{\mathsf{S}}})\) because (as discussed above) with respect to the \(\mathsf{c}_{\mathfrak{H}_{\mathsf{S}}}\)-induced measure on \(\mathbb{R}\), the sequence \((\mathsf{c}_{\mathfrak{H}_{\mathsf{S}}}(\mathsf{x}_{1},y_{1}),\ldots,\mathsf{ c}_{\mathfrak{H}_{\mathsf{S}}}(\mathsf{x}_{m},y_{m}))\) is not. We may, however, suppose that a _consequence_ of independence holds, namely that Footnote 5: The fact that \(\mathbb{E}(\mathsf{c}_{(\cdot)})\) and \(\mathsf{e}(\cdot)\) both take \(\mathfrak{H}_{\mathsf{S}}\) as argument is irrelevant: the bound holds for any \(\mathfrak{H}\in\mathcal{H}\). \[|\mathbb{E}(\mathsf{c}_{\mathfrak{H}_{\mathsf{S}}})-\mathsf{e}_{\mathsf{S}}( \mathfrak{H}_{\mathsf{S}})|<\varepsilon/2, \tag{7}\] and use this (possibly counterfactual) supposition to test its truth. While possibly counterintuitive, a bound of the form in (7) is exactly what we desire from a generalizing model \(\mathfrak{H}_{\mathsf{S}}\). We first collect some definitions. **Definition 3.1**.: Let \(\mathsf{S}\sim_{\mathsf{iid}}\mathbb{P}_{\mathcal{X}\times\mathcal{Y}}\) and \(\mathfrak{H}_{\mathsf{S}}\in\mathcal{H}\). We say that \(\mathfrak{H}_{\mathsf{S}}\)_\(\varepsilon\)-overfits_\(\mathsf{S}\) if \[\mathsf{e}_{\mathsf{S}}(\mathfrak{H}_{\mathsf{S}})<\mathbb{E}(\mathsf{c}_{ \mathfrak{H}_{\mathsf{S}}})-\varepsilon.\] Similarly, \(\mathfrak{H}_{\mathsf{S}}\)_\(\varepsilon\)-underfits_\(\mathsf{S}\) if \(\mathsf{e}_{\mathsf{S}}(\mathfrak{H}_{\mathsf{S}})>\mathbb{E}(\mathsf{c}_{ \mathfrak{H}_{\mathsf{S}}})+\varepsilon\). Finally, \(\mathfrak{H}_{\mathsf{S}}\)_\(\varepsilon\)-generalizes_ model \(\mathfrak{H}_{\mathsf{S}}\) neither \(\varepsilon\)-overfits nor \(\varepsilon\)-underfits \(\mathsf{S}\). **Proposition 3.1** (Test for Overfitting).: Suppose that model \(\mathfrak{H}_{\mathsf{S}}\)\(\varepsilon/2\)-generalizes (definition 3.1, inequality (7)). Then \[\mathbb{P}_{(\mathcal{X}\times\mathcal{Y})^{\mathsf{k}}}\left(\left|\mathsf{e }_{\mathsf{S}}(\mathfrak{H}_{\mathsf{S}})-\mathsf{e}_{\mathsf{S}^{\prime}}( \mathfrak{H}_{\mathsf{S}})\right|>\varepsilon\right)\leq 2\mathsf{e}^{-\frac{\varepsilon ^{2}\mathsf{k}}{2}}. \tag{8}\] Therefore, the null hypothesis that trained model \(\mathfrak{H}_{\mathsf{S}}\)\(\frac{\varepsilon}{2}\)-generalizes may be tested using probability bound eq. (8). 
Proof.: Since \[\left|\mathsf{e}_{\mathsf{S}}(\mathfrak{H}_{\mathsf{S}})-\mathsf{e}_{\mathsf{ S}^{\prime}}(\mathfrak{H}_{\mathsf{S}})\right| \quad=\left|\mathsf{e}_{\mathsf{S}}(\mathfrak{H}_{\mathsf{S}})-\mathsf{E}( \mathsf{c}_{\mathfrak{H}_{\mathsf{S}}})+\mathsf{E}(\mathsf{c}_{\mathfrak{H}_{ \mathsf{S}}})-\mathsf{e}_{\mathsf{S}^{\prime}}(\mathfrak{H}_{\mathsf{S}})\right|\] \[\quad\leq\left|\mathsf{e}_{\mathsf{S}}(\mathfrak{H}_{\mathsf{S}})- \mathsf{E}(\mathsf{c}_{\mathfrak{H}_{\mathsf{S}}})\right|+\left|\mathsf{E}( \mathsf{c}_{\mathfrak{H}_{\mathsf{S}}})-\mathsf{e}_{\mathsf{S}^{\prime}}( \mathfrak{H}_{\mathsf{S}})\right|\] \[\quad<\frac{\varepsilon}{2}+\left|\mathbb{E}(\mathsf{c}_{\mathfrak{ H}_{\mathsf{S}}})-\mathsf{e}_{\mathsf{S}^{\prime}}(\mathfrak{H}_{\mathsf{S}})\right|\] we conclude \[\left\{\left|\mathsf{e}_{\mathsf{S}}(\mathfrak{H}_{\mathsf{S}})-\mathsf{e}_{( \cdot)}(\mathfrak{H}_{\mathsf{S}})\right|>\varepsilon\right\}\subseteq\left\{ \left|\mathbb{E}(\mathsf{c}_{\mathfrak{H}_{\mathsf{S}}})-\mathsf{e}_{(\cdot)}( \mathfrak{H}_{\mathsf{S}})\right|>\frac{\varepsilon}{2}\right\}.\] Inclusion of events implies inequality of measures, and we apply Hoeffding (inequality (9)) to bound the right hand side probability \(\mathbb{P}\left(\left|\mathbb{E}(\mathsf{c}_{\mathfrak{H}_{\mathsf{S}}})-\mathsf{e }_{(\cdot)}(\mathfrak{H}_{\mathsf{S}})\right|>\frac{\varepsilon}{2}\right).\) Notice that use of holdout data for evaluation by itself provides an absolute approximation of performance, while in tandem with training data, we gain quantified (un)certainty specifically about generalization. Finally, the probability in (8) depends on the size of validation data, but not on the size of training data. This conclusion is correct: while we would like more training data to correlate with higher likelihood of performance, the problem in section 2.2 indicates that such intuition may not find a straightforward grounding in probability. Presumably, one may be less inclined to hypothesize satisfactory model performance when training with little data. The intuition finds security in PAC learnability, absent which there is no obvious guaranteed connection between size of (training) data and performance; we discuss this issue further in section 3.3. ### Interpreting the Output Overfitting is a heuristic notion which suggests a model has fit the data and not the distribution which generated it. On closer inspection, however, the test we propose does not provide indication of _only_ overfitting. In fact, the supposition of generalization is one with respect to a certain (fixed) distribution; this test thus additionally assumes that the test data \(\mathsf{S^{\prime}}\sim_{\mathrm{iId}}\mathbb{P}\chi_{\times\mathsf{Y}}\) as well. It may not be. For there may be some form of distributional shift according to which \(\mathsf{S^{\prime}}\sim_{\mathrm{iId}}\mathbb{P}^{\prime}_{\chi\times\mathsf{Y}}\), in which case we cannot guarantee the bound in (8), at least not if the expectation \(\mathbb{E}(\mathsf{c}_{\mathfrak{g}_{\mathsf{S}}})\) is computed with respect to the original measure \(\mathrm{d}\mathbb{P}\chi_{\times\mathsf{Y}}\). In other words, instantiation of event \(\left\{\left|\mathsf{e}_{\mathsf{S}}(\mathfrak{g}_{\mathsf{S}})-\mathsf{e}_{ (\cdot)}(\mathfrak{g}_{\mathsf{S}})\right|>\varepsilon\right\}\) by inequality \(\left|\mathsf{e}_{\mathsf{S}}(\mathfrak{g}_{\mathsf{S}})-\mathsf{e}_{\mathsf{ S^{\prime}}}(\mathfrak{g}_{\mathsf{S}})\right|>\varepsilon\) may suggest: 1. 
an unlikely sample \(\mathsf{S^{\prime}}\) was received (all the hypotheses hold), 2. \(\mathfrak{g}_{\mathsf{S}}\) does not generalize \(\mathbb{P}_{\mathcal{X}\times\mathsf{Y}}\) with respect to \(\mathsf{c}_{(\cdot)}\) (overfitting), or 3. \(\mathsf{S^{\prime}}\not\sim_{\mathrm{iId}}\mathbb{P}_{\mathcal{X}\times\mathsf{ Y}}\) (possible distributional shift). It is important when running a statistical test to respect the scope of what it purports to evaluate: namely, _if_ a set of assumptions hold--in this case \(1\). that \(\mathfrak{g}_{\mathsf{S}}\)\(\frac{\varepsilon}{2}\)-generalizes (eq. (7)) and \(2\). \(\mathsf{S^{\prime}}\sim_{\mathrm{iId}}\mathbb{P}_{\mathcal{X}\times\mathsf{Y}}\)--then the probability that a certain kind of event occurs is bounded by some value which is explicitly calculable. Realization of the unlikely and unlucky event by \(\mathsf{S^{\prime}}\) can either mean \(\mathsf{S^{\prime}}\) really is unlucky or that one of the assumptions fails. Finally, while this test is expressed with respect to the cost function \(\mathsf{c}_{\mathfrak{g}}\) or \(\mathsf{c}_{\mathfrak{g}_{\mathsf{S}}}\), it need not be so limited. In fact, any map \(\mathsf{f}:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}\) may be used to probe the distribution, substituting the appropriate concentration inequality depending on the range of \(\mathsf{f}\). When \(\mathsf{f}(\mathcal{X}\times\mathcal{Y})\) is bounded, we may rely on a version of Hoeffding, which converges exponentially. Subsequent work will investigate the use of _random projections_ to examine distribution shift and uncertainty quantification, as a means of testing to eliminate or isolate the above obfuscating condition \(\#3\). ### Loosening Uniform Bounds We conclude with commentary on the merits of this test. The bound in eq. (8) is perhaps unsurprising and at first glance offers little upside beyond the performance guarantee as provided by eq. (6), which ensures approximation of empirical mean (of holdout data) to the true mean. Indeed, one may argue that overfitting, in the sense of definition 3.1, induces little cause for concern: as long as performance \(\mathbb{E}(\mathsf{c}_{\mathfrak{g}_{\mathsf{S}}})\) is "good enough," (as approximated by \(\mathsf{e}_{\mathsf{S^{\prime}}}(\mathfrak{g}_{\mathsf{S}})\)) it may not particularly matter whether or that training data performance matches a model's generalization performance. On the other hand, guarantees of the sort which PAC learnability provides ensure that the output of a training algorithm is near optimal in a hypothesis class. In the presence of overfitting, one may not know whether better than 'good enough' is achievable. Generalization _with training data_ provides confidence that empirical risk minimization (2) approximately realizes risk minimization (5) _in the absence of uniform (PAC) guarantees_. The test is a workable mechanism for checking that there is little gap between performance a hypothesis class may achieve on data and on the data's distribution. Consider, for example, fig. 3 which compares level sets for \(\mathbb{E}(\mathsf{c}_{(\cdot)})\) and \(\mathsf{e}_{\mathsf{S}}(\cdot)\). Learnability, as described by uniform convergence and notions of representability (c.f. 
[9, SS4.1]), guarantees that these profiles roughly track each other, which is _sufficient_ for generalization of output model \(\mathfrak{g}_{\mathsf{S}}\): if the value of \(\mathbb{E}(c_{(\cdot,\cdot)})\) and \(e_{\mathbb{S}}(\cdot)\) are roughly approximate _everywhere_ in \(\mathcal{H}\), then they certainly are at a particular point. On the other hand, learnability objectives ultimately seek generalization of the output, namely that \(\mathbb{E}(c_{(\cdot,\cdot)})\) and \(e_{\mathbb{S}}(\cdot)\) are roughly approximate _at_\(\mathfrak{g}_{\mathbb{S}}\); how they compare in other regions of \(\mathcal{H}\) may be immaterial. We underscore the point. PAC results guarantee not only that an algorithm will return an optimal (in the hypothesis class) model, but that the sample complexity with respect to which the algorithm is expected to reliably work is _independent of distribution_. Guarantees of this form are helpful in providing confidence ahead of time that the learning endeavor is not misguided. On the other hand, practitioners often engage in the tackling the learning problem irrespective of knowledge or other assurances that their class is PAC learnable. Moreover, PAC learnability does not cover the intermediate case that some distributions may require a larger sample complexity (some tasks are harder to learn than others), and that there may be no uniform bound over all measures, even if there are some over subsets. Still, assurance that the output of training generalizes does not _require_ that the hypothesis class be PAC learnable, i.e. that uniform bounds hold. Rather: uniform bounds, when they exist, provide a conceptual framework and analytic setting wherein a class of results may be generated, in the absence of which, we would nevertheless like to be able to say _something_. We collect this commentary into a definition. **Definition 3.2**.: Let \(\mathcal{P}\) be a collection of probability measures on \(\mathcal{X}\times\mathcal{Y}\). We say that a hypothesis class \(\mathcal{H}\subset\mathcal{Y}^{\mathcal{X}}\) is _\(\mathcal{P}\)-learnable_ if there is sample complexity \(\mu:(0,1)\to\mathbb{N}\) and algorithm \(\mathfrak{g}:(\mathcal{X}\times\mathcal{Y})^{\omega}\to\mathcal{H}\) for which \(\mathbb{P}_{(\mathcal{X}\times\mathcal{Y})^{\mathrm{m}}}\left(\mathbb{E}(c_{ \mathfrak{g}_{(\cdot,\cdot)}})-\inf_{\mathfrak{g}\in\mathcal{H}}\mathbb{E}(c_ {\mathfrak{g}})>\varepsilon\right)<\delta\) whenever \(\mathrm{m}>\mu(\varepsilon,\delta)\) and \(\mathbb{P}_{\mathcal{X}\times\mathcal{Y}}\in\mathcal{P}\). We say that \(\mathcal{H}\) is \(\mathbb{P}_{\mathcal{X}\times\mathcal{Y}}\)-_learnable_ if it is \(\{\mathbb{P}_{\mathcal{X}\times\mathcal{Y}}\}\)-learnable. Whereas PAC learnability typically guarantees (approximate) optimality independent of measure, \(\mathcal{P}\)-learnability expressly restricts measures for which \(\mathcal{H}\) is (uniformly) suited to learn. While PAC learnability is powerful in providing guarantees absent prior assumptions about the measure from which data is drawn, this generality also inhibits the usefulness of prior knowledge: for a class is PAC learnable or not irrespective of what is known regarding the data. One might imagine, for example, that assurance data is subgaussian may be relevant, in that subgaussianity demarcates a class of measures for which a hypothesis class is suited to fit. Figure 3: Model o overfits. 
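Before turning to the simulations, a minimal numpy sketch of the test in Proposition 3.1 may help fix ideas; it is an illustration only (the authors' released implementation is linked in Section 4 below), and it assumes per-example costs bounded in \([0,1]\) as in Section 3.1.

```python
import numpy as np

def overfitting_test(train_costs, holdout_costs, eps=0.05):
    """Test of the null hypothesis that the trained model eps/2-generalizes (Proposition 3.1).

    train_costs  : per-example costs c_g(x, y) in [0, 1] on the training set S.
    holdout_costs: per-example costs on an iid holdout set S' of size k.
    Returns the empirical-risk gap, the bound 2 * exp(-eps**2 * k / 2) on seeing a gap
    larger than eps under the null, and whether the null is rejected at precision eps.
    """
    train_costs = np.asarray(train_costs, dtype=float)
    holdout_costs = np.asarray(holdout_costs, dtype=float)
    k = holdout_costs.size                      # the bound depends only on the holdout size
    gap = abs(train_costs.mean() - holdout_costs.mean())
    bound = 2.0 * np.exp(-(eps ** 2) * k / 2.0)
    return gap, bound, gap > eps                # rejection flags over/underfitting or shift
```

When the null is rejected, a training mean sitting below the holdout mean points toward the \(\varepsilon\)-overfitting case of Definition 3.1 and the reverse gap toward \(\varepsilon\)-underfitting, subject to the caveats of Section 3.2.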
## 4 In Simulation Code implementing this test can be found at [https://github.com/schmidtgenstein/qudost.git](https://github.com/schmidtgenstein/qudost.git). In each of the following illustrations, we plot empirical performance (accuracy for classification problems) with respect to both training and holdout data on the left, and the absolute difference on the right. These curves are plotted against training epoch, and each pair uses a different size for \(\mathsf{S^{\prime}}\), on which either the probability or the precision depends (Appendix A). We fix the precision in each case, denoted by the dashed red line in the right figures, and report the probability bound à la Proposition 3.1. Per Hoeffding's inequality (9), this precision may be made finer with more data. It is worth noting that the test in Proposition 3.1 does not intrinsically relate to early stopping: a model may overfit and cease to overfit at various epochs in training (see, e.g., fig. 5). Results in fig. 4 and fig. 5 use generated data and a multilayer perceptron for binary classification. Results in fig. 6 and fig. 7 use a simple convnet on MNIST data. ### MNIST ## Appendix A Hoeffding's Inequality for Statistical Hypothesis Testing Hoeffding's inequality gives a probability bound for an independent sample \(\mathsf{S}=(\mathsf{x}_{1},\ldots,\mathsf{x}_{\mathsf{m}})\sim_{\mathsf{iid}}\mathbb{P}_{\mathcal{X}}\) when \(\mathcal{X}=[0,1]\), namely: \[\mathbb{P}_{\mathcal{X}}\left(\left|\int_{\mathcal{X}}\mathsf{x}\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(\mathsf{x})-\frac{1}{|\mathsf{S}|}\sum_{\mathsf{x}\in\mathsf{S}}\mathsf{x}\right|>\varepsilon\right)<2\mathsf{e}^{-2\varepsilon^{2}|\mathsf{S}|}. \tag{9}\] Therefore, given any two of the confidence specification \(\delta\in(0,1)\), the data set size \(|\mathsf{S}|=\mathsf{m}\), and the precision bound \(\varepsilon\in(0,1)\), one may readily solve for the third. Proof of its verity and other applications may be found in various probability texts ([2], [9], [4]). Figure 6: MNIST \(\mathsf{k}=1000\) Figure 7: MNIST \(\mathsf{k}=6000\)
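As a complement to Appendix A, the following one-liners (a sketch, not part of the cited texts) solve inequality (9) for whichever of the three quantities is left unspecified.

```python
import math

def precision(delta, m):       # eps such that 2 * exp(-2 * eps**2 * m) = delta
    return math.sqrt(math.log(2.0 / delta) / (2.0 * m))

def sample_size(eps, delta):   # smallest m making the failure probability at most delta
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def confidence(eps, m):        # delta, i.e. the right-hand side of inequality (9)
    return 2.0 * math.exp(-2.0 * eps ** 2 * m)
```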
2309.03036
An Efficient Temporary Deepfake Location Approach Based Embeddings for Partially Spoofed Audio Detection
Partially spoofed audio detection is a challenging task, lying in the need to accurately locate the authenticity of audio at the frame level. To address this issue, we propose a fine-grained partially spoofed audio detection method, namely Temporal Deepfake Location (TDL), which can effectively capture information of both features and locations. Specifically, our approach involves two novel parts: embedding similarity module and temporal convolution operation. To enhance the identification between the real and fake features, the embedding similarity module is designed to generate an embedding space that can separate the real frames from fake frames. To effectively concentrate on the position information, temporal convolution operation is proposed to calculate the frame-specific similarities among neighboring frames, and dynamically select informative neighbors to convolution. Extensive experiments show that our method outperform baseline models in ASVspoof2019 Partial Spoof dataset and demonstrate superior performance even in the crossdataset scenario.
Yuankun Xie, Haonan Cheng, Yutian Wang, Long Ye
2023-09-06T14:29:29Z
http://arxiv.org/abs/2309.03036v2
An Efficient Temporary Deepfake Location Approach Based Embeddings for Partially Spoofed Audio Detection ###### Abstract Partially spoofed audio detection is a challenging task, lying in the need to accurately locate the authenticity of audio at the frame level. To address this issue, we propose a fine-grained partially spoofed audio detection method, namely Temporal Deepfake Location (TDL), which can effectively capture information of both features and locations. Specifically, our approach involves two novel parts: embedding similarity module and temporal convolution operation. To enhance the identification between the real and fake features, the embedding similarity module is designed to generate an embedding space that can separate the real frames from fake frames. To effectively concentrate on the position information, temporal convolution operation is proposed to calculate the frame-specific similarities among neighboring frames, and dynamically select informative neighbors to convolution. Extensive experiments show that our method outperform baseline models in ASVspoof2019 Partial Spoof dataset and demonstrate superior performance even in the cross-dataset scenario. Yuankun Xie, Haonan Cheng, Yutian Wang, Long Ye+ State Key Laboratory of Media Convergence and Communication, Communication University of China, Beijing 100024, China partially spoofed audio detection, temporal deepfake location, embedding learning. Footnote †: Long Ye is corresponding author. ## 1 Introduction AI generated content (AIGC) technology has witnessed swift progress in recent years, particularly in speech-related applications like text-to-speech (TTS) [1, 2, 3] and voice conversion (VC) [4, 5, 6]. Although these technologies have brought about convenience, they have also posed significant security threats. Thus, various initiatives and challenges, such as ASVspoof [7, 8], have been established to foster research on countermeasure solutions that safeguard speech applications and human listeners against spoofing attacks. Nevertheless, a significant scenario has been overlooked in most datasets and challenges where a bonafide speech utterance is contaminated by synthesized speech segments, leading to partial spoofing (PS). Attackers can use PS to alter sentence semantics, and such modifications can be easily accomplished at low cost. For instance, attackers can easily modify single word such as time, place, and characters in sentence to dramatically change the semantics. Furthermore, If attackers have knowledge of phonology, they can manipulate vowels and even consonants such as "pan," pin,"pen," which are smaller than the word level. Therefore, defending against such fine-grained PS scenarios poses significant challenges for defenders. In recent years, there are several studies about PS scenarios for Audio Deepfake Detection (ADD). Yi et al. [9] create a dataset that focuses on changing a few words in an utterance for half-truth audio detection. At the same time, Zhang et al. [10] construct a speech database called 'PartialSpoof' designed for PS scenarios. The above two datasets are the beginning of the research for PS scenario in ADD task. Afterward, Zhang et al. [11] propose the SELCNN network to enhance the ability of the accuracy of the utterance. Lv et al. [12] use Wav2Vec2 (W2V2) [13] as front-end, ECAPA-TDNN [14] as back-end achieving the first rank in ADD 2022 Track [21]. 
Although the above research shows effectiveness at the utterance level detection in PS, they do not pinpoint specific segments with precision. Thus, Zhang et al. [16] extended the previous utterance-level PS dataset labels to frame-level and proposed corresponding W2V2-based countermeasures to enhance frame-level detection capability. The aforementioned methods solely utilize existing ADD models such as LCNN, currently lacking specific approaches tailored to the PS scenario, particularly in terms of precise frame-level localization. To address this challenge, we propose a novel Temporal Deepfake Location (TDL) method. For front-end, we take advantage of W2V2 [17]. By training on a vast corpus of genuine speech from diverse source domains, W2V2 can effectively discriminate the real and fake in complex acoustic scenarios. For back-end, our primary focus is on fine-grained locating the genuine and spoofed speech segment. To clearly distinguish the real and fake in feature level, we first design the embedding similarity module to separate the real and fake frames in embedding space and get a high-quality embedding similarity vector. Then, we propose temporal convolution operation to locate the region from the embedding vector. The local similarity for each temporal position is calculated from the embedding. By this means, we can obtain a frame-specific weight to guide the convolution making a temporal sensitive calculation. Our main contributions can be summarized as follows: * We propose TDL method, an efficient and effective ADD method for PS scenarios which combines a embedding similarity module and temporal convolution operation to effectively capture both feature and positional information. * The proposed method outperforms baseline models in ASV spoof 2019PS dataset and demonstrate superior performance even in cross-dataset experiments. ## 2 Proposed Method ### Problem statement and overview In PS scenarios, the fake audio segment is inserted within the genuine speech. Our target is to detect the real and fake segments at frame level. Given the large-scale self-supervised audio feature \(f=(f_{1},f_{2},...f_{T})\in R^{D\times T}\), where \(D\) and \(T\) denote the dimension of audio feature and the number of frames respectively. The whole task is defined as input feature \(f\) and output the frame level label \(y=(y_{1},y_{2},...y_{T})\in\{0,1\}^{T}\), where 1 represents the real frames and 0 represents the fake frames. The framework of our proposed TDL is depicted in Figure 1. First, we utilize Wav2Vec-XLS-R to extract the frame level feature from the raw audio. Then, for enhanced identification of genuine and fake distinctions at the embedding level, we devise an embedding similarity module to segregate authentic and synthetic frames within the embedding space. Next, to capture the position information, we adopt temporal convolution operation by focusing on frame-specific similarities among neighboring frames. Finally, we employ 1D convolutional layers and fully connected layers for downsampling to the frame level label to compute the Binary Cross-Entropy (BCE). ### W2V2 front-end W2V2 based front-end is trained by solving a contrastive task over a masked feature encoder. Firstly, speech signals in various lengths are passed through a feature extractor consisting of seven convolutional neural network (CNN) layers. Subsequently, context representations are obtained using a Transformer network [18] comprising of 24 layers, 16 attention heads, and an embedding size of 1024. 
In practice, we utilize the Hugging Face version of wav2vec2-XLS-R-300M1 and freeze the weights of the front-end. The front-end model is pre-trained with 436k hours of unannotated genuine speech data in 128 languages. Consequently, the last hidden states from the transformer can effectively represent the contextualized information of genuine speech, which differs from that of partially fake speech. Footnote 1: [https://huggingface.co/facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) ### Embedding similarity module To better capture feature-level information, we first distinguish the real and fake frames in the embedding space. Specifically, the W2V2 features are fed into a CONV module, consisting of two sequential 1D-CNNs, which downsamples the embedding dimension from 1024 to 32. The embedding vector is \(L2\)-normalized. We thus obtain an embedding vector \(e=(e_{1},e_{2},...e_{T})\in R^{D\times T}\). In the embedding similarity module, we utilize cosine similarity to measure the similarity of two embedding vectors \(e_{u}\) and \(e_{v}\) as follows: \[\mathcal{S}\left(\mathbf{e}_{u},\mathbf{e}_{v}\right)=\frac{\mathbf{e}_{u}^{T}\cdot\mathbf{e}_{v}}{\left\|\mathbf{e}_{u}\right\|_{2}\cdot\left\|\mathbf{e}_{v}\right\|_{2}}. \tag{1}\] To increase the distance between genuine and fake frames in the embedding space and improve generalizability, we compute the cosine similarities between genuine frames, between fake frames, and between genuine and fake frames. Specifically, we ensure that genuine frames from different positions are similar to each other, that fake frames from different positions are similar to each other, and that genuine and fake frames are dissimilar to each other. Thus, \(\mathcal{L}_{ESM}^{Real}\) and \(\mathcal{L}_{ESM}^{Fake}\) are proposed to make the real frames and the fake frames in different positions similar: \[\mathcal{L}_{\mathrm{ESM}}^{\mathrm{Real}}=\max_{\forall e_{x},e_{y},\,x\neq y}\left\lfloor\tau_{\mathrm{same}}-\mathcal{S}\left(\mathbf{e}_{x},\mathbf{e}_{y}\right)\right\rfloor_{+}, \tag{2}\] \[\mathcal{L}_{\mathrm{ESM}}^{\mathrm{Fake}}=\max_{\forall e_{m},e_{n},\,m\neq n}\left\lfloor\tau_{\mathrm{same}}-\mathcal{S}\left(\mathbf{e}_{m},\mathbf{e}_{n}\right)\right\rfloor_{+}, \tag{3}\] where \(e_{x}\) and \(e_{y}\) refer to distinct positions of real frames, while \(e_{m}\) and \(e_{n}\) refer to those of fake frames. \(\tau_{\mathrm{same}}\) is the similarity threshold between frames from the same category, and \(\left\lfloor\dots\right\rfloor_{+}\) represents clipping below at zero. It is noteworthy that although we know the positions of frame-level authenticity labels, the temporal dimension of W2V2-XLS-R features does not inherently align with these frame-level labels. To tackle this issue, we ascertain the temporal authenticity in the time dimension of the embedding vector by calculating the ratio between the temporal dimensions of the label and the embedding vector. \(\mathcal{L}_{ESM}^{Diff}\) is proposed to separate the real and fake frames, which can be formulated as: \[\mathcal{L}_{\mathrm{ESM}}^{\mathrm{Diff}}=\max_{\forall e_{r},e_{f}}\left\lfloor\mathcal{S}\left(\mathbf{e}_{r},\mathbf{e}_{f}\right)-\tau_{\mathrm{Diff}}\right\rfloor_{+}, \tag{4}\] where \(e_{r}\) and \(e_{f}\) refer to the embedding vectors of real frames and fake frames, respectively. \(\tau_{\mathrm{Diff}}\) is the similarity threshold used to constrain the distance between real and fake frames.
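The three similarity losses in Eqs. (2)-(4) can be sketched in a few lines of PyTorch. In the sketch below, the threshold values, the per-utterance loop, and the simple ratio-based alignment of frame labels to the embedding time axis are illustrative assumptions rather than the exact implementation; the three terms are later combined into a single loss.

```python
import torch
import torch.nn.functional as F

def esm_loss(emb, frame_labels, tau_same=0.9, tau_diff=0.1):
    """
    emb:          (B, D, T_e) embeddings from the CONV module.
    frame_labels: (B, T_l) with 1 = real frame, 0 = fake frame (160 ms resolution).
    tau_same / tau_diff are illustrative threshold values, not taken from the paper.
    """
    B, D, T_e = emb.shape
    T_l = frame_labels.shape[1]
    # Align the label resolution with the embedding time axis via the length ratio.
    idx = (torch.arange(T_e) * T_l / T_e).long().clamp(max=T_l - 1)
    labels = frame_labels[:, idx]                      # (B, T_e)

    loss = emb.new_zeros(())
    for b in range(B):
        e = F.normalize(emb[b], dim=0)                 # L2-normalise each frame
        sim = e.t() @ e                                # (T_e, T_e) cosine similarities
        real = labels[b].bool()
        fake = ~real
        # Eq. (2): worst real-real pair whose similarity falls below tau_same
        if real.sum() > 1:
            loss = loss + (tau_same - sim[real][:, real]).clamp(min=0).max()
        # Eq. (3): worst fake-fake pair whose similarity falls below tau_same
        if fake.sum() > 1:
            loss = loss + (tau_same - sim[fake][:, fake]).clamp(min=0).max()
        # Eq. (4): worst real-fake pair whose similarity exceeds tau_diff
        if real.any() and fake.any():
            loss = loss + (sim[real][:, fake] - tau_diff).clamp(min=0).max()
    # Sum of the three terms; averaging over the batch is an implementation choice.
    return loss / B
```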
Finally, the embedding similarity module is optimized by \(\mathcal{L}_{ESM}\), which takes into account the three aforementioned losses in a joint manner. The \(\mathcal{L}_{ESM}\) is calculated as follows: \[\mathcal{L}_{ESM}=\mathcal{L}_{ESM}^{Real}+\mathcal{L}_{ESM}^{Fake}+\mathcal{L}_{ESM}^{Diff}. \tag{5}\]
Figure 1: The entire structure of our proposed Temporal Deepfake Location (TDL) method.
### Temporal convolution operation To effectively capture the positional information, we use the embedding vector as a local attention mask to perform temporal convolution operations. Consider an audio feature \(\mathbf{X}\in R^{D_{in}\times T}\), where \(D_{in}\) and \(T\) represent the dimension of the vector and the number of frames, respectively. The temporal convolution layer learns a dynamic convolution kernel \(\mathbbm{k}\in R^{k\times D_{in}\times D_{out}}\), where \(k\) is the size of the temporal kernel and \(D_{out}\) is the dimension of the output feature. For convenience, we only utilize the dynamic kernel \(\mathbbm{k}^{m}\in R^{k\times D_{in}}\) to compute the \(m^{th}\) channel of the output. Thus, the temporal convolution operation for the \(t^{th}\) feature can be expressed as: \[f_{t}^{m}=\sum_{i=0}^{k-1}\mathbbm{k}^{m}[i,\cdot]\cdot\overline{\mathbf{X}}\left[\cdot,t-\frac{k}{2}+i\right], \tag{6}\] where \(f_{t}^{m}\) is the value in the \(m^{th}\) channel of the output feature vector, \([\cdots]\) means a slice of a matrix, and \((\cdot)\) denotes the inner product. \(\overline{\mathbf{X}}\) is the modulated feature processed by the neighbor similarity calculation: \[\begin{split}&\overline{\mathbf{X}}\left[\cdot,t-\frac{k}{2}+i\right]=\mathbf{X}\left[\cdot,t-\frac{k}{2}+i\right]\times\mathbf{a}[i,t],\\ & i\in[0,\ldots,k-1],\end{split} \tag{7}\] where the matrix \(\mathbf{a}\in R^{k\times T}\) is a similarity matrix that calculates the local similarity for each temporal position, and \(\mathbf{a}[i,t]\) indicates the similarity between the \(t^{th}\) feature vector and its \(k\) neighbors. In practice, we determine the dynamic kernel weight based on the embedding vector generated by the ESM module. We apply the temporal convolution operation to the W2V2 features in two sequential 1D-CNN layers, where both the input channel and the output channel remain unchanged to maintain consistency in the temporal dimension. ### Total loss Following the two consecutive temporal convolution layers, to capture additional temporal information and align with the label dimensions, we subsequently employ a 1D-CNN, fully connected (FC) layers, and a sigmoid activation function to calculate the BCE loss. The architecture details of TDL are shown in Table 1. The total loss is defined as follows: \[\mathcal{L}_{all}=\mathcal{L}_{BCE}+\lambda\mathcal{L}_{ESM}, \tag{8}\] where \(\lambda\) is set to 0.1 to balance the values of the two losses. ## 3 Experiments ### Database Our experiments for the PS scenario include two public datasets: ASVspoof2019PS (19PS) [10] and LAV-DF [19]. 19PS is constructed based on the ASVspoof2019 LA database [20]. All experiments on the 19PS dataset are conducted using 160ms resolution labels. The training, validation, and testing sets are distributed according to the original dataset allocation, consisting of 25,380, 24,844, and 71,237 utterances respectively. To evaluate the model's generalizability, we conduct additional testing of the 19PS-trained model using the LAV-DF test set. LAV-DF is a multi-modal temporal forgery dataset, containing a total of 26,100 videos in its test set.
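As a brief aside before the rest of the experimental setup, the sketch below illustrates the temporal convolution operation of Eqs. (6)-(7): each neighbouring slice of the input feature is re-weighted by its similarity to the current frame, computed from the ESM embedding, before the kernel is applied. The kernel size, the zero padding at the sequence borders, and the reuse of a single Conv1d parameter tensor are implementation assumptions made for the sake of a self-contained example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalConv(nn.Module):
    """Sketch of Eqs. (6)-(7): convolution over features re-weighted by the
    similarity between each frame and its k neighbours (from the ESM embedding)."""
    def __init__(self, channels, k=3):
        super().__init__()
        assert k % 2 == 1, "odd kernel keeps the temporal length unchanged"
        self.k = k
        self.conv = nn.Conv1d(channels, channels, kernel_size=k, padding=k // 2)

    def forward(self, x, emb):
        # x: (B, C, T) W2V2 features;  emb: (B, D, T) ESM embedding.
        B, C, T = x.shape
        e = F.normalize(emb, dim=1)
        pad = self.k // 2
        # a[b, i, t] = similarity between frame t and its i-th neighbour (matrix a).
        e_pad = F.pad(e, (pad, pad))
        a = torch.stack([(e * e_pad[:, :, i:i + T]).sum(dim=1)
                         for i in range(self.k)], dim=1)            # (B, k, T)
        # Modulate each neighbour slice of x by its similarity (Eq. 7), then
        # accumulate the kernel taps as in Eq. (6).
        x_pad = F.pad(x, (pad, pad))
        out = 0
        w = self.conv.weight                                         # (C_out, C_in, k)
        for i in range(self.k):
            xi = x_pad[:, :, i:i + T] * a[:, i:i + 1, :]
            out = out + torch.einsum("oc,bct->bot", w[:, :, i], xi)
        return out + self.conv.bias.view(1, -1, 1)

# Usage: channels are kept equal so that the temporal dimension is preserved.
layer = TemporalConv(channels=1024, k=3)
y = layer(torch.randn(2, 1024, 1050), torch.randn(2, 32, 1050))
print(y.shape)  # torch.Size([2, 1024, 1050])
```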
We extract the audio track of each video and create 160ms frame level genuine and fake labels. We calculated the percentage of samples belonging to fake class at both the frame and sentence levels, as shown in the Table 2. We can observe that the frame-level labels in 19PS are balanced, facilitating model training. However, the LAV-DF dataset exhibits a lower proportion of spoof segments, making it unbalanced and presenting greater challenges for detection. ### Implementation details In order to address the issue of variable-length audio inputs, we employ the technique of zero-padding to the maximum length of training set. For the frame of genuine speech, we set the label to one, while for spoofing frame, the label is set to zero. In the case of 19PS, the maximum duration of speech in the training set is 21.03 seconds with a W2V2 feature dimension of (1050,1024) and the number of frames at a resolution of 160 ms is 132. For LFCC, we extracted 60-dimensional LFCC with a combination of static, delta and delta coefficients. For training strategy, the Adam optimizer is adopted with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), \(\varepsilon\) = \(10^{-9}\) and weight decay is \(10^{-4}\). We train all of the models for 100 epochs. The learning rate is initialized as \(10^{-5}\) and halved every 5 epochs. It is worth mention that no data augmentation method is used for experiment. ### Evaluation metrics In our experiment, we employ four evaluation metrics to assess model performance: Equal error rate (EER), precision, recall, and \(F_{1}\) score. All metrics are computed based on frame-level authenticity labels of the partially spoofed audio. Precision, recall, and \(F_{1}\) score are defined as follow: \[Precison=\frac{TP}{TP+FP}, \tag{9}\] \[Recall=\frac{TP}{TP+FN}, \tag{10}\] \[F_{1}score=\frac{2\cdot Precison\cdot Recall}{Precision+Recall}, \tag{11}\] where \(TP\), \(TN\), \(FP\), \(FN\) represent the numbers of true positive, true negative, false positive, and false negative samples, respectively. In practice, we employed point-based binary-classification precision, recall, and \(F_{1}\) score from Sklearn. Before any evaluation, zero-padding is eliminated based on the actual length of the features. \begin{table} \begin{tabular}{c c c} \hline module & kernel/stride & output shape \\ \hline W2V2 & - & (batch,1024,1050) \\ \hline CONV & 3/1 & (batch,512,1050) \\ \hline TCONV & 3/1 & (batch,32,1050) \\ \hline TCONV & 3/1 & (batch,1024,1050) \\ \hline TCONV & 3/1 & (batch,1024,1050) \\ \hline CONV & 1/1 & (batch,2,1050) \\ \hline Flatten/FC & - & (batch,132) \\ \hline \end{tabular} \end{table} Table 1: Architecture of TDL network. \begin{table} \begin{tabular}{c c c c} \hline \hline dataset & subset & frame-level & utterance-level \\ \hline 19PS & train & 53.00 & 89.83 \\ \hline 19PS & dev & 52.31 & 89.74 \\ \hline 19PS & test & 48.03 & 89.68 \\ \hline LAV-DF & test & 10.01 & 48.82 \\ \hline \hline \end{tabular} \end{table} Table 2: Percentages(%) of fake class in each dataset. ## 4 Results and Discussions ### Results **Results on 19PS.** We compare the performance of several baseline models in terms of EER metric, as presented in Table 3. All models are trained on the 19PS training dataset. TDL (w/o ESM) represents our model without ESM module. As shown in Table 3, our model achieve the lowest EER 7.04% in partially spoofed audio detection task. Based on the experimental results, We first observe that the impact of feature is greater than backbone. 
For instance, as seen in first and third row in Table 3, where the backbone is LCNN-BLSTM, the utilization of W2V2 features resulted in a 6.84% EER decrease compared to LFCC. Conversely, when feature remain consistent, as demonstrated in first and second row of the in Table 3, both employing the shared LFCC attribute, SELCNN-LSTM exhibited a marginal EER reduction of 0.28% in comparison to LCNN-LSTM. Furthermore, we find that the architecture design of the TDL network aligns well with partial spoofed detection. Specifically, when the features utilized W2V2-XLS-R, the TDL (without ESM module) still exhibits a 1.08% reduction in EER compared to the LCNN-BLSTM. **Results on LAV-DF.** To validate the generalizability of our proposed model, we train on 19PS and evaluate on the test set of LAV-DF for 4 evaluation metrics. The results of the testing are presented in the Table 4. Although LAV-DF is an unbalanced dataset, our proposed model achieve the best performance of 11.23% EER compared to baseline models. ### Discussion **Label Setting.** As we mentioned in Section 3.2, we set real frames, fake frames for 1 and 0. To the best of our knowledge, there has been no prior research discussing which label configuration will be beneficial to the final prediction. Therefore, we experiment with three different label settings on our proposed TDL model as shown in Table 5. "Boundary 1" indicates that we set the boundary frames between genuine and fake segments as 1, while other positions are set as 0. In practice, due to the sparsity of boundary frames, we set 4 boundary frames at the transition between genuine and fake segments. Additionally, we employ a weighted BCE loss, assigning a weight value of 100 to the boundary values, as a replacement for standard BCE. Experimental results demonstrate that this method is less effective compared to directly predicting the authenticity of individual frames. Additionally, since predicting boundaries often requires further verification of the genuineness of the segments on both sides, we did not adopt the boundary setting. For the frame-level direct prediction of authenticity, we conducted experiments by setting real frames as 0 and fake frames as 1, and alternatively by setting real frames as 1 and fake frames as 0, as shown in the "real 0 fake 1" and "real 1 fake 0" of the Table 5 respectively. Experiments results show that "real 1 fake 0" outperform "real 0 fake 1" in four evaluation metrics, especially in recall metric, which indicates that TDL can accurately identify genuine speech. When setting real frames as "1" and fake frames along with padding frames as "0", we can better concentrate on the real segment. This is similar to previous works [21, 22] which also focus on the real speech distribution in fully-spoofed ADD task. Through our experiments, we have demonstrated that it is also significant in partially-spoofed ADD task. This is also why W2V2 features are effective in the field of ADD which only extracted by rich real source domains. **Complexity Comparision.** Apart from evaluating the performance, we measured the complexity of the models. For frame-level detection task, particularly for fine-grained prediction, the large final output dimension can result in excessive parameterization and low efficiency. Unlike LCNN, which convolves overall values, our proposed TDL model uses temporal convolution operation to selectively focus only on high-weight regions. 
It can be observed that the parameter count of TDL is only 40.53% of that of LCNN-BLSTM, as shown in Table 6. ## 5 Conclusion In this paper, we propose an efficient temporal deepfake location approach based on embeddings for partially spoofed audio detection. TDL achieves outstanding performance thanks to its two core modules, the embedding similarity module and the temporal convolution operation, which effectively capture both feature and positional information. The experimental results demonstrate that TDL achieves the best performance on the 19PS dataset and also performs well in cross-dataset scenarios.
2308.14663
Formal Modelling and Analysis of a Self-Adaptive Robotic System
Self-adaptation is a crucial feature of autonomous systems that must cope with uncertainties in, e.g., their environment and their internal state. Self-adaptive systems are often modelled as two-layered systems with a managed subsystem handling the domain concerns and a managing subsystem implementing the adaptation logic. We consider a case study of a self-adaptive robotic system; more concretely, an autonomous underwater vehicle (AUV) used for pipeline inspection. In this paper, we model and analyse it with the feature-aware probabilistic model checker ProFeat. The functionalities of the AUV are modelled in a feature model, capturing the AUV's variability. This allows us to model the managed subsystem of the AUV as a family of systems, where each family member corresponds to a valid feature configuration of the AUV. The managing subsystem of the AUV is modelled as a control layer capable of dynamically switching between such valid feature configurations, depending both on environmental and internal conditions. We use this model to analyse probabilistic reward and safety properties for the AUV.
Juliane Päßler, Maurice H. ter Beek, Ferruccio Damiani, S. Lizeth Tapia Tarifa, Einar Broch Johnsen
2023-08-28T15:47:40Z
http://arxiv.org/abs/2308.14663v2
# Formal Modelling and Analysis ###### Abstract Self-adaptation is a crucial feature of autonomous systems that must cope with uncertainties in, e.g., their environment and their internal state. Self-adaptive systems are often modelled as two-layered systems with a _managed_ subsystem handling the domain concerns and a _managing_ subsystem implementing the adaptation logic. We consider a case study of a self-adaptive robotic system; more concretely, an autonomous underwater vehicle (AUV) used for pipeline inspection. In this paper, we model and analyse it with the feature-aware probabilistic model checker ProFeat. The functionalities of the AUV are modelled in a feature model, capturing the AUV's variability. This allows us to model the managed subsystem of the AUV as a family of systems, where each family member corresponds to a valid feature configuration of the AUV. The managing subsystem of the AUV is modelled as a control layer capable of dynamically switching between such valid feature configurations, depending both on environmental and internal conditions. We use this model to analyse probabilistic reward and safety properties for the AUV. ## 1 Introduction Many software systems are subject to different forms of uncertainty like changes in the surrounding environment, internal failures and varying user requirements. Often, manually maintaining and adapting these systems during runtime by a system operator is prohibitively expensive and error-prone. Enabling systems to adapt themselves provides several advantages. A system that is able to perform self-adaptation can also be deployed in environments where, e.g., communication between an operator and the system is very limited or impossible, like in space or under water. Thus, self-adaptation gives a system a higher level of autonomy. A self-adaptive system (SAS) can be implemented using a two-layered approach which decomposes the system into a _managed_ and a _managing_ subsystem [18], cf. Fig. 1. The _managed_ subsystem deals with the domain concerns and tries to reach the goals set by the system's user, e.g., navigating a robot to a specific location. The _managing_ subsystem handles the adaptation concerns and defines an adaptation logic that specifies a strategy on how the system can fulfil the goals under uncertainty [24], e.g., adapting to changing environmental conditions. While the managed subsystem may affect the environment via its actions, the managing subsystem monitors the environment and the internal state of the managed subsystem. By using the adaptation logic, the managing subsystem deducts whether and which reconfiguration is needed and adapts the managed subsystem accordingly. This paper models and analyses the case study of a self-adaptive autonomous underwater vehicle (AUV) as a two-layered system. The functionalities of the managed subsystem of the AUV are modelled in a feature model, making the dependencies and requirements between the components of the AUV explicit. The behaviour of the managed subsystem is modelled as a probabilistic transition system whose transitions may be equipped with feature guards, which only allow a transition to be taken if the feature guarding it is included in the current system configuration. Thus, it is modelled as a family of systems whose family members correspond to valid feature configurations. 
As the behaviour of the AUV depends on environmental and internal conditions, which are both hard to control, we opted for a probabilistic model in which uncontrolled events, like a thruster failure, occur with given probabilities. We model the behaviour of the managing subsystem as a control layer that switches between the feature configurations of the managed subsystem according to input from the probabilistic environment model and the managed subsystem. We consider a simplified version of an AUV, with limited features and variability, but there are many different possibilities to extend the model to a more realistic underwater robot. The case study is modelled in ProFeat [8], a tool for probabilistic family-based model checking. Family-based model checking provides a means to simultaneously model check, in a single run, properties of a family of models, each representing a different configuration [22]. Analyses with ProFeat give system operators an estimate of mission duration and the AUV's energy consumption, as well as some safety guarantees. The main contributions of this paper are as follows: * A case study of an SAS from the underwater robotics domain, modelled as a probabilistic feature guarded transition system with dynamic feature switching; * Automated verification of (quantitative) properties that are important for roboticists, using family-based analysis. Outline. Sec. 2 presents the case study of pipeline inspection with an AUV. Sec. 3 explains both the behaviour of the managed and managing subsystem of the AUV and the environment, as well as their implementation in ProFeat. Sec. 4 presents quantitative analyses conducted on the case study. Sec. 5 provides related work. Sec. 6 discusses our results and ideas for future work.
Figure 1: Two-level SAS architecture
## 2 Case Study: Pipeline Inspection by AUV In this section, we introduce our case study of an AUV used for pipeline inspection, which was inspired by the exemplar SUAVE [21]. An AUV has the mission to first find and then inspect a pipeline located on a seabed. During system operation, the water visibility (i.e., the distance in meters within which the AUV can perceive objects) might change (e.g., due to currents that swirl up the seabed), while one or more of the AUV's thrusters might fail and need to be restarted before the mission can be continued. The AUV can choose to operate at three different altitudes, _low_, _med_ (for medium) and _high_. A higher altitude allows the AUV to have a wider field of view and thus increases its chances of finding the pipeline during its search. The probability of a thruster failure is lower at a higher altitude because, e.g., seaweed might wrap around the thrusters at a lower altitude. However, the altitude at which the AUV can perceive the seabed depends on the water visibility. With low water visibility, the AUV cannot perceive the seabed from a high or medium altitude. Thus, it is not always possible to operate at a high or medium altitude, and the altitude of the AUV needs to be changed during the search, depending on the current environmental conditions. Once the pipeline is found, the AUV will follow it at a low altitude to avoid costs for switching altitudes. In fact, once found, a wider field of view provides no benefit. However, the AUV can also lose the pipeline again (e.g., when the pipeline was partly covered by sand or the AUV's thrusters failed for some time causing the AUV to drift off its path).
In this case, the AUV has to search the pipeline again, enabling all three altitudes. Two-layered View of the AUV.Considering the AUV as a two-layered SAS, the AUV's managed subsystem is responsible for the search for and inspection of the pipeline. Depending on the current task and altitude of the AUV, a different configuration of the managed subsystem must be chosen. Thus, the managed subsystem can be seen as a family of systems where each family member corresponds to a valid configuration of the AUV. To do so, the different altitudes for navigation (_low_, _med_ and _high_) and the tasks _search_ and _follow_ can be seen as _features_ of the managed subsystem that adhere to the feature model in Fig. 2, which models the dependencies and constraints among the features. Each configuration of the AUV contains exactly one feature for navigation and one for pipeline inspection, and feature _follow_ requires feature _low_, yielding four different configurations of the managed subsystem of the AUV. Figure 2: Feature model of the case study The managing subsystem of the case study switches between these configurations during runtime by activating and deactivating the subfeatures of _navigation_ and _pipeline inspection_, while the resulting feature configuration has to adhere to the feature model in Fig. 2. The features _low_, _med_ and _high_ are activated and deactivated according to the current water visibility. If the water visibility is good, all three features can be activated; if the water visibility is average, _high_ cannot be activated; and if the water visibility is poor, only _low_ can be activated. The managing subsystem switches from the feature _search_ to _follow_ if the pipeline was found, and from _follow_ to _search_ if the pipeline was lost. ## 3 Modelling the AUV Case Study with ProFeat In this section, we describe the behavioural model of the managed and managing subsystem and the environment and model the case study with the family-based model checker ProFeat1[8]. ProFeat provides a means to both specify probabilistic system families and perform family-based quantitative analysis on them. It extends the probabilistic model checker PRISM2[19] with functionalities such as family models, features and feature switches. Thereby, it enables family-based modelling and (quantitative) analysis of probabilistic systems in which feature configurations may dynamically change during runtime. The whole model can be analysed with probabilistic family-based model checking using PRISM. Footnote 1: [https://pchrszon.github.io/profeat](https://pchrszon.github.io/profeat). Footnote 2: [https://www.prismmodelchecker.org/manual](https://www.prismmodelchecker.org/manual) Similar to an SAS, a ProFeat model can be seen as a two-layered model, as illustrated in Fig. 1. The behaviour of a family of systems that differ in their features, such as the managed subsystem of an SAS, can be specified. Then a so-called _feature controller_ can activate and deactivate the features during runtime, and thus change the behaviour of the system, such as the managing subsystem of an SAS that changes the configuration of the managed subsystem. Furthermore, the environment can be specified as a separate module that interacts with the managed and managing subsystem. Thus, ProFeat is well suited to model and analyse the case study described in Sec. 2. 
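Before describing the ProFeat encoding, the constraints of the feature model in Fig. 2 can be made explicit with a few lines of Python: each configuration contains exactly one altitude and one inspection task, and follow requires low. The sketch below is only an illustration of this constraint reasoning; ProFeat itself enforces the feature model natively, as described next.

```python
from itertools import product

ALTITUDES = ["low", "med", "high"]   # subfeatures of navigation (exactly one active)
TASKS = ["search", "follow"]         # subfeatures of pipeline inspection (exactly one active)

def is_valid(altitude, task):
    # Cross-tree constraint from the feature model: follow requires low.
    return not (task == "follow" and altitude != "low")

valid_configs = [(a, t) for a, t in product(ALTITUDES, TASKS) if is_valid(a, t)]
print(valid_configs)
# [('low', 'search'), ('low', 'follow'), ('med', 'search'), ('high', 'search')]
# -> four valid configurations of the managed subsystem, as stated above.
```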
A ProFeat model consists of three parts: an obligatory feature model that specifies features and their relations and constraints, obligatory modules that specify the behaviour of the features, and an optional feature controller that activates or deactivates features. The pipeline inspection case study was modelled as a Markov decision process in ProFeat.3 It consists of (i) the implementation of the feature model of Fig. 2; (ii) modules describing the behaviour of the managed subsystem of the AUV (cf. Fig. 3) and of the environment (cf. Fig. 4); and (iii) the feature controller that switches between features during runtime, corresponding to the managing subsystem of the AUV (cf. Fig. 5). Footnote 3: The ProFeat model is available at [https://github.com/JulianePa/auv_profeat](https://github.com/JulianePa/auv_profeat) We start by explaining how the feature model was implemented in ProFeat in Sec. 3.1, then describe the behaviour and implementation of the managed and managing subsystem and of the environment in Sec. 3.2, 3.4, and 3.3 respectively. ### The Feature Model We first show how the feature model of the case study is expressed in ProFeat, including connections and constraints among features. Each feature is specified within a **feature**... **endfeature** block, the declaration of the root feature is done in a **root** **feature**... **endfeature** block. The Root FeatureAn excerpt of the implementation of the root feature of the pipeline inspection case study according to Fig. 2 is displayed in Listing 1.1. The root feature can be decomposed into subfeatures; in this case only one, the subfeature robot, cf. Line 2. The all of keyword indicates that all subfeatures have to be included in the feature configuration if the parent feature, in this case the root feature, is included. It is, e.g., also possible to use the one of keyword if exactly one subfeature has to be included. The modules modelling the behaviour of the root feature are specified after the keyword **modules**. In this case study, the root feature is the only feature specifying modules, thus the behaviour of all features is modelled in the modules auv and environment described later. Contrary to an ordinary feature model, ProFeat allows to specify feature-specific rewards in the declaration of a feature. Like costs, rewards are real values, but unlike costs (and although they may be interpreted as costs) rewards are meant to motivate rather than penalise the execution of transitions. Each reward is encapsulated in a **rewards**... **endrewards** block. In the case study, we consider the rewards _time_ and _energy_, cf. Lines 4-18 of Listing 1.1. During each transition the AUV module takes, the reward time is increased by 1; it is a transition-based reward, cf. Line 5. We assume that one time step corresponds to one minute, allowing us to compute an estimate of a mission's duration. The reward energy is a state-based reward and can be used to estimate the necessary battery level for a mission completion. If a thruster of the AUV failed and needs to be recovered, a reward of 2 is given, cf., e.g., Line 9. The model also reflects that switching between the search altitudes requires significant energy. Since the altitude is switched if the AUV is in a search state and a navigation subfeature that does not correspond to the current search altitude is active, a higher energy reward is given in these states. 
If the AUV needs to switch between low and high altitude, as, e.g., in Line 13, an energy reward of 4 is given, while all other altitude switches receive a reward of 2, cf., e.g., Line 14. Since the altitude must be changed to _low_ once the pipeline is found, these cases also receive an energy reward as explained above, cf. Lines 15-16. All other states receive an energy reward of 1. We use the function active to determine which feature is active, i.e., included in the current feature configuration; given a feature, the function returns true if it is active and false otherwise. Note that both time and energy rewards are interpreted as costs. Ordinary FeaturesThe remainder of the feature model is implemented similar to the root feature, but the features do not contain feature-specific modules or rewards. The features are implemented and named according to the feature model in Fig. 2. To have only one initial state, we initialise the model with the features search and low active. ``` 1rootfeature 2allofrobot; 3modulesauv,environment; 4rewards"time" 5[step]true:1; 6endrewards 7 8 9 10/Cosstforbeginginacoverystate 11(s=recover_high):2; 12//omittedcode.. 13 14//Cosstforswitchingaltitudes 15(s=search_high)&active(low):4; 16(s=search_high)&active(med):2; 17(s=found)&active(high):4; 18(s=found)&active(med):2; 19//omittedcode.. 20endrewards 21endfeature ``` Listing 1: An excerpt of the declaration of the root feature of the case study ### The Managed Subsystem The Behavioural Model of the Managed Subsystem.The behaviour of the managed subsystem of the AUV can be described by a probabilistic transition system equipped with features that guard transitions (a probabilistic featured transition system). Only if the feature guarding a transition is included in the current configuration of the managed subsystem of the AUV, the transition can be taken. This transition system adheres to the feature model in Fig. 2 and is depicted in Fig. 3, where a number of details have been omitted to avoid cluttering (in particular, all probabilities). The details can be obtained from the publicly available model. The probabilistic model allows to easily model the possibilities of, e.g., finding and losing the pipeline depending on the system configuration. The transition system can roughly be divided into two parts, one concerning the search for and one the following of the pipeline, as shown by the grey boxes in Fig. 3. At deployment time, i.e., in state _start task_, the AUV can either immediately start following the pipeline if it was deployed above it, or start searching for it. During the search for the pipeline, i.e., when the AUV is in the grey area labelled _search_, the feature _search_ should be active and remain active until the state _found_ is reached. The managing subsystem can switch between the features _low_, _med_ and _high_ during every transition, depending on the water visibility as described later. Once the pipeline is found, the managing subsystem has to deactivate the feature _search_ and activate the feature _follow_, which also implies activating the feature _low_ and deactivating _med_ and _high_ due to the feature constraints in Fig. 2. We assume that the managing subsystem activates and deactivates features during transitions, so the features _follow_ and _low_ should be activated during the transition from the state _found_ to the state _start task_. 
When the AUV is following the pipeline, i.e., in the grey area labelled _follow_, it can also lose the pipeline again, e.g., because of sand covering it or because it drifted off its path due to thruster failures. Then the managing subsystem has to activate the feature _search_ during the transition from _lost pipe_ to _start task_. We distinguish two kinds of transitions: probabilistic transitions that model the behaviour of a certain configuration of the managed subsystem (black transitions) and non-deterministic (featured) transitions that depend on the feature choice of the managing subsystem during runtime (blue transitions). The labels _search_, _follow_, _low_, _med_ and _high_ on the transitions represent the features that have to be active to execute the respective transition. The non-deterministic (blue) transitions implicitly carry the action to start the task or go to the altitude specified by the feature associated with the transition. For instance, the transitions from _search low_ to _search medium_ can be taken if the feature _med_ is active because the transition has the guard _med_. When taking this transition, the AUV should perform the action of going to a medium altitude. The probabilistic (black) transitions with a feature label contain the implicit action to stay at the current altitude because the navigation subfeature has not been changed during the previous transition. Whether a probabilistic or a non-deterministic transition is executed in the search states _search low_, _search medium_ and _search high_ depends on the managing subsystem, i.e., the controller switching between features (cf. Sec. 3.4). If the managing subsystem switched between the features _low_, _med_ and _high_ during the last transition, a non-deterministic transition to the search state corresponding to the new feature will be executed. Otherwise, a probabilistic transition will be executed. For instance, consider the state _search low_. If the feature _low_ is active, then a probabilistic transition will be executed. If, however, the managing subsystem deactivated the feature _low_ during the last transition and activated either _med_ or _high_, then the AUV will perform a transition to the state _search medium_ or _search high_, respectively.
Figure 3: The managed subsystem of the AUV
The ProFeat Implementation of the Managed Subsystem. The module auv models the behaviour of the managed subsystem of the AUV as displayed in Fig. 3, cf. Listing 1.2 for an excerpt of the model. As in Fig. 3, there are thirteen enumerated states in the ProFeat module with names that correspond to the state labels in the figure. The recovery states are named according to the state they are connected to (e.g., the recovery state connected to search_high is called recover_high). The variable s in Line 2 represents the current state of the AUV and is initialised using the keyword init with the state start_task. To record how many meters of the pipeline have already been inspected, the variable d_insp in Line 3 represents the distance the AUV has already inspected the pipeline; it is initialised with 0. The variable inspect represents the desired inspection length and can be set by the user during design time. Since the number of times a thruster failed impacts how much the AUV deviates from its path, the variable t_failed can be increased if a thruster fails while the AUV follows the pipeline. It is bounded by the influence a thruster failure can have on the system (\(\texttt{infl\_tf}\)), which can be set by the user during design time.
The behaviour of the module is specified with _guarded commands_, corresponding to possible, probabilistic transitions, of the form \[\texttt{[action] guard --> prob\_1: update\_1 +... + prob\_n: update\_n;}\] A command may have an optional label action to annotate it or to synchronise with other modules. In PRISM, the guard is a predicate over global and local variables of the model, which can also come from other modules. ProFeat extends the guards by, e.g., enabling the use of the function active. If the guard is true, then the system state is changed with probability prob_i using update_i for all \(i\). An update describes how the system should perform a transition by giving new values for variables, either directly or as a function using other variables. For instance, consider the command in Lines 8-9, which can be read as follows. If the system is in state search_high and the feature high is active, then with a probability of 0.59, the system changes its state to found, with a probability of 0.4 it changes to search_high and with a probability of 0.01 it changes to recover_high. These are exactly the probabilistic transitions shown in Fig. 3 exiting from state search high. This command also has an action label, step. Using this action label, it synchronises with the environment module and the feature controller, as described later. The non-deterministic transitions exiting state search high in Fig. 3 are modelled in Lines 10-11. If the model is in state search_high, but the feature low or med is active, indicating that the AUV should go to the respective altitude, then the state is changed to the respective search state. The transitions exiting the states search_med and search_low are modelled similarly. However, the probability of going to the state found is highest from state search_high and lowest from search_low because the AUV has a wider field of view when performing the search at a higher altitude. Furthermore, the probability of a thruster failure, i.e., of going to the respective recover state, is highest in state search_low and lowest in state search_high because the probability of seaweed getting stuck in the thrusters is higher at a lower altitude. From the following state, the transitions that can be taken depend on the variables d_insp and t_failed. Lines 15-17 consider the case where the distance of the pipeline that has already been inspected (d_insp) is less than the distance the pipeline should be inspected (inspect) and the variable t_failed is 0, indicating that there were no recent thruster failures. Then the AUV stays in the following state and inspects the pipeline one more meter, it loses the pipeline, or a thruster fails and it transitions to the failure state and increases t_failed if t_failed is not at its maximum. If d_insp is less than inspect and t_failed is greater than 0, the probabilities of following and of losing the pipeline depend on the value of t_failed. The bigger the value, the more likely it is to lose the pipeline because it indicates that the AUV's thrusters did not work for some time, causing it to drift off its path. If the already inspected distance is equal to the required inspection distance, the AUV transitions to the done state (cf. Line 20) and finishes the pipeline inspection. If the AUV lost the pipeline (cf. Line 23), then a transition to start_task is taken and the variable t_failed is set to 0 again. All commands in the module auv are labelled with step. 
Thus, every transition receives a time reward of 1, i.e., the time advances with every transition the AUV takes, cf. Lines 4-6 of Listing 1.1. ### The Environment The Behavioural Model of the Environment.We assume that there is a minimum and a maximum visibility of the environment, depending on where the AUV is deployed and set by the user during design time. Furthermore, different environments also have different probabilities of currents that influence the water visibility. This can also be set during design time. The behaviour of the environment is then modelled as depicted in Fig. 4, where _cp_ represents the _current probability_. With the probability of currents _cp_, the water visibility decreases by 1, while it stays the same or increases by 1 with probability (1-_cp_)/2. If the water visibility is already at minimum visibility, the water visibility stays the same with probability (1+_cp_)/2 and, at maximum visibility, it stays the same with probability (1-_cp_). Figure 4: The behaviour of the environment The Implementation of the Environment in ProFeat.The environment is modelled in a separate environment module, cf. Listing 3. The variable water_visib in Line 2 reflects the current water visibility and is initialised parametrically, depending on the minimum and maximum visibility, cf. Line 3. The function round() is pre-implemented in the PRISM language and rounds to the nearest integer. The environment module synchronises with the AUV module via the label of its action, step. Since the guard of the only action in the environment module is true, the environment executes a transition every time the AUV module does. By decoupling the environment module from the AUV module, we obtain a separation of concerns which makes it easier to change the model of the environment if needed. ### The Managing Subsystem The Behavioural Model of the Managing Subsystem.As described in Sec. 2, the managing subsystem of the AUV implements the AUV's adaptation logic, which corresponds to activating and deactivating the features of the managed subsystem. The behaviour of the managing subsystem of the AUV is displayed in Fig. 5. The grey area of the figure includes the transitions that can be taken during the search for the pipeline, and the white area the transitions once the pipeline has been found. Each transition contains a guard, written in black, and an action, written in grey after a vertical bar. During the search for the pipeline, i.e., in the grey area of Fig. 5, the managing subsystem activates and deactivates the features _low_, _med_ and _high_ according to the current water visibility as described in Sec. 2. The activated feature is displayed in grey on the transition, implicitly the other two subfeatures of _navigation_ are deactivated. Note that the transitions in the grey area implicitly carry the guard _s!= found_, i.e., the AUV is not in the state _found_, because they represent the transitions during the search for the pipeline. This guard was omitted for better readability. Once the pipeline has been found, i.e., the managed subsystem is in the state _found_, one of the transitions in the white area, guarded by _s = found_, is taken. These transitions include the action of activating _low_ and _follow_, and thus deactivating _med_, _high_ and _search_. When the AUV loses the pipeline, i.e., it is in the state _lost pipe_, the managing subsystem activates _search_ and deactivates _follow_. 
Since the AUV is following the pipeline at a low altitude, the AUV will start searching at a low altitude. The Implementation of the Managing Subsystem in ProFeat.The managing subsystem of the AUV is implemented as a feature controller in ProFeat. The feature controller can also use _commands_ to change the state of the system. Such commands are similar to those used in a module; they are mostly of the form [action] guard -> update. Each command can have an optional label action to synchronise with the modules, and its guard is a predicate of global and local variables of the model and can also contain the function active. In contrast to the commands in the modules, the feature controller can activate and deactivate features in the update of a command. Several features can be activated and deactivated at the same time, but this cannot be done probabilistically and the resulting feature configuration has to adhere to the feature model. In the pipeline inspection case study, subfeatures of navigation (i.e., the different altitudes at which the AUV can operate) and subfeatures of pipeline_ inspection (i.e., the tasks the robot has to fulfil) can be switched by the feature controller during runtime, cf. Listing 4. When the feature search is active and the pipeline has not been found yet, the feature controller activates and deactivates the altitudes non-deterministically, but according to the current water visibility, as described before. The minimum and maximum water visibility can be set by the user during design time and influence the altitudes associated with the features low, med and high; i.e., it influences when the feature controller is able to switch features. To reflect this, the variables med_visib and high_visib are declared as in Lines 1-2 (a _formula_ in PRISM and ProFeat can be used to assign an identifier to an expression). If the water visibility is less than med_visib, the feature controller activates low (cf. Lines 6-7) because the AUV cannot perceive the seabed from a higher altitude. If the water visibility is between med_visib and high_visib, it chooses non-deterministically between low and med, whereas it chooses non-deterministically between all three altitudes if the water visibility is above high_visib. Note that it is also possible to deactivate or activate a feature if it is already inactive or active, respectively. When the pipeline is found, i.e., the AUV is in state found, the feature controller activates the feature follow and deactivates search, cf. Lines 11-12. Since the AUV should be at a low altitude while following the pipeline, the feature controller also deactivates the features high and med and activates low. If the AUV lost the pipeline, i.e., it is in state lost_pipe, the feature controller deactivates follow and activates search to start the search for the pipeline, cf. Lines 15-16. The feature controller synchronises with the auv and environment modules via action label step. Since all transitions of the modules and feature controller have the same action label, they can only execute a transition if there is a transition with a guard evaluating to true in both modules and in the feature controller. Thus, the feature controller needs to include a transition doing nothing if the feature follow is active and the AUV is not in state lost_pipe, cf. Line 19. ## 4 Analysis ProFeat automatically converts models to PRISM for probabilistic model checking. 
To analyse a PRISM model, properties can be specified in the PRISM property specification language, which includes several probabilistic temporal logics like PCTL, CSL and probabilistic LTL. For family-based analysis, ProFeat extends this specification language to include, e.g., the function active. (ProFeat constructs have to be specified in \(\$\{...\}\) to be correctly translated to the PRISM property specification language.) The operators used for analysis in this paper are P and R, which reason about probabilities of events and about expected rewards, respectively. Since we use Markov decision processes which involve non-determinism, these operators must be further specified to ask for the _minimum_ or _maximum_ probability and expected cost, respectively, for all possible resolutions of non-determinism. The analysis of the model considered two different aspects. First, the rewards energy and time were used to compute some safety guarantees that can be used for the deployment of the AUV. Second, safety properties with regard to unsafe states were analysed. Note that it is not necessary to analyse whether the model satisfies the constraints of the feature model because this is automatically ensured by ProFeat. We analysed two different scenarios; the values used in these scenarios are reported in Table 1. Scenario 1 is in the North Sea, where the minimum and maximum water visibility (in 0.5 meter units) are relatively low and the probability of currents that decrease the water visibility is relatively high. In this case, only 10 meters of the pipeline have to be inspected. Scenario 2 is in the Caribbean Sea, with a higher minimum and maximum visibility and a lower probability of currents compared to the North Sea, and 30 meters of pipeline that have to be inspected. For both scenarios, we first analysed whether it is always possible to finish the pipeline inspection, i.e., reach the state done. This could be confirmed since the minimum probability for all resolutions of non-determinism of eventually reaching the state done is 1.0. Reward PropertiesThe rewards time and energy were used to analyse some safety properties related to the execution of the AUV. Since the AUV only has a limited amount of battery, an estimation of the energy needed to complete the mission is required. This ensures that the AUV is only deployed for the mission if it has sufficient battery to complete it. The commands in Listing 1.5 were used to compute the minimum and maximum expected energy (for all resolutions of non-determinism) to complete the mission. Since the model includes two reward structures, the name of the reward has to be specified in {"..."} after the R operator. Similarly, the minimum and maximum expected time to complete the mission was analysed to give the system operators an estimate of how long the mission might take. The results for Scenarios 1 and 2 are reported in Table 2. It can be seen that the variation of the parameters in the two scenarios strongly influences the expected energy and time of the mission. It is interesting to see the difference between minimum and maximum expected energy and minimum and maximum expected time for Scenario 2 are significantly bigger than for Scenario 1. In particular, the maximum expected energy and time are much higher for Scenario 2 than for Scenario 1. Further analysis in this direction could investigate trade-offs between different scenarios and a better understanding of the influence in the results for the different parameters. 
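As an illustration of how the scenario parameters interact with the environment model of Fig. 4, the sketch below simulates the water-visibility dynamics using the Scenario 1 values from Table 1 (minimum visibility 1, maximum visibility 10, current probability 0.6). The midpoint initialisation follows the parametric initialisation described in Sec. 3.3; the random sampling is only an illustrative resolution of the probabilistic transitions, not a substitute for the exhaustive analysis performed by PRISM.

```python
import random

def step_visibility(v, cp, v_min, v_max, rng=random):
    """One environment transition for the water visibility (cf. Fig. 4)."""
    if v == v_min:
        # stays with probability (1+cp)/2, otherwise increases by 1
        return v if rng.random() < (1 + cp) / 2 else v + 1
    if v == v_max:
        # decreases with probability cp, otherwise stays (probability 1-cp)
        return v - 1 if rng.random() < cp else v
    r = rng.random()
    if r < cp:                      # a current swirls up the seabed
        return v - 1
    if r < cp + (1 - cp) / 2:       # visibility unchanged
        return v
    return v + 1                    # visibility improves

# Example trajectory with the Scenario 1 (North Sea) parameters from Table 1.
v_min, v_max, cp = 1, 10, 0.6
v = round((v_min + v_max) / 2)      # parametric initialisation at the midpoint
trace = [v]
for _ in range(20):
    v = step_visibility(v, cp, v_min, v_max)
    trace.append(v)
print(trace)
```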
Unsafe StatesThruster failures, although we assume that they can be repaired, pose a threat to the AUV. Unforeseen events like strong currents might cause \begin{table} \begin{tabular}{|c||c|c|c|c|} \hline **Scenario** & **min** & **visib** & **max** & **visib** & **current** & **prob** & **inspect** \\ \hline 1 (North Sea) & 1 & 10 & 0.6 & 10 \\ \hline 2 (Caribbean Sea) & 3 & 20 & 0.3 & 30 \\ \hline \end{tabular} \end{table} Table 1: Two different scenarios used for analysis \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{**Energy**} & \multicolumn{2}{c|}{**Time**} \\ \hline **Scenario** & **min** & **max** & **min** & **max** \\ \hline 1 & 24.78 & 44.39 & 23.66 & 32.40 \\ \hline 2 & 59.08 & 4723.29 & 55.54 & 1315.58 \\ \hline \end{tabular} \end{table} Table 2: Expected min-/maximum rewards for completing the mission for both scenarios the AUV to be damaged, e.g., by causing it to crash into a rock. To analyse this, the state space was partitioned into two parts, _safe_ and _unsafe_ states. This was achieved by using labels, cf. Lines 1-4 of Listing 6. These labels were then used to calculate the probability of several properties. The minimum probability of only taking safe states (cf. Line 5) was shown to be 0.65 for Scenario 1 and 0.32 for Scenario 2. As expected, the probability of only taking safe states is higher for a shorter pipeline inspection. It is also important to ensure that a safe state will be reached from an unsafe state after a short period of time, as, e.g., in Line 6, where k is an integer. For every unsafe state, the minimum probability (for all possible resolutions of non-determinism) of reaching a safe state within k time steps is calculated. Then the minimum over all these probabilities is taken. Thus, it gives the minimum probability of reaching a safe state from an unsafe state in k time steps. PRISM experiments allow analysing this property automatically for a specified range of k. Using PRISM experiments, it was shown that in both scenarios the probability of reaching a safe state from an unsafe state is above 0.95 after 5 time steps and above 0.99 after 7 time steps. The probability of going to an unsafe state from a safe state should be as small as possible. This is analysed with the properties in Lines 7-8. First, the maximum probability (over all possible resolutions of non-determinism) for reaching an unsafe state from a safe state is calculated, and then the maximum (or average) is taken. Again, PRISM experiments were used to analyse this, the plotted graphs for Scenarios 1 and 2 are displayed in Fig. 6. They show that the probability of reaching an unsafe state from a safe state increases with the number of considered time steps. Furthermore, the probability of reaching an unsafe state from a safe state stabilises much later and at a higher value in Scenario 2 than in Scenario 1. While the maximum probability of reaching an unsafe state from a safe state stabilises after about 42 time steps at \(\approx\)0.37 in Scenario 1, it stabilises after about 76 time steps at \(\approx\)0.69 in Scenario 2. Similar differences can be observed for the average probability. ## 5 Related Work The analysis of behavioural requirements is often crucial when developing an SAS that operates in the uncertainty of a physical environment. These requirements often use quantitative metrics that change during runtime. Both rule-based and goal-based adaptation logics can be used to enable the SAS to meet its behavioural requirements. 
Many practitioners rely on formal methods to provide evidence for the system's compliance with such requirements [25, 20], but many different methods are used [15, 1]. We consider related work for family-based modelling and analysis approaches. Family-based model checking of transition systems with features allows to model check properties of multiple behavioural models in a single run, following the seminal work by Classen et al. [10]. Such model-checking tools can be encoded in well-known classical model checkers like SPIN [17], NuSMV [9] or PRISM [19]. In this paper, we used ProFeat [8], a software tool built on top of PRISM for the analysis of feature-aware probabilistic models. Alternatively, QFLan [23] offers probabilistic simulations to yield statistical approximations, thus trading 100% precision for scalability. In [6, 7], configurable systems are modelled and analysed as role-based systems, an extension of feature-oriented systems, with a focus on feature interaction; in contrast to our paper, they do not consider a separation between managed and managing subsystem. Software product lines (SPLs) can be seen as families of (software product) models where feature selection yields variations in the products (configurations). SPLs have previously been proposed to model static variability, i.e., variability during design time, for robotic systems [12]. In [3] it is argued that most of the costs for robotic systems come from non-reusable software. A robotic system mostly contains software tailored to the specific application and embodiment of the robot, and often even software libraries for common robotic functionalities are not reusable. Therefore, they must be re-developed all the time. Thus, a new approach for the development of robotic software using SPLs is proposed in [3]. Finally, dynamic SPLs (DSPLs) [13, 16] have been proposed to manage variability during runtime for self-adaptive robots [4]. There are several approaches that model, but do not analyse, SASs as DSPLs, e.g., [2, 11, 14]. For robotics, the authors in [12] propose the toolchain HyperFlex to model robotic systems as SPLs; it supports the design and reuse of reference architectures for robotic systems and was extended with the Robot Perception Specification Language for robotic perception systems in [5]. It allows to represent variability at different abstraction levels, and feature models from different parts of the system can be composed in several different ways. However, contrary to the approach used in this paper, HyperFlex only considers design time variability. Furthermore, it is only used for modelling robotic systems, not for analysing them.
Figure 6: Results for reaching an unsafe state from a safe state in k time steps
## 6 Discussion and Future Work In this paper, we used a feature model together with a probabilistic, feature guarded transition system to model the managed subsystem of an AUV used for pipeline inspection, and a controller switching between these features to model the managing subsystem of the AUV. This allowed modelling the managed subsystem of the AUV as a family of systems, where each family member corresponds to a valid feature configuration of the AUV. The managing subsystem could then be considered as a control layer capable of dynamically switching between these feature configurations depending on both environmental and internal conditions. The tool ProFeat was used for probabilistic family-based model checking, analysing reward and safety properties.
ProFeat allowed us to model the two different layers of abstraction of an SAS, the managed and managing subsystem, which also makes it easier to understand the model and the adaptation logic. Furthermore, it makes analysing all configurations of the managed subsystem more efficient by enabling family-based model checking. However, it remains to be seen how this scales with larger models.

The case study in this paper is of course a highly simplified model of an AUV and its mission. However, we showed that it is feasible to model and analyse a two-layered self-adaptive cyber-physical system as a family of configurations with a controller switching between them. To analyse a real AUV, both the models of the AUV and the environment, and in particular the probabilities, have to be adapted to the robot and the environment with the help of real data and domain experts. We plan to investigate this together with an industrial partner of the MSCA network REMARO (Reliable AI for Marine Robotics).

In the future, we plan to investigate which kinds of systems can be modelled and analysed in the way we treated this case study, in order to develop a general methodology for modelling and analysing SASs as family-based systems. Furthermore, we plan to find optimal strategies for the managing subsystem, i.e., the controller switching between features, e.g., to minimise energy consumption. We would also like to identify patterns relating the choice of a particular feature configuration to its effect on the quality criteria of the system. Finding such control patterns could help to make the adaptation logic of the managing subsystem more resilient to faults.

#### Acknowledgments.

This work was supported by the European Union's Horizon 2020 Framework Programme through the MSCA network REMARO (Grant Agreement No 956200) and by the Italian MUR PRIN 2020TL3X8X project T-LADIES (Typeful Language Adaptation for Dynamic, Interacting and Evolving Systems).
2303.00564
Learning curves for deep structured Gaussian feature models
In recent years, significant attention in deep learning theory has been devoted to analyzing when models that interpolate their training data can still generalize well to unseen examples. Many insights have been gained from studying models with multiple layers of Gaussian random features, for which one can compute precise generalization asymptotics. However, few works have considered the effect of weight anisotropy; most assume that the random features are generated using independent and identically distributed Gaussian weights, and allow only for structure in the input data. Here, we use the replica trick from statistical physics to derive learning curves for models with many layers of structured Gaussian features. We show that allowing correlations between the rows of the first layer of features can aid generalization, while structure in later layers is generally detrimental. Our results shed light on how weight structure affects generalization in a simple class of solvable models.
Jacob A. Zavatone-Veth, Cengiz Pehlevan
2023-03-01T15:11:23Z
http://arxiv.org/abs/2303.00564v3
# Learning curves for deep structured Gaussian feature models ###### Abstract In recent years, significant attention in deep learning theory has been devoted to analyzing the generalization performance of models with multiple layers of Gaussian random features. However, few works have considered the effect of feature anisotropy; most assume that features are generated using independent and identically distributed Gaussian weights. Here, we derive learning curves for models with many layers of structured Gaussian features. We show that allowing correlations between the rows of the first layer of features can aid generalization, while structure in later layers is generally detrimental. Our results shed light on how weight structure affects generalization in a simple class of solvable models. ## I Introduction Characterizing how data structure and model architecture affect generalization performance is among the foremost goals of deep learning theory [1; 2]. A fruitful line of inquiry has focused on the properties of a class of simplified models that are asymptotically solvable: neural networks in which only the readout layer is trained and other weights are random, which are known as random feature models (RFMs) [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. Though RFMs cannot capture the effects of representation learning on generalization in richly-trained neural networks [21; 12; 22], they have substantially advanced our understanding of how data structure and model architecture interact to give rise to a wide array of generalization phenomena observed in deep learning, including double-descent and benign overfitting [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 23]. However, these analyses consider the effect only of correlations in the data, and do not address the possibility of correlations between the random weights. It is standard to assume that the elements of the weight matrices at each layer are independent and identically distributed Gaussian random variables [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. As a result, how weight anisotropy affects generalization in deep RFMs--in particular, if it can affect the asymptotic scaling of generalization error with dataset size and network width [15; 18; 24]--remains unclear. In this note, we take the first step towards filling that gap in our theoretical understanding of RFMs by computing the asymptotic generalization error of the simplest class of deep RFMs with anisotropic weight correlations: models with linear activations. We show that anisotropic weight correlations beyond the first layer are generally detrimental for generalization. For a tractable class of anisotropic correlations--those with power law spectra [15; 18; 24]--we show that structure in deeper layers does not alter the scaling laws of generalization. We then move beyond the ridgeless estimator, and consider zero-temperature Bayesian inference in deep linear RFMs. Here, depending on the overall scale of the weight prior, structure can be either helpful or harmful, as changing the scale can introduce model mismatch. Taken together, these results are consistent with the intuition that representation learning at only the first layer of a deep linear model is sufficient to achieve optimal performance [25; 26; 12]. 
## II Preliminaries We consider depth-\(L\) linear RFMs with input \(\mathbf{x}\in\mathbb{R}^{n_{0}}\) and scalar output given by \[g(\mathbf{x};\mathbf{v},\mathbf{F})=\frac{1}{\sqrt{n_{0}}}(\mathbf{F}\mathbf{ v})^{\top}\mathbf{x}, \tag{1}\] where the feature matrix \(\mathbf{F}\in\mathbb{R}^{n_{0}\times n_{L}}\) is fixed and the vector \(\mathbf{v}\in\mathbb{R}^{n_{L}}\) is trainable. If \(L=0\), corresponding to standard linear regression, the feature matrix is simply the identity: \(\mathbf{F}=\mathbf{I}_{n_{0}}\). If \(L>0\), we take the feature matrix to be defined by a product of \(L\) factors \(\mathbf{U}_{\ell}\in\mathbb{R}^{n_{\ell-1}\times n_{\ell}}\): \[\mathbf{F}=\frac{1}{\sqrt{n_{1}\cdots n_{L}}}\mathbf{U}_{1}\cdots\mathbf{U}_{ L}. \tag{2}\] We draw the random feature matrices independently from matrix Gaussian distributions \[\mathbf{U}_{\ell}\sim\mathcal{M}\mathcal{N}_{n_{\ell-1}\times n_{\ell}}( \mathbf{0},\mathbf{\Gamma}_{\ell},\mathbf{\Sigma}_{\ell}) \tag{3}\] for input covariance matrices \(\mathbf{\Gamma}_{\ell}\in\mathbb{R}^{n_{\ell-1}\times n_{\ell-1}}\) and output covariance matrices \(\mathbf{\Sigma}_{\ell}\in\mathbb{R}^{n_{\ell}\times n_{\ell}}\), such that \(\mathbb{E}[(U_{\ell})_{ij}(U_{\ell^{\prime}})_{i^{\prime}j^{\prime}}]=\delta_ {\ell\ell^{\prime}}(\Gamma_{\ell})_{ii^{\prime}}(\Sigma_{\ell})_{jj^{\prime}}\). Subject to the constraints of layer-wise independence and separability--which are required for the factors to be matrix-Gaussian distributed--this is the most general covariance structure one could consider. One might wish to relax this to include non-separable covariance tensors \(\mathbb{E}[(U_{\ell})_{ij}(U_{\ell^{\prime}})_{i^{\prime}j^{\prime}}]=\delta _{\ell\ell^{\prime}}(\chi_{\ell})_{ii^{\prime}jj^{\prime}}\), but this would spoil the matrix-Gaussianity of the factors, and to our knowledge does not appear to be addressable using standard methods [27; 28]. We generate training datasets according to a structured Gaussian covariate model, with \(p\) i.i.d. training examples \((\mathbf{x}_{\mu},y_{\mu})\) generated as \[\mathbf{x}_{\mu}\sim_{\text{i.i.d.}}\mathcal{N}(\mathbf{0},\mathbf{\Sigma}_{0 }),\qquad y_{\mu}=\frac{1}{\sqrt{n_{0}}}\mathbf{w}_{*}^{\top}\mathbf{x}_{\mu}+ \xi_{\mu}, \tag{4}\] where the teacher weight vector \(\mathbf{w}_{*}\) is fixed and the label noise follows \[\xi_{\mu}\sim_{\text{i.i.d.}}\mathcal{N}(0,\eta^{2}). \tag{5}\] We collect the covariates into a matrix \(\mathbf{X}\in\mathbb{R}^{p\times n_{0}}\), and the targets into a vector \(\mathbf{y}\in\mathbb{R}^{p}\). As in most works on RFMs [3; 4; 5; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20], our focus is on the ridge regression estimator \[\mathbf{v}=\operatorname*{arg\,min}_{\mathbf{v}}L\quad\text{for}\quad L=\frac{1 }{2}\left\|\frac{1}{\sqrt{n_{0}}}\mathbf{X}\mathbf{F}\mathbf{v}-\mathbf{y} \right\|^{2}+\frac{\lambda}{2}\|\mathbf{\Gamma}_{L+1}^{-1/2}\mathbf{v}\|_{2}^ {2}, \tag{6}\] where the positive-definite matrix \(\mathbf{\Gamma}_{L+1}\in\mathbb{R}^{n_{L}\times n_{L}}\) controls the anisotropy of the norm and the ridge parameter \(\lambda>0\) sets the regularization strength. This minimization problem has the well-known closed form solution \[\hat{\mathbf{v}}=\frac{1}{\sqrt{n_{0}}}\left(\lambda\mathbf{\Gamma}_{L+1}^{-1 }+\frac{1}{n_{0}}\mathbf{F}^{\top}\mathbf{X}^{\top}\mathbf{X}\mathbf{F}\right) ^{-1}\mathbf{F}^{\top}\mathbf{X}^{\top}\mathbf{y}. 
\tag{7}\] As in most works, we are chiefly interested in the ridgeless limit \(\lambda\downarrow 0\), in which the ridge regression solution gives the minimum \(\ell_{2}\) norm interpolant of the training data. We measure performance of this estimator by the generalization error \[\epsilon_{p,n_{0},\dots,n_{L}}=\mathbb{E}_{\mathbf{x}}\left(g( \mathbf{x};\hat{\mathbf{v}},\mathbf{F})-\mathbb{E}_{\xi}[y(\mathbf{x})]\right) ^{2}=\frac{1}{n_{0}}\|\mathbf{\Sigma}_{0}^{1/2}(\mathbf{F}\hat{\mathbf{v}}- \mathbf{w}_{*})\|^{2}, \tag{8}\] which is a random variable with distribution induced by the training data and feature weights. This leads us to an important observation: including structured input-input covariances is equivalent to transforming the feature-feature covariances. We state this formally as: **Lemma II.1**.: _Fix sets of matrices \(\{\mathbf{\Gamma}_{\ell}\}_{\ell=1}^{L+1}\) and \(\{\mathbf{\Sigma}_{\ell}\}_{\ell=0}^{L}\), and a target vector \(\mathbf{w}_{*}\). Let \(\epsilon_{p,n_{0},\dots,n_{L}}\) be the resulting generalization error as defined in (8). Let_ \[\tilde{\mathbf{\Gamma}}_{\ell} =\mathbf{I}_{n_{\ell-1}} \text{for }\ell=1,\dots,L+1, \tag{9}\] \[\tilde{\mathbf{\Sigma}}_{\ell} =\mathbf{\Gamma}_{\ell+1}^{1/2}\mathbf{\Sigma}_{\ell}\mathbf{ \Gamma}_{\ell+1}^{1/2} \text{for }\ell=0,\dots,L,\text{ and}\] (10) \[\tilde{\mathbf{w}}_{*} =\mathbf{\Gamma}_{1}^{-1/2}\mathbf{w}_{*}. \tag{11}\] _Let \(\tilde{\epsilon}_{p,n_{0},\dots,n_{L}}\) be the generalization error for these transformed covariance matrices and target. Then, for any \(\lambda>0\), we have the equality in distribution \(\epsilon_{p,n_{0},\dots,n_{L}}\overset{d}{=}\tilde{\epsilon}_{p,n_{0},\dots,n_ {L}}\)._ Proof of Lemma ii.1.: As the features and data are Gaussian, we can write \(\mathbf{X}\stackrel{{ d}}{{=}}\mathbf{\Sigma}_{0}^{1/2}\mathbf{Z}_{0}\) and \(\mathbf{U}_{\ell}\stackrel{{ d}}{{=}}\mathbf{\Gamma}_{\ell}^{1/2} \mathbf{Z}_{\ell}\mathbf{\Sigma}_{\ell}^{1/2}\) for unstructured Gaussian matrices \((Z_{\ell})_{ij}\sim_{\text{i.i.d.}}\mathcal{N}(0,1)\). Substituting these representations into the ridge regression solution (7) and the generalization error (8), the claim follows. Therefore, we may take \(\mathbf{\Gamma}_{\ell}=\mathbf{I}_{n_{\ell-1}}\) without loss of generality. Moreover, thanks to the rotation-invariance of the isotropic Gaussian factors \(\mathbf{Z}_{\ell}\), we may in fact take the remaining covariance matrices \(\mathbf{\Sigma}_{\ell}\) to be diagonal without loss of generality, so long as we then express \(\tilde{\mathbf{w}}_{*}\) in the basis of eigenvectors of \(\mathbf{\Sigma}_{0}\). An important qualitative takeaway of this result is that changing the covariance matrix of the inputs of the first layer \(\mathbf{\Gamma}_{1}\) is equivalent to modifying the data covariance matrix, which was in a simpler form observed in the shallow setting (\(L=1\)) by Pandey _et al._[29]. ## III Asymptotic learning curves Having defined the setting of our problem, we can define our concrete objective and state our main results, deferring their interpretation to the following section. We consider the standard proportional asymptotic limit \[p,n_{0},\ldots,n_{L}\rightarrow\infty,\quad\text{with}\quad n_{ \ell}/p\rightarrow\alpha_{\ell}\in(0,\infty), \tag{12}\] which we will refer to as the thermodynamic limit. 
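The setup above is straightforward to instantiate at finite size. As a minimal Monte-Carlo sketch (illustrative code, not the numerical pipeline used for our figures, which is described in Appendix E), the Python snippet below samples each factor as \(\mathbf{U}_{\ell}=\mathbf{\Gamma}_{\ell}^{1/2}\mathbf{Z}_{\ell}\mathbf{\Sigma}_{\ell}^{1/2}\), forms \(\mathbf{F}\) as in (2), draws a training set according to (4)-(5), solves the ridge problem (7) with an isotropic readout penalty \(\mathbf{\Gamma}_{L+1}=\mathbf{I}_{n_{L}}\), and evaluates the generalization error (8). All function names and the parameter values in the example are purely illustrative.

```python
import numpy as np

def sqrtm_psd(M):
    """Symmetric square root of a positive semi-definite matrix."""
    evals, evecs = np.linalg.eigh(M)
    return (evecs * np.sqrt(np.clip(evals, 0.0, None))) @ evecs.T

def sample_features(widths, Gammas, Sigmas, rng):
    """Draw U_l ~ MN(0, Gamma_l, Sigma_l) as Gamma_l^{1/2} Z_l Sigma_l^{1/2} and
    return F = U_1 ... U_L / sqrt(n_1 ... n_L); widths = (n_0, ..., n_L)."""
    F = np.eye(widths[0])
    for l in range(1, len(widths)):
        Z = rng.standard_normal((widths[l - 1], widths[l]))
        F = F @ (sqrtm_psd(Gammas[l - 1]) @ Z @ sqrtm_psd(Sigmas[l - 1])) / np.sqrt(widths[l])
    return F

def generalization_error(p, widths, Sigma0, w_star, eta, lam, Gammas, Sigmas, rng):
    """One realization of the error (8) for the ridge estimator (7) with Gamma_{L+1} = I."""
    n0 = widths[0]
    F = sample_features(widths, Gammas, Sigmas, rng)
    X = rng.standard_normal((p, n0)) @ sqrtm_psd(Sigma0)          # covariates, Eq. (4)
    y = X @ w_star / np.sqrt(n0) + eta * rng.standard_normal(p)   # noisy targets, Eqs. (4)-(5)
    A = X @ F / np.sqrt(n0)                                       # p x n_L design matrix
    v_hat = np.linalg.solve(lam * np.eye(A.shape[1]) + A.T @ A, A.T @ y)   # Eq. (7)
    resid = F @ v_hat - w_star
    return resid @ Sigma0 @ resid / n0                            # Eq. (8)

# Example: one hidden layer (L = 1) with unstructured data and weights.
rng = np.random.default_rng(0)
n0, n1, p = 200, 400, 100
eps = generalization_error(p, (n0, n1), np.eye(n0), rng.standard_normal(n0), 0.0, 1e-8,
                           [np.eye(n0)], [np.eye(n1)], rng)
print(eps)
```

Averaging such draws over many realizations at moderate sizes already tracks the asymptotic learning curves derived below; a small positive \(\lambda\) stands in for the ridgeless limit.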
Our goal is to compute the limiting generalization error: \[\epsilon=\lim_{p,n_{0},\ldots,n_{L}\rightarrow\infty}\mathbb{E}_{ \mathcal{D}}\frac{1}{n_{0}}\|\mathbf{\Sigma}_{0}^{1/2}(\mathbf{F}\mathbf{v}- \mathbf{w}_{*})\|^{2}, \tag{13}\] where \(\mathbb{E}_{\mathcal{D}}\) denotes expectation over all sources of quenched disorder in the problem, i.e., the training data and the random feature weights. In the thermodynamic limit, we expect the generalization error to concentrate, which is why we compute its average in (13) [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]. To have a well-defined thermodynamic limit, the covariances \(\tilde{\mathbf{\Sigma}}_{\ell}\) and the teacher \(\tilde{\mathbf{w}}_{\ell}\) must be in some sense sufficiently well-behaved. We consider the following conditions, which are the generalization to our setting of those assumed in previous work [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]: **Assumption III.1**.: _We assume that we are given deterministic sequences of positive-definite matrices \(\tilde{\mathbf{\Sigma}}_{\ell}(n_{\ell})\) and vectors \(\tilde{\mathbf{w}}_{*}(n_{0})\) indexed by the system size, such that the limiting (weighted) spectral moment generating functions_ \[M_{\tilde{\mathbf{\Sigma}}_{\ell}}(z)=\lim_{n_{\ell}\rightarrow \infty}\frac{1}{n_{\ell}}\operatorname{tr}[\tilde{\mathbf{\Sigma}}_{\ell}(z \mathbf{I}_{n_{\ell}}-\tilde{\mathbf{\Sigma}}_{\ell})^{-1}]\quad\text{and} \quad\psi(z)=\lim_{n_{0}\rightarrow\infty}\frac{1}{n_{0}}\tilde{\mathbf{w}}_{ *}^{\top}\tilde{\mathbf{\Sigma}}_{0}(z\mathbf{I}_{n_{0}}+\tilde{\mathbf{ \Sigma}}_{0})^{-1}\tilde{\mathbf{w}}_{*} \tag{14}\] _are well-defined, for all \(\ell=0,\ldots,L\)._ We can now state our results. As a preliminary step, we first give an expression for the generalization error for a fixed teacher \(\tilde{\mathbf{w}}_{*}\) at finite ridge \(\lambda\). Then, we pass to the ridgeless limit, on which we focus for the remainder of the paper. At finite ridge, we have the following: **Proposition III.1**.: _Assume Assumption III.1 holds. For \(\lambda>0\), let \(\zeta\) solve the self-consistent equation_ \[\lambda=\frac{1-\zeta}{\zeta}\prod_{\ell=0}^{L}\frac{-\zeta}{ \alpha_{\ell}}M_{\tilde{\mathbf{\Sigma}}_{\ell}}^{-1}\left(-\frac{\zeta}{ \alpha_{\ell}}\right). \tag{15}\] _In terms of \(\zeta\), let \(\kappa_{\ell}(\zeta)\) solve_ \[\mathbb{E}_{\tilde{\sigma}_{\ell}}\left[\frac{\tilde{\sigma}_{ \ell}}{\kappa_{\ell}(\zeta)+\tilde{\sigma}_{\ell}}\right]=-M_{\tilde{\mathbf{ \Sigma}}_{\ell}}(-\kappa_{\ell}(\zeta))=\frac{\zeta}{\alpha_{\ell}} \tag{16}\] _for \(\ell=0,\ldots,L\), where \(\mathbb{E}_{\tilde{\sigma}_{\ell}}[\cdot]\) denotes expectation with respect to the limiting spectral distribution of \(\tilde{\mathbf{\Sigma}}_{\ell}\), and let_ \[\mu_{\ell}(\zeta)=-\frac{\alpha_{\ell}}{\zeta}\kappa_{\ell}(\zeta )M_{\tilde{\mathbf{\Sigma}}_{\ell}}^{\prime}\left(-\kappa_{\ell}(\zeta)\right) =1-\frac{\alpha_{\ell}}{\zeta}\mathbb{E}_{\tilde{\sigma}_{\ell}}\left[\left( \frac{\tilde{\sigma}_{\ell}}{\kappa_{\ell}(\zeta)+\tilde{\sigma}_{\ell}} \right)^{2}\right]. \tag{17}\] _Then, the learning curve (13) at finite ridge for a fixed target is given by_ \[\big{[}1+\big{(}\sum_{\ell=0}^{L}\tfrac{1-\mu_{\ell}}{\mu_{\ell}} \big{)}(1-\zeta)\big{]}\epsilon=\big{(}\sum_{\ell=1}^{L}\tfrac{1-\mu_{\ell}}{ \mu_{\ell}}\big{)}\kappa_{0}\psi(\kappa_{0})-\tfrac{\kappa_{0}^{2}}{\mu_{0}} \psi^{\prime}(\kappa_{0})+\big{(}\sum_{\ell=0}^{L}\tfrac{1-\mu_{\ell}}{\mu_{ \ell}}\big{)}\zeta\eta^{2}. 
\tag{18}\] Proof of Proposition iii.1.: We defer the derivation of (18) to Appendix A. To compute the disorder average in (13), we express the minimization problem in (6) as the zero-temperature limit \(\beta\to\infty\) of an auxiliary Gibbs distribution \(p(\mathbf{v})\propto e^{-\beta L}\), and evaluate the average over the random data random feature weights using the non-rigorous replica method from the statistical mechanics of disordered systems [31, 32]. This computation is lengthy but standard, and is closely related to the approach used in our previous works [12, 28]. All of our results are obtained under a replica-symmetric _Ansatz_; as the ridge regression problem (6) is convex, we expect replica symmetry to be unbroken [31, 33, 34]. From the self-consistent equation (15), we recognize that \(\zeta\) is is up to a sign the spectral moment generating function of the feature Gram matrix \(\mathbf{K}=\mathbf{XFF}^{\top}\mathbf{X}^{\top}/n_{0}\), which is a product-Wishart random matrix [28]: \[\zeta(\lambda)=-M_{\mathbf{K}}(-\lambda). \tag{19}\] This dependence falls out of the replica computation of the generalization error using an auxiliary Gibbs distribution; we emphasize that one could take an alternative approach in which the generalization error is first expressed in terms of \(M_{\mathbf{K}}\) and then use results on the spectra of product-Wishart matrices [28, 18]. In principle, we could now directly proceed to study how weight structure affects (18) for some fixed ridge \(\lambda\). However, as long as there is structure in the weights and/or the data, the self-consistent equation (15) must generally be solved numerically [28, 13]. To allow us to make analytical progress, we therefore focus on the ridgeless limit \(\lambda\downarrow 0\) for the remainder of the present paper, and leave careful analysis of the \(\lambda>0\) case to future work. This follows the path of most recent studies of models with linear random features [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. We therefore emphasize that we state Proposition III.1 merely as a preliminary result. Before giving our result for the generalization error in the ridgeless limit, we warn the reader of an impending, somewhat severe abuse of notation: in Proposition III.2 and for the remainder of the paper, we will re-define \(\kappa_{\ell}\) to be given by its value for the solution for \(\zeta\) appropriate in the regime of interest. Moreover, we will simply write \(\epsilon\) for \(\lim_{\lambda\downarrow 0}\epsilon\). **Proposition III.2**.: _Assume Assumption III.1 holds. For \(\ell=0,\ldots,L\), in the regime \(\alpha_{\ell}>1\), let \(\kappa_{\ell}\) be given by the unique non-negative solution to the implicit equation_ \[\frac{1}{\alpha_{\ell}}=-M_{\tilde{\mathbf{E}}_{\ell}}(-\kappa_{\ell})= \mathbb{E}_{\tilde{\sigma}_{\ell}}\left[\frac{\tilde{\sigma}_{\ell}}{\kappa_ {\ell}+\tilde{\sigma}_{\ell}}\right]. \tag{20}\] _In terms of \(\kappa_{\ell}\), let_ \[\mu_{\ell}=-\alpha_{\ell}\kappa_{\ell}M^{\prime}_{\tilde{\mathbf{E}}_{\ell} }(-\kappa_{\ell})=1-\alpha_{\ell}\mathbb{E}_{\tilde{\sigma}_{\ell}}\left[ \left(\frac{\tilde{\sigma}_{\ell}}{\kappa_{\ell}+\tilde{\sigma}_{\ell}} \right)^{2}\right]. \tag{21}\] _In the regime \(\alpha_{\min}<\alpha_{0}\), let \(\kappa_{\min}\) be the unique non-negative solution to the implicit equation_ \[\frac{\alpha_{\min}}{\alpha_{0}}=-M_{\tilde{\mathbf{E}}_{0}}(-\kappa_{\min} )=\mathbb{E}_{\tilde{\sigma}_{0}}\left[\frac{\tilde{\sigma}_{0}}{\kappa_{\min }+\tilde{\sigma}_{0}}\right]. 
\tag{22}\] _Then, letting \(\alpha_{\min}=\min\{\alpha_{1},\cdots,\alpha_{L}\}\), the learning curve (13) for a fixed target in the ridgeless limit \(\lambda\downarrow 0\) is given by_ \[\epsilon=\begin{cases}(\sum_{\ell=1}^{L}\frac{1-\mu_{\ell}}{\mu_{\ell}})\kappa _{0}\psi(\kappa_{0})-\frac{\kappa_{0}^{2}}{\mu_{0}}\psi^{\prime}(\kappa_{0})+ \big{(}\sum_{\ell=0}^{L}\frac{1-\mu_{\ell}}{\mu_{\ell}}\big{)}\eta^{2},&\alpha _{0},\alpha_{\min}>1\\ \frac{\kappa_{\min}\psi(\kappa_{\min})}{1-\alpha_{\min}}+\frac{\alpha_{\min} }{1-\alpha_{\min}}\eta^{2},&\alpha_{\min}<1,\alpha_{\min}<\alpha_{0}\\ \frac{\alpha_{0}}{1-\alpha_{0}}\eta^{2},&\alpha_{0}<1,\alpha_{0}<\alpha_{\min}.\end{cases} \tag{23}\] Proof of Proposition iii.2.: We derive (23) as the zero-ridge limit of Proposition III.1 in Appendix A. In Appendix C, we provide a notational dictionary to help compare this result to prior works. One special case of this result is **Corollary III.1**.: _If \(L=0\), we have_ \[\epsilon=\begin{cases}-\frac{\kappa_{0}^{2}}{\mu_{0}}\psi^{\prime}(\kappa_{0} )+\frac{1-\mu_{0}}{\mu_{0}}\eta^{2},&\alpha_{0}>1\\ \frac{\alpha_{0}}{1-\alpha_{0}}\eta^{2},&\alpha_{0}<1.\end{cases} \tag{24}\] This recovers the known, rigorously proved result for linear ridgeless regression [4, 5, 6, 7, 15, 16, 17]. For larger depths, an important simplifying case of Proposition III.2 is that in which the data and features are unstructured, in which case the generalization error is given by **Corollary III.2**.: _If \(\tilde{\mathbf{\Sigma}}_{\ell}=\mathbf{I}_{n_{\ell}}\) for \(\ell=0,\ldots,L\), we have, for any target satisfying \(\|\tilde{\mathbf{w}}_{\star}\|^{2}=n_{0}\),_ \[\epsilon=\begin{cases}\big{(}1+\sum_{\ell=1}^{L}\frac{1}{\alpha_{ \ell}-1}\big{)}\big{(}1-\frac{1}{\alpha_{0}}\big{)}+\big{(}\sum_{\ell=0}^{L} \frac{1}{\alpha_{\ell}-1}\big{)}\eta^{2},&\alpha_{0},\alpha_{\min}>1\\ \frac{1-\alpha_{\min}/\alpha_{0}}{1-\alpha_{\min}}+\frac{\alpha_{\min}}{1- \alpha_{\min}}\eta^{2},&\alpha_{\min}<1,\alpha_{\min}<\alpha_{0}\\ \frac{\alpha_{0}}{1-\alpha_{0}}\eta^{2},&\alpha_{0}<1,\alpha_{0}<\alpha_{\min }.\end{cases} \tag{25}\] Proof of Corollary iii.2.: We have \(M_{\mathbf{I}_{n_{\ell}}}(z)=1/(z-1)\), hence \(\kappa_{\ell}=\alpha_{\ell}-1\), \(\mu_{\ell}=1-1/\alpha_{\ell}\), and \(\kappa_{\min}=\alpha_{0}/\alpha_{\min}-1\). Finally, for any fixed teacher vector satisfying \(\|\tilde{\mathbf{w}}_{\star}\|^{2}=n_{0}\), we have \(\psi(z)=1/(z+1)\) if \(\tilde{\mathbf{\Sigma}}_{0}=\mathbf{I}_{n_{0}}\). Substituting these results into (23), we obtain (25). This recovers results obtained in our previous work [12], and in the single-layer case \(L=1\) recovers results obtained by Rocks and Mehta [13, 14], and by Hastie _et al._[5] (see Appendix C). 
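For concrete spectra, the self-consistent equations (20)-(21) are easy to solve numerically, which gives a direct way of evaluating the learning curve. The following sketch is illustrative code only (not the pipeline of Appendix E): it finds \(\kappa_{\ell}\) by root-finding on (20), computes \(\mu_{\ell}\) from (21), and assembles the overparameterized branch of (23); the target-dependent functions \(\psi\) and \(\psi^{\prime}\) of (14) must be supplied by the user. For flat spectra and an isotropic target it reproduces the closed form (25).

```python
import numpy as np
from scipy.optimize import brentq

def kappa(alpha, sigma):
    """Solve Eq. (20), E[sigma/(kappa+sigma)] = 1/alpha, for kappa >= 0 (requires alpha > 1);
    `sigma` is an array of eigenvalues representing the limiting spectrum."""
    f = lambda k: np.mean(sigma / (k + sigma)) - 1.0 / alpha
    return brentq(f, 0.0, 10.0 * alpha * float(np.max(sigma)))   # f(0) > 0, f(upper) < 0

def mu(alpha, sigma, k):
    """Eq. (21): mu = 1 - alpha * E[(sigma/(kappa+sigma))^2]."""
    return 1.0 - alpha * np.mean((sigma / (k + sigma)) ** 2)

def error_overparameterized(alphas, spectra, psi, dpsi, eta):
    """First branch of Eq. (23), valid for alpha_0, alpha_min > 1."""
    ks = [kappa(a, s) for a, s in zip(alphas, spectra)]
    ms = [mu(a, s, k) for a, s, k in zip(alphas, spectra, ks)]
    hidden = sum((1.0 - m) / m for m in ms[1:])
    total = sum((1.0 - m) / m for m in ms)
    return hidden * ks[0] * psi(ks[0]) - ks[0] ** 2 / ms[0] * dpsi(ks[0]) + total * eta ** 2

# Flat spectra and an isotropic target (psi(z) = 1/(z+1)) recover Eq. (25):
alphas, spectra = [2.0, 4.0], [np.ones(500), np.ones(1000)]
print(error_overparameterized(alphas, spectra,
                              psi=lambda z: 1.0 / (z + 1.0),
                              dpsi=lambda z: -1.0 / (z + 1.0) ** 2,
                              eta=0.0))   # (1 + 1/(alpha_1 - 1)) * (1 - 1/alpha_0) = 2/3
```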
In the slightly more general case of unstructured weights but structured features, we have

**Corollary III.3**.: _If \(\tilde{\mathbf{\Sigma}}_{\ell}=\mathbf{I}_{n_{\ell}}\) for \(\ell=1,\ldots,L\), but \(\tilde{\mathbf{\Sigma}}_{0}\neq\mathbf{I}_{n_{0}}\), we have, for any target satisfying \(\|\tilde{\mathbf{w}}_{\star}\|^{2}=n_{0}\),_ \[\epsilon=\begin{cases}\big{(}\sum_{\ell=1}^{L}\frac{1}{\alpha_{\ell}-1}\big{)}\kappa_{0}\psi(\kappa_{0})-\frac{\kappa_{0}^{2}}{\mu_{0}}\psi^{\prime}(\kappa_{0})+\big{(}\frac{1-\mu_{0}}{\mu_{0}}+\sum_{\ell=1}^{L}\frac{1}{\alpha_{\ell}-1}\big{)}\eta^{2},&\alpha_{0},\alpha_{\min}>1\\ \frac{\kappa_{\min}\psi(\kappa_{\min})}{1-\alpha_{\min}}+\frac{\alpha_{\min}}{1-\alpha_{\min}}\eta^{2},&\alpha_{\min}<1,\alpha_{\min}<\alpha_{0}\\ \frac{\alpha_{0}}{1-\alpha_{0}}\eta^{2},&\alpha_{0}<1,\alpha_{0}<\alpha_{\min}.\end{cases} \tag{26}\]

Proof of Corollary iii.3.: (26) follows from substituting the results of Corollary III.2 into (23). In the special case \(L=1\), this recovers the result obtained using rigorous methods in contemporaneous1 work by Bach [30]. Footnote 1: The first version of our work was posted to the arXiv one day before the first version of [30].

Another useful simplification can be obtained by further averaging over isotropically-distributed teachers \(\tilde{\mathbf{w}}_{\star}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{n_{0}})\), which gives

**Corollary III.4**.: _Let \(\bar{\epsilon}=\mathbb{E}_{\tilde{\mathbf{w}}_{\star}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{n_{0}})}[\epsilon]\). Then, we have_ \[\bar{\epsilon}=\begin{cases}\big{(}1+\sum_{\ell=1}^{L}\frac{1-\mu_{\ell}}{\mu_{\ell}}\big{)}\frac{\kappa_{0}}{\alpha_{0}}+\big{(}\sum_{\ell=0}^{L}\frac{1-\mu_{\ell}}{\mu_{\ell}}\big{)}\eta^{2},&\alpha_{0},\alpha_{\min}>1\\ \frac{\alpha_{\min}\kappa_{\min}/\alpha_{0}}{1-\alpha_{\min}}+\frac{\alpha_{\min}}{1-\alpha_{\min}}\eta^{2},&\alpha_{\min}<1,\alpha_{\min}<\alpha_{0}\\ \frac{\alpha_{0}}{1-\alpha_{0}}\eta^{2},&\alpha_{0}<1,\alpha_{0}<\alpha_{\min}.\end{cases} \tag{27}\]

Proof of Corollary iii.4.: Observing that \(\mathbb{E}_{\tilde{\mathbf{w}}_{\star}}\psi(z)=-M_{\tilde{\mathbf{\Sigma}}_{0}}(-z)\), the claim follows from (23).

Figure 1: Phase diagram of generalization in deep linear RFMs. For simplicity, we consider a model with a single hidden layer (\(L=1\)); the picture for deeper models is identical if one considers the narrowest hidden layer [12]. (a). Generalization error \(\epsilon\) for unstructured data and features from (25) as a function of training data density \(1/\alpha_{0}\) and hidden layer width \(\alpha_{1}/\alpha_{0}\) in the absence of label noise (\(\eta=0\); _left_) and in the presence of label noise (\(\eta=0.5\); _right_). (b). As in (a), but for power law structured data and weights, with \(\omega_{0}=\omega_{1}=1\), and \(\bar{\epsilon}\) given by (31). See Appendix E for numerical methods.

In the special case of a single layer of unstructured feature weights (\(L=1\), \(\tilde{\mathbf{\Sigma}}_{1}=\mathbf{I}_{n_{1}}\)), this recovers the result of recent work by Maloney _et al._[18], who used a planar diagram method to compute the generalization error of single-hidden-layer linear RFMs with unstructured weights (see Appendix C). Another important simplifying case of Proposition III.2 is the limit in which the hidden layer widths are large, in which the generalization error of the deep RFM reduces to that of a shallow model, as given by Corollary III.1.
More precisely, we have a large-width expansion given by: **Corollary III.5**.: _In the large-width regime \(\alpha_{1},\ldots,\alpha_{L}\gg 1\), assuming that the weight spectra have finite moments, the generalization error (23) expands as_ \[\epsilon=-\tfrac{\kappa_{0}^{2}}{\mu_{0}}\psi^{\prime}(\kappa_{0})+\tfrac{1- \mu_{0}}{\mu_{0}}\eta^{2}+\big{(}\sum_{\ell=1}^{L}\tfrac{\mathbb{E}_{\delta_{ \ell}}[\tilde{\sigma}_{\ell}^{2}]}{\mathbb{E}_{\delta_{\ell}}[\tilde{\sigma}_{ \ell}]^{2}}\tfrac{1}{\alpha_{\ell}}\big{)}(\kappa_{0}\psi(\kappa_{0})+\eta^{2} )+\mathcal{O}(\alpha_{1}^{-2},\ldots,\alpha_{L}^{-2}) \tag{28}\] _in the regime \(\alpha_{0}>1\); if \(\alpha_{0}<1\) the generalization error does not depend on the hidden layer widths so long as they are greater than 1._ Proof of Corollary iii.5.: See Appendix D. ## IV How does weight structure affect generalization? The first salient feature of these learning curves is that the addition of weight structure does not alter the phase diagram of generalization, which is illustrated in Figure 1. There are three qualitatively distinct phases present, depending on the data density and minimum layer width: the overparameterized regime \(\alpha_{0},\alpha_{\text{min}}>1\), the bottlenecked regime \(\alpha_{\text{min}}<1\), \(\alpha_{\text{min}}<\alpha_{0}\), and the overdetermined regime \(\alpha_{0}<1\), \(\alpha_{0}<\alpha_{\text{min}}\). This dependence on the narrowest hidden layer matches previous work on models with unstructured weights [12]2, and can be observed in the solutions to the ridge regression problem for fixed data (Appendix B). As \(\alpha_{\ell}\downarrow 1\), \(\kappa_{\ell}\downarrow 0\) and \(\mu_{\ell}\downarrow 0\), and the generalization error diverges. Similarly, the generalization error diverges as \(\alpha_{\text{min}}\uparrow 1\), or \(\alpha_{0}\uparrow 1\) in the presence of label noise. However, there are not multiple descents in these deep linear models, consistent with the qualitative picture of the effect of nonlinearity given by previous works [9; 10]. Footnote 2: Previous works on deep RFMs have used several different parameterizations of the thermodynamic limit [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 18; 19; 20]. We detail the conversion between these conventions in Appendix C. The second salient feature is that the matrices \(\tilde{\mathbf{\Sigma}}_{\ell}\) enter the generalization error independently; there are no 'interaction' terms involving products of the correlation matrices for different layers. This decoupling is expected given that the features are Gaussian [28]. Moreover, under the rescaling \(\tilde{\mathbf{\Sigma}}_{\ell}^{\prime}=\tau_{\ell}\tilde{\mathbf{\Sigma}}_{\ell}\) for \(\tau_{\ell}>0\), we have \(\kappa_{\ell}^{\prime}=\tau_{\ell}\kappa_{\ell}\) and \(\mu_{\ell}^{\prime}=\mu_{\ell}\). Therefore, (23) is sensitive only to the overall scale of \(\tilde{\mathbf{\Sigma}}_{0}\), not to the scales of \(\tilde{\mathbf{\Sigma}}_{1},\ldots,\tilde{\mathbf{\Sigma}}_{L}\). This scale-invariance can be observed directly from the ridgeless limit of the ridge regression estimator (7). 
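This invariance holds realization by realization at finite size, not only on average, and is easy to check numerically. The small sketch below (illustrative code with \(L=1\), unstructured data, and arbitrary sizes) verifies that rescaling the hidden-layer factor, which rescales \(\tilde{\mathbf{\Sigma}}_{1}\), leaves the effective weights \(\mathbf{F}\hat{\mathbf{v}}\) of the minimum-norm solution, and hence the generalization error, unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n0, n1 = 80, 100, 150
X = rng.standard_normal((p, n0))
y = X @ rng.standard_normal(n0) / np.sqrt(n0)

def effective_weights(U1):
    """F v_hat for the minimum-norm (ridgeless) solution of Eq. (6) with Gamma_2 = I."""
    F = U1 / np.sqrt(n1)
    v_hat = np.linalg.pinv(X @ F / np.sqrt(n0)) @ y    # min-norm least-squares solution
    return F @ v_hat

U1 = rng.standard_normal((n0, n1))
w_a = effective_weights(U1)
w_b = effective_weights(7.3 * U1)        # rescales Sigma_1 by a factor 7.3^2
print(np.allclose(w_a, w_b))             # True: the ridgeless error is scale-invariant
```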
We can gain intuition for the effect of having \(\tilde{\mathbf{\Sigma}}_{\ell}\not\propto\mathbf{I}_{n_{\ell}}\) for \(\ell\geq 1\) through the following argument: **Lemma IV.1**.: _Under the conditions of Proposition III.2, in the regime \(\alpha_{0},\alpha_{\text{min}}>1\), we have_ \[\epsilon\geq\big{(}\sum_{\ell=1}^{L}\tfrac{1}{\alpha_{\ell}-1}\big{)}\kappa_{0 }\psi(\kappa_{0})-\tfrac{\kappa_{0}^{2}}{\mu_{0}}\psi^{\prime}(\kappa_{0})+ \big{(}\tfrac{1-\mu_{0}}{\mu_{0}}+\sum_{\ell=1}^{L}\tfrac{1}{\alpha_{\ell}-1} \big{)}\eta^{2}. \tag{29}\] _That is, the generalization error for a given \(\tilde{\mathbf{\Sigma}}_{1},\cdots,\tilde{\mathbf{\Sigma}}_{L}\) is bounded from below by the generalization error for \(\tilde{\mathbf{\Sigma}}_{\ell}=\mathbf{I}_{n_{\ell}}\) for \(\ell=1,\ldots,L\)._ Proof of Lemma IV.1.: By definition (21), we have \(\mu_{\ell}=1-\alpha_{\ell}\mathbb{E}_{\delta_{\ell}}[(\tilde{\sigma}_{\ell}/( \kappa_{\ell}+\tilde{\sigma}_{\ell}))^{2}]\). By Jensen's inequality and the definition of \(\kappa_{\ell}\) (20), we have \(\mathbb{E}_{\tilde{\sigma}_{\ell}}[(\tilde{\sigma}_{\ell}/(\kappa_{\ell}+ \tilde{\sigma}_{\ell}))^{2}]\geq\mathbb{E}_{\tilde{\sigma}_{\ell}}[\tilde{ \sigma}_{\ell}/(\kappa_{\ell}+\tilde{\sigma}_{\ell})]^{2}=1/\alpha_{\ell}^{2}\). As \(\alpha_{\ell}>1\) by assumption, this bound is always positive. Therefore, we have \(\mu_{\ell}\leq 1-1/\alpha_{\ell}\) for any weight spectrum, which implies that \((1-\mu_{\ell})/\mu_{\ell}\geq 1/(\alpha_{\ell}-1)\). Substituting these bounds in to the general expression for the generalization error in this regime from (23), the claim follows. Therefore, having \(\tilde{\mathbf{\Sigma}}_{\ell}\neq\mathbf{I}_{n_{\ell}}\) for \(\ell=1,\ldots,L\) cannot improve generalization in the \(\alpha_{0},\alpha_{\text{min}}>1\) regime. This is consistent with the large-width expansion in Corollary III.5, where we can apply Jensen's inequality to bound the weight-dependence of the correction as \(\mathbb{E}_{\tilde{\sigma}_{\ell}}[\tilde{\sigma}_{\ell}^{2}]/\mathbb{E}_{ \tilde{\sigma}_{\ell}}[\tilde{\sigma}_{\ell}]^{2}\geq 1\), with equality only when the weights are unstructured. In other regimes, \(\tilde{\mathbf{\Sigma}}_{1},\cdots,\tilde{\mathbf{\Sigma}}_{L}\) do not affect the generalization error. In contrast, a similar argument shows that anisotropy in \(\tilde{\mathbf{\Sigma}}_{0}\) can be beneficial in the target-averaged case, at least in the absence of label noise. 
We formalize this as: **Lemma IV.2**.: _Under the conditions of Corollary III.4, in the absence of label noise (\(\eta=0\)), we have_ \[\bar{\epsilon}\leq\begin{cases}\big{(}1+\sum_{\ell=1}^{L}\frac{1-\mu_{\ell}}{\mu_ {\ell}}\big{)}\big{(}1-\frac{1}{\alpha_{0}}\big{)}\mathbb{E}[\tilde{\sigma}_{0 }],&\alpha_{0},\alpha_{\min}>1\\ \frac{(1-\alpha_{\min}/\alpha_{0})}{1-\alpha_{\min}}\mathbb{E}[\tilde{\sigma}_ {0}],&\alpha_{\min}<1,\alpha_{\min}<\alpha_{0}\\ 0,&\alpha_{0}<1,\alpha_{0}<\alpha_{\min}.\end{cases} \tag{30}\] _That is, \(\bar{\epsilon}\) for a given \(\tilde{\mathbf{\Sigma}}_{0}\) is bounded from above by the generalization error for a flat spectrum \(\tilde{\mathbf{\Sigma}}_{0}=\mathbb{E}[\tilde{\sigma}_{0}]\mathbf{I}_{n_{0}}\)._ Proof of Lemma IV.2.: For any \(z>0\), \(\tilde{\sigma}_{0}\mapsto\tilde{\sigma}_{0}/(z+\tilde{\sigma}_{0})\), is a concave function of \(\tilde{\sigma}_{\ell}\geq 0\), hence Jensen's inequality implies that \(\mathbb{E}_{\tilde{\sigma}_{0}}\left[\tilde{\sigma}_{0}/(z+\tilde{\sigma}_{0 })\right]\leq\mathbb{E}[\tilde{\sigma}_{0}]/(z+\mathbb{E}[\tilde{\sigma}_{0 }])\). Then, note that \(z\mapsto\mathbb{E}_{\tilde{\sigma}_{0}}\left[\tilde{\sigma}_{0}/(z+\tilde{ \sigma}_{0})\right]\) and \(z\mapsto\mathbb{E}[\tilde{\sigma}_{0}]/(z+\mathbb{E}[\tilde{\sigma}_{0}])\) are both decreasing functions of \(z\geq 0\), and both are equal to \(1\) when \(z=0\). Thus, if \(\kappa_{0}>0\) solves \(1/\alpha_{0}=\mathbb{E}_{\tilde{\sigma}_{0}}\left[\tilde{\sigma}_{0}/(\kappa _{0}+\tilde{\sigma}_{0})\right]\) as specified by its definition in (20) and \(\bar{\kappa}_{0}>0\) solves \(1/\alpha_{0}=\mathbb{E}[\tilde{\sigma}_{0}]/(\bar{\kappa}_{0}+\mathbb{E}[ \tilde{\sigma}_{0}])\), we must have \(\kappa_{0}\leq\bar{\kappa}_{0}=(\alpha_{0}-1)\mathbb{E}[\tilde{\sigma}_{0}]\). As its defining equation (22) is of the same form as (20), the corresponding bound for \(\kappa_{\min}\) follows immediately: \(\kappa_{\min}\leq(\alpha_{0}/\alpha_{\min}-1)\mathbb{E}[\tilde{\sigma}_{0}]\). Substituting these bounds into (27) with \(\eta=0\), the claim follows. If \(\mathbb{E}[\tilde{\sigma}_{0}]\) is not finite, then this bound is entirely vacuous: \(\bar{\epsilon}\leq\infty\). If we do not average over isotropically-distributed targets, then the effect of anisotropy in \(\tilde{\mathbf{\Sigma}}_{0}\) is harder to analyze. Previous works have, however, analyzed the interaction of data structure with a fixed target in great detail for models with \(L=0\) or \(L=1\), showing that targets that align with the top eigenvectors of \(\tilde{\mathbf{\Sigma}}_{0}\) are easier to learn [5, 15, 16, 34, 35]. ## V Power law spectra We can gain further intuition for the effect of weight structure by considering an approximately solvable model for anisotropic spectra: power laws [15, 18, 24]. Power law data spectra have recently attracted considerable attention as a possible model for explaining the scaling laws of generalization observed in large language models [15, 18, 24, 36]. Maloney _et al._[18] proposed a single-hidden-layer (\(L=1\)) linear RFM with power-law-structured data and unstructured weights as a model for neural scaling laws. Does introducing power law structure into the weights affect the scaling laws predicted by deep linear RFMs? 
We have the following result: **Corollary V.1**.: _At finite size, define each covariance matrix \(\tilde{\mathbf{\Sigma}}_{\ell}\) such that its \(j\)-th eigenvalue is \(\tilde{\sigma}_{\ell,j}=\tilde{\varsigma}_{\ell}(n_{\ell}/j)^{1+\omega_{\ell}}\) for some fixed scale factor \(\tilde{\varsigma}_{\ell}>0\) and exponent \(\omega_{\ell}>0\). Then, the limiting target-averaged generalization error is approximately_ \[\bar{\epsilon}\simeq\begin{cases}\big{(}1+\Omega_{L}+\sum_{\ell=1}^{L}\frac{1 }{\alpha_{\ell}-1}\big{)}\chi(\alpha_{0})+\big{(}\omega_{0}+\Omega_{L}+\sum_{ \ell=0}^{L}\frac{1}{\alpha_{\ell}-1}\big{)}\eta^{2},&\alpha_{0},\alpha_{\min}>1 \\ \frac{\chi(\alpha_{0}/\alpha_{\min})}{1-\alpha_{\min}}+\frac{\alpha_{\min}}{1- \alpha_{\min}}\eta^{2},&\alpha_{\min}<1,\alpha_{\min}<\alpha_{0}\\ \frac{\alpha_{0}}{1-\alpha_{0}}\eta^{2},&\alpha_{0}<1,\alpha_{0}<\alpha_{\min },\end{cases} \tag{31}\] _where \(\Omega_{L}=\sum_{\ell=1}^{L}\omega_{\ell}\) and for \(z>1\) we have \(\chi(z)\simeq-M_{\tilde{\mathbf{\Sigma}}_{0}}^{-1}(z)/z\) given by \(\chi(z)=\tilde{\varsigma_{0}}\left\{k(z^{\omega_{0}}-1)+\left[2+\omega_{0}(1- k)\right](1-1/z)\right\}\) for \(k=\operatorname{sinc}[\pi/(1+\omega_{0})]^{-(1+\omega_{0})}\)._ Proof of Corollary V.1.: Using the dictionary of notation in Appendix C, we can plug the approximate solutions for \(\kappa_{\ell}\) and \(\mu_{\ell}\) derived by Maloney _et al._[18] into (27) to obtain (31). Therefore, the power law exponents \(\omega_{1},\cdots,\omega_{L}\) of the weight covariances beyond the first layer, which enter only through their sum \(\Omega_{L}\), do not affect the scaling laws of the generalization error with the dataset size and network widths. In particular, in the absence of label noise (\(\eta=0\)) we can approximate the scaling of (31) in the regimes of large or small hidden layer width by \[\bar{\epsilon}\sim\begin{cases}\alpha_{0}^{\omega_{0}},&\alpha_{\min}>1,\alpha_ {0}\gg 1,\\ (\alpha_{0}/\alpha_{\min})^{\omega_{0}},&\alpha_{\min}<1,\alpha_{0}/\alpha_{\min }\gg 1,\end{cases} \tag{32}\] which recovers the results found by Maloney _et al._[18] for \(L=1\) with unstructured weights. This behavior, and the agreement of (31) with numerical experiments, is illustrated in Figure 2. Consistent with Lemma IV.1, generalization with power-law weight structure is never better than with unstructured weights, as can be seen by comparing (31) with (25). ## VI Bayesian inference and the Gibbs estimator at large prior variance Thus far, we have focused on ridge regression (6). Though this is the most commonly-considered estimator in studies of random feature models [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 18; 19; 20], one might ask whether our qualitative findings--in particular, that feature weight structure beyond the first layer is generally harmful for generalization--carry over to other estimators. Our approach to Proposition III.2 is easily extensible to the setting of _zero-temperature Bayesian inference_, which has recently attracted substantial interest [37; 38; 39; 40; 41; 12], sparked by work from Li and Sompolinsky [37]. In this case, we take seriously the Gibbs distribution \(p(\mathbf{v})\propto e^{-\beta L}\), which in the ridge regression case was simply a convenient tool, and interpret it as the Bayes posterior for a Gaussian likelihood of variance \(1/\beta\) and a Gaussian prior with covariance \(\boldsymbol{\Gamma}_{L+1}/(\beta\lambda)\). 
It is in this context conventional to fix \(\lambda=1/\beta\), such that the prior variance does not scale with \(\beta\). We can then study the average of the generalization error (13) under this posterior in the zero-temperature limit \(\beta\to\infty\), which we refer to as the generalization error of the Gibbs estimator. We emphasize that this is not identical to the Bayesian minimum mean squared error (MMSE) estimator given by the posterior mean, which would coincide with the ridgeless estimator in the zero-temperature limit (see Appendix A). For a deep RFM, this simply has the effect of adding a "thermal" variance term to the generalization error of the ridgeless estimator, which we describe in detail in Appendices A and B. We have:

**Proposition VI.1**.: _With the same setup as in Proposition III.2, the generalization error of the Gibbs estimator for an RFM is_ \[\epsilon_{\rm BRF}=\epsilon_{\rm ridgeless}+\begin{cases}\prod_{\ell=0}^{L}\frac{\kappa_{\ell}}{\alpha_{\ell}},&\alpha_{0},\alpha_{\rm min}>1\\ 0,&\text{otherwise},\end{cases} \tag{33}\] _where \(\epsilon_{\rm ridgeless}\) is given by Proposition III.2, and \(\kappa_{\ell}\) is defined as in (20)._

Proof of Proposition VI.1.: We derive (33) alongside Proposition III.2 in Appendix A.

The Gibbs estimator is sensitive to the scale of the random feature weight distributions through \(\kappa_{\ell}\), while as noted above the ridgeless estimator is not sensitive to their overall scale. This direct dependence on \(\kappa_{\ell}\) means that the simple argument of Lemma IV.1 cannot be applied. Indeed, in the limit of large prior variance, where the thermal variance term dominates, structure can improve the performance of the Gibbs estimator.

Figure 2: Generalization for power-law spectra. (a). Target-averaged generalization error \(\bar{\epsilon}\) as a function of training data density \(1/\alpha_{0}\) for shallow models (\(L=1\)) of varying hidden layer width \(\alpha_{1}/\alpha_{0}\) in the absence of label noise (\(\eta=0\)). Here, the data and weight spectra have identical power law decay \(\omega_{0}=\omega_{1}=1\). (b). As in (a), but in the presence of label noise (\(\eta=1/2\)). (c). As in (b), but for fixed hidden layer width \(\alpha_{1}/\alpha_{0}=4\), fixed data exponent \(\omega_{0}=1\), and varying weight exponents \(\omega_{1}\). In all cases, solid lines show the predictions of (31), while dots with error bars show the mean and standard error over 100 realizations of numerical experiments with \(n_{0}=1000\). See Appendix E for details of our numerical methods.

We make this result precise in the following lemma:

**Lemma VI.1**.: _In the setting of Proposition VI.1, consider Bayesian RFMs with weight covariances scaled as \(\tau_{\ell}\tilde{\mathbf{\Sigma}}_{\ell}\) for \(\ell=1,\ldots,L\). Then, in the non-trivial regime \(\alpha_{0},\alpha_{\min}>1\) where the thermal variance is non-vanishing, we have_ \[\lim_{\tau_{1},\ldots,\tau_{L}\to\infty}\frac{\epsilon_{\rm BRF}}{\prod_{\ell=1}^{L}\tau_{\ell}}=\prod_{\ell=0}^{L}\frac{\kappa_{\ell}}{\alpha_{\ell}}\leq\frac{\kappa_{0}}{\alpha_{0}}\varsigma^{2}\prod_{\ell=1}^{L}\big{(}1-\frac{1}{\alpha_{\ell}}\big{)}, \tag{34}\] _where the scalars \(\kappa_{\ell}\) are defined in terms of the un-scaled covariances \(\tilde{\mathbf{\Sigma}}_{\ell}\) as in (20) and \(\varsigma^{2}\equiv\prod_{\ell=1}^{L}\mathbb{E}_{\tilde{\sigma}_{\ell}}[\tilde{\sigma}_{\ell}]\).
Therefore, in the limit of large prior variance, including structure in the weight priors is generically advantageous for generalization. If \(\mathbb{E}_{\tilde{\sigma}_{\ell}}[\tilde{\sigma}_{\ell}]\) is not finite, then the bound is vacuous._ Proof of Lemma VI.1.: The first part of (34) follows from (33) using the scaling properties of \(\kappa_{\ell}\), while the bound follows from the bounds on \(\kappa_{\ell}\) derived as part of Lemma IV.2. In contrast, weight structure is generally harmful for the Bayesian RFM in the limit of small prior variance, as its performance then coincides with the ridgeless RFM, as can be seen from the scaling of \(\kappa_{\ell}\). This example illustrates that there are cases in which, depending on the estimator used, weight structure in deeper layers can sometimes be helpful for generalization. However, whereas the ridgeless estimator is commonly used in practice, the Gibbs estimator is less standard, and the limit of large prior variance is certainly artificial. Therefore, we emphasize that we give this example to show that the behavior of the ridgeless estimator is not entirely general, not to show that weight structure can be helpful in practical settings. ## VII Discussion We have computed learning curves for models with many layers of structured Gaussian random features learning a linear target function, showing that structure beyond the first layer is generally detrimental for generalization. This result is consistent with the intuition that in deep linear models it is sufficient to modify the representation only at the first layer [12; 26]. It will be interesting to investigate whether this intuition carries over to nonlinear networks learning complex tasks [43; 25]. Though our results are obtained using the replica trick, and we do not address the possibility of replica symmetry breaking, they should be rigorously justifiable given the convexity of the ridge regression problem [31; 32; 33]. We note that the replica approach makes it straightforward to handle models of any finite depth [28]. The relevant averages could of course be computed with alternative random matrix theory techniques, which could allow for a fully rigorous proof [18; 19; 20; 5]. Here, we have considered only linear, Gaussian models. Prior works on RFMs with unstructured feature weights have established Gaussian equivalence theorems that state that the generalization error of a nonlinear model is equivalent in the proportional limit (12) to that of a linear Gaussian model with an effective noise term resulting from nonlinearity [3; 4; 5; 6; 7; 8; 9; 10; 11]. In very recent work, Schroder _et al._[19] and Bosch _et al._[20] have established Gaussian equivalence theorems for deep nonlinear RFMs with unstructured feature weights, while Cui _et al._[42] have extended some of these results to the setting of deep Bayesian neural networks when the target is of the same architecture. It will be important to investigate the effect of feature weight structure on Gaussian equivalence in future work, and determine whether our qualitative results carry over to nonlinear RFMs in the proportional limit. In closing, we note that RFMs with structured weights may also have relevance for biological neural networks. A recent study by Pandey _et al._[29] considered RFMs with a single layer of random features (\(L=1\)) with correlated rows (\(\mathbf{\Gamma}_{1}\neq\mathbf{I}_{n_{0}}\)). 
In several biologically-inspired settings, they showed that introducing this structure could improve generalization, consistent with our results. More broadly, biological neural networks are imbued with rich priors [44]; investigating what insights deep structured models can afford for neuroscience will be an interesting subject for further study. ###### Acknowledgements. We thank Alexander Atanasov, Blake Bordelon, Benjamin S. Ruben, and James B. Simon for helpful discussions and comments on a draft of our manuscript. JAZ-V and CP were supported by NSF Award DMS-2134157.
2307.12497
Embedding Integer Lattices as Ideals into Polynomial Rings
Many lattice-based cryptosystems employ ideal lattices for high efficiency. However, the additional algebraic structure of ideal lattices usually raises concerns about their security, and it is widely believed that the algebraic structure will help us solve the hard problems in ideal lattices more efficiently. In this paper, we study the additional algebraic structure of ideal lattices further and find that a given ideal lattice in a polynomial ring can be embedded as an ideal into infinitely many different polynomial rings by the coefficient embedding. We design an algorithm to verify whether a given full-rank lattice in $\mathbb{Z}^n$ is an ideal lattice and to output all the polynomial rings that the given lattice can be embedded into as an ideal, with time complexity $\mathcal{O}(n^3B(B+\log n))$, where $n$ is the dimension of the lattice and $B$ is the upper bound of the bit length of the entries of the input lattice basis. We would like to point out that, in 2007, Ding and Lindner proposed an algorithm for identifying ideal lattices and outputting a single polynomial ring that the input lattice can be embedded into, with time complexity $\mathcal{O}(n^5B^2)$. However, we find a flaw in Ding and Lindner's algorithm that causes some ideal lattices to be missed by it.
Yihang Cheng, Yansong Feng, Yanbin Pan
2023-07-24T03:06:49Z
http://arxiv.org/abs/2307.12497v2
# A Coefficient-Embedding Ideal Lattice can be Embedded into Infinitely Many Polynomial Rings

###### Abstract

Many lattice-based cryptosystems employ ideal lattices for high efficiency. However, the additional algebraic structure of ideal lattices usually raises concerns about their security, and it is widely believed that the algebraic structure will help us solve the hard problems in ideal lattices more efficiently. In this paper, we study the additional algebraic structure of ideal lattices further and find that a given ideal lattice in some fixed polynomial ring can be embedded as an ideal in infinitely many different polynomial rings. We explicitly present all these polynomial rings for any given ideal lattice. This interesting phenomenon tells us that a single ideal lattice may carry much richer algebraic structure than we imagine, which will impact the security of the corresponding cryptosystems. For example, it makes it more difficult to evaluate the security of cryptosystems based on ideal lattices, since it seems that we need to consider all the polynomial rings that the given ideal lattice can be embedded into if we believe that the algebraic structure will contribute to solving the corresponding hard problems. It also suggests a new method to solve ideal lattice problems, by embedding the given ideal lattice into another, well-studied polynomial ring. As a by-product, we also introduce an efficient algorithm to identify whether a given lattice is an ideal lattice or not.

Keywords: ideal lattice, coefficient embedding, complexity.

## 1 Introduction

The research on lattice-based cryptography was pioneered by Ajtai [1] in 1996. He presented a family of one-way functions based on the Short Integer Solution (SIS) problem, which have average-case hardness under worst-case assumptions for some lattice problems. In 1997, Ajtai and Dwork [3] introduced a public-key cryptosystem whose average-case security can be based on the worst-case hardness of the unique Shortest Vector Problem. In 2005, Regev [24] proposed another problem with average-case hardness, the Learning with Errors problem (LWE), and also a public-key encryption scheme based on LWE. Because of this average-case security, lattice-based cryptography has drawn considerable attention since then.

Although there have been many cryptographic schemes based on LWE and SIS, the main drawback of such schemes is their limited efficiency, due to their large key sizes and slow computations. In particular, with the development of quantum computers, it becomes ever more urgent to design more practical lattice-based cryptosystems, since lattice-based cryptosystems are widely believed to be quantum-resistant. To improve the efficiency, additional algebraic structure is introduced into the lattice to construct more practical schemes. Among such structures, ideal lattices play an important role. In fact, as early as 1998, Hoffstein, Pipher, and Silverman [15] introduced a lattice-based public-key encryption scheme known as NTRU, whose security is related to ideals in the ring \(\mathbb{Z}[x]/(x^{n}-1)\). Due to the cyclic structure of the ideal lattice, the efficiency of NTRU is very high. Later, in 2010, Lyubashevsky, Peikert and Regev [19] presented a ring-based variant of LWE, called Ring-LWE, whose average-case hardness is based on worst-case assumptions on ideal lattices. In 2017, Peikert, Regev and Stephens-Davidowitz [22] refined the proof of the security of Ring-LWE for more general algebraic number fields.
After the introduction of Ring-LWE, more and more practical cryptosystems based on ideal lattices have been constructed. There are two different ways to define ideal lattices. One is induced by the coefficient embedding from the ring \(\mathbb{Z}[x]/f(x)\) into \(\mathbb{Z}^{n}\). NTRU uses the coefficient embedding to define its lattice. It is very convenient to implement cryptosystems based on Ring-LWE with the coefficient embedding. In fact, almost all the ideal lattice-based cryptosystems are implemented via the coefficient embedding. However, it seems not easy to clarify the hardness of problems for coefficient-embedding ideal lattices in general. The other is defined by the canonical embedding from the algebraic integer ring of some number field \(K\) into \(\mathbb{C}^{n}\). This type of ideal lattice is usually employed in security proofs and hardness reductions in Ring-LWE based cryptography.

It is widely believed that the additional algebraic structure of ideal lattices will help us solve their hard problems more efficiently. In 2016, Cramer, Ducas, Peikert and Regev [11] introduced a polynomial-time quantum algorithm to solve \(2^{\sqrt{n\log n}}\)-SVP in principal ideal lattices in the algebraic integer ring of \(\mathbb{Q}(\zeta_{m})\), where \(m\) is a power of some prime. In 2017, Cramer, Ducas and Wesolowski [12] extended the result to general ideals. In the same year, Holzer, Wunderer and Buchmann [16] extended the field to \(\mathbb{Q}(\zeta_{m})\), where \(m=p^{a}q^{b}\) and \(p\), \(q\) are distinct primes. In 2019, Pellet-Mary, Hanrot and Stehle [23] introduced a pre-processing method (PHS algorithm) to solve \(\gamma\)-SVP for ideal lattices in any number field. The pre-processing phase takes exponential time. Let \(n\) be the dimension of the number field \(K\) viewed as a \(\mathbb{Q}\)-vector space. Pellet-Mary _et al._ showed that by performing pre-processing on \(K\) in exponential time, their algorithm can, given any ideal lattice \(I\) of \(O_{K}\), for any \(\alpha\in[0,1/2]\) output an \(\exp(\widetilde{O}((n\log n)^{\alpha+1}/n))\) approximation of a shortest non-zero vector of \(I\) in time \(\exp(\widetilde{O}((n\log n)^{1-2\alpha}/n))+T\). For the classical method, \(T=\exp(\widetilde{O}((n\log n)^{1/2}))\) if \(K\) is a cyclotomic field or \(T=\exp(\widetilde{O}((n\log n)^{2/3}))\) for an arbitrary number field \(K\). In 2020, Bernard and Roux-Langlois [5] proposed a new "twisted" version of the PHS algorithm. They proved that the Twisted-PHS algorithm performs at least as well as the original PHS algorithm, and their results suggested that much better approximation factors are achieved. In 2022, Bernard, Lesavourey, Nguyen and Roux-Langlois [6] extended the experiments of [5] to cyclotomic fields of degree up to 210 for most conductors \(m\). In 2021, Pan, Xu, Wadleigh and Cheng [21] found a connection between the complexity of the shortest vector problem (SVP) in prime ideals of number fields and their decomposition groups, and revealed many weak instances of ideal lattices in which SVP can be solved efficiently. In 2022, Boudgoust, Gachon and Pellet-Mary [8] generalized the work of Pan _et al._ [21] and provided a simple condition under which an ideal lattice defines an easy instance of the shortest vector problem. Namely, they showed that the more automorphisms stabilize the ideal, the easier it is to find a short vector in it.
As mentioned above, almost all the research on SVP concerns canonical-embedding ideal lattices, and research on SVP in coefficient-embedding ideal lattices is scarce. In some rings, such as the algebraic integer rings of cyclotomic fields, the SVPs induced by the two different embeddings are connected with each other. In 2017, Baston [4] discussed the norm connection between the coefficient embedding and the canonical embedding in cyclotomic fields. Let \(K=Q(\zeta_{m})\), and for any ideal \(I\subseteq O_{K}\), let \(T\) be the transformation matrix from the coefficient-embedding lattice \(\mathcal{L}(B)\) to the canonical-embedding lattice \(\mathcal{L}(B^{\prime})\), which means \(TB=B^{\prime}\). Consider the singular value decomposition (SVD) of \(T\), \[T=U\begin{pmatrix}s_{1}&0&\cdots&0\\ 0&s_{2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&s_{n}\end{pmatrix}V,\] with \(s_{1}\geq s_{2}\geq\cdots\geq s_{n}>0\) and \(U\), \(V\) unitary matrices. Define \(k_{2}=\frac{s_{1}}{s_{n}}\); Baston showed that the smaller \(k_{2}\) is, the more closely related the SVPs in the ideal lattices induced by the two different embeddings of a fixed ideal are. More specifically, according to Lemma 3.5 of [4], given \(T\in\mathbb{C}^{n\times n}\) and \(x\in\mathbb{C}^{n}\), with \(s_{1},s_{2},\cdots,s_{n}\) the singular values of \(T\), we have \(s_{n}(T)\cdot\|x\|_{2}\leq\|Tx\|_{2}\leq s_{1}(T)\cdot\|x\|_{2}\). This conclusion gives a direct reduction between the SVPs in the two ideal lattices induced by different embeddings of a fixed ideal. Using the notation above, if there is an oracle solving \(\gamma\)-SVP in \(\mathcal{L}(B)\), then by Lemma 3.5 of [4] and the relation \(TB=B^{\prime}\) we can solve \(k_{2}\gamma\)-SVP in \(\mathcal{L}(B^{\prime})\) in polynomial time.

We recall that a number field \(K\) is called monogenic if \(O_{K}=\mathbb{Z}[\alpha]\) for some \(\alpha\in K\). All cyclotomic and quadratic fields are monogenic. Only in monogenic fields is \(O_{K}\) isomorphic to a polynomial ring \(\mathbb{Z}[x]/f(x)\) for some monic irreducible integer polynomial \(f(x)\), and only in this case do the ideal lattices induced by the coefficient embedding have the same algebraic structure, in the sense of ring isomorphism, as those induced by the canonical embedding of the same ideal. In Theorem 3.1 of [4], Baston showed that \(k_{2}\) depends only on the monogenic number field \(K\) and not on the chosen ideal or fractional ideal. Though little is known about \(k_{2}\) for general monogenic number fields, when \(K=Q(\zeta_{m})\) is a cyclotomic field we have \(k_{2}=(\operatorname{rad}(m))^{1/2}\) for odd \(m\) and \(k_{2}=(\operatorname{rad}(m)/2)^{1/2}\) for even \(m\), where \(\operatorname{rad}(m)\) denotes the product of the distinct primes dividing \(m\) (see Lemma 3.4 of [4]). Therefore, when \(m=a^{l}\) with \(l\) large enough, the SVPs in the ideal lattices induced by the two embeddings of the same ideal are closely connected. When \(m=2^{l}\) for any \(l\geq 2\), we have \(k_{2}=1\) and the SVPs in the two embeddings are essentially the same. In light of Baston's results, in some monogenic number fields, especially certain cyclotomic fields, we can use results on canonical-embedding ideal lattices to handle coefficient-embedding ideal lattices.
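To make the size of this distortion concrete, the short sketch below is illustrative code only (the row/column conventions for \(T\) are chosen for convenience; only the singular value ratio matters). It builds the matrix of the canonical embedding of \(Q(\zeta_{m})\) on the power basis \(1,\zeta,\ldots,\zeta^{n-1}\), computes its singular values, and reports \(k_{2}=s_{1}/s_{n}\). The outputs can be compared with Baston's Lemma 3.4: \(k_{2}=1\) for \(m=8,16,32\), and \(k_{2}=\sqrt{5},\sqrt{7},\sqrt{3}\) for \(m=5,7,12\).

```python
import numpy as np
from math import gcd

def canonical_embedding_matrix(m):
    """Rows: embeddings zeta -> zeta^k with gcd(k, m) = 1; columns: the power basis
    1, zeta, ..., zeta^{n-1} of Q(zeta_m). This is the coefficient-to-canonical map."""
    ks = [k for k in range(1, m) if gcd(k, m) == 1]
    zeta = np.exp(2j * np.pi / m)
    return np.array([[zeta ** (k * j) for j in range(len(ks))] for k in ks])

def k2(m):
    """Distortion constant k_2 = s_1 / s_n from the singular values of the embedding."""
    s = np.linalg.svd(canonical_embedding_matrix(m), compute_uv=False)
    return s[0] / s[-1]

for m in (8, 16, 32, 5, 7, 12):
    print(m, round(k2(m), 6))
# Expected: 1.0 for m = 8, 16, 32; sqrt(5), sqrt(7), sqrt(3) for m = 5, 7, 12.
```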
#### Our contribution

In this paper, we focus on the coefficient embedding. Our main contribution is to show that an ideal lattice in the ring \(\mathbb{Z}[x]/f(x)\), where \(f(x)\) is monic and \(f(x)\in\mathbb{Z}[x]\), can be embedded into infinitely many rings \(\mathbb{Z}[x]/g(x)\), where \(g(x)\) is monic and \(g(x)\in\mathbb{Z}[x]\) (**Theorem 1**). Besides, we give an efficient algorithm for computing all the rings that an ideal lattice can be embedded into, and for deciding whether a given integer lattice can be embedded into a polynomial ring (**Algorithm 1**). It is well known that a lattice is a discrete additive subgroup of \(\mathbb{R}^{m}\). The only difference between a general integer lattice and an ideal lattice is the multiplication structure of the ideal lattice. In fact, an integer lattice may be embedded into a polynomial ring \(\mathbb{Z}[x]/f(x)\), and it can then be viewed as an ideal of \(\mathbb{Z}[x]/f(x)\). Hence, with this embedding, the integer lattice as an ideal of \(\mathbb{Z}[x]/f(x)\) is equipped with the multiplication of the ring \(\mathbb{Z}[x]/f(x)\). A natural question is what happens if we equip the same lattice with a different "multiplication", or whether the "multiplication" is unique. Obviously, if this can be done, the lattice will not change, but the ring changes, which means that a fixed integer lattice may be viewed as different ideals in different rings. We show that it is possible to embed a given ideal lattice as another ideal into infinitely many different polynomial rings by the coefficient embedding. We explicitly present all the polynomial rings for any given ideal lattice. It is widely believed that the additional algebraic structure may lead to more efficient algorithms for the hard problems in ideal lattices than in general lattices, such as the method of recovering a short generator proposed by Cramer _et al._ [11] and the method of pre-processing any number field \(K\) proposed by Pellet-Mary _et al._ [23]. The research above all concerns canonical-embedding ideal lattices. The results on SVP in canonical-embedding ideal lattices also carry over to coefficient-embedding ideal lattices in many rings. Though in a general monogenic number field the relation between the SVPs in ideal lattices induced by the two different embeddings of a fixed ideal is unclear, by the discussion of Baston's results [4] above, in many special cyclotomic fields the connection between the SVPs in the ideal lattices induced by the two different embeddings of the same ideal is very close, and solving \(\gamma\)-SVP in one embedding means solving \(\beta\)-SVP in the other embedding, where \(\gamma\) and \(\beta\) are close. If we happen to find algebraic structure in a polynomial ring \(R\) that can help us solve the ideal lattice problems in \(R\) more efficiently, then our results show that it is possible to solve the problem for other ideal lattices not in \(R\), as long as those ideal lattices can be embedded as ideals into \(R\). Similarly, when using the method in [23], the pre-processing of \(R\) can also be used to solve the problems for some ideal lattices not in \(R\), which implies that we may not need to pre-process a new polynomial ring for every new ideal lattice. Moreover, once we find a weak ideal lattice in which the lattice problem can be solved more efficiently, we can solve the problems for infinitely many ideal lattices in different polynomial rings. Though the integer lattice is fixed, these are different as ideals in different polynomial rings. It seems that a weak ideal will spread into infinitely many weak ideals.
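As a toy illustration of this phenomenon (ours, not taken from the paper), consider the lattice \(2\mathbb{Z}^{2}\) with basis rows \((2,0)\) and \((0,2)\). For any element \(2a+2bx\) we have
\[x\cdot(2a+2bx)\equiv -2b+2ax \pmod{x^{2}+1},\qquad x\cdot(2a+2bx)\equiv -2b+(2a-2b)x \pmod{x^{2}+x+1},\]
and both results again lie in \(2\mathbb{Z}^{2}\), so the same lattice is the ideal \(\langle 2\rangle\) both in \(\mathbb{Z}[x]/(x^{2}+1)\) and in \(\mathbb{Z}[x]/(x^{2}+x+1)\). In fact, here \(d=2\) and \(\mathcal{L}(\mathbf{B}/d)=\mathbb{Z}^{2}\), so Theorem 1 below shows that \(2\mathbb{Z}^{2}\) is a coefficient-embedding ideal lattice in \(\mathbb{Z}[x]/g(x)\) for every monic integer polynomial \(g\) of degree \(2\).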
On the other hand, the abundant embedding relations will impact the security of the cryptosystems based on ideal lattices. When considering the security, it is necessary to evaluate all the corresponding ideals in the polynomial rings that the given ideal lattice can be embedded into, instead of just one single ideal lattice. We have to point out that all of the observations above cast a shadow on the security of ideal lattice-based cryptosystems. As a by-product, an efficient algorithm to identify an ideal lattice is introduced. We first show an equivalent condition for an integer lattice to be a coefficient-embedding ideal lattice. According to this condition, we introduce a polynomial-time algorithm that is more efficient than the algorithm proposed by Ding and Lindner [13]. Moreover, we present an explicit form for all possible polynomial rings that the ideal lattice can be embedded into in theory, instead of the implicit form obtained experimentally as in [13]. The explicit form can help us theoretically analyze the algebraic properties of these ideals directly.

#### Roadmap

The paper is organized as follows. In Section 2, some preliminaries are presented. In Section 3, we reveal the embedding relation in detail and give some application scenarios. In Section 4, an algorithm for identifying a coefficient-embedding ideal lattice is introduced together with the complexity analysis. In the final section, we give a brief conclusion.

## 2 Preliminaries

In this paper we denote by \(\mathbb{C}\), \(\mathbb{R}\), \(\mathbb{Q}\) and \(\mathbb{Z}\) the complex number field, the real number field, the rational number field and the integer ring respectively. We denote a matrix by a capital letter in bold and denote a vector by a lower-case letter in bold. To represent the entries of a matrix, we use lower-case letters. For example, the element of matrix \(\mathbf{A}\) at the \(i\)-th row and \(j\)-th column is denoted by \(a_{ij}\), while its \(i\)-th row is denoted by \(\mathbf{a}_{i}\). Since we have the standard inner products in \(\mathbb{R}^{n}\) and \(\mathbb{C}^{n}\) respectively, we can define the norm of vectors, that is, \(\|\mathbf{v}\|:=\sqrt{\langle\mathbf{v},\mathbf{v}\rangle}\) in \(\mathbb{R}^{n}\) and \(\|\mathbf{v}\|:=\sqrt{\langle\mathbf{v},\mathbf{v}\rangle}\) in \(\mathbb{C}^{n}\) (with the Hermitian inner product). For two integers \(a\) and \(b\), \(a|b\) means that \(b\) is divisible by \(a\). Otherwise, we write \(a\not|\ b\). For an integer \(a\) and a matrix \(\mathbf{A}\), \(a|\mathbf{A}\) means that every entry of \(\mathbf{A}\) is divisible by \(a\). For a polynomial \(f(x)\in\mathbb{Z}[x]\), denote by \(\mathbb{Z}[x]/f(x)\) for simplicity the quotient ring \(\mathbb{Z}[x]/(f(x)\mathbb{Z}[x])\). For a map \(\sigma\) and a set \(S\), denote by \(\sigma(S)\) the set \(\{\sigma(x):x\in S\}\).

### Lattice

Lattices are discrete subgroups of \(\mathbb{R}^{m}\), or equivalently,

Definition 1: (Lattice) Given \(n\) linearly independent vectors \(\mathbf{B}=\begin{pmatrix}\mathbf{b}_{1}\\ \mathbf{b}_{2}\\ \vdots\\ \mathbf{b}_{n}\end{pmatrix}\), where \(\mathbf{b}_{i}\in\mathbb{R}^{m}\), the lattice \(\mathcal{L}(\mathbf{B})\) generated by \(\mathbf{B}\) is defined as follows: \[\mathcal{L}(\mathbf{B})=\{\sum_{i=1}^{n}x_{i}\mathbf{b}_{i}:x_{i}\in\mathbb{Z}\}=\{\mathbf{x}\mathbf{B}:\mathbf{x}\in\mathbb{Z}^{n}\}.\] We call \(\mathbf{B}\) a basis of \(\mathcal{L}(\mathbf{B})\), and \(m\) and \(n\) the dimension and rank of \(\mathcal{L}(\mathbf{B})\) respectively. When \(m=n\), we say \(\mathcal{L}(\mathbf{B})\) is full-rank.
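Deciding whether an integer vector lies in a full-rank integer lattice amounts to checking that its coordinates with respect to a basis are integers. The following minimal helper (ours, with a hypothetical name; `sympy` is only used for exact rational arithmetic) makes this concrete and will be convenient for later illustrations.

```python
from sympy import Matrix

def in_lattice(v, B):
    """Return True iff the integer vector v lies in the lattice with basis rows B,
    i.e. iff the rational solution x of x*B = v is integral."""
    x = Matrix([list(v)]) * Matrix(B).inv()
    return all(c == int(c) for c in x)

# (1, 3) = 1*(2, 1) + 1*(-1, 2), so it lies in the lattice spanned by these rows
print(in_lattice([1, 3], [[2, 1], [-1, 2]]))   # True
print(in_lattice([1, 0], [[2, 1], [-1, 2]]))   # False
```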
When \(n>1\), there are infinitely many bases for a lattice \(\mathcal{L}\), and any two bases are related to each other by a unimodular matrix, that is, an integer matrix with determinant \(\pm 1\). More precisely, given a lattice \(\mathcal{L}(\mathbf{B}_{1})\), \(\mathbf{B}_{2}\) is also a basis of the lattice if and only if there exists a unimodular matrix \(\mathbf{U}\) s.t. \(\mathbf{B}_{1}=\mathbf{U}\mathbf{B}_{2}\).

**Hard problems in lattices.** The shortest vector problem (SVP) is one of the most famous hard problems in lattices. SVP asks to find a shortest nonzero vector in a given lattice \(\mathcal{L}\); its length is denoted by \(\lambda_{1}(\mathcal{L})\). The approximating-SVP with factor \(\gamma\), denoted by \(\gamma\)-SVP, asks to find a short nonzero lattice vector \(\mathbf{v}\) such that \[\|\mathbf{v}\|\leq\gamma\cdot\lambda_{1}(\mathcal{L}).\] In fact, the hardness of \(\gamma\)-SVP depends on \(\gamma\). When \(\gamma=1\), \(\gamma\)-SVP is exactly the original SVP, and for constant \(\gamma\), this problem is known to be NP-hard under randomized reductions [2]. Many cryptosystems are based on the hardness of (decision) \(\gamma\)-SVP when \(\gamma\) is of polynomial size. By now we have not found any polynomial-time classical algorithm to deal with such cases. The existing polynomial-time algorithms, such as LLL [17] and BKZ [26], can only handle the case \(\gamma=\exp(n)\).

### Hermite Normal Form

For integer matrices, there is a very important standard form known as the Hermite Normal Form (HNF). For simplicity, we just present the definition of the HNF for non-singular integer matrices.

Definition 2: (Hermite Normal Form) A non-singular matrix \(\mathbf{H}\in\mathbb{Z}^{n\times n}\) is said to be in HNF, if

* \(h_{i,i}>0\) for \(1\leq i\leq n\).
* \(h_{j,i}=0\) for \(1\leq j<i\leq n\).
* \(0\leq h_{j,i}<h_{i,i}\) for \(1\leq i<j\leq n\).

The Hermite Normal Form has some important properties. See [14, 20, 18] for more details.

Lemma 1: _For any integer matrix \(\mathbf{A}\), there exists a unimodular matrix \(\mathbf{U}\) such that \(\mathbf{H}=\mathbf{UA}\) is in HNF. Moreover, the HNF can be computed in polynomial time._

For integer lattices, we have

Lemma 2: _For any lattice \(\mathcal{L}\subset\mathbb{Z}^{n}\), there exists a unique basis \(\mathbf{H}\) in HNF. We call \(\mathbf{H}\) the HNF basis of \(\mathcal{L}\)._

Sometimes we do not need the whole HNF of an integer matrix. So we introduce the Incomplete Hermite Normal Form of an integer matrix, which is also a special basis of the integer lattice.

Definition 3: (Incomplete Hermite Normal Form) A non-singular matrix \(\mathbf{B}\in\mathbb{Z}^{n\times n}\) is said to be in Incomplete Hermite Normal Form, if

* \(b_{n,n}>0\);
* \(b_{i,n}=0\) for \(1\leq i\leq n-1\).

Given a full-rank integer matrix \(\mathbf{B}\), \[\mathbf{B}=\begin{pmatrix}b_{1,1}&b_{1,2}&\cdots&b_{1,n}\\ b_{2,1}&b_{2,2}&\cdots&b_{2,n}\\ \vdots&\vdots&\ddots&\vdots\\ b_{n,1}&b_{n,2}&\cdots&b_{n,n}\end{pmatrix},\] it is well known that by the Extended Euclidean Algorithm we can find a unimodular matrix \(\mathbf{U}\), such that \[\mathbf{U}\begin{pmatrix}b_{1,n}\\ b_{2,n}\\ \vdots\\ b_{n,n}\end{pmatrix}=\begin{pmatrix}0\\ 0\\ \vdots\\ d\end{pmatrix},\] where \(d=\gcd(b_{1,n},b_{2,n},...,b_{n,n})\). Then we have that \[\mathbf{B}^{\prime}=\mathbf{U}\mathbf{B}=\begin{pmatrix}\mathbf{D}&\mathbf{0}\\ \mathbf{b}^{\prime}&d\end{pmatrix}\] is in Incomplete Hermite Normal Form, where \(\mathbf{D}\in\mathbb{Z}^{(n-1)\times(n-1)}\), \(\mathbf{b}^{\prime}\in\mathbb{Z}^{n-1}\).
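The construction just described can be carried out in a few lines. The sketch below (our own illustration of the idea, not the authors' code; function names are ours) repeatedly applies the \(2\times 2\) unimodular transformation built from the Extended Euclidean Algorithm to a pair of rows in order to clear the last column.

```python
def ext_gcd(a, b):
    """Return (d, x, y) with x*a + y*b = d = gcd(a, b) and d >= 0."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    d, x, y = ext_gcd(b, a % b)
    return (d, y, x - (a // b) * y)

def incomplete_hnf(B):
    """Apply unimodular row operations to the integer basis B (rows = basis
    vectors) so that the last column becomes (0, ..., 0, d) with
    d = gcd of the original last column, as in the construction above."""
    B = [row[:] for row in B]
    n = len(B)
    for i in range(n - 1):
        a, b = B[i][-1], B[i + 1][-1]
        d, x, y = ext_gcd(a, b)
        if d == 0:                      # both entries are already zero
            continue
        # the 2x2 block [[-b//d, a//d], [x, y]] has determinant -1, hence is unimodular
        new_i = [(-b // d) * B[i][j] + (a // d) * B[i + 1][j] for j in range(n)]
        new_i1 = [x * B[i][j] + y * B[i + 1][j] for j in range(n)]
        B[i], B[i + 1] = new_i, new_i1
    if B[-1][-1] < 0:                   # multiplying a row by -1 is also unimodular
        B[-1] = [-v for v in B[-1]]
    return B

print(incomplete_hnf([[3, 4], [5, 6]]))   # [[1, 0], [2, 2]]: last column (0, gcd(4, 6))
```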
About the Incomplete Hermite Normal Form, it is easy to conclude the following lemma, so we omit the proof.

Lemma 3: _For any non-singular matrix \(\mathbf{B}\in\mathbb{Z}^{n\times n}\), the following hold:_

* _we can find a unimodular matrix_ \(\mathbf{U}\) _in polynomial time, such that_ \(\mathbf{B}^{\prime}=\mathbf{U}\mathbf{B}\) _is in Incomplete Hermite Normal Form._
* _For any unimodular matrices_ \(\mathbf{U}\) _and_ \(\mathbf{V}\) _such that_ \(\mathbf{B}^{\prime}=\mathbf{U}\mathbf{B}\) _and_ \(\mathbf{B}^{\prime\prime}=\mathbf{VB}\) _are in Incomplete Hermite Normal Form,_ \(\mathbf{B}^{\prime}\) _and_ \(\mathbf{B}^{\prime\prime}\) _are not necessarily equal, but_ \[b^{\prime}_{n,n}=b^{\prime\prime}_{n,n}=\gcd(b_{1,n},b_{2,n},...,b_{n,n}).\] _In particular, notice that the HNF_ \(\mathbf{H}\) _of_ \(\mathbf{B}\) _is also in Incomplete Hermite Normal Form. We immediately have_ \[h_{n,n}=\gcd(b_{1,n},b_{2,n},...,b_{n,n}).\]

### Ideal lattices

An algebraic number field \(K\) is an extension field of the rationals \(\mathbb{Q}\) such that its dimension \([K:\mathbb{Q}]\) as a \(\mathbb{Q}\)-vector space (i.e., its degree) is finite. An element \(x\) in the algebraic number field \(K\) is said to be integral over \(\mathbb{Z}\) if the coefficients of the minimal polynomial of \(x\) over \(\mathbb{Q}\) are all integers. All the elements of \(K\) which are integral over \(\mathbb{Z}\) make up a set \(O_{K}\). \(O_{K}\) is actually a ring, called the algebraic integer ring of \(K\) over \(\mathbb{Z}\). \(O_{K}\) is a finitely generated free \(\mathbb{Z}\)-module of dimension \([K:\mathbb{Q}]\). A basis of \(O_{K}\) as a free \(\mathbb{Z}\)-module is called an integral basis, which is also a basis of \(K\) as a \(\mathbb{Q}\)-vector space.

**Canonical-embedding ideal lattice.** If \(\Omega\supset K\) is an extension field such that \(\Omega\) is algebraically closed over \(\mathbb{Q}\), then there are exactly \([K:\mathbb{Q}]\) field embeddings of \(K\) into \(\Omega\). For convenience, we regard \(\Omega\) as the complex field \(\mathbb{C}\). Any nonzero ideal of \(O_{K}\) is a full-rank submodule of \(O_{K}\). Let \([K:\mathbb{Q}]=n\). This structure induces a canonical embedding: \[\Sigma:O_{K}\rightarrow\mathbb{C}^{n}\] \[a\mapsto(\Sigma_{i}(a))_{i=1,...,n},\] where the \(\Sigma_{i}\)'s are the \(n\) different embeddings from \(K\) into \(\mathbb{C}\).

Definition 4: (Canonical-embedding Ideal Lattice) Given a number field \(K\) and any ideal \(I\) of \(O_{K}\), \(\Sigma(I)\) is called the canonical-embedding ideal lattice.

**Coefficient-embedding ideal lattice.** Denote by \(\mathbb{Z}^{(n)}[x]\) the set of all the polynomials in \(\mathbb{Z}[x]\) with degree \(\leq\)\(n-1\). We use the symbol \(\sigma\) to represent the following linear map: \[\sigma:\mathbb{Z}^{(n)}[x]\rightarrow\mathbb{Z}^{n}\] \[\sum_{i=1}^{n}a_{i}x^{i-1}\mapsto(a_{1},a_{2},...,a_{n}),\] where linearity means that

* For any \(f(x)\), \(g(x)\in\mathbb{Z}^{(n)}[x]\), \(\sigma(f(x)+g(x))=\sigma(f(x))+\sigma(g(x))\);
* For any \(f(x)\in\mathbb{Z}^{(n)}[x]\) and \(z\in\mathbb{Z}\), \(\sigma(zf(x))=z\sigma(f(x))\).

We can also define its inverse, which is linear too: \[\sigma^{-1}:\mathbb{Z}^{n}\rightarrow\mathbb{Z}^{(n)}[x]\] \[(a_{1},a_{2},\cdots,a_{n})\mapsto\sum_{i=1}^{n}a_{i}x^{i-1}.\] In what follows, we focus on ideal lattices induced by ideals of the ring \(\mathbb{Z}[x]/f(x)\), where \(f(x)\) is a monic polynomial of degree \(n\).
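To fix ideas, the effect of multiplication by \(x\) modulo a monic \(f(x)\) can be written directly on coefficient vectors, i.e. on the \(\sigma\)-side; the following small sketch (ours, with a hypothetical function name) is the basic operation used in the next section.

```python
def mul_by_x_mod_f(coeffs, f_low):
    """Multiply the polynomial sigma^{-1}(coeffs) = a_1 + a_2 x + ... + a_n x^{n-1}
    by x and reduce modulo the monic polynomial
    f(x) = x^n + f_low[n-1] x^{n-1} + ... + f_low[0];
    the result is returned again as a coefficient vector (low degree first)."""
    n = len(coeffs)
    lead = coeffs[-1]                     # coefficient of x^{n-1}
    shifted = [0] + list(coeffs[:-1])     # multiply by x; the x^n term is handled below
    # x^n = -(f_0 + f_1 x + ... + f_{n-1} x^{n-1})  (mod f)
    return [shifted[i] - lead * f_low[i] for i in range(n)]

# x * (2 + x) mod (x^2 + 1) = 2x + x^2 = -1 + 2x  (mod x^2 + 1)
print(mul_by_x_mod_f([2, 1], [1, 0]))     # [-1, 2]
```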
Obviously, any element in \(\mathbb{Z}^{(n)}[x]\) can be viewed as a representative in the ring \(\mathbb{Z}[x]/f(x)\). So we abuse the symbol \(\sigma\) to represent the following coefficient embedding. \[\sigma:\mathbb{Z}[x]/f(x)\rightarrow\mathbb{Z}^{n}\] \[\sum_{i=1}^{n}a_{i}x^{i-1}\mapsto(a_{1},a_{2},...,a_{n}).\] Therefore, under the coefficient embedding, any ideal of \(\mathbb{Z}[x]/f(x)\) can be viewed as an integer lattice.

Definition 5: (Coefficient-embedding Ideal Lattice) Given \(\mathbb{Z}[x]/f(x)\), where \(f(x)\) is a monic polynomial of degree \(n\), and any ideal \(I\) of \(\mathbb{Z}[x]/f(x)\), \(\sigma(I)\) is called the coefficient-embedding ideal lattice, which is of course an integer lattice.

Roughly speaking, the canonical-embedding ideal lattices are usually used in the theoretical analysis of lattice-related hard problems and lattice-based cryptosystems, whereas the coefficient-embedding ideal lattices are usually used in the implementation of lattice-based cryptosystems. Most of the practical lattice-based cryptosystems employ coefficient-embedding ideal lattices. In some cases, the SVP in canonical-embedding ideal lattices and coefficient-embedding ideal lattices is equivalent or very closely connected, as mentioned in the latter part of the introduction.

In the third section, we first state and prove a natural equivalent condition (**Lemma 4**) for whether an integer lattice can be embedded into a given polynomial ring. It is not complicated and is a direct application of **Definition 5**. Though the result of **Lemma 4** may have been used in earlier research, we have not found a detailed description; hence, we state and prove **Lemma 4** formally. Our main theorem (**Theorem 1**) is motivated by **Lemma 5** proposed by Zhang, Liu and Lin [29]. They show that the entries of the HNF of a coefficient-embedding ideal lattice satisfy a special divisibility condition. Using the result of **Lemma 5** (\(h_{n,n}|h_{i,j}\)) together with the equivalent condition of **Lemma 4**, we prove **Theorem 1**. Next, we show some potential applications of our main theorem and give some examples. Though the examples may not be practical for now, they do supply a new angle for dealing with SVP in ideal lattices or integer lattices. In the fourth section, we propose **Algorithm 1** to judge whether an integer lattice can be embedded into a polynomial ring as an ideal and to compute all the rings that the lattice can be embedded into as an ideal, if it is an ideal lattice. We originally wanted to make use of the property of **Lemma 5**, but this property alone is too weak to decide whether an integer lattice is an ideal lattice. Hence, based on this idea, we introduce the Incomplete Hermite Normal Form (**Definition 3**) and propose another equivalent condition (**Theorem 2**) to identify an ideal lattice. Based on **Theorem 2**, we propose **Algorithm 1**, and give a simple analysis of its complexity.

## 3 An ideal lattice can be embedded into different rings

We stress that in the following, we focus on the coefficient-embedding ideal lattice, and in this section we will show how a coefficient-embedding ideal lattice can be embedded into different rings. The underlying idea is quite simple. It is well known that a lattice is just an additive group. However, when it is also equipped with some "multiplication", then it becomes an ideal lattice.
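Anticipating Lemma 4 in the next subsection, whether a given integer lattice carries such a "multiplication" modulo a given \(f(x)\) can be tested mechanically on a basis. The sketch below (ours; the function name is hypothetical, and `sympy` is used only for exact rational arithmetic) checks the closure of each basis row under multiplication by \(x\) modulo \(f(x)\).

```python
from sympy import Matrix

def is_ideal_lattice_in(B, f_low):
    """Check whether L(B) is a coefficient-embedding ideal lattice in Z[x]/f(x),
    where f(x) = x^n + f_low[n-1] x^{n-1} + ... + f_low[0] is monic of degree n:
    it suffices that x * sigma^{-1}(b_i) mod f(x) maps back into L(B) for every row b_i."""
    n = len(B)
    Binv = Matrix(B).inv()
    for row in B:
        shifted = [0] + list(row[:-1])                                   # multiply by x
        reduced = [shifted[j] - row[-1] * f_low[j] for j in range(n)]    # reduce mod f
        coords = Matrix([reduced]) * Binv                                # coordinates w.r.t. B
        if any(c != int(c) for c in coords):
            return False
    return True

# the principal ideal <x + 2> in Z[x]/(x^2 + 1): basis rows sigma(x+2), sigma(x(x+2) mod f)
B = [[2, 1], [-1, 2]]
print(is_ideal_lattice_in(B, [1, 0]))    # True
print(is_ideal_lattice_in(B, [3, 0]))    # False: the lattice is not closed modulo x^2 + 3
```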
A natural question is what happens if we equip the same lattice with a different "multiplication". Obviously, if this can be done, the lattice will not change, but the ideal changes, which means that an ideal lattice can be viewed as different ideals in different rings.

### Two Properties of ideal lattices

Before stating our main theorem, we present two crucial properties of coefficient-embedding ideal lattices.

#### Deciding an ideal lattice

We next present an easy way to tell if a given lattice is a coefficient-embedding ideal lattice in \(\mathbb{Z}[x]/f(x)\) or not.

Lemma 4: _For any monic polynomial \(f(x)\in\mathbb{Z}[x]\) with degree \(n\), a lattice \(\mathcal{L}(\mathbf{B})\) with any basis \(\mathbf{B}\) is a coefficient-embedding ideal lattice in \(\mathbb{Z}[x]/f(x)\) if and only if \(\sigma(x\sigma^{-1}(\mathbf{b}_{i})\mod f(x))\in\mathcal{L}(\mathbf{B})\) for \(i=1,\cdots,n\), where \(\mathbf{b}_{i}\) is the \(i\)-th row vector of \(\mathbf{B}\), and \(\sigma\) is the map defined in Section 2.3._

Proof: If \(\mathcal{L}(\mathbf{B})\) is a coefficient-embedding ideal lattice in \(\mathbb{Z}[x]/f(x)\), then the \(\sigma^{-1}(\mathbf{b}_{i})\)'s are in the corresponding ideal. It is obvious that \(x\sigma^{-1}(\mathbf{b}_{i})\mod f(x)\) must be in the ideal too, which means that \(\sigma(x\sigma^{-1}(\mathbf{b}_{i})\mod f(x))\in\mathcal{L}(\mathbf{B})\).

If there exists a monic polynomial \(f(x)\in\mathbb{Z}[x]\) with degree \(n\), such that \(\sigma(x\sigma^{-1}(\mathbf{b}_{i})\mod f(x))\in\mathcal{L}(\mathbf{B})\) for \(i=1,\cdots,n\), we show that \(\sigma^{-1}(\mathcal{L}(\mathbf{B}))\) must be an ideal in \(\mathbb{Z}[x]/f(x)\). It is easy to check that \(\sigma^{-1}(\mathcal{L}(\mathbf{B}))\) is an additive group, due to the fact that \(\sigma\) is an additive homomorphism. Since \(\sigma(x\sigma^{-1}(\mathbf{b}_{i})\mod f(x))\in\mathcal{L}(\mathbf{B})\), then for any lattice vector \(\mathbf{v}=\sum_{i=1}^{n}z_{i}\mathbf{b}_{i}\), \(z_{i}\in\mathbb{Z}\), we have \[\sigma(x\sigma^{-1}(\mathbf{v})\mod f(x))=\sum_{i=1}^{n}z_{i}\sigma(x\sigma^{-1}(\mathbf{b}_{i})\mod f(x))\in\mathcal{L}(\mathbf{B}).\] Applying this to the lattice vector \(\sigma(x\sigma^{-1}(\mathbf{v})\mod f(x))\), we obtain \[\sigma(x^{2}\sigma^{-1}(\mathbf{v})\mod f(x))=\sigma\left(x\sigma^{-1}\left(\sigma(x\sigma^{-1}(\mathbf{v})\mod f(x))\right)\mod f(x)\right)\in\mathcal{L}(\mathbf{B}).\] Hence, for any positive integer \(k\), we know that \[\sigma(x^{k}\sigma^{-1}(\mathbf{v})\mod f(x))\in\mathcal{L}(\mathbf{B}).\] Then for any \(g(x)=\sum_{i=1}^{n}g_{i}x^{i-1}\in\mathbb{Z}[x]/f(x)\) and any lattice vector \(\mathbf{v}\), \[\sigma(g(x)\sigma^{-1}(\mathbf{v})\mod f(x))=\sum_{i=1}^{n}g_{i}\sigma(x^{i-1}\sigma^{-1}(\mathbf{v})\mod f(x))\in\mathcal{L}(\mathbf{B}).\] The lemma follows.

#### HNF of ideal lattices

The following lemma tells us a divisibility relation among the elements in the HNF basis of a coefficient-embedding ideal lattice, which was proved in [29]. For completeness, we briefly present the whole proof.

Lemma 5 ([29]): _Let \(\mathbf{H}\) be the HNF basis of the full-rank coefficient-embedding ideal lattice \(\mathcal{L}(\mathbf{B})\) in the ring \(\mathbb{Z}[x]/f(x)\)._ \[\mathbf{H}=\begin{pmatrix}h_{1,1}&0&\cdots&0\\ h_{2,1}&h_{2,2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ h_{n,1}&\cdots&\cdots&h_{n,n}\end{pmatrix}.\] _Then \(h_{i,i}|h_{j,l}\), for \(1\leq l\leq j\leq i\leq n\). In particular, \(h_{n,n}|h_{i,j}\), \(i,j\leq n\)._

Proof: We argue by induction on \(i\); the case \(i=1\) is trivial.
Assume the result holds for \(i\leq k\leq n-1\). It remains to show that for \(i=k+1\), \(h_{k+1,k+1}|h_{j,l}\) where \(1\leq l\leq j\leq k+1\leq n\). Let \(\mathbf{h}_{i}\) be the \(i\)-th row of \(\mathbf{H}\). Note that for any ideal \(I\) of \(\mathbb{Z}[x]/f(x)\) and for all \(g(x)\in I\), \(xg(x)\in I\). In particular \(x\sigma^{-1}(\mathbf{h}_{k})\in I\), where \(\sigma\) is the coefficient embedding. Since \(\mathbf{H}\) is a basis of the ideal lattice, there must exist \(y_{i}\in\mathbb{Z}\), for \(i=1,2,\cdots,k+1\), such that: \[\left(0\;h_{k,1}\cdots\;h_{k,k}\;0\;\cdots\;0\right)=\sum_{i=1}^{k+1}y_{i}\mathbf{h}_{i}.\] Hence, \[h_{k,k} = y_{k+1}h_{k+1,k+1}\] \[h_{k,k-1} = y_{k}h_{k,k}+y_{k+1}h_{k+1,k}\] \[\vdots\] \[h_{k,1} = \sum_{i=2}^{k+1}y_{i}h_{i,2}\] \[0 = \sum_{i=1}^{k+1}y_{i}h_{i,1}\] From the first equation, we get \(y_{k+1}=\frac{h_{k,k}}{h_{k+1,k+1}}\in\mathbb{Z}\), and \[h_{k+1,k} = \frac{h_{k,k-1}-y_{k}h_{k,k}}{h_{k,k}}h_{k+1,k+1}\] \[h_{k+1,k-1} = \frac{h_{k,k-2}-y_{k-1}h_{k-1,k-1}-y_{k}h_{k,k-1}}{h_{k,k}}h_{k+1,k+1}\] \[\vdots\] \[h_{k+1,2} = \frac{h_{k,1}-\sum_{i=2}^{k}y_{i}h_{i,2}}{h_{k,k}}h_{k+1,k+1}\] \[h_{k+1,1} = \frac{-\sum_{i=1}^{k}y_{i}h_{i,1}}{h_{k,k}}h_{k+1,k+1}\] From the induction hypothesis, we have \(h_{k,k}|h_{j,l}\) for \(1\leq l\leq j\leq k\leq n\). So the coefficient of \(h_{k+1,k+1}\) in each equation is in fact an integer. Therefore, \(h_{k+1,k+1}|h_{k+1,l}\), \(1\leq l\leq k+1\). Since \(h_{k+1,k+1}|h_{k,k}\), we know \(h_{k+1,k+1}|h_{j,l}\), where \(1\leq l\leq j\leq k+1\leq n\). Thus, the result holds for \(i=k+1\). By induction, \(h_{i,i}|h_{j,l}\), \(1\leq l\leq j\leq i\leq n\). So \(h_{n,n}|h_{i,j}\), \(1\leq j\leq i\leq n\). Lemma 5 follows.

Remark 1: Note that in the proof of Lemma 5, to conclude that \(h_{n,n}|h_{i,j}\), \(i,j\leq n\), what we need is \(\sigma(x\sigma^{-1}(\mathbf{h}_{k}))\in\mathcal{L}(\mathbf{B})\) for \(k=1,\cdots,n-1\). We do not care whether \(\sigma(x\sigma^{-1}(\mathbf{h}_{n}))\) is in \(\mathcal{L}(\mathbf{B})\) or not.

Lemma 5 presents the divisibility relation among the elements in the HNF basis of a coefficient-embedding ideal lattice. Actually, not only is it crucial to our main theorem, but we can also regard it as a tool to quickly rule out some integer lattices that are not ideal lattices in any polynomial ring. See Section 4 for more details.

### Main theorem

We next present our main theorem.

Theorem 1: _For any full-rank coefficient-embedding ideal lattice \(\mathcal{L}(\mathbf{B})\) in the ring \(\mathbb{Z}[x]/f(x)\), where \(f(x)\) is monic and \(\text{deg}(f(x))=n\), there exist infinitely many monic \(g(x)\in\mathbb{Z}[x]\) with degree \(n\), s.t. \(\mathcal{L}(\mathbf{B})\) is also a coefficient-embedding ideal lattice in \(\mathbb{Z}[x]/g(x)\)._

_More precisely, let \(d=\gcd(b_{1,n},b_{2,n},...,b_{n,n})\)._
_Then \(\mathcal{L}(\mathbf{B})\) is also a coefficient-embedding ideal lattice in \(\mathbb{Z}[x]/g(x)\), where \(g(x)\in\mathbb{Z}[x]\) is a monic polynomial with degree \(n\), if and only if_ \[\sigma(f(x)-g(x))\in\mathcal{L}(\frac{\mathbf{B}}{d}),\] _or equivalently,_ \[g(x)\in f(x)+\sigma^{-1}(\mathcal{L}(\frac{\mathbf{B}}{d})).\]

Proof: Consider the HNF basis of \(\mathcal{L}(\mathbf{B})\), \[\mathbf{H}=\begin{pmatrix}h_{1,1}&0&\cdots&0\\ h_{2,1}&h_{2,2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ h_{n,1}&\cdots&\cdots&h_{n,n}\end{pmatrix}.\] For convenience, we denote by \(\mathbf{h}_{i}\) the \(i\)-th row of \(\mathbf{H}\), and then \(\mathbf{h}_{i}\) is a vector in \(\mathbb{Z}^{n}\).

(i) If there is a monic \(g(x)\in\mathbb{Z}[x]\) with degree \(n\), s.t. \(\mathcal{L}(\mathbf{B})\) is also a coefficient-embedding ideal lattice in \(\mathbb{Z}[x]/g(x)\), we next prove that \(\sigma(f(x)-g(x))\in\mathcal{L}(\frac{\mathbf{B}}{d})\). By Lemma 4, since \(\mathcal{L}(\mathbf{H})=\mathcal{L}(\mathbf{B})\) is a coefficient-embedding ideal lattice in \(\mathbb{Z}[x]/f(x)\), we have \[\sigma(x\sigma^{-1}(\mathbf{h}_{n})\mod f(x))\in\mathcal{L}(\mathbf{B}).\] Note that \[x\sigma^{-1}(\mathbf{h}_{n})\mod f(x)=\sum_{i=1}^{n-1}h_{n,i}x^{i}-h_{n,n}(f(x)-x^{n}).\] We have \[\left(0\;h_{n,1}\;\cdots\;h_{n,n-1}\right)-h_{n,n}\sigma(f(x)-x^{n})\in\mathcal{L}(\mathbf{B}). \tag{1}\] Similarly, since \(\mathcal{L}(\mathbf{B})\) is also a coefficient-embedding ideal lattice in \(\mathbb{Z}[x]/g(x)\), we have \[\left(0\;h_{n,1}\;\cdots\;h_{n,n-1}\right)-h_{n,n}\sigma(g(x)-x^{n})\in\mathcal{L}(\mathbf{B}). \tag{2}\] Subtracting the left side of (1) from the left side of (2), we immediately have \[h_{n,n}\sigma(f(x)-g(x))\in\mathcal{L}(\mathbf{B}).\] By Lemma 3, \(h_{n,n}=d\), so we have \[\sigma(f(x)-g(x))\in\mathcal{L}(\frac{\mathbf{B}}{d}).\]

(ii) We next prove that for any polynomial \(g(x)\) such that \(\sigma(f(x)-g(x))\in\mathcal{L}(\frac{\mathbf{B}}{d})\), the full-rank coefficient-embedding ideal lattice \(\mathcal{L}(\mathbf{B})\) in the ring \(\mathbb{Z}[x]/f(x)\) can also be viewed as a coefficient-embedding ideal lattice in \(\mathbb{Z}[x]/g(x)\). First, \(g(x)\) is obviously a monic polynomial with degree \(n\). Note that by Lemma 5, \(h_{n,n}|h_{i,j}\), so \(d=h_{n,n}\) divides all the components of every lattice vector in \(\mathcal{L}(\mathbf{B})\), which means that \(\mathcal{L}(\frac{\mathbf{B}}{d})\) is an integer lattice and, once \(\sigma(f(x)-g(x))\in\mathcal{L}(\frac{\mathbf{B}}{d})\), that \(g(x)\in\mathbb{Z}[x]\). By Lemma 4 again, since \(\mathcal{L}(\mathbf{H})=\mathcal{L}(\mathbf{B})\) is a coefficient-embedding ideal lattice in \(\mathbb{Z}[x]/f(x)\), we have \[\sigma(x\sigma^{-1}(\mathbf{h}_{i})\mod f(x))\in\mathcal{L}(\mathbf{B}),\] for \(i=1,\cdots,n\). To prove that \(\mathcal{L}(\mathbf{B})\) is also a coefficient-embedding ideal lattice in \(\mathbb{Z}[x]/g(x)\), by Lemma 4 it is enough to show that \(\sigma(x\sigma^{-1}(\mathbf{h}_{i})\mod g(x))\in\mathcal{L}(\mathbf{B})\), for \(i=1,\cdots,n\). Note that for \(i=1,\cdots,n-1\), \[\sigma(x\sigma^{-1}(\mathbf{h}_{i})\mod g(x))=\sigma(x\sigma^{-1}(\mathbf{h}_{i})\mod f(x))\in\mathcal{L}(\mathbf{B}).\] Since \(\sigma(f(x)-g(x))\in\mathcal{L}(\frac{\mathbf{B}}{d})\), there exists a lattice vector \(\mathbf{v}\in\mathcal{L}(\mathbf{B})\) such that \(d(f(x)-g(x))=h_{n,n}(f(x)-g(x))=\sigma^{-1}(\mathbf{v})\).
Then for \(i=n\), \[\sigma(x\sigma^{-1}(\mathbf{h}_{n})\mod g(x)) =\sigma(\sum_{i=1}^{n-1}h_{n,i}x^{i}-h_{n,n}(g(x)-x^{n}))\] \[=\sigma(\sum_{i=1}^{n-1}h_{n,i}x^{i}-h_{n,n}(f(x)-x^{n})+\sigma^{-1}(\mathbf{v}))\] \[=\sigma(x\sigma^{-1}(\mathbf{h}_{n})\mod f(x))+\mathbf{v}\in\mathcal{L}(\mathbf{B}).\] The theorem follows.

Remark 2: The HNF \(\mathbf{H}\) in the proof can be replaced by any Incomplete Hermite Normal Form.

### Applications

For most lattice-based cryptosystems, their security is guaranteed by the hardness of lattice problems such as \(\gamma\)-SVP. Hence, the hardness of lattice problems in ideal lattices is widely considered as the security foundation of Ring-LWE based cryptosystems. Due to the additional algebraic structure, the problem for ideal lattices is usually conjectured to be easier than that for general integer lattices. Some recent progress supports this argument well. Obviously, the algebraic structure depends on the polynomial ring that the ideal belongs to. However, Theorem 1 shows us that an ideal lattice can be embedded as ideals into different polynomial rings, which means that an ideal lattice may have different "algebraic structure" in different rings although the lattice stays the same. This phenomenon inspires us to consider the following method to solve the hard problems for a given ideal lattice. By changing the polynomial ring, is it possible to view the given ideal lattice as another ideal for which the lattice problems can be solved more efficiently by using the new algebraic structure? It seems hard to present a negative answer if the algebraic structure can indeed help solve the hard problems, since we have to consider infinitely many ideals and hence infinitely many algebraic structures. This no doubt increases the difficulty of showing that the lattice problem for some fixed ideal lattice is hard. On the other hand, if we can utilize the algebraic structure to solve the lattice problems in some ideal lattice, then we can solve the problems for infinitely many ideal lattices in different rings. We would like to stress that, as lattices, these ideal lattices are the same; however, as ideals, they are different. It seems that a weak ideal will spread into infinitely many weak ideals. Next we present some concrete examples to show the potential risk inspired by Theorem 1.

#### Pre-processing a fixed ring brings more

In [23], Pellet-Mary _et al._ showed that pre-processing the number field can help solve \(\gamma\)-SVP in canonical-embedding ideal lattices more efficiently. However, pre-processing usually costs too much time. One may think that for different number fields we have to do different pre-processing. By Theorem 1, we know that pre-processing a fixed number field will also help us solve \(\gamma\)-SVP more efficiently in ideals that are not in the algebraic integer ring of the fixed number field. Consider the ring \(\mathbb{Z}[x]/(x^{n}+1)\), where \(n=2^{k}\), which is one of the most used rings in cryptosystems. It is well known that the lengths of vectors induced by the same element under the coefficient embedding and the canonical embedding are the same up to a fixed factor, which means that the hardness of SVP in the two embedding ideal lattices is equivalent. Hence, by the method in [23], we can pre-process the ring \(\mathbb{Z}[x]/(x^{n}+1)\), and then solve \(\gamma\)-SVP in any of its coefficient-embedding ideal lattices more efficiently.
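The condition of Theorem 1 also suggests a mechanical way to produce alternative moduli for a given ideal lattice: every monic \(g\) with \(\sigma(f-g)\in\mathcal{L}(\mathbf{B}/d)\) works, so small vectors of \(\mathcal{L}(\mathbf{B}/d)\) yield small perturbations of \(f\). The following sketch (ours; the function name is hypothetical and the brute-force enumeration is only meant for tiny instances) illustrates this.

```python
from itertools import product
from functools import reduce
from math import gcd

def candidate_moduli(B, f_low, coeff_range=(-1, 0, 1)):
    """For an ideal lattice L(B) in Z[x]/f(x) (f monic of degree n, with lower
    coefficients f_low, low degree first), list some monic g(x) of degree n such
    that L(B) is also an ideal lattice in Z[x]/g(x): by Theorem 1 it suffices
    that sigma(f - g) lies in L(B/d), where d is the gcd of B's last column.
    Returned polynomials are given by their n lower coefficients."""
    n = len(B)
    d = reduce(gcd, (abs(row[-1]) for row in B))
    Bd = [[e // d for e in row] for row in B]        # basis of L(B)/d, integral by Lemma 5
    out = []
    for combo in product(coeff_range, repeat=n):      # small integer combinations of rows
        w = [sum(combo[i] * Bd[i][j] for i in range(n)) for j in range(n)]
        out.append([f_low[j] - w[j] for j in range(n)])
    return out

# the principal ideal <x + 2> in Z[x]/(x^2 + 1); basis rows sigma(x+2), sigma(x(x+2) mod f)
B, f_low = [[2, 1], [-1, 2]], [1, 0]
print(candidate_moduli(B, f_low))
# e.g. the combination (1, 0), i.e. w = (2, 1), gives g(x) = x^2 - x - 1: the same
# lattice is also an ideal lattice of Z[x]/(x^2 - x - 1), as the check of Lemma 4 confirms.
```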
Below we give a simple example to show how to apply the pre-processing to an ideal in another polynomial ring.

Example 1: Given a coefficient-embedding ideal lattice in the ring \(\mathbb{Z}[x]/(x^{n}+x^{n-1}+2x^{n-2}+1)\) induced by the ideal \(<x+2>\), where \(n=2^{k}\), the basis has the form \[\mathbf{B}=\begin{pmatrix}2&1&0&\cdots&\cdots&0\\ 0&2&1&\cdots&\cdots&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ 0&0&\cdots&0&2&1\\ -1&0&\cdots&0&-2&1\end{pmatrix}.\] Note that the greatest common divisor \(d\) of the entries in the last column is \(1\). To verify that \(\mathcal{L}(\mathbf{B})\) can be embedded as an ideal into the ring \(\mathbb{Z}[x]/(x^{n}+1)\), according to Theorem 1, it is sufficient to verify the following relation: \[\sigma((x^{n}+x^{n-1}+2x^{n-2}+1)-(x^{n}+1))=\left(0\;0\;\cdots\;0\;2\;1\right)\in\mathcal{L}(\mathbf{B}),\] which is obvious. Therefore, by pre-processing the field \(\mathbb{Q}[x]/(x^{n}+1)\), we can also handle the hard problems on the ideal lattice \(\mathcal{L}(\mathbf{B})\) in \(\mathbb{Z}[x]/(x^{n}+x^{n-1}+2x^{n-2}+1)\).

By the discussion above, our theorem has a good chance of amplifying the results of research on the ideal lattices of certain rings.

#### Changing the ring may not be enough for security

Sometimes, we want to choose a special ring for a cryptosystem to resist some potential attacks. This may work in general. However, for some fixed ideal lattices, this may not be enough to obtain the desired security. For example, NTRUPrime [7] uses the ring \(\mathbb{Z}[x]/(x^{p}-x-1)\) to resist the potential subfield attacks against NTRU, where \(p\) is an odd prime. We next present a simple example to show that some ideals generated by polynomials with small coefficients in the ring \(\mathbb{Z}[x]/(x^{p}-x-1)\) can also be embedded as ideals into some \(\mathbb{Z}[x]/f(x)\), where \(f(x)\) is reducible. However, a reducible \(f(x)\) may cause some potential security risk.

Example 2: For convenience, we assume that \(p\) is large enough. Consider the coefficient-embedding ideal lattice induced by the principal ideal \(<x^{p-1}-x^{2}-x>\) in the ring \(\mathbb{Z}[x]/(x^{p}-x-1)\). We show that this ideal lattice can be embedded as an ideal into the ring \(\mathbb{Z}[x]/f(x)\), where \(f(x)=(x+1)(x^{p-1}-x-1)\) is reducible. The lattice basis is \[\mathbf{B}=\left(\begin{array}{ccccccc}0&-1&-1&0&0&\cdots&1\\ 1&1&-1&-1&0&\cdots&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\vdots\end{array}\right).\] Note that the greatest common divisor \(d\) of the entries in the last column is \(1\). According to Theorem 1, it is sufficient to verify the following relation: \[\sigma((x+1)(x^{p-1}-x-1)-(x^{p}-x-1))=(0,-1,-1,0,\cdots,1)\in\mathcal{L}(\mathbf{B}),\] and \((0,-1,-1,0,\cdots,1)\) is exactly the first row of \(\mathbf{B}\).

## 4 Identifying an Ideal Lattice

We also present an algorithm to identify an ideal lattice, which is faster than that in [13].

### Main theorem

Inspired by Theorem 1, we find a new equivalent condition between integer lattices and coefficient-embedding ideal lattices, which is described below.

Theorem 2: _Given a full-rank integer lattice \(\mathcal{L}(\mathbf{B})\), let \(\mathbf{B}^{\prime}=\begin{pmatrix}\mathbf{D}&\mathbf{0}\\ \mathbf{b}^{\prime}&b_{n,n}^{\prime}\end{pmatrix}\) be any Incomplete Hermite Normal Form of \(\mathbf{B}\)._
_Then \(\mathcal{L}(\mathbf{B})\) is an ideal lattice if and only if there exists a \(\mathbf{T}\in\mathbb{Z}^{(n-1)\times n}\) s.t. \(\left(\mathbf{0}\;\mathbf{D}\right)=\mathbf{T}\mathbf{B}\)._

Proof: The "only if" part can be easily checked by Lemma 4, since for an ideal lattice \(\mathcal{L}(\mathbf{B})\) in \(\mathbb{Z}[x]/g(x)\), there exists a \(\mathbf{T}\in\mathbb{Z}^{(n-1)\times n}\) s.t. \(\left(\mathbf{0}\;\mathbf{D}\right)=\mathbf{TB}\) if and only if \(\sigma(x\sigma^{-1}(\mathbf{b}_{i}^{\prime})\mod g(x))\in\mathcal{L}(\mathbf{B})\) for \(i=1,\cdots,n-1\).

For the "if" part, to show that \(\mathcal{L}(\mathbf{B})\) is an ideal lattice, we need to find a monic polynomial \(g(x)\) of degree \(n\) s.t. \(\mathcal{L}(\mathbf{B})\) can be embedded as an ideal into \(\mathbb{Z}[x]/g(x)\), or \(\sigma(x\sigma^{-1}(\mathbf{b}_{i}^{\prime})\mod g(x))\in\mathcal{L}(\mathbf{B})\) for \(i=1,\cdots,n\) by Lemma 4. Note that for any polynomial \(g(x)\) with degree \(n\), \(\sigma(x\sigma^{-1}(\mathbf{b}_{i}^{\prime})\mod g(x))\in\mathcal{L}(\mathbf{B})\) for \(i=1,\cdots,n-1\), since there exists a \(\mathbf{T}\in\mathbb{Z}^{(n-1)\times n}\) s.t. \(\left(\mathbf{0}\;\mathbf{D}\right)=\mathbf{TB}\). It remains to show that there exists a monic polynomial \(g(x)\) of degree \(n\), such that \(\sigma(x\sigma^{-1}(\mathbf{b}_{n}^{\prime})\mod g(x))\in\mathcal{L}(\mathbf{B})\). We first present a lemma, which will be proven later.

Lemma 6: _If \(\left(\mathbf{0}\;\mathbf{D}\right)=\mathbf{TB}\) for some \(\mathbf{T}\in\mathbb{Z}^{(n-1)\times n}\), then \(b_{n,n}^{\prime}|\mathbf{B}\), i.e. \(\mathbf{B}/b_{n,n}^{\prime}\in\mathbb{Z}^{n\times n}\)._

By Lemma 6, \(\frac{1}{b_{n,n}^{\prime}}(\left(0\;\mathbf{b}^{\prime}\right)+\mathcal{L}(\mathbf{B}))\subset\mathbb{Z}^{n}\). Taking any \[\mathbf{g}=\left(g_{1}\;g_{2}\;\cdots\;g_{n}\right)\in\frac{1}{b_{n,n}^{\prime}}(\left(0\;\mathbf{b}^{\prime}\right)+\mathcal{L}(\mathbf{B})), \tag{3}\] the integer polynomial \(g(x)=x^{n}+g_{n}x^{n-1}+\cdots+g_{1}\) is what we want, since \[\sigma(x\sigma^{-1}(\mathbf{b}_{n}^{\prime})\mod g(x))=\left(0\;\mathbf{b}^{\prime}\right)-b_{n,n}^{\prime}\left(g_{1}\;g_{2}\;\cdots\;g_{n}\right)\in\mathcal{L}(\mathbf{B}).\] It remains to prove Lemma 6.

Proof: (Lemma 6) According to Lemma 2, \(\mathcal{L}(\mathbf{B}^{\prime})\) has a unique HNF basis, denoted by \(\mathbf{H}=(h_{i,j})_{1\leq i\leq n,1\leq j\leq n}\). By Lemma 3, we know that \(b_{n,n}^{\prime}=h_{n,n}\). It can easily be concluded that the lattice \(\mathcal{L}(\mathbf{D})\) has a unique HNF basis, \(\mathbf{H}^{\prime}=(h_{i,j})_{1\leq i\leq n-1,1\leq j\leq n-1}\), which implies that there exists a unimodular matrix \(\mathbf{U}\in\mathbb{Z}^{(n-1)\times(n-1)}\) such that \(\mathbf{H}^{\prime}=\mathbf{UD}\). Since \(\left(\mathbf{0}\;\mathbf{D}\right)=\mathbf{TB}\), we have \(\mathbf{U}\left(\mathbf{0}\;\mathbf{D}\right)=\mathbf{UTB}\), which is exactly \[\begin{pmatrix}0&h_{1,1}&0&\cdots&0\\ 0&h_{2,1}&h_{2,2}&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&h_{n-1,1}&h_{n-1,2}&\cdots&h_{n-1,n-1}\end{pmatrix}=\mathbf{UTB}. \tag{4}\] Note that \(\mathcal{L}(\mathbf{U}\mathbf{TB})\subset\mathcal{L}(\mathbf{B})\). What Equation (4) tells us is \[\sigma(x\sigma^{-1}(\mathbf{h}_{k}))\in\mathcal{L}(\mathbf{B}),\text{ for }k=1,\cdots,n-1.\] By the discussion in Remark 1, we have \(b^{\prime}_{n,n}=h_{n,n}|\mathbf{B}\).

### Algorithm and analysis

According to Theorem 2, there is a very simple algorithm to identify whether a given integer lattice is an ideal lattice or not.

```
Input: \(\mathbf{B}\in\mathbb{Z}^{n\times n}\), \(\text{rank}(\mathbf{B})=n\).
Output: False if \(\mathcal{L}(\mathbf{B})\) is not a coefficient-embedding ideal lattice; otherwise a set \(S\subset\mathbb{Z}^{n}\) s.t. for any \((g_{1},g_{2},...,g_{n})\in S\), \(\mathcal{L}(\mathbf{B})\) can be embedded as an ideal into \(\mathbb{Z}[x]/(g_{1}+g_{2}x+...+g_{n}x^{n-1}+x^{n})\).
1: Compute any Incomplete Hermite Normal Form \(\mathbf{B}^{\prime}=\begin{pmatrix}\mathbf{D}&\mathbf{0}\\ \mathbf{b}^{\prime}&b^{\prime}_{n,n}\end{pmatrix}\) of \(\mathbf{B}\) by a unimodular transformation;
2: if \(b^{\prime}_{n,n}\not|\ \mathbf{B}\) then return False;
3: end if
4: if \(\left(\mathbf{0}\;\mathbf{D}\right)\mathbf{B}^{-1}\notin\mathbb{Z}^{(n-1)\times n}\) then return False;
5: end if
6: Output \(S=\frac{1}{b^{\prime}_{n,n}}(\left(0\;\mathbf{b}^{\prime}\right)+\mathcal{L}(\mathbf{B}))\).
```
**Algorithm 1** Identifying an ideal lattice

Remark 3: In Step 1, we can also compute the HNF of \(\mathcal{L}(\mathbf{B})\), and then use the divisibility relation described in Lemma 5 to rule out some integer lattices that cannot be embedded as an ideal into any polynomial ring. This may speed up the algorithm in practice, since many "random" integer lattices cannot pass such a check.

**Correctness.** The correctness of our algorithm is guaranteed by Theorem 2.

**Complexity.** We next analyze the time complexity. For Step 1, we can use the following simple procedure (Algorithm 2), whose idea has already been described in Section 2.2, to compute an Incomplete Hermite Normal Form of \(\mathbf{B}\in\mathbb{Z}^{n\times n}\) with a unimodular transformation: for \(i=1,\cdots,n-1\), Step 2 computes \(d=\gcd(b_{i,n},b_{i+1,n})\) together with \(x,y\in\mathbb{Z}\) such that \(xb_{i,n}+yb_{i+1,n}=d\) by the Extended Euclidean Algorithm, and Step 3 replaces rows \(i\) and \(i+1\) of \(\mathbf{B}\) by the integer combinations \((-b_{i+1,n}/d)\mathbf{b}_{i}+(b_{i,n}/d)\mathbf{b}_{i+1}\) and \(x\mathbf{b}_{i}+y\mathbf{b}_{i+1}\), respectively. It is easy to check that the integer matrix \(\begin{pmatrix}-b_{i+1,n}/d&b_{i,n}/d\\ x&y\end{pmatrix}\) is unimodular since its determinant is \(-1\). Hence, the transformation in Step 3 will not change the lattice \(\mathcal{L}(\mathbf{B})\). After Step 3 for each \(i\), we have \(b_{i,n}=0\) and \(b_{i+1,n}=d\) computed by Step 2, which means that the output is in Incomplete Hermite Normal Form. For the time complexity, we assume that for the input \(\mathbf{B}\), the absolute value of every entry is bounded by \(2^{B}\). It is easy to conclude that for the \(i\)-th loop, at the beginning, we have

* \(|b_{i,j}|<2^{iB+1}\), \(|b_{i+1,j}|<2^{B}\) for \(j=1,\cdots,n\); in particular we have \(|b_{i,n}|<2^{B}\);
* \(|x|<2^{B}\), \(|y|<2^{B}\), \(d<2^{B}\).

Note that the Extended Euclidean Algorithm takes \(O(\log|a|\log|b|)\) bit operations on input \((a,b)\). Then for the \(i\)-th loop, with plain integer multiplication we have:

* Step 2 costs \(O(B^{2})\) bit operations;
* Step 3 costs \(O(i\cdot nB^{2})\) bit operations.

Hence, for the total \(n\) loops, Algorithm 2 needs \(O(n^{3}B^{2})\) bit operations, and we have the following result.

Lemma 7: _For a non-singular matrix \(\mathbf{B}\in\mathbb{Z}^{n\times n}\), the absolute value of whose entries is bounded by \(2^{B}\), Algorithm 2 takes \(O(n^{3}B^{2})\) bit operations to compute an Incomplete Hermite Normal Form of \(\mathbf{B}\) by a unimodular transformation._

For Step 4, we refer to Theorem 37 of [27] for more details.

Theorem 3 (Theorem 37 in [27]): _There exists a Las Vegas algorithm that takes as input a non-singular \(\mathbf{A}\in\mathbb{Z}^{n\times n}\) and \(\mathbf{b}\in\mathbb{Z}^{n}\), and returns as output the vector \(\mathbf{b}\mathbf{A}^{-1}\in\mathbb{Q}^{n}\).
If the absolute value of the entries of \(\mathbf{A}\) is bounded by \(2^{B}\), and the absolute value of the entries of \(\mathbf{b}\) is bounded by \(2^{nB}\), then the expected cost of the algorithm is \(O((\log n)\mathbf{M}\mathbf{M}(n)\mathbf{M}\mathbf{Z}(B+\log n))\) bit operations, where \(\mathbf{M}\mathbf{M}(n)\) means that two \(n\times n\) matrices can be multiplied using at most \(\mathbf{M}\mathbf{M}(n)\) integer multiplications and \(\mathbf{M}\mathbf{Z}(B)\) means that two \(B\)-bit integers can be multiplied using at most \(\mathbf{M}\mathbf{Z}(B)\) bit operations. This result assumes that \(\mathbf{M}\mathbf{Z}(t)=O(\mathbf{M}\mathbf{M}(t)/t)\)._

It is well known that the classical plain multiplication method allows \(\mathbf{M}\mathbf{Z}(B)=O(B^{2})\), and the Schonhage-Strassen algorithm [25] allows \(\mathbf{M}\mathbf{Z}(B)=O(B(\log B)(\log\log B))\). For \(\mathbf{M}\mathbf{M}(n)\), the classical plain multiplication method allows \(\mathbf{M}\mathbf{M}(n)=2n^{3}-n^{2}\), and the asymptotically faster method allows \(\mathbf{M}\mathbf{M}(n)=O(n^{2.376})\). We refer to [28] and [10] for more details and further discussion. For simplicity, we adopt the classical plain method for both integer multiplication and matrix multiplication, that is, \(\mathbf{M}\mathbf{Z}(B)=O(B^{2})\) and \(\mathbf{M}\mathbf{M}(n)=O(n^{3})\). Then, by Theorem 3, a simple analysis shows that Step 4 in Algorithm 1 costs \(O(n^{4}\log n(B+\log n)^{2})\) bit operations. Together with Lemma 7, we have

Theorem 4: _Given \(\mathbf{B}\in\mathbb{Z}^{n\times n}\), \(\text{rank}(\mathbf{B})=n\), with the absolute value of the entries of \(\mathbf{B}\) bounded by \(2^{B}\), there is a Las Vegas algorithm with expected complexity \(O(n^{4}\log n(B+\log n)^{2})\) to identify whether \(\mathcal{L}(\mathbf{B})\) is an ideal lattice or not._

Remark 4: It is claimed in [13] that the algorithm presented by Ding and Lindner to identify an ideal lattice costs \(O(n^{4}B^{2})\) bit operations. However, we have to point out that there is a flaw in the complexity analysis leading to \(O(n^{4}B^{2})\). The algorithm in [13] needs to compute \(n-2\) powers of \(\mathbf{B}\), that is, \(\mathbf{B}^{k}\) for \(k=2,\cdots,n-1\). It is claimed that this can be done within \(O(n^{4}B^{2})\) bit operations. However, when \(k\) grows bigger, the bit size of the entries in \(\mathbf{B}^{k}\) will be \(O(kB)\) instead of \(B\). Hence the correct time complexity should be \[\sum_{k=2}^{n-1}O(n^{3}\cdot k\cdot B^{2})=O(n^{5}B^{2}).\] So our algorithm is faster than the algorithm in [13], due to the fact that our algorithm just checks whether the systems of equations have integer solutions or not.

## 5 Conclusion

In this paper, we reveal the embedding relation between coefficient-embedding ideal lattices and integer lattices, which gives us a new method to solve ideal lattice problems by embedding the given ideal lattice into a well-studied polynomial ring. Hence, it is no longer appropriate to judge the security of a cryptosystem based on ideal lattices by considering just a single ring. The embedding relation no doubt increases the difficulty of evaluating the security of any cryptosystem based on ideal lattices. Since the ideal lattice is a special case of the module lattice, it is possible that there is a similar embedding relation between integer lattices and module lattices. Therefore, it is worth investigating how to generalize our theory to module lattices.
2301.08999
Control of the Cauchy problem on Hilbert spaces: A global approach via symbol criteria
Let $A$ and $B$ be invariant linear operators with respect to a decomposition $\{H_{j}\}_{j\in \mathbb{N}}$ of a Hilbert space $\mathcal{H}$ in subspaces of finite dimension. We give necessary and sufficient conditions for the controllability of the Cauchy problem $$ u_t=Au+Bv,\,\,u(0)=u_0,$$ in terms of the (global) matrix-valued symbols $\sigma_A$ and $\sigma_B$ of $A$ and $B,$ respectively, associated to the decomposition $\{H_{j}\}_{j\in \mathbb{N}}$. Then, we present some applications including the controllability of the Cauchy problem on compact manifolds for elliptic operators and the controllability of fractional diffusion models for H\"ormander sub-Laplacians on compact Lie groups. We also give conditions for the controllability of wave and Schr\"odinger equations in these settings.
Duván Cardona, Julio Delgado, Brian Grajales, Michael Ruzhansky
2023-01-21T19:48:53Z
http://arxiv.org/abs/2301.08999v1
# Control of the Cauchy problem on Hilbert spaces: A global approach via symbol criteria

###### Abstract.

Let \(A\) and \(B\) be invariant linear operators with respect to a decomposition \(\{H_{j}\}_{j\in\mathbb{N}}\) of a Hilbert space \(\mathcal{H}\) in subspaces of finite dimension. We give necessary and sufficient conditions for the controllability of the Cauchy problem \[u_{t}=Au+Bv,\ u(0)=u_{0},\] in terms of the (global) matrix-valued symbols \(\sigma_{A}\) and \(\sigma_{B}\) of \(A\) and \(B\), respectively, associated to the decomposition \(\{H_{j}\}_{j\in\mathbb{N}}\). Then, we present some applications including the controllability of the Cauchy problem on compact manifolds for elliptic operators and the controllability of fractional diffusion models for Hormander sub-Laplacians on compact Lie groups. We also give conditions for the controllability of wave and Schrodinger equations in these settings.

Key words and phrases: Control theory, Diffusion models, Exact controllability, fractional models, Controllability cost. 2020 Mathematics Subject Classification: 35S30, 42B20; Secondary 42B37, 42B35. The authors were supported by the FWO Odysseus 1 grant G.0H94.18N: Analysis and Partial Differential Equations, by the Methusalem programme of the Ghent University Special Research Fund (BOF) (Grant number 01M01021). B. Grajales has been partially supported by Universidad de Pamplona. J. Delgado is also supported by Vice. Inv. Universidad del Valle Grant CI 71329, Math-AmSud and Minciencias-Colombia under the project MATHAMSUD21-MATH-03. M. Ruzhansky is also supported by EPSRC grant EP/R003025/2.

## 1. Introduction

### Outline and methodology

In this work we develop an approach to determine the controllability of the Cauchy problem on Hilbert spaces. Although there exists a very well-known criterion for the controllability of this problem, based on the Hilbert Uniqueness Method due to J. L. Lions [55, 56], which reduces the controllability of a system to the validity of the corresponding observability inequality for the adjoint system, here we provide a criterion inspired by the microlocal analysis of pseudo-differential operators (based on the notion of the symbol of an operator, see Hormander [46]), which decouples the system \[\frac{du}{dt}=Au+Bv(t),\ u(0)=u_{0},\ t\in[0,T], \tag{1.1}\] (here \(A\) and \(B\) are densely defined operators on a separable Hilbert space \(\mathcal{H}\)) into an infinite number of finite-dimensional control systems \[d\widehat{u}_{\ell}/dt=A_{\ell}\widehat{u}_{\ell}+B_{\ell}\widehat{v}_{\ell}(t),\ \widehat{u}_{\ell}(0)=\widehat{u}_{0,\ell}\in H_{\ell},\ t\in[0,T],\ \ell\in\mathbb{N}_{0}, \tag{1.2}\] where one is allowed to apply the _Kalman criterion_, see [50]. In this context, Kalman's criterion says that the _rank condition_ \[\mathrm{Rank}[B_{\ell},A_{\ell}B_{\ell},\cdots,A_{\ell}^{n_{\ell}-1}B_{\ell}]=n_{\ell}=\dim(H_{\ell}), \tag{1.3}\] provides a necessary and sufficient condition for the controllability of (1.2). Our approach shows that the exact controllability of the system (1.1) implies the exact controllability of any system (1.2). On the other hand, our approach also shows that if every system (1.2) is controllable and its controllability cost is uniformly bounded in \(\ell\) (that is, if the coupled systems (1.2) have a _globally finite controllability cost_), we are able to provide the exact controllability of (1.1).
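Since each block (1.2) is finite-dimensional, the rank condition (1.3) can be verified numerically block by block. The following short sketch (ours, purely illustrative; the function name is hypothetical) implements the classical Kalman test for a single pair \((A_{\ell},B_{\ell})\). As emphasised above, passing this test for every \(\ell\) is necessary, while the sufficiency direction additionally requires the controllability costs of the blocks to be uniformly bounded in \(\ell\).

```python
import numpy as np

def kalman_rank_ok(A, B):
    """Kalman rank test for one finite-dimensional block (1.2):
    the pair (A, B) is controllable iff rank [B, AB, ..., A^{n-1}B] = n."""
    n = A.shape[0]
    blocks, M = [], B
    for _ in range(n):
        blocks.append(M)
        M = A @ M
    K = np.hstack(blocks)
    return np.linalg.matrix_rank(K) == n

# toy block: a Jordan-type A_l with a rank-one B_l acting on the last coordinate
A_l = np.array([[2.0, 1.0], [0.0, 2.0]])
B_l = np.array([[0.0], [1.0]])
print(kalman_rank_ok(A_l, B_l))   # True: [B_l, A_l B_l] = [[0, 1], [1, 2]] has rank 2
```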
The decoupling procedure from (1.1) to (1.2) is carried out in such a way that the family of finite-dimensional subspaces \((H_{\ell})_{\ell\in\mathbb{N}_{0}}\) provides an orthogonal decomposition of the underlying space \(\mathcal{H}=\bigoplus_{\ell}H_{\ell}.\) Relative to this decomposition, and to any choice of an orthonormal basis \(\mathcal{B}_{\ell}\) of \(H_{\ell},\) there is a canonical Fourier transform \[\textbf{(FT):}\ u\mapsto\widehat{u}(\ell)\in\mathbb{C}^{n_{\ell}}, \tag{1.4}\] provided by the orthogonal projections \(P_{\ell}:\mathcal{H}\to H_{\ell},\) that in view of the Fourier inversion formula \[u=\sum_{\ell\in\mathbb{N}_{0}}P_{\ell}u, \tag{1.5}\] is given by \(\widehat{u}(\ell)=[P_{\ell}u]_{\mathcal{B}_{\ell}},\) that is the coordinate vector of \(P_{\ell}u\) with respect to the basis \(\mathcal{B}_{\ell}.\) Certainly one has the identification \(\widehat{u}(\ell)\cong P_{\ell}u.\) In other words, the decoupling procedure from (1.1) into (1.2) is nothing else than taking the Fourier transform of the system (1.1) relative to the decomposition \((H_{\ell})_{\ell\in\mathbb{N}_{0}}.\) We explain it in the following diagram: \[\boxed{(1.1)\Longrightarrow(\mathbf{FT})\Longrightarrow(1.2)} \tag{1.6}\] On the other hand, a fact that is important for our further analysis is the coupling procedure from all the systems in (1.2) to (1.1). According to the standard terminology in _quantum mechanics_ we do it by using the quantisation procedure _symbol-to-operator_. In our case (and under the identification \(\widehat{u}(\ell)\cong P_{\ell}u\)) the sequences \((A_{\ell})_{\ell\in\mathbb{N}}\) and \((B_{\ell})_{\ell\in\mathbb{N}}\) are the symbols of the operators \(A\) and \(B\), respectively. The quantisation procedures \[(A_{\ell})_{\ell\in\mathbb{N}}\mapsto A=\mathbf{QP}((A_{\ell})_{\ell\in\mathbb{N}}),\ (B_{\ell})_{\ell\in\mathbb{N}}\mapsto B=\mathbf{QP}((B_{\ell})_{\ell\in\mathbb{N}}), \tag{1.7}\] allow us to analyse the properties of the operators \(A\) and \(B\) from the properties of their symbols \(\sigma_{A}(\ell):=A_{\ell}\) and \(\sigma_{B}(\ell):=B_{\ell}\), \(\ell\in\mathbb{N}\), respectively. (Here we employ the notation \(A=\mathbf{QP}((A_{\ell})_{\ell\in\mathbb{N}})\) to indicate that the operator \(A\) is the quantisation of the sequence \((A_{\ell})_{\ell\in\mathbb{N}}\); \(\mathbf{QP}\) abbreviates "quantisation procedure". In the standard terminology of the theory of pseudo-differential operators one also writes \(A=\mathbf{Op}((A_{\ell})_{\ell\in\mathbb{N}})\) to indicate that \(A\) is the operator associated to the _symbol_ \((A_{\ell})_{\ell\in\mathbb{N}}\).) In other words, the coupling procedure from (1.2) to (1.1) is carried out by quantising the systems in (1.2).
We explain it in the following diagram: \[\boxed{(1.2)\Longrightarrow(\mathbf{QP})\Longrightarrow(1.1)} \tag{1.8}\] As we will notice, the coupling and the decoupling procedures will be effective, in the sense that the information in the following diagram is preserved \[\boxed{(1.1)\Longrightarrow(\mathbf{FT})\Longrightarrow(1.2)\Longrightarrow(\mathbf{QP})\Longrightarrow(1.1)} \tag{1.9}\] if the operators \(A\) and \(B\) leave invariant the orthogonal decomposition \((H_{\ell})_{\ell\in\mathbb{N}}.\) So, a fundamental _geometric property_ assumed during this work is that every \(H_{\ell}\) is an invariant subspace of both \(A\) and \(B\), that is, \[\forall\ell,\ \ AH_{\ell}\subset H_{\ell}\text{ and }BH_{\ell}\subset H_{\ell}.\] Having explained the methodology of our approach, we are going to explain our main result. We will also give several general examples of this property.

### The main result

According to the theory of invariant operators on Hilbert spaces developed by the second and fourth author in [27, 28], \(A\) and \(B\) are Fourier multipliers on \(\mathcal{H}\) (associated to the decomposition \((H_{\ell})_{\ell\in\mathbb{N}}\)). The construction of the global matrix-valued symbol \(\sigma_{T}\) in [27, 28] of a Fourier multiplier \(T\) on a Hilbert space \(\mathcal{H}\) can be found in Theorem 2.1 of Subsection 2.5. With the notations employed in Subsection 2.5 we present our main Theorem 3.5 in Section 3. In the case where \(A\) generates a strongly continuous semigroup (\(C_{0}\)-semigroup), our main theorem essentially says that \[\boxed{(1.1)\textbf{ is controllable}\ \Longleftrightarrow\forall\ell,(1.2)\textbf{ satisfies the Kalman condition}\ (1.3)} \tag{1.10}\] and we compute in a sharp way the relation between the controllability cost of (1.1) and the _global controllability cost_ of the systems (1.2), see Definition 3.4. We refer the reader to Section 3 for details. We observe that in the case where \(M\) is a compact manifold without boundary, the Fourier analysis notion discussed above can be associated to any positive elliptic pseudo-differential operator \(E\) on \(M\) in the sense of Seeley [67, 68]. Our approach includes this setting and other applications will be presented in the next subsection, see also Section 4.

### Applications

From the mathematical perspective, there are many different contexts in which one can obtain an orthogonal decomposition of a Hilbert space \(\mathcal{H}\). For instance, and for our purposes, we give some examples:

* On a compact manifold \(M\) without boundary, \(L^{2}(M)=\bigoplus_{\ell}H_{\ell}\) can be decomposed into the eigenspaces \(H_{\ell}=\operatorname{Ker}(E-\lambda_{\ell}I)\) of a positive and elliptic pseudo-differential operator \(E\) on \(M\).
* On a compact manifold \(M\) without boundary (in particular, on an arbitrary compact Lie group \(M=G\)), \(L^{2}(M)=\bigoplus_{\ell}H_{\ell}\) can be decomposed into the eigenspaces \(H_{\ell}=\operatorname{Ker}(\mathcal{L}^{s/2}-\lambda_{\ell}I)\) of a fractional power of a positive sub-Laplacian \(\mathcal{L}=-\sum_{j=1}^{k}X_{j}^{2}\), associated to a Hormander system of vector fields \(\mathcal{X}=\{X_{1},\cdots,X_{k}\}\) satisfying the Hormander condition. It means that the vector fields in \(\mathcal{X}\) and their iterated commutators span the tangent space \(TM\).
* In the case of \(\mathbb{R}^{n}\), \(L^{2}(\mathbb{R}^{n})=\bigoplus_{\ell}H_{\ell}\) can be decomposed into the eigenspaces \(H_{\ell}=\operatorname{Ker}(\mathscr{H}-\lambda_{\ell}I)\) of the harmonic oscillator \(\mathscr{H}=-\Delta_{x}+|x|^{2}\), (or of more general anharmonic oscillators \(\mathscr{H}_{l_{1},l_{2}}=(-\Delta_{x})^{l_{1}}+|x|^{2l_{2}}\), the fractional relativistic Schrodinger operators, and of course the special case of relativistic Schrodinger operators \(\sqrt{I-\Delta}+|x|^{2l}\)). * On open bounded domains \(\Omega\) of \(\mathbb{R}^{n}\), \(L^{2}(\Omega)=\bigoplus_{\ell}H_{\ell}\) can be decomposed into the eigenspaces \(H_{\ell}=\operatorname{Ker}((-\Delta)^{s}-\lambda_{\ell}I)\) of the spectral fractional Laplacian \((-\Delta)^{s}\) with homogeneous boundary Dirichlet data on \(\partial\Omega\). * In any separable complex Hilbert space \(\mathcal{H}\) admitting an unbounded self-adjoint operator \(E\) with discrete spectrum (according to the spectral theorem), the previous situations supply examples. Then, our analysis will include models of the form (1.1) and then the following specific situations: * **Control of fractional elliptic problems.**\(M\) is a closed manifold, \(\mathcal{H}=L^{2}(M)\), \(A=E^{s}\) is a positive power of an elliptic operator \(E\) on \(M\), and \(B\) being an operator commuting with \(E\), (this condition assures that \(B\) leaves invariant the eigenspaces of \(A\)). * **Control of fractional subelliptic problems.** Again, on a closed manifold \(M\), \(A\) can be a fractional power of a positive sub-Laplacian \(\mathcal{L}=-\sum_{j=1}^{k}X_{j}^{2}\), associated to a Hormander system of vector-fields \(\mathcal{X}=\{X_{1},\cdots,X_{k}\}\) satisfying the Hormander condition and \(B\) being a continuous linear operator commuting with \(A\). * **Control of fractional diffusion models for anharmonic operators and relativistic Schrodinger operators.**\(A\) can be the harmonic oscillator \(\mathscr{H}=-\Delta_{x}+|x|^{2}\), acting on \(C_{0}^{\infty}(\mathbb{R}^{n})\subset L^{2}(\mathbb{R}^{n})\) (or \(A\) can be a more general anharmonic oscillator of the type \(\mathscr{H}_{l_{1},l_{2}}=(-\Delta_{x})^{l_{1}}+|x|^{2l_{2}}\)) and \(B\) commuting with \(A\). Similarly, \(A\) can be a relativistic Schrodinger operator of the form \(\sqrt{I-\Delta}+|x|^{2l}\) and \(B\) commuting with \(A\). * **Control in compact Lie groups setting.**\(M=G\) is a compact Lie group and \(A\) and \(B\) are continuous linear operators on \(C^{\infty}(G)\) being left-invariant. This mean that they commute with the left-action \(L_{x}:f\mapsto f(x\cdot)\) of the group to \(C^{\infty}(G)\). Although the approach of this work, summarised in (1.10), is designed for analysing general Cauchy problems on Hilbert spaces, our degree of generality is justified by a variety of applications (where particular contexts are given by the previous examples). Therein we allow the analysis of non-local models, namely, where the main term \(A\) is a non-local operator as in the case of the fractional Laplacian \((-\Delta)^{s}\) on an open domain \(\Omega\) or on a closed manifold \(M,\) the fractional sub-Laplacian \((-\mathcal{L})^{s},\) any positive power \(E^{s}\) of an elliptic operator \(E,\) or any PDE where \(A\) having a discrete spectrum has principal terms involving pseudo-differential and/or integral terms. 
### State-of-the-art

There has been growing research activity on the controllability of fractional diffusion models and other differential problems involving non-local operators. Next, we give some references related to this work.

#### 1.4.1. A general overview

The growing research activity in the setting of fractional diffusion models is justified by emerging models in different branches of science and engineering. For instance, non-local and fractional equations appear as models in turbulence problems [8], in image processing [41], in population dynamics [24], and in optimal control of fractional Laplacians with variable exponent models applied to image denoising [5, 7]. In addition, several recent works in the literature (see e.g. [15, 16, 2, 3, 1]) have used numerical methods for fractional Laplacians where the techniques are based on the works by Glowinski and J. L. Lions, see e.g. [42, 43]. From the mathematical point of view, there has been intense activity, with recent pioneering works including extension techniques, see Caffarelli and Silvestre [17, 18], and other models including inverse problems and the analysis of non-local PDE, see e.g. S. Dipierro, X. Ros-Oton, and E. Valdinoci [31, 32], Fall and Felli [37], Ghosh, Rüland, Salo, and Uhlmann [40]. For the recent activity involving optimal control of diffusion models, we refer to [4, 5, 6], and for the analysis of fractional hyperbolic and dispersive problems we refer to [11, 12, 13, 38]. Other recent works involving control and numerical methods for fractional diffusion models can be found in [10, 14, 23] and the extensive list of references therein. In the wide spectrum of the numerical analysis of non-local models, the _penalized uniqueness method_ has been used e.g. by Boyer, Hubert, and Rousseau in [16] and by Glowinski and Lions in [42] to compute control functions numerically in the analysis of fractional Laplacians. As we discussed above, these operators are of considerable interest since they appear in a large number of models describing practical situations. Among them we refer to [57], where a realization of the fractional Schrödinger equation is applied to optics, to [70], where the fractional Laplacian appears in the scalar Helmholtz equation which is used in electromagnetic interrogation in Earth's interior, and to the classical paper by Mandelbrot and Van Ness [58] dealing with fractional Brownian motions. It is worth mentioning that an alternative approach to study the controllability, or even the approximate controllability, of the Cauchy problem \[du/dt=Au+Bv(t),\ u(0)=u_{0},\ t\in[0,T],\] is to accurately discretize the operator \(A.\) Some of these methods have been developed in [1, 2, 9, 47] for the fractional Laplacian and in [61] for the _Dirichlet fractional Laplacian_, which is defined as a power of the Laplace operator obtained by using the spectral decomposition of the Laplacian.

#### 1.4.2. Control theory on compact manifolds

The study of controllability on Riemannian manifolds has a long tradition. The fundamental models to be understood are the _heat_ and the _wave_ equation. The exact controllability of the wave equation was first proved by Chen and Millman [22]. The null controllability of the heat equation in the setting of internal control was proved in the seminal work of Lebeau and Robbiano [53].
In particular, the method developed in [53] changed the perspective on the field by reducing the observability inequality of the adjoint system to the validity of a spectral inequality, see Jerison and Lebeau [48] and Lebeau and Zuazua [54]. Such a spectral inequality is a generalisation of the spectral inequality due to Donnelly and Fefferman [33, 34, 35, 36]. For fractional diffusion models associated to positive powers of general elliptic pseudo-differential operators, we refer the reader to [19, 21]. On the other hand, we observe that in [26], the controllability in small time for the Navier-Stokes equations of incompressible fluids on compact two-dimensional manifolds was proved using purely analytic tools by Coron and Fursikov. In the setting of the fractional heat equation on open domains (with suitable boundary conditions), several works have been dedicated to the internal controllability problem, and the numerical analysis of these problems has been concentrated on the one-dimensional case. To illustrate this, consider the fractional heat equation \[\frac{du}{dt}=-(\Delta_{M})^{s/2}u+1_{\omega}v, \tag{1.11}\] where \(\omega\subset M\) is an open subset. For \(s=2\) the null-controllability of the model (1.11) was proved by Lebeau and Robbiano [53]. Micu and Zuazua [59] and Biccari and Hernández-Santamaría proved that, in one dimension, it is null controllable with a control function \(v\in L^{2}(\omega\times(0,T))\) if and only if \(1<s<2\), and the authors have analysed the approximate controllability of the system for \(0<s<1\), see also [9]. In [14], Biccari, Warma, and Zuazua proved the same result with bounded control functions.

### Organisation of the work

This paper is organised as follows: in Section 2 we present the preliminaries about the Fourier analysis on Hilbert spaces and the theory of invariant operators and their symbol properties as developed in [27, 28]. We also present the construction of the matrix-valued symbols for continuous linear operators on compact Lie groups as developed in [64] and the results of the abstract control theory used in this work, namely, the Kalman condition and the observability criterion for the controllability of the Cauchy problem on Hilbert spaces. Our main result in the form of Theorem 3.5 will be presented in Section 3. Section 4 is dedicated to presenting a variety of applications of our main result. It is organised as follows: in Subsection 4.2 we analyse the controllability of diffusion models for elliptic operators on compact manifolds. In Subsection 4.3 we revisit Theorem 3.5 in the context of a compact Lie group \(G\) and we give a criterion in Theorem 4.7 adapted to the Cauchy problem for general left-invariant operators on \(G.\) The criterion is refined in terms of the matrix-valued symbols of the operators constructed from the group Fourier transform on \(G,\) or equivalently, from the representation theory of the group as developed in [64]. Subsection 4.4 is dedicated to the controllability of fractional models determined by powers of Hörmander sub-Laplacians on compact Lie groups. In Subsection 4.5 we deduce the controllability of the heat operator from the controllability of the wave operator via a Kalman-type analysis on each representation space. In Subsection 4.6 we present the Kalman condition for the control of the Schrödinger equation as an application of Theorem 3.5. Finally, in Section 5 we present some conclusions about the symbol criteria approach developed in this work.

## 2.
Fourier multipliers and abstract control theory ### Fourier multipliers on Hilbert spaces We now recall the notion of invariant operators introduced in [28] and which is based on the following theorem: **Theorem 2.1**.: _Let \(\mathcal{H}\) be a complex Hilbert space and let \(\mathcal{H}^{\infty}\subset\mathcal{H}\) be a dense linear subspace of \(\mathcal{H}\). Let \(\{d_{j}\}_{j\in\mathbb{N}_{0}}\subset\mathbb{N}\) and let \(\{e_{j}^{k}\}_{j\in\mathbb{N}_{0},1\leqslant k\leqslant d_{j}}\) be an orthonormal basis of \(\mathcal{H}\) such that \(e_{j}^{k}\in\mathcal{H}^{\infty}\) for all \(j\) and \(k\). Let \(H_{j}:=\operatorname{span}\{e_{j}^{k}\}_{k=1}^{d_{j}}\), and let \(P_{j}:\mathcal{H}\to H_{j}\) be the orthogonal projection. For \(f\in\mathcal{H}\), we denote \(\widehat{f}(j,k):=(f,e_{j}^{k})_{\mathcal{H}}\) and let \(\widehat{f}(j)\in\mathbb{C}^{d_{j}}\) denote the column of \(\widehat{f}(j,k)\), \(1\leqslant k\leqslant d_{j}.\) Let \(T:\mathcal{H}^{\infty}\to\mathcal{H}\) be a linear operator. Then the following conditions are equivalent:_ 1. _For each_ \(j\in\mathbb{N}_{0}\)_, we have_ \(T(H_{j})\subset H_{j}\)_._ 2. _For each_ \(\ell\in\mathbb{N}_{0}\) _there exists a matrix_ \(\sigma_{T}(\ell)\in\mathbb{C}^{d_{\ell}\times d_{\ell}}\) _such that for all_ \(e_{j}^{k}\)__ \[\widehat{Te_{j}^{k}}(\ell,m)=\sigma_{T}(\ell)_{mk}\delta_{j\ell}.\] 3. _For each_ \(\ell\in\mathbb{N}_{0}\) _there exists a matrix_ \(\sigma_{T}(\ell)\in\mathbb{C}^{d_{\ell}\times d_{\ell}}\) _such that_ \[\widehat{Tf}(\ell)=\sigma_{T}(\ell)\widehat{f}(\ell)\] _for all_ \(f\in\mathcal{H}^{\infty}.\)__ _The matrices \(\sigma_{T}(\ell)\) in_ (B) _and_ (C) _coincide._ _The equivalent properties_ (A)-(C) _follow from the condition_ 1. _For each_ \(j\in\mathbb{N}_{0}\)_, we have_ \(TP_{j}=P_{j}T\) _on_ \(\mathcal{H}^{\infty}\)_._ _If, in addition, \(T\) extends to a bounded operator \(T\in\mathscr{L}(\mathcal{H})\) then_ (D) _is equivalent to_ (A)-(C)_._ _Remark 2.2_.: Under the assumptions of Theorem 2.1, we have the direct sum decomposition \[\mathcal{H}=\bigoplus_{j=0}^{\infty}H_{j},\quad H_{j}=\operatorname{span}\{e _{j}^{k}\}_{k=1}^{d_{j}}, \tag{2.1}\] and we have \(d_{j}=\dim H_{j}.\) _Remark 2.3_.: In terms of the notation of Theorem 2.1, for any \(f\in\mathcal{H}\), the Fourier transform \[\widehat{f}:\mathbb{N}_{0}\to\bigcup_{\ell\in\mathbb{N}_{0}}\mathbb{C}^{d_{\ell }\times d_{\ell}},\ \widehat{f}(j)=((f,e^{1}_{j})_{\mathcal{H}},\cdots,(f,e^{k}_{j})_{\mathcal{H}}, \cdots,(f,e^{d_{j}}_{j})_{\mathcal{H}})^{T}, \tag{2.2}\] relative to the subspace decomposition \(\{H_{j}\}_{j\in\mathbb{N}_{0}}\) admits the Fourier inversion formula \[f=\sum_{\ell\in\mathbb{N}_{0}}(\widehat{f}(\ell),e_{\ell})_{\mathbb{C}^{d_{ \ell}}}, \tag{2.3}\] where \((x,y)\mapsto(x,y)_{\mathbb{C}^{d_{\ell}}}\) denotes the standard inner product on \(\mathbb{C}^{d_{\ell}}\), and each \(e_{\ell}\) is the column vector \[e_{\ell}=(e^{1}_{\ell},\cdots,e^{k}_{\ell},\cdots,e^{d_{\ell}}_{\ell})^{T}. \tag{2.4}\] Note that the Plancherel formula takes the form \[\forall f\in\mathcal{H},\ \|f\|_{\mathcal{H}}^{2}=\sum_{\ell\in\mathbb{N}_{0}} \|\widehat{f}(\ell)\|_{\mathbb{C}^{d_{\ell}}}^{2}. 
\tag{2.5}\] _Remark 2.4_.: The two applications that we will consider will be with \(\mathcal{H}=L^{2}(M)\) for a compact manifold \(M\) with \(H_{j}\) being the eigenspaces of an elliptic classical pseudo-differential operator \(E\), or with \(\mathcal{H}=L^{2}(G)\) for a compact Lie group \(G\) with \[H_{j}=\operatorname{span}\{\xi_{km}\}_{1\leqslant k,m\leqslant d_{\ell}}\] for a unitary irreducible representation \(\xi\in[\xi_{j}]\in\widehat{G}\). The difference is that in the first case we will have the eigenvalues of \(E\) corresponding to \(H_{j}\)'s are all distinct, while in the second case the eigenvalues of the Laplacian on \(G\) for which \(H_{j}\)'s are the eigenspaces, may coincide. **Definition 2.5**.: In view of properties (A) and (C), respectively, an operator \(T\) satisfying any of the equivalent properties (A)-(C) in Theorem 2.1, will be called an _invariant operator_, or a _Fourier multiplier relative to the decomposition \(\{H_{j}\}_{j\in\mathbb{N}_{0}}\)_ in (2.1). If the collection \(\{H_{j}\}_{j\in\mathbb{N}_{0}}\) is fixed once and for all, we can just say that \(T\) is _invariant_ or a _Fourier multiplier_. The family of matrices \(\sigma\) will be called the _matrix symbol of \(T\) relative to the partition \(\{H_{j}\}\) and to the basis \(\{e^{k}_{j}\}\)_. _Remark 2.6_.: By following the notations in Definition 2.5, in view of the Fourier inversion formula in (2.3), we have the matrix-valued quantisation formula \[Tf=\sum_{\ell\in\mathbb{N}_{0}}(\sigma_{T}(\ell)\widehat{f}(\ell),e_{\ell})_{ \mathbb{C}^{d_{\ell}}},\ f\in\mathcal{H}^{\infty}. \tag{2.6}\] As a consequence of Theorem 2.1, we have the following construction of global matrix-valued symbols on compact manifolds without boundary. **Theorem 2.7** (Fourier multipliers on compact manifolds).: _Let \(M\) be a closed manifold. Consider \(E\in\Psi^{\nu}_{\nu,+}(M)\) be a classical positive pseudo-differential operator of order \(\nu>0\) on \(M.\) Let_ * \(H_{j}=\operatorname{Ker}(E-\lambda_{j}I)\) _be the family of eigenspaces of_ \(E\)_,_ * \(P_{j}:L^{2}(M)\to H_{j}\) _be the corresponding orthogonal projections,_ * \(\{d_{j}\}_{j\in\mathbb{N}_{0}}\subset\mathbb{N}\) _be the sequence formed by the dimensions of each_ \(H_{j},\)__ _ * _and assume that_ \(\mathcal{B}=\{e_{j}^{k}\}_{j\in\mathbb{N}_{0},1\leqslant k\leqslant d_{j}}\) _is an orthonormal basis of_ \(L^{2}(M),\) _where each_ \(H_{j}\) _is spanned by the basis_ \(\mathrm{span}\{e_{j}^{k}\}_{k=1}^{d_{j}}.\)__ _For \(f\in L^{2}(M)\), we denote by \(\widehat{f}(j,k):=(f,e_{j}^{k})_{\mathcal{H}}\) the Fourier coefficients of \(f\) relative to the basis \(\mathcal{B}.\) Let_ \[\widehat{f}(j)\in\mathbb{C}^{d_{j}}\] _denote the column of \(\widehat{f}(j,k)\), \(1\leqslant k\leqslant d_{j}.\) Let \(T:C^{\infty}(M)\to L^{2}(M)\) be a linear operator. Then the following conditions are equivalent:_ * _For each_ \(j\in\mathbb{N}_{0}\)_, we have_ \(T(H_{j})\subset H_{j}.\)__ * _For each_ \(\ell\in\mathbb{N}_{0}\) _there exists a matrix_ \(\sigma_{T}(\ell)\in\mathbb{C}^{d_{\ell}\times d_{\ell}}\) _such that for all_ \(e_{j}^{k}\)__ \[\widehat{Te_{j}^{k}}(\ell,m)=\sigma_{T}(\ell)_{mk}\delta_{j\ell}.\] * _For each_ \(\ell\in\mathbb{N}_{0}\) _there exists a matrix_ \(\sigma_{T}(\ell)\in\mathbb{C}^{d_{\ell}\times d_{\ell}}\) _such that_ \[\widehat{Tf}(\ell)=\sigma_{T}(\ell)\widehat{f}(\ell)\] _for all_ \(f\in C^{\infty}(M).\)__ _The matrices \(\sigma_{T}(\ell)\) in_ (B) _and_ (C) _coincide. 
The equivalent properties_ (A)-(C) _follow from the condition_ * _For each_ \(j\in\mathbb{N}_{0}\)_, we have_ \(TP_{j}=P_{j}T\) _on_ \(\mathcal{H}^{\infty}\)_. If, in addition,_ \(T\) _extends to a bounded operator_ \(T\in\mathscr{L}(L^{2}(M))\) _then_ (D) _is equivalent to_ (A)-(C)_._ _Remark 2.8_.: Let \(A,B:\mathcal{H}^{\infty}\to\mathcal{H}\) be Fourier multipliers. Assume that \(A\) is the generator of a \(C_{0}\)-semigroup \(S(t),\) that is \[\forall v\in\mathcal{H}^{\infty},\,\,Av=\lim_{t\to 0}\frac{1}{t}\left(S(t)v-v \right), \tag{2.7}\] where the limit is taken with respect to the norm on \(\mathcal{H}\). In this case one has that \[\forall\ell\in\mathbb{N},\,\,\,\sigma_{S(t)}(\ell)=e^{t\sigma_{A}(\ell)}\, \,\text{and}\,\,\sigma_{B*}(\ell)=\sigma_{B}(\ell)^{*}. \tag{2.8}\] ### Abstract control theory In this section we will present some results about the controllability of a general control system of the form \[\frac{du}{dt}=Au+Bv(t),\,\,t\in[0,T], \tag{2.9}\] where \(A\) and \(B\) are continuous linear \(\mathcal{H}\)-valued operators defined on dense subspaces of Hilbert spaces \(\mathcal{H}\) and \(\mathcal{V},\) respectively, and \(A\) is the generator of a strongly continuous semigroup \(S(t),\,\,t>0.\) For this we will follow J. M. Coron [25, Chapter IV]. Let us begin by specifying the definition of controllability. **Definition 2.9**.: The system (2.9) is controllable in time \(T>0\) if, for every \(u_{0},u_{T}\in D(A),\) there exists an input (or control) map \(v:[0,T]\to D(B)\) such that the solution \(u\) of the Cauchy problem \[\left\{\begin{aligned} &\frac{du}{dt}=Au+Bv,\\ &\\ & u(0)=u_{0},\end{aligned}\right.\] reaches \(u_{T}\) at time \(T\), that is, \(u(T)=u_{T}.\) There is an extensive bibliography dedicated to the study of the controllability of the system (2.9). We refer, for instance, to [63] and [71]. A large number of analytic and numerical methods have been developed and used in different contexts, for example, R. Kalman proposed and proved a criterion to determine whether a finite-dimensional linear system is controllable. This criterion is presented in the following theorem. **Theorem 2.10** (Kalman's criterion [49]).: _If \(\mathcal{H}\) and \(\mathcal{V}\) have finite dimensions \(n\) and \(m\) respectively, then the system (2.9) is controllable in time \(T>0\) if and only if_ \[\operatorname{rank}\big{[}B,AB,\cdots,A^{n-1}B\big{]}=n. \tag{2.10}\] Equality (2.10) is called the _rank Kalman condition_ and we can observe that it does not depend on the time \(T\) so, in particular, the Kalman's criterion implies that in finite dimension, the system is controllable in any time if it is controllable in some time \(T>0\). Another useful tool to determine whether or not a system is controllable in a time \(T>0\) is given by the next theorem. For a proof of this theorem, see, for instance [25, p. 57]. **Theorem 2.11** (Observality criterion).: _The system (2.9) is controllable at a time \(T>0\) if and only if there exists a constant \(c_{T}>0\) such that_ \[\int\limits_{0}^{T}||B^{*}S(t)^{*}z||_{\mathcal{V}}^{2}dt\geq c_{T}^{2}||z||_ {\mathcal{H}}^{2},\ \forall z\in D(A^{*}), \tag{2.11}\] _where \(D(A^{*})\) denotes the domain of the adjoint operator \(A^{*}\) of \(A.\)_ Inequality (2.11) is usually called the _observability inequality_ for the system (2.9) and we will use it later in the proof of our main theorem. _Remark 2.12_.: Let \(c(T)>0\) be the supremum of the constants \(c_{T}>0\) satisfying the observability inequality (2.11). 
By following the standard terminology of the control theory, the constant \[\boxed{\mathscr{C}_{T}:=1/c(T)}\] is called the _controllability cost_ of the controllable system (2.9). ### Left-invariant operators on compact Lie groups In order to record the equivalence between Fourier multipliers and left-invariant operators let us start with some basics about the Fourier analysis of a compact Lie group. Let \(dx\) be the Haar measure on a compact Lie group \(G.\) The Hilbert space \(L^{2}(G)\) will be endowed with the inner product \[(f,g)=\int\limits_{G}f(x)\overline{g(x)}dx.\] According to the Peter-Weyl theorem the spectral decomposition of \(L^{2}(G)\) can be done in terms of the entries of unitary representations on a compact Lie group \(G\). To present such a theorem we will give some preliminaries. **Definition 2.13** (Unitary representation of a compact Lie group).: A continuous and unitary representation of \(G\) on \(\mathbb{C}^{\ell}\) is any continuous mapping \(\xi\in\operatorname{Hom}(G,\operatorname{U}(\ell)),\) where \(\operatorname{U}(\ell)\) is the Lie group of unitary matrices of order \(\ell\times\ell.\) The integer number \(\ell=\dim_{\xi}\) is called the dimension of the representation \(\xi\) since it is the dimension of the representation space \(\mathbb{C}^{\ell}.\) _Remark 2.14_ (Irreducible representations).: A subspace \(W\subset\mathbb{C}^{d_{\xi}}\) is called \(\xi\)-invariant if for any \(x\in G,\)\(\xi(x)(W)\subset W,\) where \(\xi(x)(W):=\{\xi(x)v:v\in W\}.\) The representation \(\xi\) is irreducible if its only invariant subspaces are \(W=\emptyset\) and \(W=\mathbb{C}^{d_{\xi}},\) the trivial ones. On the other hand, any unitary representation \(\xi\) is a direct sum of unitary irreducible representations. We denote it by \(\xi=\xi_{1}\oplus\cdots\oplus\xi_{j},\) with \(\xi_{i}\) being irreducible representations on factors \(\mathbb{C}^{d_{\xi_{i}}}\) that decompose the representation space \[\mathbb{C}^{d_{\xi}}=\mathbb{C}^{d_{\xi_{1}}}\oplus\cdots\oplus\mathbb{C}^{d_ {\xi_{j}}}.\] **Definition 2.15** (Equivalent representations).: Two unitary representations \[\xi\in\operatorname{Hom}(G,\operatorname{U}(d_{\xi}))\text{ and }\eta\in \operatorname{Hom}(G,\operatorname{U}(d_{\eta}))\] are equivalent if there exists a linear invertible map \(S:\mathbb{C}^{d_{\xi}}\to\mathbb{C}^{d_{\eta}}\) such that for any \(x\in G,\)\(S\xi(x)=\eta(x)S.\) The mapping \(S\) is called an intertwining operator between \(\xi\) and \(\eta.\) The set of all the intertwining operators between \(\xi\) and \(\eta\) is denoted by \(\operatorname{Hom}(\xi,\eta).\) _Remark 2.16_ (Schur Lemma).: In view of the 1905's Schur lemma, if \(\xi\in\operatorname{Hom}(G,\operatorname{U}(d_{\xi}))\) is irreducible, then \(\operatorname{Hom}(\xi,\xi)=\mathbb{C}I_{d_{\xi}}\) is formed by scalar multiples of the identity matrix \(I_{d_{\xi}}\) of order \(d_{\xi}.\) **Definition 2.17** (The unitary dual).: The relation \(\sim\) on the set of unitary representations \(\operatorname{Rep}(G)\) defined by: \(\xi\sim\eta\)_if and only if \(\xi\) and \(\eta\) are equivalent representations, is an equivalence relation. The quotient \[\widehat{G}:=\operatorname{Rep}(G)/\sim\] is called the unitary dual of \(G.\) The unitary dual encodes all the Fourier analysis on the group. The Fourier transform is defined as follows. 
**Definition 2.18** (Group Fourier transform).: If \(\xi\in\operatorname{Rep}(G),\) the Fourier transform \(\mathscr{F}_{G}\) associates to any \(f\in C^{\infty}(G)\) a matrix-valued function \(\mathscr{F}_{G}f\) defined on \(\operatorname{Rep}(G)\) as follows \[(\mathscr{F}_{G}f)(\xi)\equiv\widehat{f}(\xi)=\int\limits_{G}f(x)\xi(x)^{*}dx,\ \xi\in\operatorname{Rep}(G).\] _Remark 2.19_ (The Fourier inversion formula on a compact Lie group).: The discrete Schwartz space \(\mathscr{S}(\widehat{G}):=\mathscr{F}_{G}(C^{\infty}(G))\) is the image of the Fourier transform on the class of smooth functions. This operator admits a unitary extension from \(L^{2}(G)\) into \(\ell^{2}(\widehat{G})\), with \[\ell^{2}(\widehat{G})=\left\{\phi:\mathbb{V}[\xi]\in\widehat{G},\,\phi(\xi)\in \mathbb{C}^{d_{\xi}\times d_{\xi}}\text{ and }\|\phi\|_{\ell^{2}(\widehat{G})}:=\left(\sum_{[\xi]\in \widehat{G}}d_{\xi}\|\phi(\xi)\|_{\mathrm{HS}}^{2}\right)^{\frac{1}{2}}<\infty \right\}. \tag{2.12}\] The norm \(\|\phi(\xi)\|_{\mathrm{HS}}\) is the standard Hilbert-Schmidt norm of matrices. The Fourier inversion formula takes the form \[f(x)=\sum_{[\xi]\in\widehat{G}}d_{\xi}\mathrm{Tr}[\xi(x)\widehat{f}(\xi)],\,f \in L^{2}(G), \tag{2.13}\] where the summation is understood in the sense that from any equivalence class \([\xi]\) we choose randomly a unitary representation. _Remark 2.20_.: The Plancherel theorem for the group Fourier transform takes the form \[\forall f\in L^{2}(G),\,\,\,\|f\|_{L^{2}(G)}=\left(\sum_{[\xi]\in\widehat{G}} d_{\xi}\|\widehat{f}(\xi)\|_{\mathrm{HS}}^{2}\right)^{\frac{1}{2}}. \tag{2.14}\] Let \(A:C^{\infty}(G)\to C^{\infty}(G)\) be a continuous linear operator with respect to the standard Frechet structure on \(C^{\infty}(G).\) There is a way of associating to the operator \(A\) a matrix-valued function \(\sigma_{A}\) defined on the non-commutative phase space \(G\times\widehat{G}\) to rewrite the operator \(A\) in terms of the Fourier inversion formula and in terms of the Fourier transform. Such an expression is called the dequantisation formula. To introduce it we require the following definition. 
**Definition 2.21** (Right convolution kernel of an operator).: The Schwartz kernel theorem associates to \(A\) a kernel \(K_{A}\in\mathscr{D}^{\prime}(G\times G)\) such that \[Af(x)=\int\limits_{G}K_{A}(x,y)f(y)dy,\,\,f\in C^{\infty}(G).\] The distribution defined via \(R_{A}(x,y):=K_{A}(x,xy^{-1})\) that provides the convolution identity \[Af(x)=\int\limits_{G}R_{A}(x,y^{-1}x)f(y)dy,\,\,f\in C^{\infty}(G),\] is called the right-convolution kernel of \(A.\) _Remark 2.22_ (The dequantisation formula).: Now, we will associate a global symbol \(\sigma_{A}:G\times\mathrm{Rep}(G)\rightarrow\cup_{\ell\in\mathbb{N}}\mathbb{C }^{\ell\times\ell}\) to \(A.\) Indeed, for a given \(x_{0}\in G,\) we can consider the continuous linear operator \(A_{x_{0}}:C^{\infty}(G)\to C^{\infty}(G)\) defined by \[A_{x_{0}}f(x)=\int\limits_{G}R_{A}(x_{0},y^{-1}x)f(y)dy=(f*R_{A}(x_{0},\cdot)) (x),\] and after taking the Fourier transform we get \[\widehat{A_{x_{0}}f}(\xi)=\widehat{R_{A}(x_{0},\cdot)}(\xi)\widehat{f}(\xi).\] Then, the Fourier inversion formula gives the following representation of the operator \(A_{x_{0}}\) in terms of the Fourier transform, \[A_{x_{0}}f(x)=\sum_{[\xi]\in\widehat{G}}d_{\xi}\mathrm{Tr}[\xi(x)\widehat{R_{A}( x_{0},\cdot)}(\xi)\widehat{f}(\xi)],\,f\in C^{\infty}(G), \tag{2.15}\] and, therefore, \[Af(x)=A_{x}f(x)=\sum_{[\xi]\in\widehat{G}}d_{\xi}\mathrm{Tr}[\xi(x)\widehat{R_ {A}(x,\cdot)}(\xi)\widehat{f}(\xi)],\,f\in C^{\infty}(G). \tag{2.16}\] We define the symbol of \(A\) at \((x,\xi)\in G\times\mathrm{Rep}(G)\) as \[\sigma_{A}(x,\xi):=\widehat{R_{A}(x,\cdot)}(\xi), \tag{2.17}\] so that \[Af(x)=\sum_{[\xi]\in\widehat{G}}d_{\xi}\mathrm{Tr}[\xi(x)\sigma_{A}(x,\xi) \widehat{f}(\xi)],\,f\in C^{\infty}(G). \tag{2.18}\] The formula (2.18) is independent of the choice of the representation \(\xi\) from any equivalent class \([\xi]\in\widehat{G}.\) This is a consequence of the Fourier inversion formula. In the following quantisation theorem, we observe that the distribution \(\sigma_{A}\) in (2.18) is unique and can be written in terms of the operator \(A,\) see Theorems 10.4.4 and 10.4.6 of [64, Pages 552-553]. **Theorem 2.23**.: _Let \(A:C^{\infty}(G)\to C^{\infty}(G)\) be a continuous linear operator. The following statements are equivalent._ * _The distribution_ \(\sigma_{A}:G\times\widehat{G}\to\cup_{\ell\in\mathbb{N}}\mathbb{C}^{\ell\times\ell}\) _satisfies the quantisation formula_ \[\forall f\in C^{\infty}(G),\,\forall x\in G,\,\,Af(x)=\sum_{[\xi]\in\widehat{G }}d_{\xi}\mathrm{Tr}[\xi(x)\sigma_{A}(x,\xi)\widehat{f}(\xi)].\] (2.19) * \(\forall(x,\xi)\in G\times\mathrm{Rep}(G),\,\sigma_{A}(x,\xi)=\widehat{R_{A}( x,\cdot)}(\xi).\)__ * \(\forall(x,\xi),\,\sigma_{A}(x,\xi)=\xi(x)^{*}A\xi(x),\,\text{where }A\xi(x):=(A\xi_{ij}(x))_{i,j=1}^{d_{\xi}}.\)__ **Example 2.24** (Spectrum of the Laplacian).: Let \[\mathbb{X}=\{X_{1},\cdots,X_{n}\}\] be an orthonormal basis of the Lie algebra \(\mathfrak{g}.\) The positive Laplacian on \(G\) is the second order differential operator \[\mathcal{L}_{G}=-\sum_{j=1}^{n}X_{j}^{2}. \tag{2.20}\] The operator \(\mathcal{L}_{G}\) is independent of the choice of the orthonormal basis \(\mathbb{X}\) of \(\mathfrak{g}.\) The \(L^{2}\)-spectrum of \(\mathcal{L}_{G}\) is a discrete set that can be enumerated in terms of the unitary dual \(\widehat{G}\) \[\mathrm{Spect}(\mathcal{L}_{G})=\{\lambda_{[\xi]}:[\xi]\in\widehat{G}\}. 
\tag{2.21}\] Of particular interest for our further analysis will be the Japanese bracket function \[\langle t\rangle:=(1+t)^{\frac{1}{2}},\,t\geqslant-1. \tag{2.22}\] In particular the symbol of the operator \(\langle\mathcal{L}_{G}\rangle=(1+\mathcal{L}_{G})^{\frac{1}{2}}\) is given by \[\sigma_{\langle\mathcal{L}_{G}\rangle}([\xi]):=\langle\xi\rangle I_{d_{\xi}}, \ \ \langle\xi\rangle:=\langle\lambda_{[\xi]}\rangle. \tag{2.23}\] Consider the action of the group \(G\) on \(C^{\infty}(G)\) given by \(\rho:(x,f)\longmapsto f\circ L_{x},\) where \(L_{x}(y):=xy.\) A continuous operator \(A:C^{\infty}(G)\to C^{\infty}(G)\) is called _left-invariant_ if \(A\) commutes with \(\rho(x,\cdot)\) for all \(x\in G,\) i.e., if it satisfies the following property \[A(f\circ L_{x})=(Af)\circ L_{x},\ f\in C^{\infty}(G),\ x\in G.\] **Proposition 2.25** ([64]).: _The following statements are equivalent:_ * \(A\) _is left-invariant._ * \(R_{A}(x,y)=R_{A}(zx,y),\ \forall x,y,z\in G.\)__ * \(\sigma_{A}(x_{1},\xi)=\sigma_{A}(x_{2},\xi),\) _for all_ \(x_{1},x_{2}\in G\) _and_ \(\xi\in\operatorname{Rep}(G).\)__ * \(A_{x_{0}}=A,\ \forall x_{0}\in G.\)__ In particular, the proposition above says that for a left-invariant operator \(A,\) the symbol \(\sigma_{A}(x,\xi)\) does not depend on \(x,\) so in this case we can define \(\sigma_{A}(\xi):=\sigma_{A}(x,\xi)\) for any \(x\in G.\) _Remark 2.26_.: Let \(A,B:C^{\infty}(G)\to C^{\infty}(G)\) be continuous linear operators. Assume that \(A\) is the generator of a \(C_{0}\)-semigroup \(S(t),\) that is \[\forall f\in C^{\infty}(G),\ Af=\lim_{t\to 0}\frac{1}{t}\left(S(t)f-f\right), \tag{2.24}\] where the limit is taken with respect to the \(L^{2}\)-norm on \(G\). If \(A\) is left-invariant, note that \[\forall\xi\in\widehat{G},\ \ \sigma_{S(t)}(\xi)=e^{t\sigma_{A}(\xi)}\text{ and } \sigma_{B^{*}}(\xi)=\sigma_{B}(\xi)^{*}. \tag{2.25}\] _Remark 2.27_ (Fourier multipliers on compact manifolds vs invariant operators).: If \(A:C^{\infty}(G)\to L^{2}(G)\) is an invariant operator (with respect to the Laplacian \(\mathcal{L}_{G}\)) then we have two notions of global symbols for \(A.\) One is defined in terms of the representation theory of the group \(G\) and we will denote this symbol by \((\sigma_{A}(\xi))_{[\xi]\in\widehat{G}},\) and the other one is that defined when we consider the compact Lie group as a manifold, and in this case the symbol will be denoted by \((\sigma_{A}(l))_{l\in\mathbb{N}_{0}}.\) The relation of this two symbols has been established in [28, Page 25]. Now, we describe this relation. In the setting of compact Lie groups the unitary dual being discrete, we can enumerate the unitary dual as \([\xi_{j}],\) for \(j\in\mathbb{N}_{0}.\) In this way we fix the orthonormal basis \[\{e_{jk}\}_{k=1}^{d_{j}}=\{d_{\xi_{j}}^{\frac{1}{2}}(\xi_{j})_{il}\}_{i,l=1}^{ d_{\xi_{j}}} \tag{2.26}\] where \(d_{j}=d_{\xi_{j}}^{2}.\) Then, we have the subspaces \(H_{j}=\operatorname{span}\{(\xi_{j})_{i,l}:i,l=1,\cdots,d_{\xi_{j}}\}.\) With the notation above we have \[\sigma_{A}(l)=\begin{bmatrix}\sigma_{A}(\xi_{l})&0_{d_{\xi_{l}}\times d_{\xi _{l}}}&0_{d_{\xi_{l}}\times d_{\xi_{l}}}&\cdots&0_{d_{\xi_{l}}\times d_{\xi _{l}}}\\ 0_{d_{\xi_{l}}\times d_{\xi_{l}}}&\sigma_{A}(\xi_{l})&0_{d_{\xi_{l}}\times d_{ \xi_{l}}}&\cdots&0_{d_{\xi_{l}}\times d_{\xi_{l}}}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0_{d_{\xi_{l}}\times d_{\xi_{l}}}&0_{d_{\xi_{l}}\times d_{\xi_{l}}}&0_{d_{\xi_ {l}}\times d_{\xi_{l}}}&\cdots&\sigma_{A}(\xi_{l})\\ \end{bmatrix}_{d_{l}\times d_{l}}.\] ## 3. 
Control theory on Hilbert spaces: symbol criteria

In this section we present a controllability criterion for the Cauchy problem associated to Fourier multipliers on Hilbert spaces. As was shown in the previous section, these are the operators leaving invariant a fixed decomposition of the Hilbert space into finite-dimensional subspaces. Such a result is presented below as Theorem 3.5 and will be formulated in terms of the global matrix-valued symbols of the operators. For our further analysis we require the following definition.

**Definition 3.1** (Image of the Cauchy problem under the Fourier transform).: Let \(\mathcal{H}\) be a complex Hilbert space and let \(\mathcal{H}^{\infty}\subset\mathcal{H}\) be a dense linear subspace of \(\mathcal{H}\). Let \(\mathcal{H}=\bigoplus_{j}H_{j}\) be a decomposition of \(\mathcal{H}\) into orthogonal subspaces \(H_{j}\) of dimension \(d_{j}\in\mathbb{N}\). Let \(A,B:\mathcal{H}^{\infty}\to\mathcal{H}\) be Fourier multipliers relative to the decomposition \(\{H_{j}\}_{j\in\mathbb{N}_{0}}\). Consider the Cauchy problem \[\text{(CP):}\ \begin{cases}\dfrac{du}{dt}=Au+Bv,\\ u(0)=u_{0}\in\mathcal{H}^{\infty}.\end{cases} \tag{3.1}\] We define the image of (CP) under the Fourier transform relative to the decomposition \(\{H_{j}\}_{j\in\mathbb{N}_{0}}\) to be the infinite family of finite-dimensional dynamical systems \[((\text{CP}),\ell):\begin{cases}\dfrac{d\widehat{u}(\ell)}{dt}=\sigma_{A}(\ell)\widehat{u}(\ell)+\sigma_{B}(\ell)\widehat{v}(\ell),\\ \widehat{u(0)}(\ell)=\widehat{u}_{0}(\ell)\in\mathbb{C}^{d_{\ell}},\end{cases}\quad\ell\in\mathbb{N}_{0}. \tag{3.2}\]

The previous definition is motivated by the reduction of the controllability of the system (3.1) to the controllability of the family of finite-dimensional systems (3.2), which is the content of the following lemma.

**Lemma 3.2**.: _The following statements are equivalent._

1. _For all_ \(\ell\in\mathbb{N}_{0},\) _the Cauchy problem_ \(((\text{CP}),\ell)\) _in (3.2) is a controllable dynamical system at a time_ \(T>0\)_._
2. \(\forall\ell\in\mathbb{N}_{0}\)_, the Kalman condition_ \[\operatorname{rank}\big{[}\sigma_{B}(\ell),\ \sigma_{A}(\ell)\sigma_{B}(\ell),\ \cdots,\ \sigma_{A}(\ell)^{d_{\ell}-1}\sigma_{B}(\ell)\big{]}=d_{\ell} \tag{3.3}\] _is satisfied._
3. \(\forall\ell\in\mathbb{N}_{0},\ \exists c=c(\ell,T)>0\) _such that_ \[\int\limits_{0}^{T}||\sigma_{B}(\ell)^{*}\exp{(t\sigma_{A}(\ell)^{*})}z||^{2}_{\mathbb{C}^{d_{\ell}}}dt\geq c(\ell,T)^{2}||z||^{2}_{\mathbb{C}^{d_{\ell}}},\ \forall z\in\mathbb{C}^{d_{\ell}}. \tag{3.4}\]

Proof.: Note that the equivalence \((1)\Longleftrightarrow(2)\) follows from Kalman's criterion (see Theorem 2.10). On the other hand, the equivalence \((1)\Longleftrightarrow(3)\) is nothing else than the observability criterion in Theorem 2.11. The equivalence \((2)\Longleftrightarrow(3)\) is then clear. The proof of Lemma 3.2 is complete.

_Remark 3.3_.: Let \(c_{\ell,T}\) be the supremum of the constants \(c=c(\ell,T)>0\) satisfying the observability inequality (3.4). According to the usual nomenclature of control theory, the constant \[\boxed{\mathscr{C}_{\ell,T}:=1/c_{\ell,T}}\] is called the controllability cost of the Cauchy problem (3.2).

**Definition 3.4**.: We will say that the image of the Cauchy problem (3.1) under the Fourier transform associated to the decomposition \(\{H_{j}\}_{j\in\mathbb{N}},\) has a _finite global controllability cost_ if \[\mathscr{C}_{T}:=\sup_{\ell\in\mathbb{N}}\mathscr{C}_{\ell,T}<\infty.
\tag{3.5}\] Now, we present the following criterion for the controllability of the Cauchy problem for Fourier multipliers on Hilbert spaces. **Theorem 3.5**.: _Let \(\mathcal{H}\) be a complex Hilbert space and let \(\mathcal{H}^{\infty}\subset\mathcal{H}\) be a dense linear subspace of \(\mathcal{H}\). Let \(\mathcal{H}=\bigoplus_{j}H_{j}\) be a decompositon of \(\mathcal{H}\) in orthogonal subspaces \(H_{j}\) of dimension \(d_{j}\in\mathbb{N}.\) Let \(A,B:\mathcal{H}^{\infty}\to\mathcal{H}\) be Fourier multipliers relative to the decomposition \(\{H_{j}\}_{j\in\mathbb{N}_{0}}.\)_ (1) _If the Cauchy problem_ \[\left\{\begin{aligned} &\frac{du}{dt}=Au+Bv,\\ &\\ & u(0)=u_{0}\in\mathcal{H}^{\infty},\end{aligned}\right. \tag{3.6}\] _is controllable, then for any \(\ell\in\mathbb{N}_{0},\) the global symbols \(\sigma_{A}(\ell)\) and \(\sigma_{B}(\ell)\) of \(A\) and \(B,\) respectively, satisfy the Kalman condition:_ \[\operatorname{rank}\big{[}\sigma_{B}(\ell),\ \sigma_{A}(\ell)\sigma_{B}( \ell),\ \cdots,\ \sigma_{A}(\ell)^{d_{\ell}-1}\sigma_{B}(\ell)\big{]}=d_{\ell}. \tag{3.7}\] _Additionally, if \(A\) generates a strongly continuous semigroup on \(\mathcal{H},\) the image of the Cauchy problem (3.6) under the Fourier transform relative to the decomposition \((H_{j})_{j\in\mathbb{N}},\) has a finite global controllability cost at time \(T>0,\) that is_ \[\mathscr{C}_{T}:=\sup_{\ell\in\mathbb{N}_{0}}\mathscr{C}_{\ell,T}<\infty.\] _Moreover,_ \[\mathscr{C}_{T}\leq\tilde{\mathscr{C}}_{T},\] _where \(\tilde{\mathscr{C}}_{T}\) is the controllability cost of (3.6)._ 1. _Conversely, assume that_ \(A\) _is the generator of a strongly continuous semigroup on_ \(\mathcal{H},\) _and that the Kalman condition (_3.7_) is satisfied for each_ \(\ell\in\mathbb{N}_{0}\)_. Assume that the image of the Cauchy problem (_3.6_) under the Fourier transform relative to the decomposition_ \((H_{j})_{j\in\mathbb{N}},\) _has a finite global controllability cost in time_ \(T>0,\) _that is,_ \[\mathscr{C}_{T}:=\sup_{\ell\in\mathbb{N}_{0}}\mathscr{C}_{\ell,T}<\infty.\] _Then, the Cauchy problem (_3.6_) is controllable at time_ \(T>0,\) _and its controllability costs_ \(\tilde{\mathscr{C}}_{T}\) _satisfies the inequality_ \[\mathscr{C}_{T}\geq\tilde{\mathscr{C}}_{T}.\] (3.8) Proof.: For the proof of (1) let us analyse the image of (3.6) under the Fourier transform associated to the decomposition \(H_{j}\), \(j\in\mathbb{N}\), in order to deduce (3.7). For the proof of (2), by following the standard strategy of the control theory, we will reduce the controllability of the system (3.6) to the validity of the observability inequality (2.11) in Theorem 2.11. * Assume that the Cauchy problem (3.6) is controllable. By fixing \(\ell\in\mathbb{N}_{0}\), and taking the Fourier transform relative to the decomposition \(\{H_{j}\}_{j\in\mathbb{N}_{0}}\), we get \[\frac{d\widehat{u}(\ell)}{dt}+\widehat{Au}(\ell)=\widehat{Bv}(\ell)\] and, since \(A,B\) are Fourier multipliers we have the following identity in terms of the symbols \(\sigma_{A}\) and \(\sigma_{B}\) of \(A\) and \(B\), respectively, \[\frac{d\widehat{u}(\ell)}{dt}+\sigma_{A}(\ell)\widehat{u}(\ell)=\sigma_{B}( \ell)\widehat{v}(\ell).\] (3.9) This is a dynamical system in the set of square matrices of order \(d_{\ell}\). 
In order to prove the controllability of (3.9) let us take \[\zeta_{0},\zeta_{T}\in\mathbb{C}^{d_{\ell}}.\] For any \(\ell^{\prime}\in\mathbb{N}_{0}\) define \[u_{0}=(e_{\ell^{\prime}},\zeta_{0})\delta_{\ell,\ell^{\prime}}\text{ and }u_{T}:=(e_{\ell},\zeta_{T})\delta_{\ell,\ell^{\prime}}.\] Observe that the Fourier coefficients of \(u_{0}\) and of \(u_{T}\) satisfy that \[\forall\ell\neq\ell^{\prime},\;\;\widehat{u}_{0}(\ell^{\prime})=0_{\mathbb{C} ^{d_{\ell^{\prime}}}}=\widehat{u}_{T}(\ell^{\prime}).\] Since each \(H_{j}\subset\mathcal{H}^{\infty}\), the vectors \(u_{0},u_{T}\) belong to \(\mathcal{H}^{\infty}\) and they satisfy that \(\widehat{u_{0}}(\ell)=\zeta_{0}\) and \(\widehat{u_{T}}(\ell)=\zeta_{T}.\) Since (3.6) is controllable, there exists an input function \(v\) such that the solution \(u\) of (3.6) satisfies \(u(T)=u_{T}\), so \(\widehat{u}(\ell)\) is a solution of (3.9) with \(\widehat{u}(\ell)(0)=\zeta_{0}\) and \(\widehat{u}(\ell)(T)=\zeta_{T}\), i.e., (3.9) is controllable. By Kalman's criterion (see Theorem 2.10) we conclude that \[\operatorname{rank}\big{[}\sigma_{B}(\ell),\;\sigma_{A}(\ell)\sigma_{B}(\ell ),\;\cdots,\;\sigma_{A}(\ell)^{d_{\ell}-1}\sigma_{B}(\ell)\big{]}=d_{\ell}.\] (3.10) To end the proof of (1) we have to prove the estimate of the _globally finite controllability cost_ of the system (3.9). In view of Theorem 2.11, we have the observability inequality \[\int\limits_{0}^{T}||B^{*}S(t)^{*}f||_{\mathcal{H}}^{2}dt\geq\left(\frac{1}{ \mathscr{\bar{C}}_{T}}\right)^{2}||f||_{\mathcal{H}}^{2},\;\forall f\in \mathcal{H}^{\infty},\] (3.11) where \(S(t)\) is the \(C_{0}\)-semigroup generated by \(A\). Via Plancherel theorem it is equivalent to the inequality \[\int\limits_{0}^{T}\sum_{\ell\in\mathbb{N}_{0}}||\sigma_{B}(\ell)^{*}\exp{(t \sigma_{A}(\ell)^{*})}\widehat{f}(\ell)||_{\mathbb{C}^{d_{\ell}}}^{2}dt\geq \left(\frac{1}{\mathscr{\bar{C}}_{T}}\right)^{2}\sum_{\ell\in\mathbb{N}_{0}} ||\widehat{f}(\ell)||_{\mathbb{C}^{d_{\ell}}}^{2}\;\forall f\in\mathcal{H}^{ \infty}.\] (3.12) Now, if \(\ell_{0}\) is fixed, and \(z\in\mathbb{C}^{d_{\ell_{0}}}\backslash\{0\}\) is an arbitrary coordinate vector, let us consider the vector \(v_{z}\in\mathcal{H}^{\infty}\) determined by the following Fourier coefficients \[\widehat{v}_{z}(\ell)=z\delta_{\ell,\ell_{0}},\;\ell\in\mathbb{N}_{0}.\] (3.13) Plugging (3.13) into (3.12) we have that \[\int\limits_{0}^{T}||\sigma_{B}(\ell_{0})^{*}\exp{(t\sigma_{A}(\ell_{0})^{*})}z|| _{\mathbb{C}^{d_{\ell_{0}}}}^{2}dt\geq\left(\frac{1}{\mathscr{E}_{T}}\right)^{2 }||z||_{\mathbb{C}^{d_{\ell}}}^{2}.\] This inequality is the observability inequality of the system (3.9) when \(\ell=\ell_{0}\). Note that if \(\mathscr{C}_{\ell,T}\) is the controllability costs of (3.9) when \(\ell=\ell_{0}\), then we have the inequality \[\left(\frac{1}{\mathscr{C}_{\ell_{0},T}}\right)^{2}\geq\left(\frac{1}{ \mathscr{E}_{T}}\right)^{2}\] from which we deduce that \[\mathscr{C}_{T}=\sup_{\ell_{0}}\mathscr{C}_{T,\ell_{0}}\leq\mathscr{\hat{C}}_ {T},\] as desired. The proof of (1) is complete. * Now, let us prove (2). So, conversely, suppose that \[\forall\ell\in\mathbb{N}_{0},\ \mathrm{rank}\left[\sigma_{B}(\ell),\ \sigma_{A}(\ell)\sigma_{B}(\ell),\ \cdots,\ \sigma_{A}(\ell)^{d_{\ell}-1}\sigma_{B}(\ell)\right]=d_{\ell}.\] We want to prove the controllability of the Cauchy problem (3.6) in any time \(T>0\). 
According to Theorem 2.11, it is sufficient to show that there exists \(c_{T}>0\) such that \[\int\limits_{0}^{T}||B^{*}S(t)^{*}f||_{\mathcal{H}}^{2}dt\geq c_{T}^{2}||f||_{ \mathcal{H}}^{2},\ \forall f\in\mathcal{H}^{\infty},\] (3.14) where \(S(t)\) is the \(C_{0}\)-semigroup generated by \(A\). By the Kalman criterion, we know that the system \[\frac{d\gamma_{\ell}}{dt}+\sigma_{A}(\ell)\gamma_{\ell}=\sigma_{B}(\ell)v_{\ell}\] (3.15) is controllable for every \(\ell\in\mathbb{N}_{0}.\) In consequence, the inequality \[\int\limits_{0}^{T}||\sigma_{B}(\ell)^{*}\exp{(t\sigma_{A}(\ell)^{*})}z||_{ \mathbb{C}^{d_{\ell}}}^{2}dt\geq c_{\ell,T}^{2}||z||_{\mathbb{C}^{d_{\ell}}}^{ 2},\ \forall z\in\mathbb{C}^{d_{\ell}},\] holds and let us denote by \(c_{\ell,T}>0\) the largest constant that satisfies this inequality. In particular, for \(z=\widehat{f}(\ell)\) we get \[\int\limits_{0}^{T}||\sigma_{B}(\ell)^{*}\exp{(t\sigma_{A}(\ell)^{*})}\widehat {f}(\ell)||_{\mathbb{C}^{d_{\ell}}}^{2}dt\geq c_{\ell,T}^{2}||\widehat{f}(\ell )||_{\mathbb{C}^{d_{\ell}}}^{2}.\] By summing over \(\ell\in\mathbb{N}_{0}\), we obtain \[\sum_{\ell\in\mathbb{N}_{0}}\int\limits_{0}^{T}||\sigma_{B}(\ell)^{*}\exp{(t \sigma_{A}(\ell)^{*})}\widehat{f}(\ell)||_{\mathbb{C}^{d_{\ell}}}^{2}dt\geq \sum_{\ell\in\mathbb{N}_{0}}c_{\ell,T}^{2}||\widehat{f}(\ell)||_{\mathbb{C}^{ d_{\ell}}}^{2}.\] Consequently, \[\int\limits_{0}^{T}\sum\limits_{\ell\in\mathbb{N}_{0}}||\sigma_{B}(\ell)^{*} \exp{(t\sigma_{A}(\ell)^{*})}\widehat{f}(\ell)||_{\mathbb{C}^{d_{\ell}}}^{2}dt \geqslant\sum\limits_{\ell\in\mathbb{N}_{0}}c_{\ell,T}^{2}||\widehat{f}(\ell)|| _{\mathbb{C}^{d_{\ell}}}^{2}.\] Using the semigroup property in (2.8) we have that \[\sigma_{B}(\ell)^{*}\exp{(t\sigma_{A}(\ell)^{*})}\widehat{f}(\ell)=\sigma_{B^{ *}S(t)^{*}}(\ell)\widehat{f}(\ell)\] we have that \[\int\limits_{0}^{T}\sum\limits_{\ell\in\mathbb{N}_{0}}||\sigma_{B^{*}S(t)^{*} }(\ell)\widehat{f}(\ell)||_{\mathbb{C}^{d_{\ell}}}^{2}dt\geqslant c_{T}^{2} \sum\limits_{\ell\in\mathbb{N}_{0}}||\widehat{f}(\ell)||_{\mathbb{C}^{d_{ \ell}}}^{2},\] where \[c_{T}:=\inf\limits_{\ell\in\mathbb{N}_{0}}c_{\ell,T}.\] By Plancherel's formula (see (2.5)) we have that \[\int\limits_{0}^{T}||B^{*}S(t)^{*}f||_{\mathcal{H}}^{2}dt=\int\limits_{0}^{T }\sum\limits_{\ell\in\mathbb{N}_{0}}||\sigma_{B^{*}S(t)^{*}}(\ell)\widehat{f} (\ell)||_{\mathbb{C}^{d_{\ell}}}^{2}dt\] and since \[||f||_{\mathcal{H}}^{2}=\sum\limits_{\ell\in\mathbb{N}_{0}}||\widehat{f}( \ell)||_{\mathbb{C}^{d_{\ell}}}^{2},\] the equality (3.14) holds with \[c_{T}:=\inf\limits_{\ell\in\mathbb{N}_{0}}c_{\ell,T}=\inf\limits_{\ell\in \mathbb{N}_{0}}1/\mathscr{C}_{\ell,T}=1/\sup\limits_{\ell\in\mathbb{N}_{0}} \mathscr{C}_{\ell,T}<\infty.\] The proof of (2) is complete. Indeed, note that the controllability cost \(\tilde{\mathscr{C}}_{T}\) of (3.6) is the infimum of the constants \(c_{T}>0\) satisfying (3.14), from where we deduce that \[\mathscr{C}_{T}\geqslant\tilde{\mathscr{C}}_{T}. \tag{3.16}\] Having proved (1) and (2) the proof of Theorem 3.5 is complete. ## 4. Applications ### Decoupling Algorithm In this section we present a variety of applications of the criterion of controllability in Theorem 3.6 and/or of the following algorithm developed during its proof. **Algorithm 4.1**.: We start by fixing two densely defined operators \(A\) and \(B\) on a separable Hilbert space \(\mathcal{H}\) satisfying the hypothesis of Theorem 3.5. * Algorithm: Criterion for the controllability of the Cauchy problem (3.6). 
* Input: To give the (global) controllability cost of the systems defined below in (4.1). * Output: To estimate the cost of controllability of the control system (3.6) from above. * Instructions: Step 1. To compute the group Fourier transform of the system (3.6). Then one obtains an infinite number of control systems \[\begin{cases}\frac{d\widehat{u}(\ell)}{dt}=\sigma_{A}(\ell)\widehat{u}(\ell)+ \sigma_{B}(\ell)\widehat{v}(\xi),\\ \\ \widehat{u(0)}(\ell)=\widehat{u}_{0}(\ell),\end{cases},\,\ell\in\mathbb{N}_{0}, \tag{4.1}\] At this point we recognize the input of our algorithm: Step 2. To reduce the controllability of the system (3.6) to an observability inequality that involves the (global) controllability cost of the systems in (4.1). Step 3. To estimate the controllability cost of the system in (3.6) in terms of the (global) controllability cost of the systems in (4.1). Step 4. If the estimated (global) controllability cost of the systems in (4.1) is finite, we are able to deduce the controllability of (4.1). ### Control for the Cauchy problem on compact manifolds Let \(M\) be a closed manifold (compact and without boundary). Consider \(E\in\Psi^{\nu}_{cl,+}(M)\) be a classical positive pseudo-differential operator of order \(\nu>0\) on \(M,\) see Hormander [46]. Let * \(H_{j}=\operatorname{Ker}(E-\lambda_{j}I)\) be the family of eigenspaces of \(E,\) * \(P_{j}:L^{2}(M)\to H_{j}\) be the corresponding orthogonal projections, * \(\{d_{j}\}_{j\in\mathbb{N}_{0}}\subset\mathbb{N}\) be the sequence formed by the dimensions of each \(H_{j}.\) The following corollary gives the controllability for the Cauchy problem associated to \(E\)-invariant operators. **Corollary 4.2**.: _Let \(M\) be a compact manifold without boundary. Let \(E\) be a positive classical pseudo-differential operator on \(M\) and let us consider the operators \(A,B:C^{\infty}(M)\to C^{\infty}(M)\) being \(E\)-invariant operators._ 1. _If the Cauchy problem_ \[\begin{cases}\frac{du}{dt}=Au+Bv,\\ \\ u(0)=u_{0}\in C^{\infty}(M)\end{cases}\] (4.2) _is controllable, then for any_ \(\ell\in\mathbb{N}_{0},\) _the global symbols_ \(\sigma_{A}(\ell)\) _and_ \(\sigma_{B}(\ell)\) _of_ \(A\) _and_ \(B\) _associated to the spectral decomposition of_ \(E,\) _respectively, satisfy the Kalman condition:_ \[\operatorname{rank}\big{[}\sigma_{B}(\ell),\ \sigma_{A}(\ell)\sigma_{B}(\ell),\ \cdots,\ \sigma_{A}(\ell)^{d_{\ell}-1}\sigma_{B}(\ell)\big{]}=d_{ \ell}.\] (4.3) _Additionally, if_ \(A\) _generates a strongly continuous semigroup on_ \(L^{2}(M),\) _the image of the Cauchy problem (_4.2_) under the Fourier transform relative to the decomposition_ \((H_{j}=\operatorname{Ker}(E-\lambda_{j}I))_{j\in\mathbb{N}},\) _has a finite global controllability cost at time_ \(T>0,\) _that is_ \[\mathscr{C}_{T}:=\sup_{\ell\in\mathbb{N}_{0}}\mathscr{C}_{\ell,T}<\infty.\] _Moreover,_ \[\mathscr{C}_{T}\leqslant\tilde{\mathscr{C}}_{T},\] _where \(\tilde{\mathscr{C}}_{T}\) is the controllability cost of (4.2)._ 2. _Conversely, assume that_ \(A\) _is the generator of a strongly continuous semigroup on_ \(\mathcal{H},\) _and that the Kalman condition (_4.3_) is satisfied for each_ \(\ell\in\mathbb{N}_{0}\)_. 
Assume that the image of the Cauchy problem (_4.2_) under the Fourier transform relative to the spectral decomposition_ \((H_{j}=\operatorname{Ker}(E-\lambda_{j}I))_{j\in\mathbb{N}},\) _has a finite global controllability cost at time_ \(T>0,\) _that is,_ \[\mathscr{C}_{T}:=\sup_{\ell\in\mathbb{N}_{0}}\mathscr{C}_{\ell,T}<\infty.\] _Then, the Cauchy problem (_4.2_) is controllable at time_ \(T>0,\) _and its controllability costs_ \(\tilde{\mathscr{C}}_{T}\) _satisfies the inequality_ \[\mathscr{C}_{T}\geqslant\tilde{\mathscr{C}}_{T}.\] (4.4) Proof.: The statement of this corollary follows from Theorem 3.5 applied to the \(E\)-invariant operators \(A\) and \(B\) which, equivalently, are Fourier multipliers relative to the spectral decomposition \(H_{j}=\operatorname{Ker}(E-\lambda_{j}I).\) Note that in this case \(\mathcal{H}^{\infty}=C^{\infty}(M)\) and \(\mathcal{H}=L^{2}(M).\) ### Kalman criterion on compact Lie groups In this subsection we prove our controllability criterion in the setting of compact Lie groups. First, we adapt Definition 3.1 in terms of the group Fourier transform of the group. **Definition 4.3**.: Let \(A,B:C^{\infty}(G)\to C^{\infty}(G)\) be continuous and left-invariant linear operators. Consider the Cauchy problem \[(\mathrm{CP}):\left\{\begin{aligned} &\frac{du}{dt}=Au+Bv,\\ & u(0)=u_{0}\in C^{\infty}(G).\end{aligned}\right. \tag{4.5}\] We define the image of (\(\mathrm{CP}\)) under the group Fourier transform to be the infinite family of finite-dimensional dynamical systems \[((\mathrm{CP}),[\xi]):\left\{\begin{aligned} &\frac{d\widehat{u}(\xi)}{dt}= \sigma_{A}(\xi)\widehat{u}(\xi)+\sigma_{B}(\xi)\widehat{v}(\xi),\\ &\widehat{u(0)}(\xi)=\widehat{u}_{0}(\xi)\in\mathbb{C}^{d_{\xi} \times d_{\xi}}.\end{aligned}\right.,\,[\xi]\in\widehat{G}. \tag{4.6}\] We present the following version of Lemma 3.2 adapted to Definition 4.3 for left-invariant operators. **Lemma 4.4**.: _The following statements are equivalent._ 1. _For all_ \(\xi\in\mathrm{Rep}(G),\) _the Cauchy problem_ \(((\mathrm{CP}),[\xi])\) _in (_4.6_) is a controllable dynamical system at a time_ \(T>0.\)__ 2. \(\forall\xi\in\mathrm{Rep}(G)\)_, the Kalman condition_ \[\operatorname{rank}\big{[}\sigma_{B}(\xi),\ \sigma_{A}(\xi)\sigma_{B}(\xi),\ \cdots\,\ \sigma_{A}(\xi)^{d_{\xi}-1}\sigma_{B}(\xi)\big{]}=d_{\xi}\] (4.7) _is satisfied._ 3. \(\forall\xi\in\mathrm{Rep}(G),\ \exists c=c(\xi,T)>0\) _such that_ \[\int\limits_{0}^{T}||\sigma_{B}(\xi)^{*}\exp{(t\sigma_{A}(\xi)^{*})}z ||_{\mathrm{HS}}^{2}dt\geq c(\xi,T)^{2}||z||_{\mathrm{HS}}^{2},\ \forall z\in\mathbb{C}^{d_{\xi}\times d_{\xi}}.\] (4.8) Proof.: The proof follows exactly the same steps as the one done above for Lemma 3.2. Indeed, the equivalence \((1)\Longleftrightarrow(2)\) follows from Kalman's criterion (see Theorem 2.10). On the other hand, the equivalence \((1)\Longleftrightarrow(3)\) is nothing else that the observability criterion in Theorem 2.11. Is clear then the equivalence \((2)\Longleftrightarrow(3)\). The proof of Lemma 4.4 is complete. _Remark 4.5_.: Let \(c_{\xi,T}\) be the supremum of the constants \(c=c(\xi,T)>0\) satisfying the observability inequality (4.8). According to the usual nomenclature of the control theory, the constant \[\boxed{\mathscr{C}_{\xi,T}:=1/c_{\xi,T}}\] is called the controllability cost of the Cauchy problem (4.6). 
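Before stating the global criterion, it may help to see how the blockwise quantities of Remark 4.5 can be computed in practice. The sketch below (Python with NumPy and SciPy's matrix exponential; the block \((\sigma_{A}(\xi),\sigma_{B}(\xi))\) is an invented numerical example, not a symbol coming from an actual group) approximates the best constant \(c(\xi,T)\) in (4.8) through the smallest eigenvalue of the observability Gramian of the block, and hence the cost \(\mathscr{C}_{\xi,T}\):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical symbol block of dimension d_xi; the numbers are purely illustrative.
d_xi = 3
rng = np.random.default_rng(1)
sigma_A = rng.standard_normal((d_xi, d_xi))
sigma_B = rng.standard_normal((d_xi, d_xi))

def observability_gramian(sig_A, sig_B, T, n_steps=2000):
    """Trapezoidal approximation of G_T = int_0^T e^{t sig_A} sig_B sig_B^* e^{t sig_A^*} dt.

    Since int_0^T ||sig_B^* exp(t sig_A^*) z||^2 dt = z^* G_T z for every vector z
    (and the Hilbert-Schmidt version of (4.8) decouples into columns), the best
    constant there is c(xi, T)^2 = smallest eigenvalue of G_T.
    """
    ts = np.linspace(0.0, T, n_steps + 1)
    G = np.zeros_like(sig_A)
    for k, t in enumerate(ts):
        E = expm(t * sig_A)                       # real matrices, so * is the transpose
        w = 0.5 if k in (0, n_steps) else 1.0     # trapezoidal weights
        G += w * (E @ sig_B @ sig_B.T @ E.T)
    return G * (T / n_steps)

T = 1.0
lam_min = np.linalg.eigvalsh(observability_gramian(sigma_A, sigma_B, T)).min()
print("blockwise controllability cost C_{xi,T} ~", 1.0 / np.sqrt(lam_min))
```

Definition 4.6 below then requires exactly that these blockwise costs remain bounded when \([\xi]\) runs over the whole unitary dual \(\widehat{G}\).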
**Definition 4.6**.: We will say that the image of the Cauchy problem (4.5) under the group Fourier transform has a _finite global controllability cost_ if \[\mathscr{C}_{T}:=\sup_{[\xi]\in\tilde{G}}\mathscr{C}_{\xi,T}<\infty. \tag{4.9}\] The following is the main theorem of this subsection. **Theorem 4.7**.: _Let \(G\) be a compact Lie group and let \(A,B:C^{\infty}(G)\to C^{\infty}(G)\) be continuous left-invariant linear operators._ 1. _If the Cauchy problem_ \[\left\{\begin{aligned} &\frac{du}{dt}=Au+Bv,\\ & u(0)=u_{0}\in C^{\infty}(G)\end{aligned}\right.\] (4.10) _is controllable in time_ \(T>0\)_. Then, for each representation space the global symbols_ \(\sigma_{A}\) _and_ \(\sigma_{B}\) _of_ \(A\) _and_ \(B,\) _respectively, satisfy the Kalman condition:_ \[\mathrm{rank}\left[\sigma_{B}(\xi),\ \sigma_{A}(\xi)\sigma_{B}(\xi),\ \cdots,\ \sigma_{A}(\xi)^{d_{\xi}-1}\sigma_{B}(\xi)\right]=d_{\xi}.\] (4.11) _Additionally, if_ \(A\) _is the generator of a strongly continuous semigroup on_ \(L^{2}(G),\) _the image of the Cauchy problem (_4.5_) under the group Fourier transform has a finite global controllability cost at time_ \(T>0,\) _that is,_ \[\mathscr{C}_{T}:=\sup_{[\xi]\in\tilde{G}}\mathscr{C}_{\xi,T}<\infty.\] _Moreover, if_ \(\tilde{\mathscr{C}}_{T}\) _is the controllability costs of (_4.5_) then_ \[\mathscr{C}_{T}\leq\tilde{\mathscr{C}}_{T}.\] (4.12) 2. _Conversely, assume that_ \(A\) _is the generator of a strongly continuous semigroup on_ \(L^{2}(G),\) _and that the Kalman condition (_4.11_) is satisfied for each_ \(\xi\in\mathrm{Rep}(G)\)_. Assume that the image of the Cauchy problem (_4.5_) under the group _Fourier transform has a finite global controllability cost at time_ \(T>0,\) _that is,_ \[\mathscr{C}_{T}:=\sup_{[\xi]\in\widehat{G}}\mathscr{C}_{\xi,T}<\infty.\] _Then, the Cauchy problem (_4.5_) is controllable at time_ \(T>0,\) _and its controllability costs_ \(\tilde{\mathscr{C}}_{T}\) _satisfies the inequality_ \[\mathscr{C}_{T}\geqslant\tilde{\mathscr{C}}_{T}.\] (4.13) Proof.: For the proof of (1) note that a direct application of Theorem 3.6 provides a Kalman condition of dimension \(d_{\xi}^{2}\) in any representation space (see Remark 2.27). To reduce the dimension we will use the Cayley-Hamiton theorem in each representation space. We do it as follows. * Proof of (1). Assume that the Cauchy problem (4.10) is controllable. 
By fixing \(\xi\in\operatorname{Rep}(G)\) and taking the group Fourier transform in both sides, we get \[\frac{d\widehat{u}(\xi)}{dt}=\widehat{Au}(\xi)+\widehat{Bv}(\xi).\] Since \(A,B\) are left-invariant we have \[\frac{d\widehat{u}(\xi)}{dt}=\sigma_{A}(\xi)\widehat{u}(\xi)+\sigma_{B}(\xi) \widehat{v}(\xi).\] (4.14) This is a dynamical system in the set of square matrices of order \(d_{\xi}\) which can be identified with \(\mathbb{C}^{d_{\xi}^{2}}\) via \[\mathbf{Z}=(z_{ij})_{d_{\xi}\times d_{\xi}}\longmapsto\left(\begin{array}[] {c}\mathbf{Z}^{1}\\ \vdots\\ \mathbf{Z}^{d_{\xi}}\end{array}\right)\!,\] (4.15) where \(\mathbf{Z}^{j}\) denotes de \(j\)-th column vector of the matrix \(\mathbf{Z}\); so that equation (4.25) becomes \[\frac{d\widehat{U}(\xi)}{dt}=\Sigma_{A}(\xi)\widehat{U}(\xi)+\Sigma_{B}(\xi) \widehat{V}(\xi),\] (4.16) with \[\Sigma_{C}(\xi):=\left(\begin{array}{cccc}\sigma_{C}(\xi)&\mathbf{0}_{d_{ \xi}\times d_{\xi}}&\cdots&\mathbf{0}_{d_{\xi}\times d_{\xi}}\\ \mathbf{0}_{d_{\xi}\times d_{\xi}}&\sigma_{C}(\xi)&\cdots&\mathbf{0}_{d_{\xi} \times d_{\xi}}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{0}_{d_{\xi}\times d_{\xi}}&\mathbf{0}_{d_{\xi}\times d_{\xi}}&\cdots& \sigma_{C}(\xi)\end{array}\right)\!,\ C\in\{A,B\},\] \[\widehat{U}(\xi):=\left(\begin{array}{c}\widehat{u}(\xi)^{1}\\ \vdots\\ \widehat{u}(\xi)^{d_{\xi}}\end{array}\right)\text{ and }\widehat{V}(\xi):=\left( \begin{array}{c}\widehat{v}(\xi)^{1}\\ \vdots\\ \widehat{v}(\xi)^{d_{\xi}}\end{array}\right)\!.\] With the notation above, we have proved that the two systems \[\frac{d\widehat{u}(\xi)}{dt}=\sigma_{A}(\xi)\widehat{u}(\xi)+\sigma_{B}(\xi) \widehat{v}(\xi),\ \frac{d\widehat{U}(\xi)}{dt}=\Sigma_{A}(\xi)\widehat{U}(\xi)+\Sigma_{B}(\xi) \widehat{V}(\xi),\] (4.17) are equivalent, in the sense that any solution of (4.25) induces a solution of the system (4.16) and vice versa. Observe also that we can enumerate the unitary dual \(\widehat{G}\) as \([\xi_{j}],\) for \(j\in\mathbb{N}_{0}.\) In this way we fix the orthonormal basis \[\{e_{jk}\}_{k=1}^{d_{j}}=\{d_{\xi_{j}}^{\frac{1}{2}}(\xi_{j})_{il}\}_{i,l=1}^{d _{\xi_{j}}} \tag{4.18}\] where \(d_{j}=d_{\xi_{j}}^{2}.\) Then, we have the subspaces \[H_{j}=\mathrm{span}\{(\xi_{j})_{i,l}:i,l=1,\cdots,d_{\xi_{j}}\}.\] In view of Remark 2.27, note that the symbols \(\sigma_{A}(\ell)\) and \(\sigma_{B}(\ell)\) of \(A\) and \(B\) relative to the decomposition \(H_{j}=\mathrm{span}\{(\xi_{j})_{i,l}:i,l=1,\cdots,d_{\xi_{j}}\}\) are given by \[\sigma_{A}(\ell)\equiv\Sigma_{A}(\xi_{\ell})\text{ and }\sigma_{B}(\ell) \equiv\Sigma_{B}(\xi_{\ell}),\ \ell\in\mathbb{N}_{0}, \tag{4.19}\] respectively. 
Now, by applying Theorem 3.5, we deduce that our hypothesis (1) implies the Kalman condition \[\mathrm{rank}\left[\Sigma_{B}(\xi_{\ell}),\ \Sigma_{A}(\xi_{\ell})\Sigma_{B}( \xi_{\ell}),\ \cdots,\ \Sigma_{A}(\xi_{\ell})^{d_{\xi}^{2}-1}\Sigma_{B}(\xi_{\ell})\right]=d_{\xi_{ \ell}}^{2},\] where we have used that for any \(\ell,\)\(d_{\ell}=d_{\xi_{\ell}}^{2}.\) Since the map \(\ell\mapsto[\xi_{\ell}]\) is a bijection, we will omit the subscript \(\ell\) in \(\xi_{\ell}\) and we will return to the generic notation \(\xi\) for a unitary representation \(\xi\in\mathrm{Rep}(G).\) Now, let us make a refinement of the Kalman condition \[\mathrm{rank}\left[\Sigma_{B}(\xi),\ \Sigma_{A}(\xi)\Sigma_{B}(\xi),\ \cdots,\ \Sigma_{A}(\xi)^{d_{\xi}^{2}-1}\Sigma_{B}(\xi)\right]=d_{\xi}^{2}.\] For instance, note that \[\Sigma_{A}(\xi)^{j}\Sigma_{B}(\xi)=\left(\begin{array}{cccc}\sigma_{A}(\xi )^{j}\sigma_{B}(\xi)&\mathbf{0}_{d_{\xi}\times d_{\xi}}&\cdots&\mathbf{0}_{d_ {\xi}\times d_{\xi}}\\ \mathbf{0}_{d_{\xi}\times d_{\xi}}&\sigma_{A}(\xi)^{j}\sigma_{B}(\xi)&\cdots& \mathbf{0}_{d_{\xi}\times d_{\xi}}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{0}_{d_{\xi}\times d_{\xi}}&\mathbf{0}_{d_{\xi}\times d_{\xi}}&\cdots& \sigma_{A}(\xi)^{j}\sigma_{B}(\xi)\end{array}\right),\ j=0,...,d_{\xi}^{2}-1.\] Thus \[\mathrm{rank}\left[\Sigma_{B}(\xi),\ \Sigma_{A}(\xi)\Sigma_{B}(\xi),\ \cdots,\ \Sigma_{A}(\xi)^{d_{\xi}^{2}-1}\Sigma_{B}(\xi)\right]=\] \[d_{\xi}\cdot\mathrm{rank}\left[\sigma_{B}(\xi),\ \sigma_{A}(\xi) \sigma_{B}(\xi),\ \cdots,\ \sigma_{A}(\xi)^{d_{\xi}^{2}-1}\sigma_{B}(\xi)\right],\] and therefore \[\mathrm{rank}\left[\sigma_{B}(\xi),\ \sigma_{A}(\xi)\sigma_{B}(\xi),\ \cdots,\ \sigma_{A}(\xi)^{d_{\xi}^{2}-1}\sigma_{B}(\xi)\right]=d_{\xi}.\] Now, by the Caley-Hamilton theorem we know that if \[P_{\sigma_{A}(\xi)}(\lambda)=\det(\lambda I_{d_{\xi}\times d_{\xi}}-\sigma_{A }(\xi))=\lambda^{d_{\xi}}-\sum_{j=0}^{d_{\xi}-1}\alpha_{\xi,j}\lambda^{j},\] is the characteristic polynomial of the symbol \(\sigma_{A}(\xi),\) then \[P_{\sigma_{A}(\xi)}(\sigma_{A}(\xi))=0,\] which is equivalent to say that \[\sigma_{A}(\xi)^{d_{\xi}}=\sum_{j=0}^{d_{\xi}-1}\alpha_{\xi,j}\sigma_{A}(\xi)^{j}. \tag{4.20}\] In consequence, for every \(j\in\{d_{\xi},...,d_{\xi}^{2}-1\}\), there exist \(c_{j}^{0},c_{j}^{1},\cdots,c_{j}^{j}\in\mathbb{C}\), where each \(c_{k}^{j}=c_{k}^{j}(\xi)\) depends of \(\xi\), such that \[\sigma_{A}(\xi)^{j}=\sum_{k=0}^{j}c_{j}^{k}\sigma_{A}(\xi)^{k}.\] So in the matrix \[\left[\sigma_{B}(\xi),\ \sigma_{A}(\xi)\sigma_{B}(\xi),\ \cdots,\ \sigma_{A}(\xi) ^{d_{\xi}-1}\sigma_{B}(\xi)\right],\] the terms \(\sigma_{A}(\xi)^{j}\sigma_{B}(\xi)\), \(d_{\xi}\leqslant j\leqslant d_{\xi}^{2}-1\), can be written as a linear combination of the first \(d_{\xi}-1\) matrix-blocks \(\sigma_{A}(\xi)^{j^{\prime}}\sigma_{B}(\xi)\), \(1\leqslant j^{\prime}\leqslant d_{\xi}-1.\) After doing the Gaussian reduction we have that \[\operatorname{rank}\left[\sigma_{B}(\xi),\ \sigma_{A}(\xi)\sigma_{B}( \xi),\ \cdots,\ \sigma_{A}(\xi)^{d_{\xi}-1}\sigma_{B}(\xi)\right]=\] \[\operatorname{rank}\left[\sigma_{B}(\xi),\ \sigma_{A}(\xi)\sigma_{B}( \xi),\ \cdots,\ \sigma_{A}(\xi)^{d_{\xi}{}^{2}-1}\sigma_{B}(\xi)\right]=d_{\xi}.\] The proof of (1) is complete. * Now, let us prove (2). 
So, conversely, suppose that \[\forall\xi\in\operatorname{Rep}(G),\ \operatorname{rank}\left[\sigma_{B}(\xi),\ \sigma_{A}(\xi)\sigma_{B}(\xi),\ \cdots,\ \sigma_{A}(\xi)^{d_{\xi}-1}\sigma_{B}(\xi)\right]=d_{\xi}.\] We want to prove the controllability of the Cauchy problem (4.10) at any time \(T>0\). According to Theorem 2.11, it is sufficient to show that there exists \(c_{T}>0\) such that \[\int\limits_{0}^{T}||B^{*}S(t)^{*}f||_{L^{2}(G)}^{2}dt\geqslant c_{T}^{2}||f ||_{L^{2}(G)}^{2},\ \forall f\in L^{2}(G),\] (4.21) where \(S(t)\) is the \(C_{0}\)-semigroup generated by \(A\). By the Kalman criterion, we know that the system \[\frac{d\gamma_{\xi}}{dt}+\sigma_{A}(\xi)\gamma_{\xi}=\sigma_{B}(\xi)v_{\xi}\] (4.22) is controllable for every \(\xi\in\operatorname{Rep}(G).\) In consequence the inequality \[\int\limits_{0}^{T}||\sigma_{B}(\xi)^{*}\exp{(t\sigma_{A}(\xi)^{*})}z||_{\rm HS }^{2}dt\geqslant c_{\xi,T}^{2}||z||_{\rm HS}^{2},\ \forall z\in\mathbb{C}^{d_{\xi}\times d_{\xi}},\] (4.23) holds and let us denote by \(c_{\xi,T}>0\) the largest constant that satisfies this inequality. Now, by using the identification in (4.15) we observe that the observability inequality in (4.23) is equivalent to the following observability inequality \[\int\limits_{0}^{T}||\Sigma_{B}(\xi)^{*}\exp{(t\Sigma_{A}(\xi)^{*})z}||^{2}_{ \mathbb{C}^{d_{\xi}^{2}}}dt\geq c_{\xi,T}^{2}||z||^{2}_{\mathbb{C}^{d_{\xi}^{2} }},\ \forall z\in\mathbb{C}^{d_{\xi}^{2}}. \tag{4.24}\] This analysis shows that the controllability costs of the systems \[\frac{d\widehat{u}(\xi)}{dt}=\sigma_{A}(\xi)\widehat{u}(\xi)+\sigma_{B}(\xi) \widehat{v}(\xi),\ \frac{d\widehat{U}(\xi)}{dt}=\Sigma_{A}(\xi)\widehat{U}(\xi)+\Sigma_{B}(\xi) \widehat{V}(\xi), \tag{4.25}\] are the same. In other words, by following the lines in the proof of (2) in Theorem 3.6, we have that the following observability inequality \[\int\limits_{0}^{T}||B^{*}S(t)^{*}f||^{2}_{L^{2}(G)}dt\geq c_{T}^{2}||f||^{2}_ {L^{2}(G)},\ \forall f\in L^{2}(G),\] holds with \[c_{T}:=\inf\limits_{[\xi]\in\tilde{G}}c_{\xi,T}.\] So the equality in (4.21) holds with \[c_{T}:=\inf\limits_{[\xi]\in\tilde{G}}c_{\xi,T}=\inf\limits_{[\xi]\in\tilde{G} }1/\mathscr{C}_{\xi,T}=1/\sup\limits_{[\xi]\in\tilde{G}}\mathscr{C}_{\xi,T}<\infty.\] The proof of (2) is complete. Indeed, note that the controllability cost \(\tilde{\mathscr{C}}_{T}\) of (4.10) is the infimum of the constants \(c_{T}>0\) satisfying (4.21), from where we deduce that \[\mathscr{C}_{T}\geq\tilde{\mathscr{C}}_{T}. \tag{4.26}\] Having proved (1) and (2) the proof of Theorem 4.7 is complete. ### Controllability for fractional subelliptic diffusion models Let \(G\) be a compact Lie group, \(\mathbb{X}=\{X_{1},...,X_{k}\}\) be an orthonormal set of real left-invariant vector fields satisfying the Hormander condition at step \(r\), and let \(s>0.\) The positive fractional sub-Laplacian on \(G\) or order \(s\) associated to \(\mathbb{X}\) is the operator \[\mathcal{L}_{s}:=\left(-\sum\limits_{j=1}^{k}X_{j}^{2}\right)^{s/2}.\] The symbol of \(\mathcal{L}_{s}\) can be computed in terms of the symbol of the sub-Laplacian \(\mathcal{L}=-\sum\limits_{j=1}^{k}X_{j}^{2},\) \[\sigma_{\mathcal{L}}(\xi)\equiv\mathrm{diag}(\lambda_{1,[\xi]},...,\lambda_{ d_{\xi},[\xi]}),\ \xi\in\mathrm{Rep}(G),\] as follows \[\sigma_{\mathcal{L}_{s}}(\xi)=\mathrm{diag}(\lambda_{1,[\xi]}^{s/2},..., \lambda_{d_{\xi},[\xi]}^{s/2}),\ \xi\in\mathrm{Rep}(G). 
\tag{4.27}\] Recall that there exist constants \(c,C>0\) such that the values \(\lambda_{j,[\xi]}\) satisfy the inequality (see [39]) \[c\langle\xi\rangle^{1/r}\leq\lambda_{j,[\xi]}^{1/2}\leq C\langle\xi\rangle,\ \forall\xi\in\mathrm{Rep}(G),\ \forall j\in\{1,...,d_{\xi}\}. \tag{4.28}\] We have that \(\sigma_{\mathcal{L}_{s}}(\xi)\) does not depend on \(x\in G\), and that \(\mathcal{L}_{s}\) is left-invariant. Let us consider the _subelliptic diffusion model_ \[\frac{du}{dt}=-\mathcal{L}_{s}u+Bv. \tag{4.29}\] We shall prove the following: 1. If (4.29) is controllable then \(B:C^{\infty}(G)\to\mathrm{Im}(B)\subset C^{\infty}(G)\) is invertible. 2. If \(B:C^{\infty}(G)\to\mathrm{Im}(B)\subset C^{\infty}(G)\) is invertible on its image subspace and its matrix-valued symbol satisfies the lower bound \[\forall z\in\mathbb{C}^{d_{\xi}\times d_{\xi}},\,\|\sigma_{B}(\xi)^{*}z\|_{ \mathrm{HS}}\geq C_{B}\langle\xi\rangle^{*}\|z\|_{\mathrm{HS}},\] (4.30) for some \(\kappa\geq s/2\), then (4.29) is a controllable system. For the proof of (1) let us proceed as follows. If (4.29) is controllable in any time \(T>0\) then, by Theorem 4.7, \[\mathrm{rank}[\sigma_{B}(\xi),\sigma_{-\mathcal{L}_{s}}(\xi)\sigma_{B}(\xi), \cdots,\sigma_{-\mathcal{L}_{s}}(\xi)^{d_{\xi}-1}\sigma_{B}(\xi)]=d_{\xi},\ \forall\xi\in\mathrm{Rep}(G).\] But \[\sigma_{-\mathcal{L}_{s}}(\xi)=\sigma_{-\mathcal{L}_{s}}(x,\xi)=\xi(x)^{*}(- \mathcal{L}_{s}\xi(x))=-(\xi(x)^{*}\mathcal{L}_{s}\xi(x))=-\sigma_{\mathcal{L }_{s}}(x,\xi)=-\sigma_{\mathcal{L}_{s}}(\xi)\] (see Theorem 2.23), hence \[\mathrm{rank}[\sigma_{B}(\xi),-\sigma_{\mathcal{L}_{s}}(\xi)\sigma_{B}(\xi), \cdots,[-\sigma_{\mathcal{L}_{s}}(\xi)]^{d_{\xi}-1}\sigma_{B}(\xi)]=d_{\xi}, \ \forall\xi\in\mathrm{Rep}(G).\] Since \(\sigma_{\mathcal{L}_{s}}(\xi)\) is a diagonal matrix we can show, by doing Gaussian reduction, that \[\mathrm{rank}[\sigma_{B}(\xi),-\sigma_{\mathcal{L}_{s}}(\xi)\sigma_{B}(\xi), \cdots,[-\sigma_{\mathcal{L}_{s}}(\xi)]^{d_{\xi}-1}\sigma_{B}(\xi)]=\mathrm{ rank}[\sigma_{B}(\xi)],\] so \(\mathrm{rank}[\sigma_{B}(\xi)]=d_{\xi},\ \forall\xi\in\mathrm{Rep}(G),\) i.e., \(\sigma_{B}(\xi)\) is invertible for all \(\xi\in\mathrm{Rep}(G).\) The formula \[B^{-1}f(x)=\sum_{[\xi]\in\widehat{G}}d_{\xi}\mathrm{Tr}[\xi(x)\sigma_{B}(\xi) ^{-1}\widehat{f}(\xi)]\] defines the inverse \(B^{-1}:\mathrm{Im}(B)\to C^{\infty}(G)\) of \(B:C^{\infty}(G)\to\mathrm{Im}(B)\subset C^{\infty}(G).\) For the proof of (2) let us proceed as follows. Note that the constant \[\mathscr{X}_{\xi,T}^{2}=\inf_{z\neq 0}\frac{\int_{0}^{T}||\sigma_{B}(\xi)^{*} \exp{(t\sigma_{-\mathcal{L}_{s}}(\xi)^{*})z}||_{\mathrm{HS}}^{2}dt}{||z||_{ \mathrm{HS}}^{2}}, \tag{4.31}\] satisfies the inequality \[\int\limits_{0}^{T}||\sigma_{B}(\xi)^{*}\exp{(t\sigma_{-\mathcal{L}_{s}}(\xi) ^{*})z}||_{\mathrm{HS}}^{2}dt\geq\mathscr{X}_{\xi,T}^{2}||z||_{\mathrm{HS}}^{ 2},\ \forall z\neq 0. \tag{4.32}\] Let \(c_{\xi,T}^{2}\geq\mathscr{X}_{\xi,T}^{2}\) be the largest constant satisfying the inequality in (4.32). 
Note that \[c_{\xi,T}^{2}\geq\mathscr{X}_{\xi,T}^{2}=\inf_{z\neq 0}\frac{\int_{0}^{T}|| \sigma_{B}(\xi)^{*}\exp{(-t\sigma_{\mathcal{L}_{s}}(\xi)^{*})z}||_{\mathrm{HS }}^{2}dt}{||z||_{\mathrm{HS}}^{2}}\] \[\geqslant\inf_{z\neq 0}\frac{\int_{0}^{T}C_{B}^{2}\langle\xi \rangle^{2\kappa}\left|\left|\exp\left[-t\cdot\mathrm{diag}\left(\lambda_{1,[\xi]} ^{s/2},...,\lambda_{d_{\xi},[\xi]}^{s/2}\right)\right]\right|z\right|\right|_{ \mathrm{HS}}^{2}dt}{||z||_{\mathrm{HS}}^{2}}\] (by ( 4.30 )), \[=\inf_{z\neq 0}\frac{\int_{0}^{T}C_{B}^{2}\langle\xi \rangle^{2\kappa}\left|\left|\mathrm{diag}\left(\exp\left(-t\lambda_{1,[\xi]}^{ s/2}\right),...,\exp\left(-t\lambda_{d_{\xi},[\xi]}^{s/2}\right)\right)z \right|\right|_{\mathrm{HS}}^{2}dt}{||z||_{\mathrm{HS}}^{2}}\] \[\geqslant\inf_{z\neq 0}\frac{\int_{0}^{T}C_{B}^{2}\langle\xi \rangle^{2\kappa}\exp\left(-2t\gamma_{[\xi]}^{s/2}\right)||z||_{\mathrm{HS}}^ {2}dt}{||z||_{\mathrm{HS}}^{2}}\ (\text{where}\ \gamma_{[\xi]}:=\max_{1\leqslant j \leqslant d_{\xi}}\lambda_{j,[\xi]}),\] \[\geqslant\inf_{z\neq 0}\frac{\int_{0}^{T}C_{B}^{2}\langle\xi \rangle^{2\kappa}\exp\left(-2tC\langle\xi\rangle^{s}\right)||z||_{\mathrm{HS} }^{2}dt}{||z||_{\mathrm{HS}}^{2}}\ (\text{by (\ref{eq:2.2.2}))},\] \[=C_{B}^{2}\langle\xi\rangle^{2\kappa}\times\frac{\left(1-e^{-2TC \langle\xi\rangle^{s}}\right)}{2C\langle\xi\rangle^{s}}\] \[=C_{B}^{2}\langle\xi\rangle^{2\kappa-s}\left(1-e^{-2TC\langle\xi \rangle^{s}}\right)\] In consequence, since \(\langle\xi\rangle\geqslant 1\) and \(\kappa\geqslant s/2\), we have that \[c_{\xi,T}^{2}\geqslant C_{B}^{2}\langle\xi\rangle^{2\kappa-s} \left(1-e^{-2TC\langle\xi\rangle^{s}}\right)\geqslant C_{B}^{2}\left(1-e^{-2TC }\right)\neq 0. \tag{4.33}\] All the previous analysis shows that \[\inf_{[\xi]\in\tilde{G}}c_{\xi,T}^{2}\geqslant C_{B}^{2}\left(1-e ^{-2TC}\right)\neq 0. \tag{4.34}\] With the notation of the proof of Theorem 4.7 we have that \[c_{T}:=\inf_{[\xi]\in\tilde{G}}c_{\xi,T}=\inf_{[\xi]\in\tilde{G} }1/\mathscr{C}_{\xi,T}=1/\sup_{[\xi]\in\tilde{G}}\mathscr{C}_{\xi,T}<\infty.\] Then, we have proved that (4.29) is a controllable system. Note that in this case the controllability cost \(\tilde{\mathscr{C}}_{T}\) of (4.29) can be estimated as \[\tilde{\mathscr{C}}_{T}^{2}\leqslant\mathscr{C}_{T}^{2}=\sup_{[ \xi]\in\tilde{G}}\mathscr{C}_{\xi,T}^{2}=\inf_{[\xi]\in\tilde{G}}1/c_{\xi,T}^{ 2}\leqslant\frac{1}{C_{B}^{2}\left(1-e^{-2TC}\right)}.\] In consequence, \[\tilde{\mathscr{C}}_{T}\leqslant\frac{1}{C_{B}\sqrt{(1-e^{-2TC} )}}.\] Summarising all the discussion above we have proved the following controllability criterion for subelliptic diffusions models on \(G\). **Theorem 4.8**.: _Let \(G\) be a compact Lie group, and let \(B:C^{\infty}(G)\to C^{\infty}(G)\) be a continuous left-invariant linear operator._ 1. _If the Cauchy problem_ \[\left\{\begin{aligned} &\frac{du}{dt}=-\mathcal{L}_{s}u+Bv,\\ &\\ & u(0)=u_{0}\in C^{\infty}(G)\end{aligned}\right.\] (4.35) _is controllable in time_ \(T>0\)_, then_ \(B:C^{\infty}(G)\rightarrow\mathrm{Im}(B)\subset C^{\infty}(G)\) _is an invertible continuous linear operator on its image._ 2. 
_Conversely, assume_ \(B:C^{\infty}(G)\rightarrow\mathrm{Im}(B)\subset C^{\infty}(G)\) _is invertible on its image subspace and that its matrix-valued symbol satisfies the lower bound_ \[\forall z\in\mathbb{C}^{d_{\xi}\times d_{\xi}},\,\|\sigma_{B}(\xi)^{*}z\|_{ \mathrm{HS}}\geq C_{B}\langle\xi\rangle^{\kappa}\|z\|_{\mathrm{HS}},\] (4.36) _for some_ \(\kappa\geq s/2.\) _Then (_4.35_) is a controllable system and its controllability cost_ \(\tilde{\mathscr{C}}_{T}\) _satisfies the estimate_ \[\tilde{\mathscr{C}}_{T}\leq\frac{1}{C_{B}\sqrt{(1-e^{-2TC})}},\] (4.37) _for some_ \(C>0\)_._ ### Wave equation vs. heat equation on Hilbert spaces Let \(\mathcal{H}\) be a complex Hilbert space and let \(\mathcal{H}^{\infty}\subset\mathcal{H}\) be a dense linear subspace of \(\mathcal{H}\). Let \(\mathcal{H}=\bigoplus_{j}H_{j}\) be a decompositon of \(\mathcal{H}\) in orthogonal subspaces \(H_{j}\) of dimension \(d_{j}\in\mathbb{N}.\) Let \(A,B:\mathcal{H}^{\infty}\rightarrow\mathcal{H}\) be Fourier multipliers relative to the decomposition \(\{H_{j}\}_{j\in\mathbb{N}_{0}}.\) Let us consider the second order Cauchy problem \[\left\{\begin{aligned} &\frac{d^{2}u}{dt^{2}}=Au+Bv;\\ &\\ & u(0)=u_{0},\ u_{t}(0)=\tilde{u}_{0},\end{aligned}\right. \tag{4.38}\] where \(u:[0,T]\rightarrow\mathcal{H}^{\infty}\) is of \(C^{2}\)-class in time. The differential equation in (4.38) is a _wave equation_ and, analogously to the case of a first-order control system, we say that it is controllable in time \(T>0\) if for every \(u_{T},\tilde{u}_{T}\in\mathcal{H}^{\infty},\) there exists a control \(v:[0,T]\rightarrow\mathcal{H}^{\infty}\) such that the solution of (4.38) satisfies \(u(T)=u_{T}\) and \(u_{t}(T)=\tilde{u}_{T}.\) On the other hand, the first order differential equation in (3.6) is a _heat equation_. It is well known that in the case of internal control (i.e. when \(B\) is given by the multiplication operator \(Bv=1_{\omega}v\) where \(\omega\) is an open subset of \(G\)) the controllability of (4.38) implies the controllability of (4.10). We refer to Kannai [51], Russell [62], and Miller [60] for details. We shall prove the same result for any left-invariant operator \(B\) satisfying some conditions. More precisely, we have the following theorem. **Theorem 4.9**.: _Let \(A,B:\mathcal{H}^{\infty}\rightarrow\mathcal{H}\) be Fourier multipliers relative to the decomposition \(\{H_{j}\}_{j\in\mathbb{N}_{0}}\) of a Hilbert space \(\mathcal{H}.\) Assume that \(A\) is the generator of a strongly continuous semigroup. If the second order Cauchy problem_ \[\left\{\begin{aligned} &\frac{d^{2}u}{dt^{2}}=Au+Bv;\\ &\\ & u(0)=u_{0},\ u_{t}(0)=\tilde{u}_{0};\,u_{0},\tilde{u}_{0}\in \mathcal{H}^{\infty},\end{aligned}\right. \tag{4.39}\] _is controllable in time \(T>0\) then first order Cauchy problem_ \[\left\{\begin{aligned} &\frac{du}{dt}=Au+Bv,\\ &\\ & u(0)=u_{0}\in\mathcal{H}^{\infty}\end{aligned}\right. 
\tag{4.40}\] _is controllable in time \(T>0\) provided that the image of the Cauchy problem (4.40) under the Fourier transform relative to the decomposition \((H_{j})_{j\in\mathbb{N}},\) has a finite global controllability cost at time \(T>0\)._ Proof.: First, we will make a reduction of order by setting \(u_{1}:=u\) and \(u_{2}:=u_{t}\) so that the Cauchy problem (4.39) becomes \[\left\{\begin{aligned} &\frac{d}{dt}\left(\begin{array}{c}u_{1}\\ u_{2}\end{array}\right)=\left(\begin{array}{cc}\mathbf{0}&\text{Id}\\ A&\mathbf{0}\end{array}\right)\left(\begin{array}{c}u_{1}\\ u_{2}\end{array}\right)+\left(\begin{array}{cc}\mathbf{0}&\mathbf{0}\\ \mathbf{0}&B\end{array}\right)\left(\begin{array}{c}w\\ v\end{array}\right),\\ &\\ &\left(\begin{array}{c}u_{1}(0)\\ u_{2}(0)\end{array}\right)=\left(\begin{array}{c}u_{0}\\ \tilde{u}_{0}\end{array}\right).\end{aligned}\right. \tag{4.41}\] It is clear that the notion of controllability that we defined for (4.39) is equivalent to the definition of controllability of the first order system (4.41). Let us suppose that (4.41) is controllable in time \(T>0.\) We shall show that the finite-dimensional control system \[\left\{\begin{aligned} &\frac{d}{dt}\left(\begin{array}{c} \widehat{u}_{1}(\ell)\\ \widehat{u}_{2}(\ell)\end{array}\right)=\left(\begin{array}{cc}\mathbf{0}& \text{I}\\ \sigma_{A}(\ell)&\mathbf{0}\end{array}\right)\left(\begin{array}{c}\widehat{u }_{1}(\ell)\\ \widehat{u}_{2}(\ell)\end{array}\right)+\left(\begin{array}{cc}\mathbf{0}& \mathbf{0}\\ \mathbf{0}&\sigma_{B}(\ell)\end{array}\right)\left(\begin{array}{c}\widehat{ w}(\ell)\\ \widehat{v}(\ell)\end{array}\right),\\ &\\ &\left(\begin{array}{c}\widehat{u}_{1}(\ell)(0)\\ \widehat{u}_{2}(\ell)(0)\end{array}\right)=\left(\begin{array}{c}\widehat{u }_{0}(\ell)\\ \widehat{\tilde{u}}_{0}(\ell)\end{array}\right),\end{aligned}\right. \tag{4.42}\] where \(\widehat{u}_{j}(\ell),\sigma_{A}(\ell),I,\sigma_{B}(\ell),\widehat{w}(\ell), \widehat{v}(\ell)\in\mathbb{C}^{d_{\ell}\times d_{\ell}},\) is also controllable in time \(T.\) In fact, let \(\left(\begin{array}{c}\zeta_{1,T}\\ \zeta_{2,T}\end{array}\right)\in\mathbb{C}^{2d_{\ell}\times d_{\ell}},\) then the Fourier inversion formula implies that the functions \[u_{T}:=(e_{\ell},\zeta_{1,T})_{\mathbb{C}^{d_{\ell}}}\text{ and }\tilde{u}_{T}:=(e_{\ell}, \zeta_{2,T})_{\mathbb{C}^{d_{\ell}}}\] belong to \(\mathcal{H}^{\infty}\) and \(\widehat{u_{T}}(\ell)=\zeta_{1,T},\)\(\widehat{u}_{T}(\ell)=\zeta_{2,T}.\) Since (4.41) is controllable, there exist \(w,v:[0,T]\rightarrow\mathcal{H}^{\infty}\) such that the solution \(\left(\begin{array}{c}u_{1}\\ u_{2}\end{array}\right)\) of (4.41) is such that \(u_{1}(T)=u_{T}\) and \(u_{2}(T)=\tilde{u}_{T}.\) By taking the Fourier transform in (4.41) at \(\ell\in\mathbb{N}\) we obtain that \(\left(\begin{array}{c}\widehat{u}_{1}(\ell)\\ \widehat{u}_{2}(\ell)\end{array}\right)\) is the solution of (4.42) and \(\left(\begin{array}{c}\widehat{u}_{1}(\ell)(T)\\ \widehat{u}_{2}(\ell)(T)\end{array}\right)=\left(\begin{array}{c}\zeta_{1,T} \\ \zeta_{2,T}\end{array}\right)\). 
This argument holds for any \(\left(\begin{array}{c}\zeta_{1,T}\\ \zeta_{2,T}\end{array}\right)\), thus (4.42) is controllable in time \(T.\) Now, we can apply the rank Kalman condition to conclude that \[\mathrm{rank}\left[\left(\begin{array}{cc}\mathbf{0}&\text{I}\\ \sigma_{A}(\ell)&\mathbf{0}\end{array}\right)^{j}\left(\begin{array}{cc} \mathbf{0}&\mathbf{0}\\ \mathbf{0}&\sigma_{B}(\ell)\end{array}\right)\right]_{0\leq j\leq 2d_{\ell}-1}=2d_{\ell},\] but \[\left(\begin{array}{cc}\mathbf{0}&\mathrm{I}\\ \sigma_{A}(\ell)&\mathbf{0}\end{array}\right)^{j}\left(\begin{array}{cc} \mathbf{0}&\mathbf{0}\\ \mathbf{0}&\sigma_{B}(\ell)\end{array}\right)=\left\{\begin{array}{ll}\left( \begin{array}{cc}\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\sigma_{A}(\ell)^{\frac{j}{2}}\sigma_{B}(\ell)\end{array}\right),& \text{ if $j$ is even,}\\ \\ \left(\begin{array}{cc}\mathbf{0}&\sigma_{A}(\ell)^{\frac{j-1}{2}}\sigma_{B}( \ell)\\ \mathbf{0}&\mathbf{0}\end{array}\right),&\text{ if $j$ is odd,}\end{array}\right.\] thus \[\begin{split} 2d_{\ell}&=\,\mathrm{rank}\left[\left( \begin{array}{cc}\mathbf{0}&\mathrm{I}\\ \sigma_{A}(\ell)&\mathbf{0}\end{array}\right)^{j}\left(\begin{array}{cc} \mathbf{0}&\mathbf{0}\\ \mathbf{0}&\sigma_{B}(\ell)\end{array}\right)\right]_{0\leqslant j\leqslant 2d_{\ell}-1} \\ \\ &=\,\mathrm{rank}\left[\left(\begin{array}{cc}\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\sigma_{A}(\ell)^{i}\sigma_{B}(\ell)\end{array}\right),\left( \begin{array}{cc}\mathbf{0}&\sigma_{A}(\ell)^{i}\sigma_{B}(\ell)\\ \mathbf{0}&\mathbf{0}\end{array}\right)\right]_{0\leqslant i\leqslant d_{\ell}-1} \\ \\ &=\,2\cdot\mathrm{rank}[\sigma_{B}(\ell),\sigma_{A}(\ell)\sigma_{B}(\ell), \cdots,\sigma_{A}(\ell)^{d_{\ell}-1}\sigma_{B}(\ell)]\\ \\ &\Longrightarrow\,\mathrm{rank}[\sigma_{B}(\ell),\sigma_{A}(\ell)\sigma_{B}( \ell),\cdots,\sigma_{A}(\ell)^{d_{\ell}-1}\sigma_{B}(\ell)]=d_{\ell}.\end{split}\] This means that the rank Kalman condition (4.11) is satisfied for each \(\ell\in\mathbb{N}_{0}\). Additionally, if \(A\) is the generator of a strongly continuous semigroup and the image of the Cauchy problem (4.40) under the group Fourier transform has finite global controllability cost, then by Theorem 4.7 (2), the system (4.40) is controllable. This completes the proof. As a consequence of Theorem 4.9, we consider the following application to the case of compact Lie groups. **Corollary 4.10**.: _Let \(G\) be a compact Lie group, and \(A,B:C^{\infty}(G)\longrightarrow C^{\infty}(G)\) be continuous left-invariant linear operators such that \(A\) is the generator of a strongly continuous semigroup. If the second order Cauchy problem_ \[\left\{\begin{array}{l}\frac{d^{2}u}{dt^{2}}=Au+Bv;\\ \\ u(0)=u_{0},\ u_{t}(0)=\tilde{u}_{0};\,u_{0},\tilde{u}_{0}\in C^{\infty}(G), \end{array}\right. \tag{4.43}\] _is controllable in time \(T>0\). Then, the first order Cauchy problem_ \[\left\{\begin{array}{l}\frac{du}{dt}=Au+Bv,\\ \\ u(0)=u_{0}\in C^{\infty}(G)\end{array}\right. \tag{4.44}\] _is controllable in time \(T>0\), provided that its image under the group Fourier transform has finite global controllability cost._ Proof.: For the proof let us use the notation in Remark 2.27. 
We can enumerate the unitary dual \(\widehat{G}\) as \([\xi_{j}]\), for \(j\in\mathbb{N}_{0}.\) In this way we fix the orthonormal basis \[\{e_{jk}\}_{k=1}^{d_{j}}=\{d_{\xi_{j}}^{\frac{1}{2}}(\xi_{j})_{il}\}_{i,l=1}^{d_ {\xi_{j}}}, \tag{4.45}\] where \(d_{j}=d_{\xi_{j}}^{2}.\) Then, we have the subspaces \[H_{j}=\operatorname{span}\{(\xi_{j})_{i,l}:i,l=1,\cdots,d_{\xi_{j}}\}.\] In view of Remark 2.27, note that the symbols \(\sigma_{A}(\ell)\) and \(\sigma_{B}(\ell)\) of \(A\) and \(B\) relative to the decomposition \(H_{j}=\operatorname{span}\{(\xi_{j})_{i,l}:i,l=1,\cdots,d_{\xi_{j}}\}\) are given by \[\sigma_{A}(\ell)\equiv\Sigma_{A}(\xi_{\ell})\text{ and }\sigma_{B}(\ell) \equiv\Sigma_{B}(\xi_{\ell}),\ \ell\in\mathbb{N}_{0}, \tag{4.46}\] respectively. The statement of Corollary 4.10 follows from Theorem 4.9. **Corollary 4.11**.: _In the context of Theorem 4.9, assume that the operator_ \[\tilde{A}=\left(\begin{array}{cc}\mathbf{0}&\mathrm{Id}\\ A&\mathbf{0}\end{array}\right)\] _is the infinitesimal generator of a \(C_{0}\)-semigroup and that the rank Kalman condition_ \[\text{rank}[\sigma_{B}(\ell),\sigma_{A}(\ell)\sigma_{B}(\ell),\cdots,\sigma_{A }(\ell)^{d_{\ell}-1}\sigma_{B}(\ell)]=d_{\ell}, \tag{4.47}\] _holds for every \(\ell\in\mathbb{N}_{0}\). Then the wave equation \(d^{2}u/dt^{2}=Au+Bv\) is controllable in any time \(T>0\) provided that_ \[\inf_{\ell\in\mathbb{N}_{0}}\sup_{(z_{1},z_{2})\neq(0,0)}\frac{\int_{0}^{T} \left|\sigma_{B}(\ell)^{*}S_{1}(t)\sigma_{A}(\ell)^{*}z_{1}+\sigma_{B}(\ell)^{ *}S_{2}(t)z_{2}\right|\right|_{\mathrm{HS}}^{2}dt}{||z_{1}||_{\mathrm{HS}}^{2 }+||z_{2}||_{\mathrm{HS}}^{2}}>0, \tag{4.48}\] _where \(z_{1},z_{2}\in\mathbb{C}^{d_{\ell}}\), and_ \[S_{1}(t):=\sum_{n=0}^{\infty}\frac{t^{2n+1}}{(2n+1)!}(\sigma_{A}(\ell)^{*})^{n },\text{ and }S_{2}(t):=\sum_{n=0}^{\infty}\frac{t^{2n}}{(2n)!}(\sigma_{A}(\ell)^{*})^{n },\ 0\leq t\leq T.\] Proof.: Note that the rank Kalman condition (4.47) is equivalent to the following Kalman conditon (as we have established in the proof of Theorem 4.9) \[\text{rank}\left[\left(\begin{array}{cc}\mathbf{0}&\mathrm{I} \\ \sigma_{A}(\ell)&\mathbf{0}\end{array}\right)^{j}\left(\begin{array}{cc} \mathbf{0}&\mathbf{0}\\ \mathbf{0}&\sigma_{B}(\ell)\end{array}\right)\right]_{0\leq j\leq 2d_{\ell}-1}\] \[=2d_{\ell}\] \[=2\text{rank}[\sigma_{B}(\ell),\sigma_{A}(\ell)\sigma_{B}(\ell), \cdots,\sigma_{A}(\ell)^{d_{\ell}-1}\sigma_{B}(\ell)]\] and note that the observability inequality for the system (4.42) reduces to \[\int_{0}^{T}\left|\left|\left(\begin{array}{cc}\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\sigma_{B}(\ell)^{*}\end{array}\right)\exp\left[t\left(\begin{array} []{cc}\mathbf{0}&\mathrm{I}\\ \sigma_{A}(\ell)^{*}&\mathbf{0}\end{array}\right)\right]\left(\begin{array}{ cc}z_{1}\\ z_{2}\end{array}\right)\right|\right|_{\mathrm{HS}}^{2}dt\geq c_{\ell,T}^{2} \left|\left|\left(\begin{array}{cc}z_{1}\\ z_{2}\end{array}\right)\right|\right|_{\mathrm{HS}}^{2}.\] Since \[\left(\begin{array}{cc}\mathbf{0}&\mathrm{I}\\ \sigma_{A}(\ell)^{*}&\mathbf{0}\end{array}\right)^{j}=\left\{\begin{array}{cc} \left(\begin{array}{cc}(\sigma_{A}(\ell)^{*})^{\frac{j}{2}}&\mathbf{0}\\ \mathbf{0}&(\sigma_{A}(\ell)^{*})^{\frac{j}{2}}\end{array}\right),&\text{if $j$ is even},\\ \\ \left(\begin{array}{cc}\mathbf{0}&(\sigma_{A}(\ell)^{*})^{\frac{j-1}{2}}\\ (\sigma_{A}(\ell)^{*})^{\frac{j+1}{2}}&\mathbf{0}\end{array}\right),&\text{if $j$ is odd}, \end{array}\right.\] then we have that \[\exp\left[t\left(\begin{array}{cc}\mathbf{0}&\mathrm{I}\\ 
\sigma_{A}(\ell)^{*}&\mathbf{0}\end{array}\right)\right]=\left(\begin{array}[] {cc}S_{2}(t)&S_{1}(t)\\ S_{1}(t)\sigma_{A}(\ell)^{*}&S_{2}(t)\end{array}\right)\] so, the observability inequality above becomes \[\int_{0}^{T}\left\|\left(\begin{array}{cc}\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\sigma_{B}(\ell)^{*}\end{array}\right)\left(\begin{array}{cc}S_{ 2}(t)&S_{1}(t)\\ S_{1}(t)\sigma_{A}(\ell)^{*}&S_{2}(t)\end{array}\right)\left(\begin{array}{c }z_{1}\\ z_{2}\end{array}\right)\right\|_{\mathrm{HS}}^{2}dt\geq c_{\ell,T}^{2}\left(||z _{1}||_{\mathrm{HS}}^{2}+||z_{2}||_{\mathrm{HS}}^{2}\right),\] or equivalently, \[\int_{0}^{T}||\sigma_{B}(\ell)^{*}S_{1}(t)\sigma_{A}(\ell)^{*}z_{1}+\sigma_{B} (\ell)^{*}S_{2}(t)z_{2}||_{\mathrm{HS}}^{2}\,dt\geq c_{\ell,T}^{2}\left(||z_{1 }||_{\mathrm{HS}}^{2}+||z_{2}||_{\mathrm{HS}}^{2}\right).\] From here we can see that the condition (4.48) is nothing but the property that the Fourier transform of the system (4.41) relative to the decomposition \((H_{j})_{j\in\mathbb{N}_{0}}\) has finite global controllability cost. Therefore, by Theorem 3.5, the system (4.41) is controllable, so is (4.38). _Remark 4.12_.: It is known that in the case of a compact Riemannian manifold \((M,g)\), if \(A=\Delta\), where \(\Delta\) is the negative Laplacian to the metric \(g\), the operator \[\tilde{\Delta}=\left(\begin{array}{cc}\mathbf{0}&\mathrm{Id}\\ \Delta&\mathbf{0}\end{array}\right)\] is the infinitesimal generator of a strongly continuous semigroup. This is due to the fact that it is dissipative with respect to a specific inner product defined in terms of the metric and the gradient of the manifold (see the classical work of Chen and Millman [22] for details). ### Control of the Schrodinger equation on Hilbert spaces Let us consider the Schrodinger equation \[\left\{\begin{array}{l}i\frac{du}{dt}=Au+Bv;\\ \\ u(0)=u_{0},\end{array}\right. \tag{4.49}\] where \(u:[0,T]\rightarrow\mathcal{H}^{\infty}\) is of \(C^{2}\)-class in time, and \(A,B:\mathcal{H}^{\infty}\rightarrow\mathcal{H}\) are Fourier multipliers relative to the decomposition \(\{H_{j}\}_{j\in\mathbb{N}_{0}}\) of a Hilbert space \(\mathcal{H}.\) It is clear that (4.49) is equivalent to the following Cauchy problem \[\left\{\begin{array}{l}\frac{du}{dt}=-iAu-iBv;\\ \\ u(0)=u_{0}.\end{array}\right. \tag{4.50}\] Moreover, since \(\sigma_{-iA}(\ell)=-i\sigma_{A}(\ell)\) and \(\sigma_{-iB}(\ell)=-i\sigma_{B}(\ell),\) for all \(\ell\in\mathbb{N},\) the Kalman condition \[\forall\ell\in\mathbb{N},\,\operatorname{rank}\big{[}\sigma_{-iB}(\ell),\ \sigma_{-iA}(\ell)(\ell)\sigma_{-iB}(\ell),\ \cdots,\ \sigma_{-iA}(\ell)(\ell)^{d_{\ell}-1}\sigma_{-iB}(\ell)\big{]}=d_{\ell}, \tag{4.51}\] holds if and only if the Kalman condition \[\forall\ell\in\mathbb{N},\,\operatorname{rank}\big{[}\sigma_{B}(\ell),\ \sigma_{A}(\ell)\sigma_{B}(\ell),\ \cdots,\ \sigma_{A}(\ell)^{d_{\ell}-1}\sigma_{B}(\ell)\big{]}=d_{\ell}, \tag{4.52}\] is satisfied. In view of the discussion above we have the following consequence of Theorem 3.5 for Schrodinger type models. Note that the natural assumption is that the operator \(-iA\) generates a \(C_{0}\)-semigroup, which happens if for example the operator \(A:\mathcal{H}^{\infty}\to\mathcal{H}\) is an unbounded self-adjoint operator, see e.g. [69]. **Corollary 4.13**.: _Let \(\mathcal{H}\) be a complex Hilbert space and let \(\mathcal{H}^{\infty}\subset\mathcal{H}\) be a dense linear subspace of \(\mathcal{H}\). 
Let \(\mathcal{H}=\bigoplus_{j}H_{j}\) be a decompositon of \(\mathcal{H}\) in orthogonal subspaces \(H_{j}\) of dimension \(d_{j}\in\mathbb{N}.\) Let \(A,B:\mathcal{H}^{\infty}\to\mathcal{H}\) be Fourier multipliers relative to the decomposition \(\{H_{j}\}_{j\in\mathbb{N}_{0}}.\)_ 1. _Assume that the Schrodinger equation_ \[\begin{cases} i\frac{du}{dt}=Au+Bv,\\ \\ u(0)=u_{0}\in\mathcal{H}^{\infty},\end{cases}\] (4.53) _is controllable, then for any_ \(\ell\in\mathbb{N}_{0},\) _the global symbols_ \(\sigma_{A}(\ell)\) _and_ \(\sigma_{B}(\ell)\) _of_ \(A\) _and_ \(B,\) _respectively, satisfy the Kalman condition:_ \[\operatorname{rank}\big{[}\sigma_{B}(\ell),\ \sigma_{A}(\ell)\sigma_{B}(\ell),\ \cdots,\ \sigma_{A}(\ell)^{d_{\ell}-1}\sigma_{B}(\ell)\big{]}=d_{\ell}.\] (4.54) _Additionally, if_ \(-iA\) _generates a strongly continuous semigroup on_ \(\mathcal{H},\) _the image of the Cauchy problem (_4.50_) under the Fourier transform relative to the decomposition_ \((H_{j})_{j\in\mathbb{N}},\) _has a finite global controllability cost at time_ \(T>0,\) _that is_ \[\mathscr{C}_{T}:=\sup_{\ell\in\mathbb{N}_{0}}\mathscr{C}_{\ell,T}<\infty.\] _Moreover,_ \[\mathscr{C}_{T}\leq\tilde{\mathscr{C}}_{T},\] _where_ \(\tilde{\mathscr{C}}_{T}\) _is the controllability cost of (_4.50_)._ 2. _Conversely, assume that_ \(-iA\) _is the generator of a strongly continuous semigroup on_ \(\mathcal{H},\) _and that the Kalman condition (_4.54_) is satisfied for each_ \(\ell\in\mathbb{N}_{0}.\) _Assume that the image of the Cauchy problem (_4.50_) under the Fourier transform relative to the decomposition_ \((H_{j})_{j\in\mathbb{N}},\) _has a finite global controllability cost in time_ \(T>0,\) _that is,_ \[\mathscr{C}_{T}:=\sup_{\ell\in\mathbb{N}_{0}}\mathscr{C}_{\ell,T}<\infty.\] _Then, the Cauchy problem (_4.53_) is controllable at time_ \(T>0,\) _and its controllability costs_ \(\tilde{\mathscr{C}}_{T}\) _satisfies the inequality_ \[\mathscr{C}_{T}\geq\tilde{\mathscr{C}}_{T}.\] (4.55) ## 5. Conclusions In this work, we have considered the problem of the controllability of the Cauchy problem on complex Hilbert spaces. Our approach reduces the controllability of the system to the validity of the Kalman condition for an infinite number of finite-dimensional controllability systems. This reduction is done by the Fourier analysis induced by a fixed orthogonal decomposition of the underlying Hilbert space over subspaces of finite dimension and the criterion is presented in terms of the matrix-valued symbols relative to these kinds of decomposition as developed in [27, 28]. After presenting our main Theorem 3.6 we have identified in Algorithm 4.1 the required steps to analyse the controllability for a variety of problems that satisfy the invariance property in Theorem 3.6. In particular, we have introduced the notion of the _global controllability cost_ of the image of a system under the Fourier transform relative to a decomposition \((H_{j})_{j\in\mathbb{N}}\) of a Hilbert space \(\mathcal{H}.\) In terms of this notion we have estimated in a sharp way the _controllability cost_ of the system. The prototype of the models under consideration as well as their controllability has been extensively analysed in Section 4. 
There we have considered the control of subelliptic diffusion models on compact Lie groups associated with left-invariant operators and the control of fractional diffusion models for elliptic operators on compact manifolds. We have also deduced some properties of the control of wave and Schrodinger equations and illustrated how such properties relate to Kalman-type criteria.
2303.17018
System Predictor: Grounding Size Estimator for Logic Programs under Answer Set Semantics
Answer set programming is a declarative logic programming paradigm geared towards solving difficult combinatorial search problems. While different logic programs can encode the same problem, their performance may vary significantly. It is not always easy to identify which version of the program performs the best. We present the system Predictor (and its algorithmic backend) for estimating the grounding size of programs, a metric that can influence a performance of a system processing a program. We evaluate the impact of Predictor when used as a guide for rewritings produced by the answer set programming rewriting tools Projector and Lpopt. The results demonstrate potential to this approach.
Daniel Bresnahan, Nicholas Hippen, Yuliya Lierler
2023-03-29T20:49:40Z
http://arxiv.org/abs/2303.17018v1
# System Predictor: Grounding Size Estimator for Logic Programs under Answer Set Semantics

###### Abstract

Answer set programming is a declarative logic programming paradigm geared towards solving difficult combinatorial search problems. While different logic programs can encode the same problem, their performance may vary significantly. It is not always easy to identify which version of the program performs best. We present the system predictor (and its algorithmic backend) for estimating the grounding size of programs, a metric that can influence the performance of a system processing a program. We evaluate the impact of predictor when used as a guide for rewritings produced by the answer set programming rewriting tools projector and lpopt. The results demonstrate the potential of this approach. Under consideration in Theory and Practice of Logic Programming (TPLP). Keywords: answer set programming, encoding optimizations.

Daniel Bresnahan, Nicholas Hippen, Yuliya Lierler (University of Nebraska Omaha)

## 1 Introduction

Answer set programming (ASP) (Brewka et al., 2011) is a declarative (constraint) programming paradigm geared towards solving difficult combinatorial search problems. ASP programs model problem specifications/constraints as a set of logic rules. These logic rules define a problem instance to be solved. An ASP system is then used to compute solutions (answer sets) to the program. Answer set programming has been successfully used in scientific and industrial applications. Examples include, but are not limited to, decision support systems for space shuttle flight controllers (Balduccini et al., 2006), team building and scheduling (Ricca et al., 2012), and the healthcare realm (Dodaro et al., 2021). Intuitive ASP encodings are not always the most optimal/performant, making this programming paradigm less attractive to novice users, as their first attempts at problem solving may not scale. ASP programs often require careful design and expert knowledge in order to achieve performant results (Gebser et al., 2011a). Figure 1 depicts a typical ASP system architecture. The first step, performed by systems called grounders, transforms a non-ground logic program (with variables) into a ground/propositional program (without variables). Expert ASP programmers often modify their ASP solution targeting the reduction of the grounding size of the resulting program. The size of a ground program has been shown to be a predictive factor of a program's performance, enabling it to be used as an "optimization metric" (Gebser et al., 2011a). Intelligent grounding techniques (Faber et al., 2012) utilized by grounders such as gringo (Gebser et al., 2011b) or idlv (Calimeri et al., 2017) also keep such a reduction in mind. Intelligent grounding procedures analyze a given (non-ground) program to produce a smaller propositional program without altering the solutions. In addition, researchers have looked into automatic program rewriting procedures. Systems such as simplify (Eiter et al., 2006a; 2006b), lpopt (Bichler, 2015; Bichler et al., 2020), and projector (Hippen and Lierler, 2019) rewrite non-ground programs (preserving their semantics) targeting the reduction of the grounding size. These systems are meant to be preprocessing tools agnostic to the later choice of ASP solving technology. Tools such as simplify, lpopt, and projector, despite illustrating promising results, often hinder their objective.
Sometimes the original set of rules is better than the rewritten set when the size of their grounding and/or their runtime is taken as a metric. Research has been performed to mitigate the negative impact of these rewritings. For example, Mastria et al. (2020) demonstrated a novel approach to guide automatic rewriting techniques performed in idlv using machine learning with a set of features built from structural properties of a considered program and domain information. Thus, a machine learning model guides idlv on whether to perform built-in rewritings or not. Another example of incorporating automatic rewriting techniques with the use of information about specifics of a considered program and a considered grounder is the work by Calimeri et al. (2019). In that work, the authors incorporated a program rewriting technique stemming from lpopt into the intelligent grounding algorithm of the grounder idlv. Such tight coupling of the rewriting and grounding procedures allows idlv to decide whether or not to apply an lpopt rewriting based on the current state of grounding. Grounder idlv accurately estimates the impact of a rewriting on grounding and, based on this information, decides whether to perform the rewriting. This synergy of intelligent grounding and a rewriting technique demonstrates the best performance. Yet, it makes the transfer of rewriting techniques laborious, as it assumes tight integration of any rewriting within a grounder of choice. _Here_, we propose an algorithm for estimating the size of grounding a program based on (i) mimicking an intelligent grounding procedure documented by Faber et al. (2012) and (ii) techniques used in query optimization in relational databases, see, for instance, Chapter 13 by Silberschatz et al. (1997). We then implement this algorithm in a system called predictor. This tool is meant to be used as a decision support mechanism for ASP program rewriting systems so that they perform a possible rewriting based on estimates produced by predictor. This work culminates in the integration of predictor within the rewriting tools projector and lpopt, which are then used prior to the invocation of a typical grounder-solver pair of ASP. For example, Figure 2 depicts the use of predictor within the rewriting system projector as a preprocessing step before the invocation of an ASP system. To depict the use of predictor within the rewriting system lpopt as a preprocessing step, it is sufficient to replace the box named projector by a box named lpopt in Figure 2. We illustrate the success of this synergy by an experimental analysis. It is worth noting that predictor is a stand-alone tool and can be used as part of any ASP-inspired technology where its functionality is of interest. We underline that the important contribution of this work is in the design of a building block - in the shape of the system predictor - towards making ASP a truly declarative framework. Answer set programming is frequently portrayed as a powerful declarative programming formalism. Yet, we can argue that such a claim is somewhat misleading. At present, to achieve scalable ASP solutions to problems of interest, it is typical that _an expert ASP programmer_ - with strong insights into the underlying grounding/solving technology - constructs logic programs/encodings for problems that are efficient rather than intuitive. ASP experts must rely on their extensive knowledge of the ASP technology to deliver efficient solutions.
Yet, in a truly declarative formalism we would expect the possibility of constructing _intuitive_ encodings and relying on the underlying systems to process them efficiently. This way programmers may focus on coding specifications of the problems at hand rather than on the specifics of the shape of these specifications and the details of the underlying technology.

Figure 1: Typical ASP system architecture

Figure 2: An ASP system with projector using predictor

This paper targets the development of infrastructure which _one day_ will allow us to achieve the ultimate goal of _truly declarative ASP_. Ultimately, an expert ASP programmer capable of devising efficient encodings will be replaced by an ASP user capable of devising intuitive specifications that are then turned into effective specifications by a portfolio of automatic tools such as, for example, the projector and predictor pair, or the lpopt and predictor pair, showcased and evaluated in the final section of the paper. This work makes a step towards achieving the described ultimate goal: it provides us with insights and possible directions for the developments on that path. **Related work** Another body of research targets a similar goal, namely portfolio-like approaches, where researchers use machine learning based methods to navigate the space of distinct ASP grounders and/or solvers - claspfolio (Hoos et al., 2014); me-asp (Maratea et al., 2014); or encodings - esp (Liu et al., 2022) - in order to decide on the best possibility for tackling the considered problem by means of ASP technology. All in all, to the best of our knowledge this work is _one of the very few_ approaches for the stated/similar purpose. The already mentioned work by Mastria et al. (2020) presents an alternative machine learning based method for a similar purpose. In that work, properties of a program are considered to predict whether a rewriting will help an ASP solver down the road or not. Also, the work by Calimeri et al. (2019) can be seen as the one most related to this paper. The greatest difference of the championed approach is its detachment from any specific grounding system: it produces its estimates by looking at a program alone. Calimeri et al. incorporate the computation of estimates within a grounder. The benefit of such an approach is that at any point in time their estimates are reflective of the de facto grounding that has happened so far. **Outline of the paper** We start by introducing the subject matter terminology. The key contribution of the work lies in the development of formulas for estimating the grounding size of a logic program based on its structural analysis and insights on intelligent grounding procedures. First, we present the simplified version of these formulas for the case of tight programs. We trust that this helps the reader to build intuitions for the work. Second, the formulas for non-tight programs are given. We then describe the implementation details of system predictor. The main part of the presentation concerns the most typical logic rules (stemming from Prolog). The section that follows the presentation of the key concepts discusses other kinds of rules and their treatment by the predictor system. We conclude with an experimental evaluation that includes the incorporation of predictor within the rewriting systems projector and lpopt. Parts of this paper appeared in the proceedings of the 17th Edition of the European Conference on Logics in Artificial Intelligence (Hippen and Lierler, 2021).
## 2 Preliminaries An _atom_ is an expression \(p(t_{1},...,t_{k})\), where \(p\) is a predicate symbol of arity \(k\geq 0\) and \(t_{1},...,t_{k}\) are _terms_ - either object constants or variables. As customary in logic programming, variables are marked by an identifier starting with a capital letter. We assume object constants to be numbers. This is an inessential restriction as we can map strings to numbers using, for instance, the lexicographic order. For example, within our implementation described in this paper: we consider all alphanumeric object constants occurring in a program; sort these object constants using the lexicographic order; and map each string in this sorted list to a natural number that corresponds to its position in the list added to the greatest natural number occurring in the program. For an atom \(p(t_{1},...,t_{k})\) and position \(i\) (\(1\leq i\leq k\)), we define an _argument_ denoted by \(p[i]\). By \(p(t_{1},...,t_{k})^{0}\) and \(p(t_{1},...,t_{k})^{i}\) we refer to predicate symbol \(p\) and the term \(t_{i}\), respectively. A _rule_ is an expression of the form \[a_{0}\gets a_{1},...,a_{m},not\ a_{m+1},...,not\ a_{n}. \tag{1}\] where \(n\geq m\geq 0\), \(a_{0}\) is either an atom or symbol \(\bot\), and \(a_{1},...,a_{n}\) are atoms. We refer to \(a_{0}\) as the _head_ of the rule and an expression to the right hand side of an arrow symbol in (1) as the _body_. An atom \(a\) and its negation _not_\(a\) is a _literal_. To literals \(a_{1},...,a_{m}\) in the body of rule (1) we refer as _positive_, whereas to literals \(not\ a_{m+1},...,not\ a_{n}\) we refer as _negative_. For a rule \(r\), by \(\mathbb{H}(r)\) we denote the head atom of \(r\). By \(\mathbb{B}^{+}(r)\) we denote the set of positive literals in the body of \(r\). We obtain the set of variables present in an atom \(a\) and a rule \(r\) by \(vars(a)\) and \(vars(r)\), respectively. For a variable \(X\) occurring in rule \(r\), by \(args(r,X)\) we denote the set \[\{p[i]\mid a\in\mathbb{B}^{+}(r),a^{0}=p,\text{ and }a^{i}=X\}.\] In other words, \(args(r,X)\) denotes the set of arguments in the positive literals of rule \(r\), where variable \(X\) appears. A rule \(r\) is _safe_ if each variable in \(r\) appears in \(\mathbb{B}^{+}(r)\). Let \(r\) be a safe rule \[p(A)\gets q(A,B),r(1,A),not\ s(B). \tag{2}\] Then \(vars(r)=\{A,B\},\)\(args(r,A)=\{q[1],r[2]\}\), and \(args(r,B)=\{q[2]\}\). A _(logic) program_ is a finite set of safe rules. We call programs containing variables _non-ground_. For a program \(\Pi\), \(oc(p[i])\) denotes the set of all object constants occurring within \[\{\mathbb{H}(r)^{i}\mid r\in\Pi\text{ and }\mathbb{H}(r)^{0}=p\},\] whereas \(oc(\Pi)\) denotes the set of all object constants occurring in the head atoms of the rules in \(\Pi\). **Example 2.1**: _Let \(\Pi_{1}\) denote a program_ \[p(1).\ p(2).\ r(3). \tag{3}\] \[q(X,1)\gets p(X). \tag{4}\] _Then, \(oc(p[1])=\{1,2\}\), \(oc(q[1])=\emptyset\), \(oc(q[2])=\{1\}\) and \(oc(\Pi_{1})=\{1,2,3\}\). The grounding of a program \(\Pi\), denoted \(gr(\Pi)\), is a ground program obtained by instantiating variables in \(\Pi\) with all object constants of the program. For example, \(gr(\Pi_{1})\) consists of rules in (3) and rules_ \[q(1,1)\gets p(1).\ \ q(2,1)\gets p(2). \tag{5}\] \[q(3,1)\gets p(3). \tag{6}\] Given a program \(\Pi\), ASP grounders utilizing intelligent grounding are often able to produce a program smaller than its grounding \(gr(\Pi)\), but that has the same answer sets as \(gr(\Pi)\). 
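To make the preceding definitions concrete, the following Python sketch (an illustration of ours; the data layout and function names are hypothetical and not those of predictor) encodes program \(\Pi_{1}\), computes \(oc(p[1])\), \(oc(q[2])\), and \(oc(\Pi_{1})\), and produces the naive grounding \(gr(\Pi_{1})\) by substituting every object constant of the program for every variable.

```python
# Illustrative sketch of the definitions above (not predictor's actual code).
# Atoms are (predicate, tuple-of-terms); variables are capitalized strings,
# object constants are integers.  A rule is (head, positive_body); program
# Pi_1 below has no negative literals.
from itertools import product

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

pi1 = [  # program Pi_1: facts (3) and rule (4)
    (("p", (1,)), []),
    (("p", (2,)), []),
    (("r", (3,)), []),
    (("q", ("X", 1)), [("p", ("X",))]),
]

def oc_arg(program, pred, i):
    """oc(pred[i]): constants at position i of head atoms with predicate pred."""
    return {head[1][i - 1] for head, _ in program
            if head[0] == pred and not is_var(head[1][i - 1])}

def oc_prog(program):
    """oc(Pi): all object constants occurring in heads of rules of the program."""
    return {t for head, _ in program for t in head[1] if not is_var(t)}

def ground(program):
    """gr(Pi): substitute all object constants of Pi for all variables."""
    consts, ground_rules = oc_prog(program), []
    for head, body in program:
        vs = sorted({t for atom in [head] + body for t in atom[1] if is_var(t)})
        for vals in product(consts, repeat=len(vs)):
            sub = dict(zip(vs, vals))
            inst = lambda a: (a[0], tuple(sub.get(t, t) for t in a[1]))
            ground_rules.append((inst(head), [inst(a) for a in body]))
    return ground_rules

print(oc_arg(pi1, "p", 1))   # {1, 2}
print(oc_arg(pi1, "q", 2))   # {1}
print(oc_prog(pi1))          # {1, 2, 3}
print(len(ground(pi1)))      # 6 rules: the three facts and rules (5)-(6)
```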
Recall program \(\Pi_{1}\) introduced in Example 2.1. For instance, the program obtained from \(gr(\Pi_{1})\) by dropping rule (6) may be a result of intelligent grounding. The ground extensions of a predicate within a grounded program \(\Pi\) are the set of terms associated with the predicate in the program. For instance, in \(gr(\Pi_{1})\), the ground extensions of predicate \(q\) are the set of tuples \(\{\langle 1,1\rangle,\langle 2,1\rangle,\langle 3,1\rangle\}\). For an argument \(p[i]\) and a ground program \(\Pi\), we call the number of distinct object constants occurring in the ground extensions of \(p\) in \(\Pi\) at position \(i\) the argument size of \(p[i]\). For instance, for program \(gr(\Pi_{1})\) the argument sizes of \(p[1]\), \(q[1]\), and \(q[2]\) are 3, 3, and 1, respectively. The dependency graph of a program \(\Pi\) is a directed graph \(G_{\Pi}=\langle N,E\rangle\) such that \(N\) is the set of predicates appearing in \(\Pi\) and \(E\) contains the edge \((p,q)\) if there is a rule \(r\) in \(\Pi\) in which \(p\) occurs in \(\mathbb{B}^{+}(r)\) and \(q\) occurs in the head of \(r\). A program \(\Pi\) is tight if \(G_{\Pi}\) is acyclic, otherwise the program is non-tight (Fages, 1994). **Example 2.2**: _Let \(\Pi_{2}\) denote a program constructed from \(\Pi_{1}\) (introduced in Example 2.1) by extending it with rules:_ \[r(2).\ r(4). \tag{7}\] \[s(X,Y,Z)\gets r(X),p(X),p(Y),q(Y,Z). \tag{8}\] _Program \(\Pi_{3}\) is the program \(\Pi_{2}\) extended with the rule:_ \[q(Y,X)\gets s(X,Y,Z). \tag{9}\] Figure 3 shows the dependency graphs \(G_{\Pi_{2}}\) (left) and \(G_{\Pi_{3}}\) (center). Program \(\Pi_{2}\) is tight, while program \(\Pi_{3}\) is not.

## 3 System predictor

The key contribution of this work is the development of the system predictor (its algorithmic and software base), whose goal is to provide estimates of the size of an "intelligently" grounded program. In other words, its goal is to assess the impact of grounding without grounding itself. predictor is based on the intelligent grounding procedures implemented by the grounder dlv, described in Faber et al. (2012). The key difference is that, instead of building the ground instances of each rule in the program, predictor constructs statistics about the predicates, their arguments, and the rules of the program. This section provides the formulas we developed in order to produce the estimates backing up the computed statistics. We conclude with details on the implementation. A couple of remarks are due. First, in a way we parallel the work on query optimization techniques within relational databases, e.g., see Chapter 13 in (Silberschatz et al., 1997). Indeed, when a particular query is considered within a relational database, there are often numerous ways of executing/implementing it. Relational databases maintain statistics about their tables to produce estimates for intermediate results of various execution scenarios of potential queries. These estimates help database management systems decide which of the possible execution plans of the query at hand to select. In this work, we develop methods to collect and maintain statistics/estimates about entities of answer set programs. We then show how these estimates may help a rewriting (preprocessing) system for ASP to decide whether to rewrite some rules of a program or not. Second, the intelligent grounding procedure implemented by the grounder dlv (Faber et al., 2012) is based on database evaluation techniques (Ullman, 1988; Abiteboul et al., 1995).
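Complementing the previous sketch, the dependency graph \(G_{\Pi}\) and the tightness test from the preliminaries can also be illustrated in a few lines. The listing below (again ours, for illustration only) reduces each rule to its head predicate and the predicates of its positive body and checks that \(\Pi_{2}\) is tight while \(\Pi_{3}\) is not.

```python
# Illustration of the dependency graph G_Pi and the tightness check
# (a sketch of ours, not predictor's implementation).  Rules are reduced to
# (head predicate, predicates of the positive body); facts contribute no
# edges, so one entry per fact predicate suffices.
def dependency_graph(rules):
    # edge (p, q): p occurs in the positive body, q in the head
    return {(b, h) for h, body in rules for b in body}

def is_tight(rules):
    """A program is tight iff its dependency graph is acyclic (Fages, 1994)."""
    edges = dependency_graph(rules)
    preds = {p for e in edges for p in e} | {h for h, _ in rules}
    out = {p: {q for (a, q) in edges if a == p} for p in preds}
    visited, on_stack = set(), set()
    def has_cycle(p):
        visited.add(p); on_stack.add(p)
        for q in out[p]:
            if q in on_stack or (q not in visited and has_cycle(q)):
                return True
        on_stack.discard(p)
        return False
    return not any(p not in visited and has_cycle(p) for p in preds)

pi2 = [("p", []), ("r", []), ("q", ["p"]), ("s", ["r", "p", "q"])]   # Pi_2
pi3 = pi2 + [("q", ["s"])]                                           # Pi_3 adds rule (9)
print(is_tight(pi2), is_tight(pi3))   # True False
```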
The same statement about the underlying database evaluation techniques holds for another modern grounder, gringo (Gebser et al., 2011b; Kaminski and Schaub, 2022), which also shares a lot in common with the grounder dlv. This fact makes the estimates of system predictor, rooted in the algorithm of dlv, applicable also within the framework of gringo. In a nutshell, both dlv and gringo instantiate a program via an iterative bottom-up process starting from the program's facts and targeting the accumulation of ground atoms and ground rules derivable from the rules seen so far. As this process continues, a new ground rule is produced when its positive body atoms belong to the already computed atoms. Then, the head atom of this rule is added to the set of already accumulated ground atoms. This process continues until no new ground atoms or rules are produced.

#### Argument size estimation

Tight program case: The estimation formulas are based on predicting argument sizes. To understand these it is essential to describe an order in which we produce estimates for predicate symbols/arguments. Given a program \(\Pi\), we obtain such an ordering by performing a topological sorting on its dependency graph. We associate each node in this ordering with its position and call it a _strata rank_ of a predicate. For example, \(p,q,r,s\) is one possible ordering for program \(\Pi_{2}\) (introduced in Example 2.2). This ordering associates strata ranks \(1,2,3,4\) with predicates \(p,q,r,s\), respectively. We now introduce some intermediate formulas for constraining our estimates. These intermediate formulas are inspired by query optimization techniques within relational databases, e.g., see Chapter 13 in (Silberschatz et al., 1997). These formulas keep track of information that helps us to estimate which actual values may occur in the grounded program, without storing these values themselves. Let \(p[i]\) be an argument. We track the range of values that may occur at this argument. To provide intuitions for the introduced process, consider an intelligent grounding of \(\Pi_{2}\) consisting of rules (3), (5), (7), and rules \[s(2,1,1) \gets r(2),p(2),p(1),q(1,1). \tag{10}\] \[s(2,2,1) \gets r(2),p(2),p(2),q(2,1). \tag{11}\] This intelligent grounding produces rules (10), (11) in place of rule (8). Variable \(X\) from rule (8) is only ever replaced with object constant 2. Intuitively, this is due to the intersection \(oc(p[1])\cap oc(r[1])=\{2\}\). We model such a restriction by considering what minimum and maximum values are possible for each argument in an intelligently grounded program (compliant with the described principle; all modern intelligent grounders respect such a restriction). We then use these values to define an "upper restriction" of the argument size for each argument. For a tight program \(\Pi\), let \(p[i]\) be an argument in \(\Pi\); \(R\) be the following set of rules \[\{r\mid r\in\Pi,\ \mathbb{H}(r)^{0}=p,\text{ and }\mathbb{H}(r)^{i}\text{ is a variable}\}. \tag{12}\] By \(\downarrow_{est}^{t\cdot t}(p[i])\) we denote an estimate of the minimum value that may appear in argument \(p[i]\) in \(\Pi\): \[\downarrow_{est}^{t\cdot t}(p[i])=min\big{(}oc(p[i])\cup\] \[\{max\Big{(}\{\downarrow_{est}^{t\cdot t}(p^{\prime}[i^{\prime}] )\mid p^{\prime}[i^{\prime}]\in args(r,\mathbb{H}(r)^{i})\}\Big{)}\mid r\in R \}\big{)}.\] The superscript \(t\cdot t\) stands for "tight". Note how \(\mathbb{H}(r)^{i}\) in \(args(r,\mathbb{H}(r)^{i})\) is conditioned to be a variable due to the choice of the set \(R\) of rules.
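These recursive definitions can be prototyped directly. The sketch below (an illustration of ours, not the implementation of predictor) computes the minimum and maximum estimates, \(\downarrow_{est}^{t\cdot t}\) and \(\uparrow_{est}^{t\cdot t}\), for program \(\Pi_{2}\); it reproduces the values derived by hand in Example 3.1 below.

```python
# Sketch (ours) of the tight-case estimates defined above: est(..., lo=True)
# follows the formula for the minimum estimate, est(..., lo=False) the dual
# maximum estimate.  Atoms are (predicate, tuple-of-terms); variables are
# capitalized strings.  The recursion terminates because Pi_2 is tight.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

pi2 = [
    (("p", (1,)), []), (("p", (2,)), []),
    (("r", (3,)), []), (("r", (2,)), []), (("r", (4,)), []),
    (("q", ("X", 1)), [("p", ("X",))]),                                   # rule (4)
    (("s", ("X", "Y", "Z")), [("r", ("X",)), ("p", ("X",)), ("p", ("Y",)),
                              ("q", ("Y", "Z"))]),                         # rule (8)
]

def oc(prog, p, i):
    """oc(p[i]): constants at position i of head atoms with predicate p."""
    return {h[1][i - 1] for h, _ in prog if h[0] == p and not is_var(h[1][i - 1])}

def args(body, var):
    """Arguments of the positive body in which variable var occurs."""
    return {(a[0], k + 1) for a in body for k, t in enumerate(a[1]) if t == var}

def est(prog, p, i, lo=True):
    """Minimum (lo=True) / maximum (lo=False) estimate of argument p[i]."""
    inner, outer = (max, min) if lo else (min, max)
    candidates = set(oc(prog, p, i))
    for head, body in prog:
        if head[0] == p and is_var(head[1][i - 1]):
            v = head[1][i - 1]
            candidates.add(inner(est(prog, q, j, lo) for q, j in args(body, v)))
    return outer(candidates)  # assumes at least one candidate exists

print(est(pi2, "s", 1, lo=True), est(pi2, "s", 1, lo=False))   # 2 2
```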
The function \(\downarrow_{est}^{t\cdot t}\) is total because the rank of the predicate occurring on the left hand side of the definition above is strictly greater than the ranks of all of the predicate symbols \(p^{\prime}\) on the right hand side, where rank is understood as a strata rank defined before (multiple strata rankings are possible; any can be considered here). By \(\uparrow_{est}^{t\cdot t}(p[i])\) we denote an estimate of a maximum value that may appear in argument \(p[i]\) in tight program \(\Pi\). It is computed using formula for \(\downarrow_{est}^{t\cdot t}(p[i])\) with \(min\), \(max\), and \(\downarrow_{est}^{t\cdot t}\) replaced by \(max\), \(min\), and \(\uparrow_{est}^{t\cdot t}\), respectively. Now that we have estimates for minimum and maximum values, we estimate the size of the range of possible values. We understand the _range_ of an argument to be the number of values we anticipate to see in the argument within an intelligently grounded program if the values were all integers between the minimum and maximum estimates. It is possible that our minimum estimate for a given argument is greater than its maximum estimate. Intuitively, this indicates that no ground rule will contain this argument in its head. The number of values between the minimum and maximum estimates may also be greater than the number of object constants in a considered program. In this case, we restrict the range to the number of object constants occurring in the program. We compute the range, \(range_{est}^{t\cdot t}(p[i])\), as follows: \[min\big{(}\{max(\big{\{}0,\uparrow_{est}^{t\cdot t}(p[i])-\downarrow_{est}^ {t\cdot t}(p[i])+1\big{\}}),|oc(\Pi)|\}\big{)}\] **Example 3.1**: Recall program \(\Pi_{2}\) introduced in Example 2.2. The operations required to compute the minimum estimate for argument \(s[1]\) in \(\Pi_{2}\) follow: \[\downarrow_{est}^{t\cdot t}(r[1])=min\big{(}oc(r[1])\big{)}=2\] \[\downarrow_{est}^{t\cdot t}(p[1])=min\big{(}oc(p[1])\big{)}=1\] \[\downarrow_{est}^{t\cdot t}(s[1])=min(oc(s[1])\cup\] \[\{max(\big{\{}\downarrow_{est}^{t\cdot t}(r[1]),\downarrow_{est}^ {t\cdot t}(p[1])\big{\}})\})=min(\emptyset\cup\{2\})=2\] We compute \(\uparrow_{est}^{t\cdot t}(s[1])\) to be 2. Then, \(range_{est}^{t\cdot t}(s[1])\) is \[min(\{max\big{(}\big{\{}0,\uparrow_{est}^{t\cdot t}(s[1])- \downarrow_{est}^{t\cdot t}(s[1])+1\big{\}}\big{)},|oc(\Pi_{2})|\})\] \[=min(\{max\big{(}\{0,2-2+1\}\big{\}},4\big{)})=1\] We presented formulas for estimating the range of values in program's arguments. We now show how these estimates are used to assess the _size_ of an argument understood as the number of distinct values occurring in this argument upon an intelligent grounding. We now outline intuitions behind a recursive process that we capture in formulas. Let \(p[i]\) be an argument. If \(p[i]\) is such that predicate \(p\) has no incoming edges in the program's dependency graph, then we estimate the size of \(p[i]\) as \(|oc(p[i])|\). Otherwise, consider rule \(r\) such that \(\mathbb{H}(r)^{0}=p\) and \(\mathbb{H}(r)^{i}\) is a variable. Our goal is to estimate the _number of values_ variable \(\mathbb{H}(r)^{i}\) may be replaced with during intelligent grounding. To do so, we consider the argument size estimates for arguments in the positive body of the rule that contain variable \(\mathbb{H}(r)^{i}\). Based on typical intelligent grounding procedures, variable \(\mathbb{H}(r)^{i}\) may not take more values than the minimum of those argument size estimations. 
This gives us an estimate of the argument size relative to a single rule \(r\). The argument size estimate of \(p[i]\) with respect to the entire program may be then computed as the sum of such estimates for all rules such as \(r\) (recall that rule \(r\) satisfies the requirements \(\mathbb{H}(r)^{0}=p\) and \(\mathbb{H}(r)^{i}\) is a variable). Yet, the sum over all rules may heavily overestimate the argument size. To lessen the effect of overestimation we incorporate range estimates discussed before into the described computations. For a tight program \(\Pi\), let \(p[i]\) be an argument in \(\Pi\); \(R\) be the set (12) of rules. By \(S^{t\cdot t}_{est}(p[i])\) we denote an estimate of the argument size \(p[i]\) in \(\Pi\). This estimate is computed as follows: \[S^{t\cdot t}_{est}(p[i])=min\Big{(}\Big{\{}range^{t\cdot t}_{est} (p[i]),\ |oc(p[i])|+\] \[\sum_{r\in R}min\big{(}\{S^{t\cdot t}_{est}(p^{\prime}[i^{\prime} ])\ |\ p^{\prime}[i^{\prime}]\in args(r,\mathbb{H}(r)^{i})\}\big{)}\Big{\}}\Big{)}\] We can argue that the function \(S^{t\cdot t}_{est}\) is total in the same way as we argued that the function \(\downarrow^{t\cdot t}_{est}\) is total. **Example 3.2**: Let us illustrate the computation of the argument size estimates for argument \(s[2]\) in program \(\Pi_{2}\) (introduced in Example 2.2). Given that \(range^{t\cdot t}_{est}(s[2])=2\) and \(oc(s[2])=\emptyset\): \[S^{t\cdot t}_{est}(p[1])=|oc(p[1])|=2\] \[S^{t\cdot t}_{est}(q[1])=min(range^{t\cdot t}_{est}(q[1]),\{|oc( q[1])|+\] \[min(\{S^{t\cdot}_{est}(p[1])\})\})=min(\{2,0+min(\{2\})\})=2\] \[S^{t\cdot t}_{est}(s[2])=min\big{(}range^{t\cdot t}_{est}(s[2]),\] \[\big{\{}|oc(s[2])|+min\big{(}\{S^{t\cdot t}_{est}(p[1]),S^{t\cdot t }_{est}(q[1])\}\big{)}\big{\}}\big{)}=2\] Arbitrary (nontight) program case: To process arbitrary programs (tight and non-tight), we must manage the circular dependencies such as present in sample program \(\Pi_{3}\) defined in Example 2.2 in the section on preliminaries. We borrow and simplify a concept of the component graph by Faber et al. (2012). The _component graph_ of a program \(\Pi\) is an acyclic directed graph \(G^{\textsc{sc}}_{\Pi}=\langle N,E\rangle\) such that \(N\) is the set of strongly connected components in the dependency graph \(G_{\Pi}\) of \(\Pi\) and \(E\) contains the arc \((P,Q)\) if there is an arc \((p,q)\) in \(G_{\Pi}\) where \(p\in P\) and \(q\in Q\). For tight programs, we identify its component graph with the dependency graph itself by associating a singleton set annotating a node with its member. Figure 3 (right) shows the component graph for program \(\Pi_{3}\). For a program \(\Pi\), we obtain an ordering on its predicates by performing a topological sorting on its component graph. We associate each node in this ordering with its position and call it a _strong strata rank_ of each predicate that belongs to a node. For example, \(\{p\},\{r\},\{q,s\}\) is one possible topological sorting of \(G^{\textsc{sc}}_{\Pi_{3}}\). This ordering associates the following strong strata ranks \(1,2,3,3\) with predicates \(p,r,q,s\), respectively. Let \(C\) be a node/component in graph \(G^{\textsc{sc}}_{\Pi}\). By \(\mathcal{P}_{C}\) we denote the set \[\{r\ |\ p\in C,r\in\Pi,\ \text{and}\ \mathbb{H}(r)^{0}=p\}.\] We call this set a _module_. A rule \(r\) in module \(\mathcal{P}_{C}\) is a _recursive rule_ if there exists an atom \(a\) in the positive body of \(r\) so that \(a^{0}=p\) and predicate \(p\) occurs in \(C\). Otherwise, rule \(r\) is an _exit rule_. 
For tight programs, all rules are exit rules. It is also possible to have modules with only recursive rules.

**Example 3.3**: The modules in program \(\Pi_{3}\) introduced in Example 2.2 are

\[\mathcal{P}_{\{p\}}=\{p(1).\ \ \ p(2).\};\ \ \mathcal{P}_{\{r\}}=\{r(2).\ \ \ r(3).\ \ \ r(4).\};\]

and \(\mathcal{P}_{\{q,s\}}\), composed of rules (4), (8), and (9). Rules (8) and (9) are recursive.

In the sequel we consider components whose module contains an exit rule. For a component \(C\) and its module \(\mathcal{P}_{C}\), we construct a partition \(M_{1},...,M_{n}\) (\(n\geq 1\)) in the following way. Every exit rule of \(\mathcal{P}_{C}\) is a member of \(M_{1}\). A recursive rule \(r\) in \(\mathcal{P}_{C}\) is a member of \(M_{k}\) (\(k>1\)) if

* for every predicate \(p\in C\) occurring in \(\mathbb{B}^{+}(r)\), there is a rule \(r^{\prime}\) in \(M_{1}\cup...\cup M_{k-1}\), where \(\mathbb{H}(r^{\prime})^{0}=p\), and
* there is a predicate \(q\) occurring in \(\mathbb{B}^{+}(r)\) such that there is a rule \(r^{\prime\prime}\) in \(M_{k-1}\), where \(\mathbb{H}(r^{\prime\prime})^{0}=q\).

We refer to the unique partition created in this manner as the _component partition_ of \(C\); integer \(n\) is called its _cardinality_. We call elements of a component partition _groups_ (the component partition is undefined for components whose module does not contain an exit rule). Prior to illustrating these concepts by an example, we introduce one more notation. For a component partition \(M_{1},\ldots,M_{k},\ldots,M_{n}\), by \(M_{k}^{p[i]}\) we denote the set

\[\{r\mid r\in M_{k},\ \mathbb{H}(r)^{0}=p,\text{ and }\mathbb{H}(r)^{i}\text{ is a variable}\};\]

and by \(M_{1\ldots k}^{p[i]}\) we denote the union \(\bigcup_{j=1}^{k}M_{j}^{p[i]}\).

**Example 3.4**: Recall program \(\Pi_{3}\) from Example 2.2. The component partition of node \(\{q,s\}\) in \(G_{\Pi_{3}}^{\text{sc}}\) follows:

\[M_{1} =\{q(X,1)\gets p(X).\}\]
\[M_{2} =\{s(X,Y,Z)\gets r(X),p(X),p(Y),q(Y,Z).\}\]
\[M_{3} =\{q(Y,X)\gets s(X,Y,Z).\}\]

For program \(\Pi_{3}\) and its argument \(q[1]\):

\[M_{1\ldots 3}^{q[1]}=\{q(X,1)\gets p(X).\ \ \ q(Y,X)\gets s(X,Y,Z).\}\]

We now generalize the range and argument size estimation formulas for tight programs to the case of arbitrary programs. These formulas are more complex than their "tight versions", yet they perform similar operations at their core. Intuitively, the formulas for tight programs rely on the argument ordering provided by the program's dependency graph. Now, in addition to an order provided by the component graph, we rely on the orders given to us by the component partitions of the program.

In the remainder of this section, let \(\Pi\) be a program; \(p[i]\) be an argument in \(\Pi\); \(C\) be the node in the component graph of \(\Pi\) such that \(p\in C\); \(n\) be the cardinality of the component partition of \(C\); and \(j\) be an integer such that \(1\leq j\leq n\). If the module of \(C\) does not contain an exit rule, then the estimate of the range of an argument \(p[i]\), denoted \(\mathit{range}_{est}(p[i])\), is assumed to be \(0\), and the estimate of the size of an argument \(p[i]\), denoted \(S_{est}(p[i])\), is assumed to be \(0\). We now consider the case when the module of \(C\) contains an exit rule.
By \(\downarrow_{est}(p[i])\) we denote an estimate of a minimum value that may appear in argument \(p[i]\) in program \(\Pi\):

\[\downarrow_{est}(p[i])=\downarrow_{est}^{gr}(p[i],n)\]
\[\downarrow_{est}^{gr}(p[i],j)=min(oc(p[i])\cup\{\downarrow_{est}^{rule}(p[i],j,r)\mid r\in M_{1\ldots j}^{p[i]}\})\]
\[\downarrow_{est}^{rule}(p[i],j,r)=max\big{(}\big{\{}\downarrow_{est}^{split}(p[i],p^{\prime}[i^{\prime}],j)\mid p^{\prime}[i^{\prime}]\in\mathit{args}(r,\mathbb{H}(r)^{i})\big{\}}\big{)}\]
\[\downarrow_{est}^{split}(p[i],p^{\prime}[i^{\prime}],j)=\begin{cases}\downarrow_{est}^{gr}(p^{\prime}[i^{\prime}],j-1),&\text{if }p^{\prime}\text{ is in the same component as }p\\ \downarrow_{est}(p^{\prime}[i^{\prime}]),&\text{otherwise}\end{cases}\]

We note the strong similarity of the combined definitions of \(\downarrow_{est}^{gr}(p[i],j)\) and \(\downarrow_{est}^{rule}(p[i],j,r)\) to the corresponding "tight" formula \(\downarrow_{est}^{t\cdot t}(p[i])\). The formula for \(\downarrow_{est}^{split}(p[i],p^{\prime}[i^{\prime}],j)\) serves two purposes. If the predicate \(p^{\prime}\) is in the same component as predicate \(p\), we decrement the counter \(j\) (intuitively bringing us to the preceding groups in the component partition). Otherwise, we simply use the minimum estimate for \(p^{\prime}[i^{\prime}]\) that is due to the computation relevant to another component.

We now show that the defined functions \(\downarrow_{est},\downarrow_{est}^{gr},\downarrow_{est}^{rule}\), and \(\downarrow_{est}^{split}\) are total. Consider any strong strata ranking of the program's predicates. Then, by \(\mathit{rank}(p)\) we refer to the corresponding strong strata rank of a predicate \(p\). The following table provides the ranks associated with the expressions used to define the functions in question:

\begin{tabular}{|l|l|} \hline Expression & Rank \\ \hline \(\downarrow_{est}(p[i])\) & \(\omega\cdot(rank(p)+1)\) \\ \(\downarrow_{est}^{gr}(p[i],j)\) & \(\omega\cdot rank(p)+j\) \\ \(\downarrow_{est}^{rule}(p[i],j,r)\) & \(\omega\cdot rank(p)+j\) \\ \(\downarrow_{est}^{split}(p[i],p^{\prime}[i^{\prime}],j)\) & \(\omega\cdot rank(p)+j\) \\ \hline \end{tabular}

where \(\omega\) is the smallest infinite ordinal number. It is easy to see that in the definitions of the functions \(\downarrow_{est}\), \(\downarrow_{est}^{gr}\), and \(\downarrow_{est}^{rule}\) the ranks associated with their expressions do not increase. In the definition of \(\downarrow_{est}^{split}\) in terms of \(\downarrow_{est}\), the rank decreases. Thus, the defined functions are total.

By \(\uparrow_{est}(p[i])\) we denote an estimate of a maximum value that may appear in argument \(p[i]\) in program \(\Pi\). It is computed using the formula for \(\downarrow_{est}(p[i])\) with \(min\), \(max\), \(\downarrow_{est}\), \(\downarrow_{est}^{gr}\), \(\downarrow_{est}^{rule}\), and \(\downarrow_{est}^{split}\) replaced with \(max\), \(min\), \(\uparrow_{est}\), \(\uparrow_{est}^{gr}\), \(\uparrow_{est}^{rule}\), and \(\uparrow_{est}^{split}\), respectively. The range of an argument \(p[i]\), denoted \(range_{est}(p[i])\), is computed by the formula for \(range_{est}^{t\cdot t}(p[i])\), where we replace \(\downarrow_{est}^{t\cdot t}\) and \(\uparrow_{est}^{t\cdot t}\) with \(\downarrow_{est}\) and \(\uparrow_{est}\), respectively.
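As an illustration of how these mutually recursive definitions can be evaluated, here is a minimal Python sketch of \(\downarrow_{est}\) (\(\uparrow_{est}\) is obtained by swapping `min` and `max`). The toy data structures `oc`, `groups`, `rule_args`, and `same_component` are hypothetical simplifications of predictor's internals, and edge cases such as rules with empty argument sets are glossed over.

```python
import math

# A toy instance loosely following Example 3.1 (hypothetical encoding):
oc     = {"r[1]": {2, 3, 4}, "p[1]": {1, 2}, "s[1]": set()}
groups = {"r[1]": [], "p[1]": [], "s[1]": [["rule_s"]]}   # M_1^{s[1]} has one rule
rule_args = {("rule_s", "s[1]"): ["r[1]", "p[1]"]}

def args(rule, p_i):
    return rule_args[(rule, p_i)]

def same_component(a, b):
    return False            # no recursion through s[1] in this toy program

def down_est(p_i):
    return down_gr(p_i, len(groups[p_i]))

def down_gr(p_i, j):
    candidates = set(oc[p_i]) | {down_rule(p_i, j, r)
                                 for k in range(j) for r in groups[p_i][k]}
    return min(candidates) if candidates else math.inf   # empty: no ground head expected

def down_rule(p_i, j, r):
    return max(down_split(p_i, q_i, j) for q_i in args(r, p_i))

def down_split(p_i, q_i, j):
    # Same component: step back one group; otherwise use the other component's estimate.
    return down_gr(q_i, j - 1) if same_component(p_i, q_i) else down_est(q_i)

print(down_est("s[1]"))      # prints 2, matching Example 3.1
```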
We define the formula for finding the argument size estimates, \(S_{est}(p[i])\), as follows:

\[S_{est}(p[i])=S_{est}^{gr}(p[i],n)\]
\[S_{est}^{gr}(p[i],j)=min\big{(}\big{\{}range_{est}(p[i]),|oc(p[i])|+\sum_{r\in M_{1\ldots j}^{p[i]}}S_{est}^{rule}(p[i],j,r)\big{\}}\big{)}\]
\[S_{est}^{rule}(p[i],j,r)=min\big{(}\big{\{}S_{est}^{split}(p[i],p^{\prime}[i^{\prime}],j)\mid p^{\prime}[i^{\prime}]\in args(r,\mathbb{H}(r)^{i})\big{\}}\big{)}\]
\[S_{est}^{split}(p[i],p^{\prime}[i^{\prime}],j)=\begin{cases}S_{est}^{gr}(p^{\prime}[i^{\prime}],j-1),&\text{if $p^{\prime}$ is in the same component as $p$}\\ S_{est}(p^{\prime}[i^{\prime}]),&\text{otherwise}\end{cases}\]

We can argue that the function \(S_{est}\) is total in the same way as we argued that the function \(\downarrow_{est}\) is total.

**Program size estimation**

**Keys** We borrow the concept of a key from relational databases. This concept allows us to produce more accurate final estimates as it carries important structural information about predicates and the kinds of instantiations possible for them. (Table 1 presented in the section on experimental analysis illustrates the impact of information on the keys within the implemented system.) For some predicate \(p\), we refer to any set of arguments of \(p\) that can uniquely identify all ground extensions of \(p\) as a _superkey_ of \(p\). We call a minimal superkey a _candidate key_. For instance, let the following be the ground extensions of some predicate \(q\):

\[\{\langle 1,1,a\rangle,\langle 1,2,b\rangle,\langle 1,3,b\rangle,\langle 2,1,c\rangle,\langle 2,2,c\rangle,\langle 2,3,a\rangle\}\]

It is easy to see that both \(\{q[1],q[2]\}\) and \(\{q[1],q[2],q[3]\}\) are superkeys of \(q\), while \(\{q[1]\}\) is not a superkey. Only superkey \(\{q[1],q[2]\}\) is a candidate key. A _primary key_ of a predicate \(p\) is a single chosen candidate key. A predicate may have at most one primary key. For the purposes of this work, we allow the users of predictor to manually specify the primary key. It is possible that some predicates do not have primary keys specified. To handle such predicates, we define \(key(p)\) to mean the following:

\[key(p)=\begin{cases}\text{the primary key of $p$},&\text{if $p$ has a primary key specified}\\ \{p[1],...,p[n]\},&\text{otherwise}\end{cases}\]

where \(n\) is the arity of \(p\). We call an argument \(p[i]\) a _key argument_ if it is in \(key(p)\). For a rule \(r\), by \(kvars(r)\) we denote the set of its variables that occur in its key arguments.

**Rule size estimation** We now have all the ingredients to provide an estimate of the grounding size of each rule in a program. We understand the _grounding size_ of a rule as the number of rules produced as a result of intelligently grounding this rule. For a rule \(r\) in a program \(\Pi\), the estimated grounding size, denoted \(S_{est}(r)\), is computed as follows:

\[S_{est}(r)=\prod_{X\in kvars(r)}min\big{(}\{S_{est}(p[i])\mid p[i]\in args(r,X)\}\big{)}\]

**Implementation Details**

System predictor1 is developed using the Python 3 programming language. predictor utilizes pyclingo version 5, a Python API subsystem of the answer set solving toolkit clingo (Gebser et al., 2015). The pyclingo API enables users to easily access and enhance ASP processing steps within Python code, including access to some data in the processing chain. In particular, predictor uses pyclingo to parse a logic program into an abstract syntax tree (AST) representation.
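As a small illustration of this parsing step, the following sketch parses a program into AST statements and collects its rules. It assumes a clingo Python package recent enough to expose `clingo.ast.parse_string`; older releases (such as the 5.4 series mentioned below) provide an equivalent `clingo.parse_program` entry point instead.

```python
from clingo.ast import parse_string, ASTType

program = """
p(1). p(2).
q(X,1) :- p(X).
"""

rules = []

def collect(statement):
    # Keep only rule statements; other statements (such as the implicit
    # #program directive emitted by the parser) are ignored here.
    if statement.ast_type == ASTType.Rule:
        rules.append(statement)

parse_string(program, collect)

for rule in rules:
    print("head:", rule.head, " body:", list(rule.body))
```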
After obtaining the AST, predictor has immediate access to the internal rule structure of the program and computes estimates for the program using the presented formulas. System predictor is designed for integration with other systems processing ASP programs. It is distributed as a package that can be imported into other systems developed in Python 3, or it can be accessed through a command line interface.

In order to ensure that system predictor is applicable to real-world problems, it supports ASP-Core-2 logic programs. For instance, the estimation formulas presented here generalize well to programs with choice rules and disjunction. Rules with aggregates are also supported. Yet, for such rules, more sophisticated approaches would be required to obtain more precise estimates. The next section covers key details of the ASP-Core-2 support by the predictor system. We then conclude by integrating the predictor system into two rewriting tools, namely, projector and lpopt. We present a thorough experimental analysis for these systems and the enhancement that predictor offers to them.

Footnote 1: [https://www.unomaha.edu/college-of-information-science-and-technology/natural-language-processing-and-knowledge-representation-lab/software/predictor.php](https://www.unomaha.edu/college-of-information-science-and-technology/natural-language-processing-and-knowledge-representation-lab/software/predictor.php)

## 4 Language Extensions: ASP-Core-2 Support

In order to ensure that system predictor is applicable to real-world problems, it has been designed to operate on many common features of ASP-Core-2 logic programs. In the following, we extend the definition of logic rules to include these features and discuss how these features are handled by predictor.

**Pools and Intervals** In ASP-Core-2 logic programs, an atom may have the form \(p(t_{1};...;t_{n})\), where \(p\) is a predicate of arity 1, and \(t_{1};...;t_{n}\) is a semicolon-separated list of terms. Here, \(t_{1};...;t_{n}\) is a _pool_ term. A predicate with a pool term is "syntactic sugar" that indicates there is a copy of that rule for every object constant in the pool.

**Example 4.1**: The following rule containing pool terms:

\[p(a;b)\gets q(c;d).\]

can be expanded to the following rules:

\[p(a)\gets q(c).\]
\[p(a)\gets q(d).\]
\[p(b)\gets q(c).\]
\[p(b)\gets q(d).\]

Similarly, ASP-Core-2 programs may contain atoms of the form \(p(l..r)\), where \(p\) is a predicate of arity 1, and \(l\), \(r\) are terms. Here, \(l..r\) is an _interval_ term. A predicate with an interval term is "syntactic sugar" indicating that there is a copy of this rule for every integer in the range from \(l\) to \(r\), inclusive.

**Example 4.2**: The following rule containing interval terms:

\[p(1..3,a)\gets q(1..2).\]

can be expanded to the following rules:

\[p(1,a) \gets q(1).\]
\[p(1,a) \gets q(2).\]
\[p(2,a) \gets q(1).\]
\[p(2,a) \gets q(2).\]
\[p(3,a) \gets q(1).\]
\[p(3,a) \gets q(2).\]

For both pool and interval terms, system predictor handles the program as though it were in its expanded form.

**Aggregates**: An _aggregate element_ has the form

\[t_{0},...,t_{k}:a_{0},...,a_{m},not\ a_{m+1},...,not\ a_{n}\]

where \(k\geq 0\), \(n\geq m\geq 0\), \(t_{0},...,t_{k}\) are terms and \(a_{0},...,a_{n}\) are atoms. An _aggregate atom_ has the form

\[\#aggr\{e_{0},...,e_{n}\}\prec t\]

where \(n\geq 0\) and \(e_{0},...,e_{n}\) are aggregate elements. Symbol \(\#aggr\) is either \(\#count\), \(\#sum\), \(\#max\), or \(\#min\).
Symbol \(\prec\) is either \(<\), \(\leq\), \(=\), \(\neq\), \(>\), or \(\geq\). Symbol \(t\) is a term. System predictor supports rules containing aggregates to a limited extent. In particular, predictor will simplify such a rule as if it had no aggregate atoms.

**Example 4.3**: The rule containing an aggregate atom:

\[p(X)\gets q(X),\#count\{Y:r(X,Y)\}<3.\]

is seen by predictor as the following rule:

\[p(X)\gets q(X).\]

while the only variable seen in this rule is \(X\).

It is important to note that if an aggregate contains variables, it is possible that the _length of a rule_ expands during grounding, where the length of a rule is understood as the number of atoms in the rule. We do not consider this length expansion when computing the grounding size of a rule.

**Disjunctive and Choice Rules**: A _disjunctive rule_ is an extended form of an ASP logic rule that allows disjunctions in its head. Such rules are of the form

\[a_{0}\lor...\lor a_{k}\gets a_{k+1},...,a_{m},not\ a_{m+1},...,not\ a_{n}.\]

where \(n\geq m\geq k\geq 0\), and \(a_{0},...,a_{n}\) are atoms. System predictor handles a disjunctive rule by replacing it with the set of rules created in the following way. For each atom \(a\) in the head of a disjunctive rule \(r\), predictor creates a new rule of the form \(a\leftarrow\mathbb{B}(r)\). For computing range and argument size estimates, all of these newly created rules are used. However, when estimating the grounding size of the original rule, only one of the rules is used.

**Example 4.4**: The disjunctive rule \(r\):

\[p(1)\lor p(2)\gets q(1).\]

is replaced by the following two rules:

\[p(1) \gets q(1).\]
\[p(2) \gets q(1).\]

Yet, only one of those rules is used for estimating the grounding size of the original rule. Using these rules is sufficient for estimating grounding information, even though they are not semantically equivalent to the original disjunctive rule.

A _condition_ is of the form

\[a_{0}:a_{1},...,a_{m},not\ a_{m+1},...,not\ a_{n}\]

where \(n\geq m\geq 0\), and \(a_{0},...,a_{n}\) are atoms. We refer to \(a_{0}\) as the head of the condition. A _choice atom_ is of the form \(l\{c_{1};...;c_{n}\}r\), where \(l\) is an integer, \(r\) is an integer such that \(r\geq l\), and \(c_{1};...;c_{n}\) is a semicolon-separated list of conditions. We now extend the definition of a rule given by (1) to allow the head to be a choice atom. We refer to rules whose head contains a choice atom as _choice rules_. System predictor handles a choice rule similarly to the case of a disjunctive rule, replacing it with the set of rules created in the following way. For each atom \(a\) in the head of a condition in the choice atom in rule \(r\), create a new rule of the form \(a\leftarrow\mathbb{B}(r)\). For computing range and argument size estimates, all of these newly created rules are used. However, when estimating the grounding size of the original rule, only one of the rules will be used. Note that, as with aggregates, choice rules can increase the length of a rule.

**Example 4.5**: The choice rule:

\[1\{p(X):q(1);p(Y)\}1\gets r(X,Y),s(Y).\]

is replaced by the following two rules:

\[p(X) \gets r(X,Y),s(Y).\]
\[p(Y) \gets r(X,Y),s(Y).\]

Yet, only one of those rules is used for estimating the grounding size of the original rule.

**Functions**: In ASP-Core-2, a term may also be of the form \(f(t_{1},...,t_{n})\), where \(f\) is a function symbol and \(t_{1},...,t_{n}\) (\(n>0\)) are terms. We call terms of this form _function_ terms.
In order to be more compliant with ASP-Core-2 features, predictor is capable of running on programs containing function terms; however, when a function term is encountered by predictor, it simply sees the function term as an object constant.

**Binary Operations**: The ASP-Core-2 standard also allows _binary operation_ terms. A binary operation term is of the form \(t_{1}\ op\ t_{2}\), where \(t_{1}\) and \(t_{2}\) are each either an integer object constant, a variable, or a binary operation, and \(op\) is a valid binary operator2. If an atom contains a binary operation term, system predictor handles it in one of three ways. If the binary operation has no variables, it treats the term as an object constant. If the binary operation contains exactly one variable, it treats the term as that variable. Otherwise, the atom is treated as if it were part of the negative body (and therefore not used in estimations).

Footnote 2: [http://potassco.sourceforge.net/doc/pyclingo/clingo.ast.html#BinaryOperator](http://potassco.sourceforge.net/doc/pyclingo/clingo.ast.html#BinaryOperator)

**Example 4.6**: In the following rule containing binary operation terms:

\[\gets p(1+1),q(2*X+1),r(2*X+Y),s(Y).\]

the atoms are viewed as follows. Atom \(p(1+1)\) is seen as containing an object constant term. Atom \(q(2*X+1)\) is seen as the atom \(q(X)\). Atom \(r(2*X+Y)\) is seen as being part of the negative body.

## 5 Experimental Analysis

We investigated the utility of system predictor by integrating it as a decision support mechanism into the ASP rewriting tool projector to create tool prd-projector, as well as into the ASP rewriting tool lpopt to create tool prd-lpopt. These tools are discussed in the following subsections.

### _System prd-projector_

Figure 2 (presented in the Introduction section) demonstrates how predictor is integrated with system projector, resulting in what we call prd-projector. Note how predictor runs entirely independently of and prior to the grounding step of a considered ASP grounder-solver pair. The rewriting tool projector is documented by Hippen and Lierler (2019). This tool focuses on the so-called projection technique. In its default settings, it studies each rule of a given program and, whenever a projection rewriting is applicable to a rule, projector rewrites it accordingly. Thus, whenever the rewriting is established to be possible, it is also performed.

The prd-projector tool extends the projector system with a decision-making mechanism, supported by predictor, on whether to perform a rewriting or not. When projector establishes that a rewriting is possible, the system predictor compares the original rule against its rewritten counterpart in terms of their predicted grounding sizes. The projection rewriting will only be applied if the rewritten rule is predicted to produce a smaller grounding footprint. In particular, for each rule \(r\) in program \(\Pi\), projector will create a set \(R\) of rules, which represents one of the possible "projected" versions of \(r\). This set \(R\) of rules is then substituted into \(\Pi\) to create program \(\Pi^{\prime}\). If the predicted grounding size for this new program is smaller than, or equal to, the original, the set \(R\) of rules is kept and \(\Pi^{\prime}\) becomes the program considered in future evaluations. However, if the new predicted grounding size is larger than the original, set \(R\) is discarded, and prd-projector will move on to the next rule in \(\Pi\). To summarize, tool predictor is used by projector in two ways:
1. When prd-projector encounters a tie in projector's default heuristics for selecting variables to project, prd-projector generates the resulting projections for each of the variables and uses the projection that is predicted to have the smallest grounding size.
2. prd-projector only performs a projection if the prediction for the projection is smaller than the predicted grounding size of the original rule.

We note that it is possible for projections to occur inside of aggregate expressions. System predictor is not used to decide if these projections should be performed, so these projections always occur in prd-projector.

### _System prd-lpopt_

Figure 2, with the box representing projector replaced by a box representing lpopt, demonstrates how predictor is integrated with system lpopt. We refer to the version of lpopt integrated with predictor as prd-lpopt. Once again, predictor runs entirely independently of and prior to the grounding step. The rewriting tool lpopt is documented by Bichler (2015) and Bichler et al. (2020). This tool focuses on the so-called rule decomposition technique. This technique is strongly related to the rewriting championed by system projector. In fact, projector and lpopt can be characterized as tools performing the same kind of rewriting while using different heuristics on how and when to apply this rewriting. Both systems attempt to reduce the number of variables occurring in a rule by (a) introducing an auxiliary predicate and (b) replacing the original rule by new rules. In other words, there are often multiple ways available for rewriting the same rule, and these systems may champion different ways.

In its default settings, lpopt studies each rule of a given program and, whenever a rule decomposition rewriting is applicable to a rule, lpopt rewrites it accordingly. Thus, it behaves just as the projector system does when used with its default settings: whenever the rewriting at hand is established to be possible, it is also performed. The prd-lpopt tool extends the lpopt system with the decision-making mechanism of predictor on whether to perform a rewriting or not, in the same manner as the prd-projector tool extends the projector system. We refer the reader to the previous subsection for the details.

### Evaluation

To evaluate the usefulness of predictor, two sets of experiments are performed. First, an "intrinsic" evaluation of the accuracy of the predicted grounding size compared to the actual grounding size is examined. Second, an "extrinsic" evaluation of systems prd-projector and prd-lpopt is conducted to examine whether system predictor is indeed of use as a decision support mechanism on whether or not to perform the rewritings of projector and lpopt, respectively. We note that the latter evaluation is of special value, illustrating the potential of system predictor and technology of this kind: it assesses predictor's impact when it is used in practice for its intended purpose as a decision-making assistant. The intrinsic evaluation has its value in identifying potential future work directions and pitfalls in estimations. Overall, we will observe in the intrinsic evaluation that our estimates frequently differ from reality by an order of magnitude or more. Yet, the extrinsic evaluation clearly shows that predictor performs as a solid decision-making assistant for the purpose of improving rewriting tools whose performance depends on deciding when a rewriting should take place versus not.
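To make the shared decision mechanism concrete before describing the experimental setup, here is a schematic Python sketch of the accept-or-discard loop used by both prd-projector and prd-lpopt; `candidate_rewritings`, `substitute`, and `predicted_size` are hypothetical helper names standing in for the actual tool internals.

```python
def rewrite_with_predictions(program, candidate_rewritings, substitute, predicted_size):
    """Apply a rewriting only when it is predicted not to increase grounding size.

    candidate_rewritings(program)        -- yields pairs (rule, replacement_rules)
    substitute(program, rule, new_rules) -- the program with rule replaced by new_rules
    predicted_size(program)              -- sum of S_est(r) over the program's rules
    """
    current = program
    for rule, replacement in candidate_rewritings(current):
        candidate = substitute(current, rule, replacement)
        if predicted_size(candidate) <= predicted_size(current):
            current = candidate   # keep the rewriting
        # otherwise discard the candidate and move on to the next rule
    return current
```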
Benchmarks were gathered from two different sources. First, programs from the Fifth Answer Set Programming Competition (Calimeri et al., 2016) were used. Of the 26 programs in the competition, 13 were selected (those on which system projector, in its default settings, performed rewritings). For each program, the **20** instances (originally selected for the competition) were used. One interesting thing to note about these encodings is that they are generally already well optimized. As such, performing projections often leads to an increase in grounding size. Second, benchmarks were gathered from an application called aspccg implementing a natural language parser (Lierler and Schuller, 2012). This domain has been extensively studied in Buddenhagen and Lierler (2015) and was used to evaluate system projector by Hippen and Lierler (2019). In that evaluation, the authors considered 3 encodings from aspccg: enc1, enc7, enc19. We introduced changes to the encodings enc1, enc7, and enc19 to make them compliant with the ASP-Core-2 standard (Calimeri et al., 2020) and thus compatible with the lpopt system. We utilize the same **60** instances as in the mentioned evaluation of projector.

In our experiments, system predictor was provided with the key information for some root predicate arguments within several of the benchmarks. Non-default keys used for all benchmarks can be found in Table 1. The sign "-" within the table denotes benchmarks where no key information was provided by the user. Table 2 details interesting features in the programs from both domains. The second column provides information about some features present in the programs. These features are abbreviated with the meanings as follows (abbreviation letters bolded): **n**on-tight program, **a**ggregates, **b**inary operation terms, **c**hoice rules, and **f**unction terms. The competition benchmarks also consisted of two encodings: a newer 2014 encoding and a 2013 encoding from the previous year. The third column specifies which encoding was used (the 2013 encoding was used in case the newer encoding admitted no projections).

All tests were conducted on Ubuntu 18.04.3 with an Intel(r) Xeon(r) CPU E5-1620 v3 @ 3.50GHz and 32 GB of RAM. Furthermore, Python version 3.7.3 and pyclingo version 5.4.0 were used to run predictor.

\begin{table} \begin{tabular}{l l} \hline \hline **Program** & **Keys** \\ \hline Bottle Filling & - \\ Hanoi Tower & - \\ Incremental Scheduling & precedes/2[1], importance/2[1], job\_device/2[1], \\ & job\_len/2[1], deadline/2[1], curr\_job\_start/2[1], \\ & curr\_on\_instance/2[1], instances/2[1] \\ Knight Tour with Holes & - \\ Labyrinth & - \\ Minimal Diagnosis & obs\_elabel/3[1,2] \\ Nomystery & at/2[1], fuel/2[1], goal/2[1] \\ Permutation Pattern Matching & t/2[1], p/2[1] \\ Ricochet Robots & amo/2[1], d1/2[1], dir/2[1] \\ Solitaire & - \\ Stable Marriage & manAssignsScore/3[1,2], womanAssignsScore/3[1,2] \\ Valves Location & dem/3[1,2] \\ Weighted-Sequence & leafWeightCardinality/3[1] \\ aspccg enc1; enc7; enc19 & word\_at/2[2], category\_tag\_nofeatures/3[1], \\ & category\_tag/3[1], adjacent/2[1] \\ \hline \hline \end{tabular} \end{table} Table 1: _Key information for benchmark programs_

Grounding and solving was done by clingo version 5.4.0. For all benchmarks, execution was limited to 5 minutes.

#### 5.3.1 Intrinsic Evaluation

Let \(S\) be the true grounding size of an instance of a program computed by gringo, i.e., the number of rules in the ground program produced by gringo.
Let \(S^{\prime}\) be the grounding size predicted by predictor for the same instance. We define a notion of an _error factor_ on a program instance as \(S^{\prime}/S\). The _average error factor_ of a program/benchmark is the average of all error factors across the instances of a program. Table 3 shows the average error factor using prd-projector for all programs3.

\begin{table} \begin{tabular}{l l l} \hline **Program** & **Average Error Factor** & **Average Error Factor (Keyless)** \\ \hline Hanoi Tower & - & 1.5 \\ Nomystery & 1.5 & 1.5 \\ Permutation Pattern Matching\(*\) & **3.8** & 5.0 \\ Solitaire & - & 4.3 \\ Stable Marriage & **3.7** & \(7.5*10^{5}\) \\ \hline Bottle Filling & - & \(4.9*10^{9}\) \\ Incremental Scheduling\(*\) & \(1.1*10^{5}\) & \(1.1*10^{5}\) \\ Labyrinth\(*\) & - & \(1.3*10^{1}\) \\ Minimal Diagnosis & \(8.2*10^{3}\) & \(8.2*10^{3}\) \\ Valves Location\(*\) & \(\textbf{1.3}*\textbf{10}^{1}\) & \(1.6*10^{1}\) \\ aspccg enc1 & \(\textbf{2.9}*\textbf{10}^{1}\) & \(3.1*10^{1}\) \\ aspccg enc7 & \(\textbf{1.3}*\textbf{10}^{1}\) & \(1.4*10^{1}\) \\ aspccg enc19 & \(2.2*10^{1}\) & \(2.2*10^{1}\) \\ \hline Knight Tour with Holes & - & \(1.9*10^{-4}\) \\ Ricochet Robots & \(2.0*10^{-1}\) & \(\textbf{2.2}*\textbf{10}^{-1}\) \\ Weighted Sequence & \(6.0*10^{-3}\) & \(\textbf{1.1}*\textbf{10}^{-2}\) \\ \hline \end{tabular} \end{table} Table 3: _Average error factor for benchmark programs, with and without keys_

The third column presents the case where no key information is provided. The sign "-" indicates that for this benchmark no key information was provided within the main encoding. The average error factor shown was rounded to make comparisons easier. An asterisk (\(*\)) next to a benchmark name indicates that not all 20 instances of this benchmark were grounded within the allotted time limit. For instance, 19 instances of the _Incremental Scheduling_ benchmark were successfully grounded, while the remaining instance timed out. For the benchmarks annotated by \(*\) we only report the average error factor over the instances that grounded successfully.

Footnote 3: The numbers presented for aspccg enc1, enc7, and enc19 are due to the original encodings of these benchmarks, which are not compatible with the ASP-Core-2 standard and were utilized in the experiments by Hippen and Lierler (2021).

We partition the results into three groups using the average error factor. The partition is indicated by the horizontal lines in Table 3. First, there are five programs where the estimates computed by predictor are, on average, less than one order of magnitude over. Second, there are eight programs that are, on average, greater than one order of magnitude over. Finally, three programs are predicted to have lower grounding sizes than in reality. We also note the impact that keys have on certain programs.
We especially emphasize the difference in error between _Stable Marriage_ with and without keys, where the average error factor differs by 5 orders of magnitude. The numbers in bold mark instances in which information on keys changes the prediction. It is obvious that the accuracy of system predictor could still use improvement. In many cases the estimates are off drastically. These results are not necessarily surprising. We identify five main reasons for the observed behavior of predictor:

1. Insufficient data modeling is one weak point of predictor. Since we do not keep track of which actual constants could be present in the ground extensions of a predicate, it is often the case that we overestimate argument size due to our inability to identify repetitive values.
2. Since we only identified keys for root predicate arguments, many keys were likely missed; automatic key detection is a direction of future work.
3. System predictor has limited support for such common language extensions as aggregates.
4. System predictor is vulnerable to what is known as _error propagation_ (Ioannidis and Christodoulakis, 1991).
5. While one might typically expect predictor to overestimate due to its limited capabilities in detecting repeated data, the underestimation on the _Knight Tour with Holes_, _Ricochet Robots_, and _Weighted Sequence_ programs is not surprising given that these programs are non-tight and utilize binary operations in terms.

#### 5.3.2 Extrinsic Evaluation

Here, we examine the _relative_ accuracy of system predictor alongside projector and lpopt. In other words, we measure the quality of predictor by analyzing the impact it has on the performance of projector and lpopt. We recall that in all experiments predictor is provided with the key information documented in Table 1.

Let \(S\) be the grounding size of an instance of a program, where grounding is produced by gringo. Let \(S^{\prime}\) be the grounding size of the same instance in a modified (rewritten) version of the program. In this context, the modified version will either be the logic program output after using projector/lpopt or the logic program output after using prd-projector/prd-lpopt. The _grounding size factor_ of a program's instance is defined as \(S^{\prime}/S\). As such, a grounding size factor greater than 1 indicates that the modification increased the grounding size, whereas a value less than 1 indicates that the modification improved/decreased the grounding size. The _average grounding size factor_ of a benchmark is the average of all grounding size factors across the instances of a benchmark.

While we target improving the grounding size of a program, the ultimate goal is to improve the overall performance of ASP grounding/solving. Thus, we also compare the execution time of the programs, as that is ultimately what we want to reduce. Let \(S\) be the execution time of the answer set solver clingo (including grounding and solving) on an instance of a benchmark. Let \(S^{\prime}\) be the execution time of clingo on the same instance in a modified version of the benchmark. The _execution time factor_ of a program's instance is defined as \(S^{\prime}/S\). The _average execution time factor_ of a benchmark is the average of all execution time factors across the instances of a benchmark. Table 4 displays the average grounding size factor together with the average execution time factor for projector and prd-projector on all benchmark programs.
An asterisk (\(*\)) following a program name indicates that not all 20 instances were grounded. In these cases, the average grounding size factor was only computed from instances where all 3 versions of the program (original, projector, prd-projector) completed solving. The same applies to the computation of the average execution time factor. While we only consider instances where all 3 versions of the program completed grounding and then solving, we have included the exact number of instances grounded and solved by each version of the program, to show that the factors presented may be misleading. For example, consider program _Inc. Scheduling_: while prd-projector seems to have a slightly slower execution time than projector alone, prd-projector managed to solve an additional instance, reflected by the decreased grounding time; therefore, it would not be accurate to say that projector outperformed prd-projector on that encoding. A dagger (\(\dagger\)) following a program name indicates that there was a slight improvement for prd-projector; however, this information is lost at the precision shown.

We partition the results into three sets, indicated by the horizontal lines in Table 4. The first set denotes programs in which predictor improved the grounding size factor of the program, the second set denotes programs in which predictor did not have a noticeable effect on the grounding size factor, and the last set denotes programs in which predictor harmed the grounding size factor of the program as compared to the rewriting without predictions. We note that there are five programs in which prd-projector reduces the grounding size when compared to projector, five programs in which prd-projector does not impact the grounding size, and six programs in which prd-projector increases the grounding size. By grey highlight we mark the benchmarks where a decrease in grounding size by means of using predictor resulted in an increase of solving time.

\begin{table} \begin{tabular}{l|c c|c c|c c c}  & \multicolumn{2}{c|}{Grounding Size Factor} & \multicolumn{2}{c|}{Execution Time Factor} & & & \\ **Program** & PROJ & PRD-PROJ & PROJ & PRD-PROJ & **Svd.** & **Svd.** proj & **Svd.** prd-proj \\ \hline Hanoi Tower & 1.41 & 1.00 & 1.67 & 1.00 & 20 & 20 & 20 \\ Inc. Scheduling* & 1.14 & 1.12 & 1.06 & 1.10 & 13 & 13 & 14 \\ Minimal Diagnosis & 1.06 & 1.00 & 1.04 & 1.00 & 20 & 20 & 20 \\ Solitaire & 1.41 & 1.00 & 1.32 & 0.99 & 19 & 19 & 19 \\ Stable Marriage & 0.13 & 0.12 & 0.18 & 0.17 & 19 & 19 & 19 \\ \hline Bottle Filling & 1.36 & 1.36 & 1.44 & 1.43 & 20 & 20 & 20 \\ Labyrinth & 1.11 & 1.11 & 5.26 & 5.27 & 18 & 18 & 18 \\ Perm. Pattern Match.\(*\)\(\dagger\) & 0.13 & 0.13 & 0.14 & 0.14 & 16 & 20 & 20 \\ Valves Location\(\dagger\) & 1.00 & 1.00 & 1.03 & 0.93 & 3 & 3 & 3 \\ Weighted Sequence\(\dagger\) & 1.00 & 1.00 & 3.05 & 1.59 & 19 & 16 & 17 \\ aspccg enc1 & 1.01 & 1.01 & 1.65 & 2.28 & 60 & 60 & 60 \\ \hline aspccg enc7 & 0.90 & 1.00 & 1.57 & 2.20 & 60 & 60 & 60 \\ aspccg enc19 & 0.70 & 0.81 & 1.71 & 2.59 & 60 & 60 & 60 \\ Knight Tour with Holes & 0.80 & 0.90 & 0.50 & 2.45 & 1 & 1 & 1 \\ Nomystery & 0.62 & 1.00 & 1.23 & 1.00 & 7 & 8 & 7 \\ Ricochet Robots & 0.91 & 1.00 & 0.85 & 1.00 & 20 & 20 & 20 \\ \hline \end{tabular} \end{table} Table 4: _Average grounding size factors and execution time factors for proj and prd-proj_

Table 5 displays the average grounding size factor together with the average execution time factor for lpopt and prd-lpopt on all benchmark programs.
Its data is organized in the same style as that of Table 4 comparing projector and prd-projector. We note that there are ten programs in which prd-lpopt reduces the grounding size when compared to lpopt, two programs in which prd-lpopt does not impact the grounding size, and four programs in which prd-lpopt increases the grounding size.

\begin{table} \begin{tabular}{l|c c|c c|c c c}  & \multicolumn{2}{c|}{Grounding Size Factor} & \multicolumn{2}{c|}{Execution Time Factor} & & & \\ **Program** & lpopt & prd-lpopt & lpopt & prd-lpopt & **Svd.** & **Svd.** lpopt & **Svd.** prd-lpopt \\ \hline aspCCG ENC1 & 0.92 & 0.89 & 0.88 & 0.87 & 60 & 60 & 60 \\ aspCCG ENC7 & 0.80 & 0.75 & 0.83 & 0.79 & 60 & 60 & 60 \\ Hanoi Tower & 1.41 & 1.00 & 1.59 & 0.99 & 20 & 19 & 20 \\ Minimal Diagnosis & 1.17 & 1.00 & 1.13 & 1.00 & 20 & 20 & 20 \\ Bottle Filling & 1.00 & 0.28 & 0.98 & 0.39 & 20 & 20 & 20 \\ Valves Location & 1.00 & 1.00 & 1.00 & 0.96 & 3 & 3 & 3 \\ Solitaire* & 1.03 & 1.01 & 4.53 & 0.94 & 18 & 18 & 18 \\ Knight Tour with Holes & 3.36 & 2.18 & 1.28 & 0.90 & 1 & 1 & 1 \\ Labyrinth & 1.24 & 1.12 & 10.45 & 9.36 & 18 & 18 & 18 \\ Weighted Sequence\(\dagger\) & 1.07 & 1.04 & 1.13 & 2.11 & 19 & 20 & 20 \\ \hline Stable Marriage\(\dagger\) & 1.01 & 1.01 & 1.02 & 1.02 & 19 & 19 & 19 \\ Perm. Pattern Match.\(*\dagger\) & 0.14 & 0.14 & 1.15 & 0.89 & 16 & 19 & 20 \\ \hline Inc. Scheduling & 1.78 & 2.30 & 1.01 & 1.24 & 13 & 13 & 13 \\ aspCCG ENC19 & 0.78 & 0.87 & 0.92 & 0.93 & 60 & 60 & 60 \\ Nomystery & 0.70 & 0.95 & 1.06 & 2.72 & 7 & 8 & 8 \\ Ricochet Robots & 1.09 & 1.18 & 1.01 & 2.02 & 20 & 20 & 20 \\ \hline \end{tabular} \end{table} Table 5: _Average grounding size factors and execution time factors for lpopt and prd-lpopt_

Overall, the results illustrate the validity of the predictor approach. The system has an especially positive impact within its integration with lpopt. The presented experimental data also illustrates once more the importance of developing rewriting techniques and the possibility of their positive impact. Together with that, decision support systems exemplified by predictor have to be designed and engineered to achieve the full potential of ASP. We trust that system predictor is a solid step in that direction, providing room for numerous improvements to account for nontrivial language features of ASP dialects.

## 6 Conclusions and Future Work

We introduced a method for predicting the grounding size of answer set programs. We implemented the described method in the stand-alone system predictor, which runs agnostic to any answer set grounder/solver pair. We expect this tool to become a foundation for decision support systems for rewriting/preprocessing tools in ASP. Indeed, using predictor as a decision support guide to the rewriting system projector improves its outcomes overall. The same is observed for the case of the rewriting system lpopt. This proves the validity of the proposed approach, especially as further methods for improving estimation accuracy are explored in the future. As such, system predictor is a unique tool, unparalleled in earlier research, ready for use within preprocessing frameworks in ASP. As discussed in the introduction, this work provides an important step towards achieving the goal of _truly_ declarative answer set programming.

The section on intrinsic evaluation indicated a number of potential areas for improving the estimations. This is one of the future work directions. Another one is utilizing predictor within other preprocessing tools of ASP. We trust that both efforts can now be undertaken as a community effort given the availability and transparency of predictor.
Also, rather sophisticated techniques such as database-inspired optimizations, back-jumping, rewritings, and binder splitting techniques are available in modern implementations of grounders (Gebser et al.; Calimeri et al., 2011b, 2017). As of now these techniques are not accounted for when estimates are produced. Also at the moment, uniform distribution of values between the maximum and minimum in predicate arguments is assumed. Looking into different assumptions is also an interesting future direction.

We would like to thank Mirek Truszczynski, Daniel Houston, Liu Liu, Michael Dingess, Roland Kaminski, Abhishek Parakh, Victor Winter, Parvathi Chundi, and Jorge Fandinno for valuable discussions on the subject of this paper. The work was partially supported by NSF grant 1707371. The author(s) declare none.
2309.02328
Neurosymbolic Meta-Reinforcement Lookahead Learning Achieves Safe Self-Driving in Non-Stationary Environments
In the area of learning-driven artificial intelligence advancement, the integration of machine learning (ML) into self-driving (SD) technology stands as an impressive engineering feat. Yet, in real-world applications outside the confines of controlled laboratory scenarios, the deployment of self-driving technology assumes a life-critical role, necessitating heightened attention from researchers towards both safety and efficiency. To illustrate, when a self-driving model encounters an unfamiliar environment in real-time execution, the focus must not solely revolve around enhancing its anticipated performance; equal consideration must be given to ensuring its execution or real-time adaptation maintains a requisite level of safety. This study introduces an algorithm for online meta-reinforcement learning, employing lookahead symbolic constraints based on \emph{Neurosymbolic Meta-Reinforcement Lookahead Learning} (NUMERLA). NUMERLA proposes a lookahead updating mechanism that harmonizes the efficiency of online adaptations with the overarching goal of ensuring long-term safety. Experimental results demonstrate NUMERLA confers the self-driving agent with the capacity for real-time adaptability, leading to safe and self-adaptive driving under non-stationary urban human-vehicle interaction scenarios.
Haozhe Lei, Quanyan Zhu
2023-09-05T15:47:40Z
http://arxiv.org/abs/2309.02328v1
Neurosymbolic Meta-Reinforcement Lookahead Learning Achieves Safe Self-Driving in Non-Stationary Environments ###### Abstract In the area of learning-driven artificial intelligence advancement, the integration of machine learning (ML) into self-driving (SD) technology stands as an impressive engineering feat. Yet, in real-world applications outside the confines of controlled laboratory scenarios, the deployment of self-driving technology assumes a life-critical role, necessitating heightened attention from researchers towards both safety and efficiency. To illustrate, when a self-driving model encounters an unfamiliar environment in real-time execution, the focus must not solely revolve around enhancing its anticipated performance; equal consideration must be given to ensuring its execution or real-time adaptation maintains a requisite level of safety. This study introduces an algorithm for online meta-reinforcement learning, employing lookahead symbolic constraints based on _Neurosymbolic Meta-Reinforcement Lookahead Learning_ (NUMERLA). NUMERLA proposes a lookahead updating mechanism that harmonizes the efficiency of online adaptations with the overarching goal of ensuring long-term safety. Experimental results demonstrate NUMERLA confers the self-driving agent with the capacity for real-time adaptability, leading to safe and self-adaptive driving under non-stationary urban human-vehicle interaction scenarios. reinforcement learning, meta-learning, cyber security, autonomous vehicles, human safety ## I Introduction The application of machine learning (ML) in self-driving (SD) technology represents a marvel of engineering, enabling vehicles to process an array of sensor inputs in real-time, interpret complex surroundings, and execute actions with a precision that was once relegated to the realm of science fiction. Recent advances in the field of machine learning, as evidenced by works such as [1, 2, 3], have triggered a significant surge of curiosity and investigation into the realm of learning-driven SD [4]. This application has arisen in vehicles that can correctly work through known cityscapes, anticipate pedestrian behavior, and interact perfectly with other vehicles, all while following traffic rules and optimizing fuel efficiency. Nevertheless, beyond controlled experimental setups, the inherent unpredictability of artificial intelligence (AI) becomes evident when a self-driving vehicle confronts a new and unfamiliar situation. In such instances, the system's performance might deteriorate or lead to a crash when encountering unanticipated scenarios on real-world roads. The inaugural instance of a pedestrian fatality attributed to autonomous vehicles surfaced in 2018, when a self-driving Uber vehicle collided with a pedestrian crossing an intersection in Tempe, Arizona, during the nighttime [5]. This tragic event highlights the critical importance of improving the safety and adaptability of autonomous driving systems. Considering such challenges, a pertinent question arises: How can advances in technologies and methodologies help enhance the capability of autonomous vehicles to operate safely and reliably in diverse and complex environments? Fig. 1: An illustration of Neurosymbolic Meta-Reinforcement Lookahead Learning. When driving in a changing environment, the agent first uses observation from the environment to calibrate its belief at every time step about the mode. Based on its belief, the agent conjectures its performance in the future within a lookahead horizon. 
Then, using this conjecture, the agent searches its knowledge to find suitable safety constraints. In the meantime, the knowledge of the agent will update itself by symbolic safety constraint adaptation if needed. The policy is adapted through conjectural lookahead optimization with safety constraints, leading to an (empirically) suboptimal online control with a long-term safety guarantee.

Stochastic policies offer various advantages, including increased robustness against uncertainty and environmental variations, improved exploration capabilities, and compatibility with policy search algorithms like evolutionary strategies or Monte Carlo (MC) methods [9]. However, limited generalization ability prevents RL from wide application in real SD systems when they encounter nonstationary environments different from those seen at training time. This drawback also makes stochastic policies even more unstable in life-critical execution.

Enhancing the adaptability of reinforcement learning (RL) policies is the objective of meta-reinforcement learning (meta-RL), which attempts to discover a meta-policy capable of adeptly adjusting and delivering satisfactory performance across a spectrum of environments [10]. While prior works [10, 11, 12, 13] have dedicated significant efforts to this pursuit, many of them continue to rely on offline methodologies. These approaches display the capacity to adapt to a diverse array of tasks within environments they are exposed to during their training in offline settings. The practical implementation of online machine learning often faces challenges due to its time-sensitive nature. Processing and updating models in real time can be demanding, making time a critical factor when deploying such systems in real-world applications. Beyond the time constraints, another significant challenge arises: the assurance of safety during policy execution remains an ongoing concern in these methods.

In conjunction with the real-time adaptation capability, several researchers have incorporated safety-centric learning to support policy robustness. For instance, in [14], an approach leveraging symbolism is proposed to formulate distinct safety policies tailored to various state partitions. Similarly, [15] presents a method that constructs a shield for actions based on observation inputs, ensuring the safety of each individual step. Notably, neither of these approaches accounts for the dynamic nature of the environment. This implies that in scenarios where the initial environmental observations are incomplete or where the environment is subject to change, the effectiveness of the safety mechanisms might diminish.

**Our Contributions** In response to the dual challenges of limited real-time adaptation capabilities and the quest for safety assurance, this study introduces an algorithm for online meta-reinforcement learning, employing lookahead symbolic constraints based on _Neurosymbolic Meta-Reinforcement Lookahead Learning_ (NUMERLA). The underlying principle of NUMERLA is to facilitate secure real-time learning by continually updating safety constraints. The core idea involves employing logical statements as safety constraints for the process of secure online meta-adaptation learning (OMAL) [See Section II-B]. These constraints are iteratively refined in a forward-looking manner during the online execution [See Equation (NUMERLA)]. This lookahead updating mechanism balances the efficiency of online adaptations with the overarching goal of ensuring long-term safety.
In summary, the main contributions of this work include: 1) conceptualizing the challenge of acquiring adaptive strategies in a dynamic environment characterized by symbolic safety constraints; 2) introducing a safety-ensuring real-time OMAL algorithm, which builds upon the principles of Neurosymbolic Meta-Reinforcement Lookahead Learning (NUMERLA); 3) experimental results demonstrating that NUMERLA endows the self-driving agent with the capacity for real-time adaptability, leading to safe and self-adaptive driving under non-stationary urban human-vehicle interaction scenarios.

## II Definition and Model Structure of NUMERLA

### _Meta Reinforcement Learning_

RL is a field that focuses on solving problems within a stationary environment called a Markov Decision Process (MDP). Let \(z_{t}\in\mathcal{Z}\) be the environment mode, a latent variable hidden from the agent at time \(t\). Let \(s_{t}\in\mathcal{S}\) and \(a_{t}\in\mathcal{A}\) be the state input and the control action at time \(t\). In the context of RL, we often encounter situations where the underlying conditions remain stable throughout a decision-making period of length \(H\). This means that the parameters that define the environment remain unchanged as time progresses (i.e., \(z_{t}=z\)). This characteristic is known as "stationarity," which allows us to consider a specific class of policies known as Markov policies [16]. These policies, denoted as \(\pi:\mathcal{S}\rightarrow\Delta(\mathcal{A})\), depend only on the current state. Denote a neural network-based policy by \(\pi(s,a;\theta)\), where \(\theta\in\Theta\subset\mathbb{R}^{d}\) (\(d\) represents the dimension of parameters in the neural network). This choice aligns well with scenarios where state information is collected from sensors. The evaluation of these choices is given by the reward function \(r_{t}=r(s_{t},a_{t})\).

The aim of RL is to tackle a problem in the realm of stationary MDPs: to determine the policy that maximizes the expectation of cumulative rewards in a fixed environment \(z\). These rewards are accounted for over time using a discount factor \(\gamma\) (\(0<\gamma\leq 1\)):

\[\max_{\theta}J_{z}(\theta):=\mathbb{E}_{P(s_{t+1}|s_{t},a_{t};z),\pi(s_{t},a_{t};\theta)}\left[\sum_{t=1}^{H}\gamma^{t}r(s_{t},a_{t})\right].\] (RL)

Here, the transition \(P(s_{t+1}|s_{t},a_{t};z_{t})\) describes how likely the agent is to observe a certain state \(s_{t+1}\) after taking control action \(a_{t}\) under the current mode \(z_{t}\). Since we are using the fixed \(z_{t}=z\), it can be reduced to \(P(s_{t+1}|s_{t},a_{t};z)\).

Traditional meta-RL is explored in works like [10, 12, 13]. These methods aim to discover a meta policy denoted as \(\theta\), along with an adaptation mapping called \(\Phi\). The objective is to attain favorable rewards across various environments by updating the meta policy \(\theta\) through \(\Phi\) within each environment. In simpler terms, they try to train a good meta policy using offline methods. Instead of treating meta-learning as a fixed optimization problem, we propose learning the meta-adaptation process in real time. This implies that the agent adjusts its adaptation strategies continuously based on its observations. Essentially, our approach enables the agent to adapt to changing environments. The following paragraph formally defines the problem of online meta-adaptation learning (OMAL).
Let \(\mathcal{I}_{t}=\{s_{t},a_{t-1},r_{t-1}\}\) be the set of the agent's observations at time \(t\), referred to as the information structure [17]. The online adaptation mapping relies on the online observations \(\cup_{k=1}^{t}\mathcal{I}_{k}\). Then the meta adaptation mapping at time \(t\) is defined as \(\Phi_{t}(\theta):=\Phi(\theta,\cup_{k=1}^{t}\mathcal{I}_{k})\). The adaptation mapping \(\Phi\) adapts the meta policy \(\theta\) to a new policy fine-tuned for the specific \(z\) at time step \(t\) based on the agent's observations \(\mathcal{I}_{t}\). Under this circumstance, we will use the expected reward \(r^{\pi}(s_{t};\theta):=\mathbb{E}_{a\sim\pi(\cdot|s_{t};\theta)}[r(s_{t},a)]\) as our new objective in the problem shown below:

\[\max_{\{\Phi_{t}\}_{t=1}^{H}} \mathbb{E}_{z_{1},z_{2},\cdots,z_{H}}[\sum_{t=1}^{H}r^{\pi}(s_{t};\Phi_{t}(\theta))],\] (OMAL)
s.t. \[z_{t+1}\sim p_{z}(\cdot|z_{t}),\ t=1,\ldots,H-1,\]
\[\theta=\arg\max\mathbb{E}_{z\sim\rho_{z}}[J_{z}(\theta)].\]

In this context, the mode denoted as \(z\in\mathcal{Z}\) represents the specific environment in which the offline policy is situated. Furthermore, we model the latent mode transitions probabilistically via a Markov chain \(p_{z}(z_{t+1}|z_{t})\) with an initial distribution denoted as \(\rho_{z}(z_{1})\). Our proposition involves the adoption of the Conjectural Online Lookahead Adaptation (COLA) model, as outlined in [18] and expounded upon in Section III-A, as a means to identify a viable \(\Phi_{t}\). An illustration can also be found in Figure 1.

### _Objective function of OMAL with an action constraint_

In the last section, we addressed the OMAL problem, which can be solved by COLA and can be thought of as neuro lookahead learning. This section explains how symbolism can make the policy safer. While OMAL seeks an optimal model that maximizes online performance, the aim of NUMERLA is to further ensure \(K\)-step safety of the policy. The objective function is given by:

\[\max_{\{\Phi_{t}\}_{t=1}^{H}} \mathbb{E}_{z_{1},z_{2},\cdots,z_{H}}[\sum_{t=1}^{H}r^{\pi}(s_{t};\Phi_{t}(\theta))],\] (NUMERLA)
s.t. \[z_{t+1}\sim p_{z}(\cdot|z_{t}),\ t=1,\ldots,H-1,\]
\[\theta=\arg\max\mathbb{E}_{z\sim\rho_{z}}[J_{z}(\theta)],\]
\[\Phi_{t}(\theta)\in f_{t}(z_{t}),\]
\[f_{t}(z_{t}):=\begin{cases}\varphi_{1}&\text{if }\chi_{1}(z_{t})\\ \varphi_{2}&\text{if }\chi_{2}(z_{t})\wedge\neg\chi_{1}(z_{t})\\ \cdots\\ \varphi_{n}&\text{if }\chi_{n}(z_{t})\wedge\left(\bigwedge_{1\leq i<n}\neg\chi_{i}(z_{t})\right)\end{cases}\] (SSC)

where the symbolic safety constraint (SSC) \(f_{t}:\mathcal{Z}_{t}\Rightarrow\Theta_{t}\) is a function belonging to the space \(\mathcal{F}\). The function \(f_{t}\) serves to associate a mode within \(\mathcal{Z}_{t}\) with a subset of \(\Theta_{t}\). It is important to note that when alterations occur in the mode space \(\mathcal{Z}_{t}\) at step \(t\), corresponding adjustments are made to both the mapping function \(f_{t}\) and the policy space \(\Theta_{t}\) in accordance with the change. We define \(\mathcal{X}:=\{\chi_{1},\ldots,\chi_{n}\}\) as a collection of symbolic logic judgments (expressed through linear predicates), which serve to segment the space of modes.
For the sake of clarity, we represent the non-overlapping partitions as \(\{g_{1},\cdots,g_{n}\}\), denoted by for all \(i\in\{1,\cdots,n\}\), \(z^{\prime}\in g_{i}\subseteq\mathbf{Z}_{t}\) is a set of mode that satisfies if \(\chi_{i}(z^{\prime})\wedge\left(\bigwedge_{1\leq j<i}\neg\chi_{j}(z^{\prime})\right)\) is true; \(\{\varphi_{1},\cdots,\varphi_{n}\}\subseteq\Theta\) are the symbolic logic-based safety constraints which are the coupling between the knowledge mode space with the physical action space. They can be defined as subsets in \(\Theta\) that include the safest action choices according to the yield environment mode \(z_{t}\). The framework of NUMERLA is also shown in Figure 2. ## III Methodology of Optimization ### _Conjectural Online Lookahead Adaptation_ Following the model in [18], let \(b_{t}\) be the agent's belief (normally, the belief is a pre-defined prediction or conjecture of the future mode in the environment) and \(\theta\) still be our obtained policy defined in Equation (OMAL). We consider a \(K\) step future that can be represented by trajectory \(\tau_{t}^{K}:=(s_{t},a_{t},\ldots,s_{t+K-1},a_{t+K-1},s_{t+K})\). Following, the distribution of trajectory \(\tau_{t}^{K}\) can be characterized as: \[q(\tau_{t}^{K};b_{t},\theta):=\] \[\prod_{k=0}^{K-1}\pi(a_{t+k}|s_{t+k};\theta)\prod_{k=0}^{K-1} \left[\sum_{z\in\mathcal{Z}}b_{t}(z)\underbrace{P(s_{t+k+1}|s_{t+k},a_{t+k};z)}_ {\text{unknown}}\right].\] Fig. 2: The NUMERLA framework is shown in the plot. Utilizing symbolic logic-based safety constraints and online meta-adaptation learning techniques, we consider the following scenario: With the state denoted as \(s_{t}\), policy as \(\pi_{t}\), and policy constraint as \(f_{i}\) at time \(t\), generated by the SSC function. The policy \(\pi_{t}\) initiates dynamic adjustments online while guided by the knowledge encoded in \(f_{i}\). This adaptation process draws upon insights from both the current state \(s_{t}\) and historical context. Subsequently, the revised policy \(s_{t+1}\) governs the selection of an action, thereby leading to the transition from state \(s_{t}\) to \(s_{t+1}\). Assuming the environment mode space \(\mathcal{Z}_{t}\) changes exclusively during steps \(1,4,6\). In case of such mode changes, the knowledge content is updated to \(f_{i+1}\). Here, the transition of the environment \(P\) is unknown. The goal of this model is to maximize the forecast future performance: \[\max_{\theta^{\prime}\in\Theta}\mathbb{E}_{q(\tau_{t}^{K};b,\theta^{\prime})}\sum _{k=0}^{K-1}r(s_{t+k},a_{t+k}) \tag{1}\] However, the agent cannot access the distribution \(q(\tau_{t}^{K};b,\theta^{\prime})\) during the online adaptation. Thus, cannot use policy gradient methods to solve the optimization problem. As the replacement, we use importance sampling to do the optimization by reformulating the original problem (1) to the conjectural lookahead optimization (CLO) problem: \[\max_{\theta^{\prime}\in\Theta}\mathbb{E}_{q(\cdot;b_{t},\theta)} \left[\prod_{k=0}^{K-1}\frac{\pi(a_{t+k}|s_{t+k};\theta^{\prime})}{\pi(a_{t+k}| s_{t+k};\theta)}\sum_{k=0}^{K-1}r(s_{t+k},a_{t+k})\right]\] (CLO) s.t. \[\mathbb{E}_{s\sim q}D_{KL}(\pi(\cdot|s;\theta),\pi(\cdot|s;\theta^ {\prime}))\leq\delta,\] where \(D_{KL}\) is the Kullback-Leibler divergence. In the KL divergence constraint, we slightly abuse the notation \(q(\cdot)\) to denote the discounted state visiting frequency \(s\sim q\). 
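As a rough illustration of how the (CLO) surrogate can be evaluated from sampled lookahead trajectories, the sketch below re-weights the \(K\)-step returns by the policy probability ratio and checks the KL trust region with a simple one-sample Monte Carlo estimate. The array layout and the way log-probabilities are supplied are assumptions made for this example, not the authors' implementation.

```python
import numpy as np

def clo_surrogate(logp_new, logp_old, rewards):
    """Importance-weighted K-step lookahead return from Eq. (CLO).
    logp_new, logp_old: (M, K) arrays of log pi(a_{t+k}|s_{t+k}) under theta' and theta;
    rewards:            (M, K) array of r(s_{t+k}, a_{t+k}) over M sampled trajectories."""
    ratio = np.exp(np.sum(logp_new - logp_old, axis=1))   # prod_k pi'/pi, per trajectory
    returns = np.sum(rewards, axis=1)                     # sum_k r(s_{t+k}, a_{t+k})
    return np.mean(ratio * returns)

def kl_within_trust_region(logp_new, logp_old, delta=0.01):
    """Crude one-sample estimate of E_{s~q} D_KL(pi(.|s;theta) || pi(.|s;theta')) <= delta,
    using the actions sampled under theta as the Monte Carlo sample."""
    kl_est = np.mean(logp_old - logp_new)
    return kl_est <= delta

# Toy usage with M = 4 trajectories of K = 3 lookahead steps.
rng = np.random.default_rng(0)
logp_old = np.log(rng.uniform(0.2, 0.9, size=(4, 3)))
logp_new = logp_old + rng.normal(0.0, 0.05, size=(4, 3))
rewards = rng.uniform(0.0, 1.0, size=(4, 3))
if kl_within_trust_region(logp_new, logp_old):
    print("CLO surrogate:", clo_surrogate(logp_new, logp_old, rewards))
```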
Equation (CLO) is equivalent to Equation (1) since the distribution difference between \(q(\tau_{t}^{K};b,\theta^{\prime})\) and \(q(\tau_{t}^{K};b,\theta)\) in (CLO) is compensated by the ratio \(\prod_{k=0}^{K-1}\frac{\pi(a_{t+k}|s_{t+k};\theta^{\prime})}{\pi(a_{t+k}|s_{t+ k};\theta)}\). When \(\theta^{\prime}\) is close to the based policy \(\theta\) in terms of KL divergence, we can use the data collected during the training to finish the approximation of the results. In the COLA setting, the data is gradient sampling of the objective function in different environment modes. The overall online updating process for the COLA is shown in Algorithm 1. ``` 1:Input The meta policy \(\theta\), belief \(b\), training samples \(\{\mathcal{D}_{z}\}\), sample batch size \(M\), lookahead horizon \(K\) for\(t\in\{1,2,\dots,\}\)do 2:Acquire the sensor input \(s_{t}\); Implement the action using \(\pi(\cdot|s_{t};\theta_{t})\); Update the belief \(b(z;s_{t})\); Sample\(M\) trajectories (\(K\) steps from \(t\) ) \(\hat{\tau}_{t}^{K}\) under \(z\) from \(\{\mathcal{D}_{z}\}\); Obtain\(\theta^{\prime}\) by solving Conjecture Lookahead Optimization (CLO); \(\theta_{t+1}=\theta^{\prime}\). ``` **Algorithm 1** Conjectural **O**nline **Lookahead Adaptation ### _Symbolic Safety Constraint Adaptation_ Denote a given safety assessment function \(Safe(s_{t},a_{t}):S\times\mathcal{A}\rightarrow\{0,1\}\) that outputs a Boolean value where if state-action pair (s-a pair) \((s_{t},a_{t})\) is safe (output 0) or unsafe (output 1). Then, for the symbolic safety constraint adaptation (SSCA), its objective function can be defined as Equation (SSCA) shown below: \[\min_{f}\sum_{z\in\mathcal{Z}}\sum_{\theta^{\prime}\in f_{t}(b_{t}(z))} \mathbb{E}_{q(\tau_{t}^{K};b,\theta^{\prime})}\left[\sum_{k=0}^{K-1}Safe(s_{t+ k},a_{t+k})\right].\] (SSCA) On the other hand, we can divide it into optimization problems according to different mode partitions \(g_{i}\), called the symbolic safety constraint adaptation of partition (SSCAP). Suppose we have: \[\hat{q}(\tau_{t}^{K};b_{t},g_{i},\theta):=\] \[\prod_{k=0}^{K-1}\pi(a_{t+k}|s_{t+k};\theta)\prod_{k=0}^{K-1} \left[\sum_{z\in g_{i}}b_{t}(z)P(s_{t+k+1}|s_{t+k},a_{t+k};z)\right].\] Then, we can denote the SSC optimization for specific partition: \[\min_{\varphi_{i}}\sum_{\theta^{\prime}\in\varphi_{i}}\mathbb{E}_{\hat{q}( \tau_{t}^{K};b_{t},g_{i},\theta)}\left[\sum_{k=0}^{K-1}Safe(s_{t+k},a_{t+k}) \right].\] (SSCAP) The foundational SSC function \(f_{0}\) is derived through a heuristic process rooted in prior human insights within our conceptual framework. It is important to note that \(\mathcal{Z}\) represents the range of modes entirely encompassed by \(f_{0}\). However, in scenarios where the agent is confronted with a novel mode space denoted as \(\mathcal{Z}^{\prime}\supset\mathcal{Z}\) demanding a more powerful SSC function, a knowledge expansion is imperative. This expansion pertains to the enhancement of our understanding, specifically the SSC function, to effectively accommodate this broader mode space. The online update of the SCC can follow the rules described in Algorithm 2. 
``` 1:Input \(\{\chi_{1},\dots,\chi_{n}\},\{\varphi_{1},\cdots,\varphi_{n}\},\mathcal{Z}^{\prime}\) 2:Create partitions \(\{g_{1},\dots,g_{n}\}\) using \(\{\chi_{1},\dots,\chi_{n}\}\) 3:\(g_{n+1}\leftarrow\emptyset\) 4:for \(z^{\prime}\in\mathcal{Z}^{\prime}\) do 5:if \(z^{\prime}\notin g_{i},\forall g_{i}\in\{g_{1},\dots,g_{n}\}\) then 6:\(g_{n+1}\gets g_{n+1}\cup\{z^{\prime}\}\) 7:Find \(\chi_{n+1}\) such that \(g_{n+1}=\{z_{t}:\chi_{n+1}(z_{t})\wedge\left(\bigwedge_{1\leq i<n+1}\neg\chi_{i}(z_{t})\right)\}\) 8:Obtain \(\varphi_{n+1}\) that optimizes (SSCAP) with input \(g_{n+1}\) 9:Derive updated judgments \(\{\chi_{1},\dots,\chi_{n+1}\}\) 10:return \(\{\chi_{1},\dots,\chi_{n},\chi_{n+1}\}\) and \(\{\varphi_{1},\cdots,\varphi_{n},\varphi_{n+1}\}\) ``` **Algorithm 2** Symbolic **S**afety **C**onstraint **A**daptation Figure 3 shows the process of online updating of the SSC function. By combining the results derived from Algorithm 2, we acquire the refined SSC function denoted as \(f_{1}\). It is essential to note that the enhancement of the SSC function is not a solitary, instantaneous modification; rather, the agent is required to gather data from the changing environment \(\mathcal{Z}^{\prime}\). This necessitates conducting multiple samplings from the environment to achieve the desired refinement. Illustrative examples can shed light on the process of updating the SSC. A relevant example is motivated by the disparities in driving practices across different regions within the United States. Imagine a driver who has been accustomed to the driving conditions in New York City but relocates to Texas. This relocation exposes the driver to a distinct environmental context. In urban traffic settings, the driver's existing knowledge might still prove effective. However, driving in Texas introduces new scenarios, such as encountering wildlife like deer or bears on the road. Here, the driver not only adapts through personal experience but can also seek insights from local residents, that is, acquire new modes online. Incorporating these novel modes into the driver's cognitive framework, namely expanding the SSC, can be accomplished by making minor adjustments to an existing safety partition or by creating a new partition catering exclusively to these new modes. These concepts are illustrated in Figure 3. In Section IV, our focus is only on the scenario where the SSC function remains invariant throughout. ## IV Experimental Configuration For our experimental assessments, we employ CARLA-0.9.4 [19], a well-established platform for urban self-driving scenarios. To establish communication between learning algorithms and environments, we adapt the API by integrating the Multi-Agent Connected Autonomous Driving (MACAD) Gym [20] framework atop CARLA. We examine vehicle-human interactions in an urban traffic environment featuring two agents: a vehicle with an initial velocity and a pedestrian, illustrated in Figure 4. We denote the vehicle by \(c\) and the pedestrian by \(p\). To assess the effectiveness of our approach, we conduct experiments within two distinct scenarios: one involving Well-Behaved walking and the other involving jaywalking. Each scenario comprises three tasks determined by the initial distance between the vehicle's and pedestrian's origin points. We describe more specific details later. We assume the state input comes from the sensors on the vehicle.
The state representation comprises each agent's current and previous speeds, denoted as \(v_{c,t},v_{p,t}\in\mathbb{R}\), and their distances to their respective endpoints, represented by \(d_{c,t},d_{p,t}\in\mathbb{R}\). Additionally, the actions \(a_{c,t}\in\mathcal{A}_{c}\subseteq\mathbb{R}^{n}\) and \(a_{p,t}\in\mathcal{A}_{p}\subseteq\mathbb{R}^{n}\) are included. Furthermore, we introduce a simulated signal light input denoted as \(l_{t}\), which serves as an additional component within the state. It is important to highlight that, since the sensors are only equipped on the vehicle, inputs stemming from pedestrians and the signal light are initialized to \(-1\) until the vehicle approaches within a distance of \(15\) meters from them. The complete structure of the state \(s_{t}\) encompasses 10 different variables, namely Fig. 4: An illustration of the uncertain position of signal light scenario. In this, we create a pedestrian with a signal light in front of the car on the urban sidewalk road. The location of this pedestrian is uncertain. The sensors will observe the velocities \(v_{c,t},v_{p,t}\) and their distances to the destination \(d_{c,t},d_{p,t}\) of the pedestrian and the vehicle and the signal light’s status \(l_{t}\). The vehicle needs to reach its destination in a short period of time without colliding with pedestrians. Fig. 3: The evolution of the SSC function takes place through the absorption of new information. Suppose the initial SSC function is \(f_{0}\). We assume for time step \(1\) to \(k\), \(f_{0}\) can dominate everything. At \(t=1\), the SSC uses \(\varphi_{n}\) as its constraint since \(z_{1}\in g_{n}\). The lookahead procedure conjectures the next time step should be in mode \(z_{2}\in g_{1}\), so the SSC will prepare to use \(\varphi_{1}\) as the next constraint. The knowledge update of SSC occurs when a novel mode is identified at \(t=k\), denoted as \(z_{k+1}\notin g_{i}\) for all existing modes \(g_{i}\) within the set \(\{g_{1},\dots,g_{n}\}\), or in other words, \(z_{k+1}\notin\text{dom}(f_{0})\). This update can be executed through two distinct approaches: either by integrating the new mode with an existing earlier mode (solving Equation (SSCAP) with \(g_{i},\ \forall i\in\{1,\cdots,n\}\)) or by establishing a fresh partition exclusively for the new mode (solving Equation (SSCAP) with \(g_{n+1}\)). \(\{d_{c,t},d_{p,t},v_{c,t},v_{p,t},l_{t},d_{c,t-1},d_{p,t-1},v_{c,t-1},v_{p,t-1},l_{t- 1}\}\). When executing the SSC function \(f_{t}\), we focus solely on the current state information, represented as \(\hat{s}_{t}=\{d_{ct},d_{pt},v_{ct},v_{pt},l_{t}\}\), to ensure computational efficiency. For pedestrians and the vehicle agent, the available actions are defined in Table I. In the case of pedestrians, the action values correspond to acceleration towards the main road direction (if positive) or the opposite direction (if negative). For the vehicle, the action values represent the throttle strength (if positive) or the brake strength (if negative). The vehicle's reward function hinges on its present velocity, proximity to the destination, and the occurrence of collisions. Across every scenario, encompassing both Well-Behaved walking and jaywalking scenarios, we sketch three distinct initial gaps (15 meters, 25 meters, and 35 meters) between the vehicle and pedestrians, classified according to their types. 
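A minimal sketch of how the 10-dimensional state and the discrete action map of Table I could be assembled is given below; the variable names, the dictionary-based frames, and the exact masking rule for the 15-meter sensing range are our reading of the description above rather than code released with the paper.

```python
# Discrete action values from Table I: positive values are throttle strength for the
# vehicle (forward acceleration for the pedestrian), negative values are brake strength.
ACTION_MAP = {0: 0.0, 1: 1.0, 2: 0.5, 3: 0.25, 4: -1.0, 5: -0.5, 6: -0.25}

def build_state(cur, prev, sensing_range=15.0):
    """cur/prev are dicts with keys d_c, d_p, v_c, v_p, l (distances, speeds, light)
    plus an assumed 'ped_range' giving the vehicle-to-pedestrian distance.
    Pedestrian- and light-related entries default to -1 until the vehicle is within
    `sensing_range` metres, as described in the text."""
    def masked(frame):
        visible = frame.get("ped_range", float("inf")) <= sensing_range
        d_p = frame["d_p"] if visible else -1.0
        v_p = frame["v_p"] if visible else -1.0
        l = frame["l"] if visible else -1.0
        return [frame["d_c"], d_p, frame["v_c"], v_p, l]
    return masked(cur) + masked(prev)      # s_t has 10 entries in the order stated above

state = build_state(
    cur={"d_c": 40.0, "d_p": 12.0, "v_c": 6.0, "v_p": 1.2, "l": 1, "ped_range": 12.0},
    prev={"d_c": 41.0, "d_p": 12.5, "v_c": 6.0, "v_p": 1.2, "l": 1, "ped_range": 13.0},
)
throttle_or_brake = ACTION_MAP[3]          # discrete action 3 -> 0.25 throttle
```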
This methodology serves to assess how effectively and safely the proposed NUMERLA model can adeptly manage diverse traffic scenarios, thereby measuring its adaptability and security. In both scenarios, we assess the performance of the RL method, COLA method, and NUMERLA method for each individual task. We capture the mean reward, standard deviation, and collision rate as key metrics for each experimental iteration. ### _Well-Behaved Walking_ In this scenario, the behavior of the pedestrian is guided by the signal light. When the signal light is red, the pedestrian refrains from initiating movement. When the signal light turns yellow, there is a 0.1 probability that the pedestrian will commence walking. When the signal light switches to green, the pedestrian promptly begins walking. The Figure 5 shows the efficiency and long-term safety performance of the NUMERLA method compared with the RL and the COLA in the Well-Behaved walking scenario. We collect the collision rate, which means the ratio of episodes with collision and the testing episode number, shown in Table II. We can find collision rates are around zero for the NUMERLA method, which is much safer than the other two methods. ## V Conclusion This work has introduced a novel online meta-learning approach, building upon the principles of Neurosymbolic Meta-Reinforcement Lookahead Learning (NUMERLA). This technique guarantees the security of real-time learning by consistently refining safety constraints. NUMERLA enables long-term safe online adaptation by solving the conjectural \begin{table} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Policy Type} & \multicolumn{3}{c|}{Collision Rate} \\ \cline{2-4} & 25m & 35m & 15m \\ \hline RL & 0.350 & 0.341 & 0.438 \\ \hline COLA & 0.156 & 0.154 & 0.190 \\ \hline NUMERLA & 0.000 & 0.0003 & 0.000 \\ \hline \end{tabular} \end{table} TABLE II: Collision Rates for Well-Behaved Walking \begin{table} \begin{tabular}{|c|c|c|c|} \hline \# & **Action** & **\#** & **Action** \\ \hline 0 & 0.0 & 4 & -1.0 \\ \hline 1 & 1.0 & 5 & -0.5 \\ \hline 2 & 0.5 & 6 & -0.25 \\ \hline 3 & 0.25 & & \\ \hline \end{tabular} \end{table} TABLE I: Discrete Actions Fig. 5: The performance comparison between RL, COLA, and NUMERLA for Well-Behaved walking pedestrians where the value represents the mean rewards and the error bar represents the standard deviation (std). The data is gathered from 1,000 episodes of online executions. The RL performance is the worst of the three types of methods, while the COLA obtains some better results. However, both of these two methods return us to a poor std, which means unstable performance. By using the NUMERLA method, we can achieve higher mean rewards and a small std. It should be noted that the task 15-meter gets the worst performance in every method. The reason is that the 15-meter is the hardest task in this urban environment since our vehicle has an initial speed, but the location of the pedestrians is too close. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Policy Type} & \multicolumn{3}{c|}{Collision Rate} \\ \cline{2-4} & 25m & 35m & 15m \\ \hline RL & 0.350 & 0.341 & 0.438 \\ \hline COLA & 0.156 & 0.154 & 0.190 \\ \hline NUMERLA & 0.000 & 0.0003 & 0.000 \\ \hline \end{tabular} \end{table} TABLE III: Collision Rates for Jaywalking lookahead optimization (CLO) and the symbolic safety constraint adaptation (SSCA) on the fly using off-policy data and the conjecture of the future.
2310.17120
Topic Segmentation of Semi-Structured and Unstructured Conversational Datasets using Language Models
Breaking down a document or a conversation into multiple contiguous segments based on its semantic structure is an important and challenging problem in NLP, which can assist many downstream tasks. However, current works on topic segmentation often focus on segmentation of structured texts. In this paper, we comprehensively analyze the generalization capabilities of state-of-the-art topic segmentation models on unstructured texts. We find that: (a) Current strategies of pre-training on a large corpus of structured text such as Wiki-727K do not help in transferability to unstructured conversational data. (b) Training from scratch with only a relatively small-sized dataset of the target unstructured domain improves the segmentation results by a significant margin. We stress-test our proposed Topic Segmentation approach by experimenting with multiple loss functions, in order to mitigate effects of imbalance in unstructured conversational datasets. Our empirical evaluation indicates that Focal Loss function is a robust alternative to Cross-Entropy and re-weighted Cross-Entropy loss function when segmenting unstructured and semi-structured chats.
Reshmi Ghosh, Harjeet Singh Kajal, Sharanya Kamath, Dhuri Shrivastava, Samyadeep Basu, Hansi Zeng, Soundararajan Srinivasan
2023-10-26T03:37:51Z
http://arxiv.org/abs/2310.17120v1
Topic Segmentation of Semi-Structured and Unstructured Conversational Datasets using Language Models ###### Abstract Breaking down a document or a conversation into multiple contiguous segments based on its semantic structure is an important and challenging problem in NLP, which can assist many downstream tasks. However, current works on topic segmentation often focus on segmentation of structured texts. In this paper, we comprehensively analyze the generalization capabilities of state-of-the-art topic segmentation models on unstructured texts. We find that: (a) Current strategies of pre-training on a large corpus of structured text such as Wiki-727K _do not help_ in transferability to unstructured conversational data. (b) Training from scratch with only a relatively small-sized dataset of the target unstructured domain improves the segmentation results by a significant margin. We stress-test our proposed Topic Segmentation approach by experimenting with multiple loss functions, in order to mitigate effects of imbalance in unstructured conversational datasets. Our empirical evaluation indicates that Focal Loss function is a robust alternative to Cross-Entropy and re-weighted Cross-Entropy loss function when segmenting unstructured and semi-structured chats. Keywords:Topic Segmentation, Language Models, Unstructured, Conversational Datasets, BERT, RoBERTa-base, Focal Loss ## 1 Introduction Topic Segmentation refers to the task of splitting texts into meaningful segments that correspond to a distinct topic or a subtopic. Natural language texts, especially in unstructured formats such as chat conversations and transcripts, often do not have an easy-to-detect separation between contiguous topics. Reliable & accurate division of text into coherent segments can help in making the text more readable as well as searchable. Hence, topic segmentation enables numerous applications such as search assistance and recommendation [1]. It has also been noted that text segmentation can improve and speedup applications such as information extraction and summarization [2]. Historically, Topic Segmentation methods have primarily been dependent on lexical chains and machine learning methods that can detect changes in document structure [3]. Recently, a handful of approaches leveraging language models have been proposed for topic segmentation [2],[4],[5]. However, the datasets on which these approaches have been evaluated are often structured in nature such as Wiki-727K [2],[6], Wiki-50 [2], RST, and Choi [7]. Adding to the constraints, in many applications, texts that need to be segmented are often unstructured such as chat transcripts and conversations. But, understanding the effectiveness of topic segmentation methods on such unstructured texts hasn't been well studied. In this paper, we empirically investigate the effectiveness of various topic segmentation methods on unstructured segmentation datasets such as LDC BOLT chat [8] and Amazon Topical chat [9]. In addition to being less structured than the Wiki-727K or Wiki-50 data, these datasets are challenging for traditional topic segmentation approaches as they contain grammatically ill-formed "noisy sentences" and a varying number of segments per conversation. 
Hence, we systematically examine the effectiveness of large-scale pre-training, dataset used in pre-training, and fine-tuning strategies on these "out-of-domain" (data that is conversational in nature rather than the segmented Wiki content), unstructured text-segmentation datasets spanning traditional the LSTM-based models and the newer transformer-based architectures. We find that large-scale pre-training (and fine-tuning with data from target domain) has only a _negligible_ effect on downstream segmentation tasks, when the task consists of unstructured data. This is contrary to the conventional wisdom in NLP, where pre-training and fine-tuning is a common practice. We, therefore, identify topic segmentation on unstructured data as one task where large-scale pre-training doesn't have any significant effect. To perform well on segmentation of unstructured text, we find that training, from scratch, with only a _few_ examples of the segmentation domain is sufficient. This is true for segmentation architectures ranging from the LSTM-based ones to the recent Transformer-based ones. Our results also show that, for unstructured topic segmentation, avoiding pre-training on a large corpus such as the Wiki-727K dataset results in saving a significant amount of training time and compute resources, facilitating the exploration of newer approaches. In summary, our contributions are as follows: * We investigate and stress-test the effectiveness of current topic segmentation methods on unstructured texts, which is a more challenging segmentation task when compared to segmentation tasks based on structured datasets. * We find that pre-training on large topic segmentation datasets such as Wiki-727K has negligible effect on downstream transfer to unstructured text-segmentation datasets and instantiating the model with only a few-examples of the unstructured task is sufficient. * We present simple and practical recipes to improve topic segmentation performance on unstructured datasets to remedy the effect of imbalanced class labels synthetically generated for the setup of supervised learning approach. We empirically examine the impact of alternative loss functions like re-weighted cross-entropy loss and focal loss on imbalanced dataset, and conclude that focal loss is a more effective alternative for topic segmentation tasks on conversational datasets. ## 2 Related Works Topic segmentation has been explored through many realms; particularly, the approaches used could be broadly categorized as Non-neural-based and Neural-based approaches in both supervised and unsupervised [10],[11] settings. **Non-neural approaches**: The early research efforts related to non-neural[12] approaches for topic segmentation include [13] that focused on an unsupervised approach to analyze lexical cohesion in small segments by leveraging counts of word repetitions. This work was expanded to enable models to understand words and sentences occurring in segments in a comprehensive manner leading to the wide use of lexical chains [14],[15],[16],[17]. **Neural Approaches:**[2] used a hierarchical Bi-LSTM to cast the topic segmentation as a supervised learning task. Other neural methods also leverage the Transformer architecture. 
In [5], the authors proposed three transformer-based models, of which Cross-Segment BERT model is particularly important for topic segmentation tasks as the model captures information from the local context surrounding a potential topic segment boundary to judge about which pool of sub-document units, the particular segment belongs. The other two model architectures use a hierarchical approach as used by [2], but with using the BERT model instead of BiLSTMs. [18] is another recent work that uses an unsupervised approach based on BERT embeddings to segment topics in multi-person meeting transcripts. ## 3 Segmentation Datasets **Unstructured Datasets**: We utilize the Linguistic Data Consortium (LDC) and BOLT SMS/chat data collection (restricted to English chats and SMS, henceforth referred to as BOLT) and Amazon Topical Chat dataset. The BOLT dataset is considerably more 'unstructured' compared to the Topical chat dataset, as it comprises non-uniform sentence structures, incomplete sentences, and abbreviations, commonly found in asynchronous conversations on direct messaging applications. Consequently, we refer to BOLT as unstructured dataset and categorize the Topical chat dataset as semi-structured dataset due to its more coherent sentence structure. The LDC BOLT SMS and Chat dataset (Figure 1) includes conversations that have been extracted from messaging platforms like WhatsApp, iMessage, Android SMS, Symbian SMS, Viber, BlackBerry, QQ, Google chat, Skype chat, & Yahoo Messenger in Chinese, Egyptian Arabic, and English. The dataset contains 2140 Egyptian, 7844 Chinese and 9155 English conversations. Our analysis is limited to the English subset of the data.The dataset exhibits the following characteristics: * Mean sentence length (number of words) = 9.45 ; standard deviation = 9.41 * Mean segment length (number of sentences) = 11.28; standard deviation = 13.99 The Amazon Topical Chat dataset (Figure 2) contains human-to-human conversations spanning eight broad topics, with over 8000 conversations. Although the dataset features well-structured sentences, it consists of brief conversations rather than articles, prompting its classification as semi-structured. The Topical Chat dataset's underlying knowledge encompasses eight broad topics, with minimal variation in the segment length. * Mean sentence length (number of words) = 19.86, standard deviation = 10.49 * Mean segment length (number of sentences) = 21.83, standard deviation = 1.75 Figure 1: Snapshot of the LDC Bolt dataset which contains conversations that have a higher degree of ‘unstructured-ness’, meaning, incomplete sentences, usage of abbreviations, and little or no punctuation. The LDC BOLT dataset more closely represents the modern-day, fast paced SMS/text conversations. Figure 2: Snapshot of Amazon Topical Chat dataset, which is a repository of conversations that have well-formed sentences and appropriate punctuation. **Wiki-727K**: Proposed by [2], Wiki-727K comprises 727,746 English-language documents with text segmentations based on their table of contents. As the text is non-conversational and features proper syntactical structure in the form of well-organized sentences, paragraphs, and sections, the Wiki-727K dataset is deemed structured. To the best of our knowledge, Wiki-727K is the sole publicly available large dataset suitable for large-scale pre-training [2],[4],[5]. No single conversational (unstructured) dataset of comparable scale exists for extensive pre-training of large Transformer models. 
A similar approach of pre-training was employed in [2][4][5]. Both Topical chat and BOLT datasets are conversational, necessitating additional pre-processing to adapt them for a supervised learning setup. We accomplish this by segmenting these datasets into multiple segments (using five segments as it yielded the best F1 scores; _Figure 3_), with each segment representing a specific chat conversation snippet. After pre-processing the conversations into the appropriate number of segments, we generated synthetic labels for each sentence in every segment, casting the topic segmentation task for conversational datasets as a supervised learning problem with binary labels. We first pre-process the datasets to synthetically create multiple segments, and label the sentences occurring in each segment \(x_{i}=(x_{1},x_{2},....,x_{n})\) based on the boundary condition (i.e., whether a specific sentence was end-of-segment (label, \(y_{i}=\) '1') or a non end-of-segment sentence (encoded label \(y_{i}=\) '0')). ### Model Architecture Topic segmentation in the existing literature has involved neural and non-neural approaches. Due to the complexity of understanding heterogeneous conversational datasets, we use state-of-the-art neural models, i.e, Hierarchical Bi-LSTM as proposed by [2], and CSBERT model as introduced by [5]. The Hierarchical Bi-LSTM model is a neural architecture that first learns sentence representation, which are then fed into a segment prediction sub-network. The lower-level sub-network employs a two-layer bi-directional LSTM layer that generates representations, by consuming words \(w_{1},w_{2}....w_{\mathrm{i}}\) of a sentence \(x_{\mathrm{i}}\) as input. The intermediate output is passed through a max pooling layer to create the final sentence representations \(e_{\mathrm{i}}\). The higher-level sub-network for segment prediction takes a sequence of sentence embeddings generated from the lower sub-network, and feeds them into a two-layer Bi-LSTM, which then feeds into a fully connected layer with a softmax function to generate segmentation probabilities. In the Cross-Segment BERT model, the authors leverage information from the local context, i.e., studying the semantic shift in word distributions, as first introduced in [13]. The additional context on both sides of the segment boundary (termed as 'candidate break in the paper), i.e., the sequence of word-piece tokens that come before and after the segment breaks. Basically, the model is fed \(k\) word-piece tokens from the left and \(k\) tokens from the right of a segment break. The input is composed of a classification token (denoted by \([CLS]\) ), followed by the two contexts concatenated together and separated by a separator token (denoted by \([SEP]\)). The tokens are fed into the Transformer encoder([20]), which is initialized by \(BERT_{\text{LARGE}}\) to output segmentation probabilities. The \(BERT_{\text{LARGE}}\) model has 24 layers and uses 1024-dimensional embeddings and 16-attention heads. As the released BERT checkpoint only supports up to 512 tokens, we keep a maximum 250 word tokens on each side. The RoBERTa architecture was chosen as a comparative alternative to the BERT model in the Cross-Segment framework, as it is the relatively newer successor of BERT [19]. 
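To illustrate the cross-segment input construction just described, a short sketch follows. It uses the Hugging Face tokenizer as an assumed stand-in (the original works do not specify their tooling), with \(k=250\) word-piece tokens retained on each side of the candidate break and the synthetic end-of-segment labels described above.

```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-large-uncased")

def cross_segment_example(sentences, break_idx, boundary_after, k=250):
    """Build one [CLS] left-context [SEP] right-context example around a candidate
    break after sentences[break_idx]; boundary_after is the set of sentence indices
    that end a true (synthetically labelled) segment."""
    left = " ".join(sentences[: break_idx + 1])
    right = " ".join(sentences[break_idx + 1:])
    left_ids = tokenizer(left, add_special_tokens=False)["input_ids"][-k:]   # last k tokens
    right_ids = tokenizer(right, add_special_tokens=False)["input_ids"][:k]  # first k tokens
    input_ids = [tokenizer.cls_token_id] + left_ids + [tokenizer.sep_token_id] + right_ids
    label = 1 if break_idx in boundary_after else 0
    return input_ids, label

# Toy usage on a short, noisy chat snippet.
chat = ["hey did u see the game", "yeah crazy ending", "btw r u free tmrw?", "yes after 5"]
input_ids, label = cross_segment_example(chat, break_idx=1, boundary_after={1})
```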
However, rather than changing the framework of Cross-Segment learning as proposed by [5], wherein the authors demonstrate the capability of BERT model to learn the context around end-of-segments in a robust way, we chose to utilize the same framework and simply replace the BERT model with RoBERTa-base. ### Setup The Wiki-727K dataset was randomly partitioned in 80% / 10% / 10% format to create train, development (fine-tuning), and test set, respectively. We used the partitioned train set to perform pre-training using Hierarchical Bi-LSTM, Cross-Segment BERT, and Cross-Segment RoBERTa models for the first set of experiments, i.e., to evaluate the effectiveness of large-scale pretraining on structured dataset. Additionally we synthetically partitioned these conversational datasets to generate segments (snippets of conversations) and group the segment chunks to form documents. The model at any point of the training process consumed a batch of these documents. As described in section 3, the sentences in these segments are synthetically labelled to indicate the end-of-segment. As a result, the chunking the conversations from BOLT and Topical Chat datasets in 5 segments leads to 1815 and 1726 documents, respectively. Furthermore, the documents created from the chunking process described above were split into train/fine-tuning/test sets for tasks described in Table 1 and Table 2. The documents from Topical Chat dataset was divided in 1099 / 348 / 281 documents for the train/fine-tuning/test splits respectively, wherein each document had 5 segments of mean segment length of approximately 11.5 sentences. Additionally, a similar approach was employed for the documents from BOLT dataset, and it was divided in 1090 / 363 / 362 documents for the train/fine-tuning/test splits respectively containing 5 segments each (mean segment length of 21.83 sentences). Lastly to train the three models from scratch using the unstructured, and semi-structured data, without involving any pre-training on structured Wiki-727K, required 1109 documents of 5 segments from BOLT, and 1030 documents of 5 segments from Topical Chat. ## 4 Experiments We study the problem of segmenting semi-structured and unstructured chats using three popular modeling paradigms used in the structured segmentation domain: the Hierarchical Bi-LSTM model proposed by [2] and the Cross-Segment BERT model [5] (hereafter CSBERT), and the Cross-Segment RoBERTa (CSRoBERTa) [4] (Section3.1). In CSRoBERTa, we use the same training paradigm as CSBERT, but replace BERT with RoBERTa-base [19]. We cast the task of topic segmentation as a binary classification problem, and for the purpose of validating our proposed models, we use Precision, Recall, and F1 scores to measure performance. Precision measures the percentage of boundaries identified by the model that are true boundaries. Complementary to Precision, Recall measures the percentage of true boundaries identified by the model. Although comprehensive, it is important to note that there are some challenges associated with individually reporting Precision and Recall as they are somewhat less sensitive to near misses of true boundary identifications, when the prediction is off by one or two sentences. Hence, we additionally report F1 scores in Section 4. F1 score can be reliably used to conclude our initial findings from performing topic segmentation based binary classification on unstructured and semi-structured data. 
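For concreteness, the sketch below computes the strict-match Precision, Recall, and F1 on per-sentence boundary labels; as noted above, this exact matching gives no credit to near misses, which the toy example at the bottom makes explicit.

```python
def boundary_prf1(predicted, gold):
    """predicted, gold: per-sentence binary labels (1 = end of segment).
    Returns precision, recall and F1 under exact boundary matching."""
    tp = sum(1 for p, g in zip(predicted, gold) if p == 1 and g == 1)
    fp = sum(1 for p, g in zip(predicted, gold) if p == 1 and g == 0)
    fn = sum(1 for p, g in zip(predicted, gold) if p == 0 and g == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1

# A boundary predicted one sentence too early counts as both a false positive and a
# false negative, which is the "near miss" insensitivity mentioned above.
print(boundary_prf1([0, 1, 0, 0, 1, 0], [0, 0, 1, 0, 1, 0]))   # (0.5, 0.5, 0.5)
```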
Note that Topic Segmentation models in the existing body of work have not been validated against any form of semi-structured or unstructured datasets. Table 1 presents the results of our analysis. We first pre-train all three models on Wiki-727K and test against the unstructured BOLT, the semi-structured Topical Chat, and the structured Wiki-727K datasets (underlined in Table 1). We then use different pre-training and fine-tuning combinations to examine the necessity of large-scale pre-training on structured datasets for segmenting conversational data. Also note that the results in Table 1 are generated by partitioning the BOLT and Topical Chat data into documents of 5 segments each (validated in Figure 3; more details on train-test split are in Section 3.2). Adapting the hierarchical Bi-LSTM model on the unstructured BOLT and the semi-structured Topical chat datasets during the validation phase, we conclude that the F1 scores (additional details on evaluation metrics in section 4) are significantly worse when compared with the performance of the CSBERT and CSRoBERTa model in the same setting. However, evaluating the performance on the structured Wiki-727K dataset, we find that the F1 scores from both models are in the same range. ### Effectiveness of Pre-training on Wiki-727K In this section, we investigate the effectiveness of pre-training on the large Wiki-727K dataset. We first pre-train the Hierarchical Bi-LSTM, CSRoBERTa, and CSBERT models on the 80% of the Wiki-727K corpus. We further fine-tune the models on the unstructured datasets: BOLT chat and Topical Chat dataset. Additionally, we experiment with fine-tuning the models after training on semi-structured/unstructured dataset instead of the pre-training step with Wiki-727K. From Table 1, we find that training models with unstructured/semi-structured and fine-tuning it with semi-structured and unstructured dataset respectively, leads to better performance than fine-tuning with the Wiki-727K checkpoint. For instance, we find that with the pre-train with Wiki-727K and fine-tune paradigm, using the CSBERT, we obtain a F1 score of 0.725 for Topical Chat, whereas only training on Topical Chat results in a F1 score of 0.764 (_Task A.3 vs.A.6_)4. For BOLT, the pre-train and fine-tune paradigm results in a F1 score of 0.489 whereas training from scratch with BOLT dataset results in a F1 score of 0.499 (_Task B.2 vs. B.4_). These results indicate that the conventional approach of pre-training on a large structured Wiki-727K dataset, and then fine-tuning with semi-structured or unstructured dataset, doesn't lead to an improvement in F1 scores, making the approach of pre-training on structured dataset questionable. We associate this finding to the fact that - Wiki-727K, although large enough for pre-training approaches and has been used in established Topic Segmentation methods for structured texts, the dataset fails to represent the rapid change in themes of conversations, thus making feature reuse [21] from the pre-training process redundant. In chats between two human agents, the topic of the conversation can change very quickly, which is not fully represented by feature hierarchy learned from structured texts. 
Footnote 4: We find that for Cross-Segment BERT model Task A.6 and A.7 have very similar F1 scores \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline & \multicolumn{3}{c|}{**Datasets**} & \multicolumn{1}{c|}{} & \multicolumn{1}{c}{} \\ \hline **Task** & **Pre-Train** & **Finetune** & **Test** & **Cross Segment BERT** & **Cross Segment RoBERTa-Base** & **Hierarchical Bi-LSTM** \\ \hline A.1 & Wiki-727K & - & Topical Chat & 0.492 & 0.487 & 0.021 \\ A.2 & Wiki-727K & BOLT & Topical Chat & 0.470 & 0.406 & 0.391 \\ A.3 & Wiki-727K & Topical Chat & Topical Chat & 0.725 & 0.713 & 0.931 \\ \hline A.4 & BOLT & - & Topical Chat & 0.491 & 0.498 & 0.611 \\ A.5 & BOLT & Topical Chat & Topical Chat & 0.734 & 0.729 & 0.915 \\ \hline A.6 & Topical Chat & - & Topical Chat & 0.764 & **0.767** & **0.951** \\ A.7 & Topical Chat & BOLT & Topical Chat & **0.767** & 0.759 & 0.501 \\ \hline \hline B.1 & Wiki-727K & - & BOLT & 0.487 & 0.467 & 0.005 \\ B.2 & Wiki-727K & BOLT & BOLT & 0.489 & 0.479 & 0.406 \\ B.3 & Wiki-727K & Topical Chat & BOLT & 0.511 & 0.492 & 0.152 \\ \hline B.4 & BOLT & - & BOLT & 0.569 & 0.561 & 0.443 \\ B.5 & BOLT & Topical Chat & BOLT & 0.518 & 0.509 & 0.181 \\ \hline B.6 & Topical Chat & - & BOLT & 0.544 & 0.542 & 0.157 \\ B.7 & Topical Chat & BOLT & BOLT & 0.536 & 0.529 & 0.331 \\ \hline C.1 & Wiki-727K & - & Wiki-727K & 0.604 & 0.599 & 0.57 \\ C.2 & Wiki-727K & BOLT & Wiki-272K & 0.433 & 0.435 & 0.501 \\ C.3 & Wiki-727K & Topical Chat & Wiki-727K & 0.513 & 0.509 & 0.411 \\ \hline C.4 & BOLT & - & Wiki-727K & 0.487 & 0.492 & 0.12 \\ C.5 & BOLT & Topical Chat & Wiki-727K & 0.489 & 0.478 & 0.027 \\ \hline C.6 & Topical Chat & - & Wiki-727K & 0.505 & 0.502 & 0.020 \\ C.7 & Topical Chat & BOLT & Wiki-727K & 0.5089 & 0.511 & 0.198 \\ \hline \end{tabular} \end{table} Table 1: Effect of model architectures and training strategies on topic segmentation tasks, grouped by the test dataset of choice - Topical Chat (**A.1 - 1.7**), BOLT (**B.1 - B.7**) & Wiki-727K (**C.1 - C.7**). The best F1 scores for BOLT (unstructured), **T**opical Chat (semi-structured), and Wiki-727K, per model are highlighted. Comparing the F1 scores in different pre-training & fine-tuning scenarios, we conclude that the Cross-Segment BERT model _consistently_ outperforms the Hierarchical Bi-LSTM, and CSRoBERTa model on _almost_ all tasks. We also find that large-scale pre-training with the structured Wiki-727K dataset (and then fine-tuning with data from Target domain - underlined; Task B.2 vs. B.4 and Task A.3 vs. A.6) is not required to create cohesive topic segments on unstructured or semi-structured conversational data.(Train-finetuning-test split described in Section 3.2). ### Effect of architecture on unstructured datasets We test our initial conclusions (Table 1) on the efficacy of architectures across various number of segments. We find that, across different number of segments, the CSBERT model performs substantially better than the Hierarchical Bi-LSTM model and slightly better than CSRoBERTa model. From Figure 3, we conclude that: * Across the varying number of segments curated and adapted during the pre-training step with Wiki-727K, the CSBERT model results in higher F1 scores when tested against semi-structured Topical Chat and unstructured BOLT datasets. Hence, we can conclude that for these topic segmentation tasks, CSBERT model is more suitable than the Hierarchical Bi-LSTM and the CSRoBERTa models. * As the number of segments in the conversational datasets increases, the F1 scores from all models drop. 
We associate this finding with the fact that the large segments of the conversational data may contain more heterogeneous topics, making it difficult for all models to group coherent chats. ### Practical recipes to improve unstructured segmentation tasks Casting topic segmentation for unstructured datasets as a binary classification problem leads to a severe imbalance in class labels, prompting the need to re-weight the samples or even modify the loss function to boost performance. The number of end-of-segment sentences (encoded as '1') is significantly smaller than the non-boundary sentences (sentences that do not mark the end of a segment; encoded as label '0') due to the inherent structure of segments in any document or chat. Figure 3: Effect of model architecture and the number of segments on segmentation performance. Comparing the F1 scores from all three models pre-trained on Wiki-727K and inferred on unstructured & semi-structured datasets (without any fine-tuning), we find that CSBERT outperforms the other models robustly across the varying number of segments. Bi-LSTM performs very poorly compared to the other two models, which can be explained by its inability to fully consider the semantic context of the text representing segment boundaries. Additionally, with an increase in the number of segments, the F1 score peaks at 5 segments, and then drops across all models. We attribute this finding to the fact that at 5 segments, we are able to capture a relatively decent break in the theme of conversations for both datasets. Re-weighting in cross-entropy loss: To reduce the effect of dominance by the samples with label '0' and avoid biasing the model at inference time, we re-weight the class labels in cross-entropy loss function, giving proportional importance to labels '0' and '1'. The set of weights to be assigned is considered as a hyper-parameter and is optimized in the range \([0,1]\). We find that weighting the end-of-segment sentences (encoded as '1') with 0.8, and weighting the rest with 0.2 yields the best results on these datasets. From Table 2, we conclude that re-weighting the cross-entropy loss function to provide proportional importance to both labels leads to a slightly better F1 score for all three models. #### 3.2.2 Focal loss as an alternative loss function for imbalanced topic segmentation: Focal loss has been used widely to mitigate the risks involved with class imbalance for tasks related to object detection [22], credit-card fraud detection [23], and other tasks involving class-imbalance [24]. We consider the \(\alpha\) (a parameter that controls trade-off between precision and recall) and \(\gamma\) (focusing parameter; defines the degree of confidence assigned by the model to correct predictions that contributes to overall loss values) as focal-loss hyper-parameters and tune these over 10 epochs. Focal loss(2) is different from Cross Entropy loss ( 1), as the former implements a technique called as "down-weighting", that reduces the influence of confidently predicting easy examples (predicted probability: \(p>>0.5\)) on the loss function, resulting in more attention being paid to hard-to-predict examples (misclassified examples). To achieve this, an additional modulating-factor, called the focusing parameter (\(\gamma\)) is included to improve the conventional Cross Entropy loss function. 
Additionally, Focal loss also tackles the class-imbalance problem by introducing a weighting parameter (\(\alpha\)) to place appropriate weights on positive and negative classes. \[CE(p,y)=\begin{cases}-log(p),&\text{if }y=1\\ -log(1-p),&\text{otherwise}\end{cases} \tag{1}\] \[FL(p,y)=\begin{cases}-\alpha(1-p)^{\gamma}log(p),&\text{if }y=1\\ -(1-\alpha)p^{\gamma}log(1-p),&\text{otherwise}\end{cases} \tag{2}\] Thus, the Focal Loss function is a dynamically scaled Cross Entropy loss, where the scaling factor decays to zero as confidence in the correct class increases. We experiment with re-weighted cross-entropy loss and Focal loss functions for all combinations of pre-training and fine-tuning strategies and present our findings in Table 2. We observed that, across _almost_ (except 3) all paradigms, Cross-Segment BERT model outperforms Cross-Segment RoBERTa and Hierarchical Bi-LSTM model on both re-weighting and focal loss function recipes. Conversely, the F1 scores of Hierarchical Bi-LSTM are highly inconsistent. Hence, we can conclude the re-weighting of cross-entropy loss and replacement by focal loss function in our baseline model did not prove of large significance to the Bi-LSTM framework. We can attribute this finding to the fact that vanilla recurrent neural network architecture structure limits contextual learning around the segment boundaries, whereas Transformer-based architectures, with their multi-head attention mechanism, is able to capture the local context surrounding the boundaries of the segments more robustly, leading to a higher degree of topic coherence in the segments. Moreover, Focal loss across all pre-training and fine-tuning strategies prove to be a robust alternative to re-weighting cross-entropy loss, for all three models. We hypothesize this effect to be a result of including appropriate values of focusing parameter(\(\gamma\)) and trade-off parameter(\(\alpha\)) in the Focal loss function, which assigns larger importance to hard-to-train examples, ensuring comprehensive learning of the latent feature hierarchy [21] of unstructured conversation data. From Table 2, we find that the resultant F1 scores with focal loss are higher than re-scaling the cross-entropy loss function. Hence, we conclude that for future iterations of topic segmentation tasks involving unstructured datasets, focal loss is a better alternative. ## 5 Conclusion In this work, we evaluated the effectiveness of current and new Topic Segmentation methods on unstructured conversations. 
Our findings suggest that, across different \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} \hline \hline \multicolumn{3}{c|}{**Datasets**} & \multicolumn{3}{c|}{**CE weights = [0.2, 0.8]**} & \multicolumn{3}{c}{**Focal Loss (\(\alpha\) = 0.8; \(\gamma\) = 2)**} \\ \hline **Pre-Train** & **Finetune** & **Test** & **CSBERT** & **CSRoBERTa** & **Bi-LSTM** & **CSBERT** & **CSRoBERTa** & **Bi-LSTM** \\ \hline Wiki-727K & - & Topical Chat & 0.497 & 0.491 & 0.028 & 0.512 & 0.504 & 0.037 \\ Wiki-727K & BOLT & Topical Chat & 0.475 & 0.411 & 0.398 & 0.483 & 0.427 & 0.405 \\ Wiki-727K & Topical Chat & Topical Chat & 0.729 & 0.717 & 0.933 & 0.748 & 0.729 & 0.928 \\ \hline BOLT & - & Topical Chat & 0.493 & 0.5 & 0.613 & 0.501 & 0.510 & 0.606 \\ BOLT & Topical Chat & Topical Chat & 0.736 & 0.731 & 0.917 & 0.747 & 0.741 & 0.920 \\ \hline Topical Chat & - & Topical Chat & 0.767 & **0.769** & **0.952** & **0.778** & **0.775** & **0.95** \\ Topical Chat & BOLT & Topical Chat & **0.768** & 0.761 & 0.505 & 0.777 & 0.768 & 0.515 \\ \hline \hline Wiki-727K & - & BOLT & 0.490 & 0.468 & 0.007 & 0.493 & 0.472 & 0.012 \\ Wiki-727K & BOLT & BOLT & 0.495 & 0.481 & 0.409 & 0.503 & 0.485 & 0.432 \\ Wiki-727K & Topical Chat & BOLT & 0.573 & 0.495 & 0.183 & 0.560 & 0.524 & 0.214 \\ \hline BOLT & - & BOLT & 0.575 & **0.567** & **0.443** & **0.580** & **0.572** & **0.45** \\ BOLT & Topical Chat & BOLT & 0.520 & 0.511 & 0.183 & 0.531 & 0.519 & 0.189 \\ \hline Topical Chat & - & BOLT & 0.546 & 0.544 & 0.159 & 0.555 & 0.551 & 0.164 \\ Topical Chat & BOLT & BOLT & 0.537 & 0.531 & 0.333 & 0.549 & 0.54 & 0.34 \\ \hline Wiki-727K & - & Wiki-727K & 0.609 & 0.602 & 0.591 & 0.614 & 0.611 & 0.6 \\ Wiki-727K & BOLT & Wiki-272K & 0.435 & 0.438 & 0.508 & 0.441 & 0.446 & 0.513 \\ Wiki-727K & Topical Chat & Wiki-727K & 0.516 & 0.510 & 0.414 & 0.522 & 0.517 & 0.417 \\ \hline BOLT & - & Wiki-727K & 0.489 & 0.495 & 0.127 & 0.493 & 0.501 & 0.132 \\ BOLT & Topical Chat & Wiki-727K & 0.492 & 0.479 & 0.03 & 0.496 & 0.484 & 0.037 \\ \hline Topical Chat & - & Wiki-727K & 0.507 & 0.503 & 0.023 & 0.516 & 0.510 & 0.031 \\ Topical Chat & BOLT & Wiki-727K & 0.511 & 0.514 & 0.199 & 0.520 & 0.521 & 0.210 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of the F1 scores with different pre-training & fine-tuning scenarios using re-weighting and Focal Loss. The best F1 scores for BOLT, **Topical Chat**, and Wiki-727K, per model are highlighted. We see that Focal loss stands out to be a much a robust alternative than the re-weighted cross-entropy loss for topic segmentation of unstructured texts. model architectures and datasets, pre-training on the large, structured Wiki-727K _is not required_ for segmentation of unstructured conversational datasets such as Topical Chat and BOLT that contain syntactical and semantical noise. Surprisingly, training from scratch with only a few-examples provides a sufficiently strong baseline for this task. This finding challenges the prevalent pre-training on a large corpus and fine-tuning on the target domain paradigm, commonly used in a variety of tasks in current NLP research. Furthermore, we expanded upon existing Language Models by analyzing the effectiveness of Cross-Segment RoBERTa, which demonstrated better segmenting capabilities compared to the hierarchical Bi-LSTM model. Additionally, we provided various practical recipes to boost the topic segmentation performance on conversational and unstructured datasets, particularly achieved by using focal loss instead of cross-entropy loss. 
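A minimal NumPy sketch of the two recipes compared in Table 2 (re-weighted cross-entropy with class weights 0.2 and 0.8, and focal loss with \(\alpha=0.8\), \(\gamma=2\), following Eqs. (1) and (2)) is given below for reference; it illustrates the loss definitions only and is not the training code used for the experiments.

```python
import numpy as np

def weighted_ce(p, y, w_pos=0.8, w_neg=0.2, eps=1e-7):
    """Re-weighted binary cross-entropy, Eq. (1) with class weights for the
    end-of-segment (y = 1) and non-boundary (y = 0) sentences."""
    p = np.clip(p, eps, 1 - eps)
    return np.where(y == 1, -w_pos * np.log(p), -w_neg * np.log(1 - p))

def focal_loss(p, y, alpha=0.8, gamma=2.0, eps=1e-7):
    """Binary focal loss, Eq. (2): confidently classified easy examples are
    down-weighted by (1-p)^gamma (resp. p^gamma), so hard examples dominate."""
    p = np.clip(p, eps, 1 - eps)
    return np.where(y == 1,
                    -alpha * (1 - p) ** gamma * np.log(p),
                    -(1 - alpha) * p ** gamma * np.log(1 - p))

p = np.array([0.9, 0.6, 0.1])      # predicted P(end-of-segment)
y = np.array([1, 0, 1])            # 1 = boundary sentence, 0 = non-boundary
print(weighted_ce(p, y).mean(), focal_loss(p, y).mean())
```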
In conclusion, our research contributes valuable insights into the nuances of topic segmentation in conversational data and offers practical recommendations for achieving improved performance on these chat datasets. The findings presented here not only advance the understanding of the pre-training and fine-tuning paradigm, but also provide a solid foundation for further exploration and development of more effective topic segmentation techniques in the ever-evolving field of NLP.
2301.11210
Coherent Organizational States in Turbulent Pipe Flow at moderate Reynolds numbers
Turbulent pipe flow is still an essentially open area of research, boosted in the last two decades by considerable progress achieved both on the experimental and numerical frontiers, mainly related to the identification and characterization of coherent structures as basic building blocks of turbulence. It has been a challenging task, however, to detect and visualize these coherent states. We address, by means of stereoscopic particle image velocimetry, that issue with the help of a large diameter (6 inches) pipe loop, which allowed us to probe for coherent states at various moderate Reynolds numbers (5300 < Re < 29000). Although these states have been observed at flow regimes around the laminar-turbulent transition (Re $\approx$ 2300) and also at high Reynolds number pipe flow (Re $\approx$ 35000), at moderate Reynolds numbers their existence had not yet been observed experimentally. By conditionally averaging the flow fields with respect to their dominant azimuthal wavenumber of streamwise velocity streaks, we have been able to uncover the existence of ten well-defined coherent flow patterns. It turns out, as a remarkable phenomenon, that their occurrence probabilities and the total number of dominant modes do not essentially change as the Reynolds number is varied. Their occurrence probabilities are noted to be reasonably well described by a Poisson distribution, which suggests that low-speed streaks are created as a Poisson process on the pipe circular geometry.
R. Jäckel, B. Magacho, B. E. Owolabi, L. Moriconi, D. J. C. Dennis, J. B. R. Loureiro
2023-01-26T16:39:53Z
http://arxiv.org/abs/2301.11210v1
# Coherent Organizational States in Turbulent Pipe Flow at moderate Reynolds numbers ###### Abstract Turbulent pipe flow is still an essentially open area of research, boosted in the last two decades by considerable progress achieved both on the experimental and numerical frontiers, mainly related to the identification and characterization of coherent structures as basic building blocks of turbulence. It has been a challenging task, however, to detect and visualize these coherent states. We address, by means of stereoscopic particle image velocimetry, that issue with the help of a large diameter (6 inches) pipe loop, which allowed us to probe for coherent states at various moderate Reynolds numbers (5300 \(<\) Re \(<\) 29000)). Although these states have been observed at flow regimes around laminar-turbulent transition (Re \(\approx\) 2300) and also at high Reynolds number pipe flow (Re \(\approx\) 35000), at moderate Reynolds numbers their existence had not been observed yet by experiment. By conditionally averaging the flow fields with respect to their dominant azimuthal wavenumber of streamwise velocity streaks, we have been able to uncover the existence of ten well-defined coherent flow patterns. It turns out, as a remarkable phenomenon, that their occurrence probabilities and the total number of dominant modes do not essentially change as the Reynolds number is varied. Their occurrence probabilities are noted to be reasonably well described by a Poisson distribution, which suggests that low-speed streaks are created as a Poisson process on the pipe circular geometry. + Footnote †: preprint: ## I Introduction Turbulent structures in pipe flows have been a subject of great interest in fluid dynamics since the very first pioneering experiments of Reynolds in 1883 [1]. Until recently, research in turbulence was mainly focused on the statistical perspective of distinct flow features such as the statistical distribution of flow variables [2; 3; 4], turbulent energy spectra [5; 6], or RANS modeling [7]. A relatively new approach, on the other hand, often-referred to as dynamical systems viewpoint emerged in the last two decades due to considerable progress achieved both on the experimental and numerical frontiers. This essentially open area of research is related to the identification and characterization of coherent structures [8; 9; 10; 11]. The term coherent (from lat. Cohaerens - consistency) emphasizes on the understanding that turbulence, contrary to earlier assumptions, is no longer an example of chaos, but rather a superposition of canonical building blocks of motion with inherent patterns of spatial and temporal consistency. A better understanding of these building blocks has been the motivation for hot contemporary debates and the elaboration of improved experimental set-ups [12]. The near-wall production and complex dynamics of evolving coherent structures (e.g., turbulent puffs, quasi-streamwise and hairpin vortices, etc.) have been the fundamental keywords in these developments [13]. It is clear, however, that a gap in the literature persists, related to the visualization of near-wall coherent structures in pipe flows, in order to see how they can validate, refine or even suggest alternative perspectives to the ongoing scientific discussions. It is well known that turbulent statistics for the logarithmic region below Reynolds numbers of 25000 lack universality due to their Reynolds number dependence, a phenomenon often referred to as Reynolds number effect [14; 15; 3]. 
It is still under debate if this unique behavior is also shown by coherent structures. In this work, we provide a new step towards closing these gaps, exploring, with the help of Stereoscopic Particle Image Velocimetry (SPIV), the intriguing patterns of near-wall coherent structures associated with turbulent regimes in pipe flow, applying methodological lines similar to those applied by Hof et al. [16], Schneider et al. [17] and Dennis and Sogaro [18]. Hof et al. [16] as one of the first presented a proper visualization of traveling waves as coherent structures in pipe flow at Reynolds numbers close to the laminar-turbulent transition by means of SPIV experiments. The observed structures showed azimuthal patterns of high-speed streaks close to the wall and low-speed streaks closer to the pipe centre. Not very long after, Schneider et al. [17], by means of numerical simulation performed further investigations. They established a new approach for the structure identification, which allowed them to uncover a huge number of different coherent states together with their statistical features. These authors furthermore suggested that the transition dynamics could be modeled as a Markovian stochastic process, a phenomenological point that has been addressed in the recent literature [7]. Both Hof et al. [16] and Schneider et al. [17] interpreted traveling waves as phenomena related to laminar-turbulent transition. This assumption was called into question by Dennis and Sogaro's [18] SPIV experiments in pipe flow at a highly turbulent regime of Re = 35000. They showed that their flow was also organized into different coherent states, which bear a striking resemblance to travelling wave solutions observed until then only at lower Reynolds numbers, with the propensity for switching from one mode to another. In this study, we further investigate the turbulent states by closing the huge gap between these coherent states observed in laminar-turbulent transition by Hof et al. [16] and Schneider et al. [17], and those observed at relatively high Reynolds number by Dennis and Sogaro [18] in order to get a clearer picture of how these states evolve with the Reynolds number. The experimental setup, to be described in the next section, allows us to get a deep insight into the boundary layer in conditions of moderate Reynolds number turbulence. Also, we explain in detail the methods applied to detect and visualize the coherent states. In Sec. 3, we present the results of our work in a twofold manner; qualitatively, by visualization of the conditionally averaged cross-stream patterns associated with the dominant coherent states and their organization along the mean flow direction, and quantitatively by showing interesting statistical features of the occurrence probabilities of these states. Finally, in Sec. 4, we summarize and discuss the main ideas of our work and give an outlook on future work required to further improve our understanding regarding the nature of coherent states. The experimental results and statistical analysis will uncover turbulent dynamics so far unexplored by providing unique insight into turbulent pipe flow regarding its phenomenology _vis-a-vis_ with statistical features which are highly relevant to better understand, predict and model turbulence. ## II Materials and Methods ### Experimental setup The experiment was performed in a flow loop specially designed for the research on wall turbulence and coherent structures. 
The flow loop consists of a horizontal 6-inch diameter, 10 meters long pipe, operating in a closed system. By means of a progressive cavity pump, water is driven from a large reservoir through a Coriolis flow meter before entering the pipe. All components are connected by a flexible 2-inch rubber hose, which further serves as a pulsation damper. A settling chamber consisting of a diffuser cone with a 6-degree angle and a 1:3 aspect ratio followed by a honeycomb and a set of screens was installed to reduce eddies and swirling motions before the flow enters the pipe. We estimate the hydrodynamic entrance length for turbulent pipe flow by means of Eq. 1 (Ref. [20]): \[L_{H}=1.359D(Re)^{1/4} \tag{1}\] and expect the flow to be fully developed long before entering the observation section located at 5.8 meters downstream for the Reynolds number with the longest entrance length studied in this work, namely Re = 29000 (\(L_{H}=2.73\) m). Our experiment is based on a time-resolved Stereo PIV (SPIV) setup with two high-speed CMOS cameras (Phantom Speed-sense M310), arranged horizontally at an angle of 45 degrees to the pipe centerline aiming downstream in order to capture a transversal plane of the flow as shown in Fig. 1. The water-filled trapezoidal section was used to minimize optical distortions caused by the pipe curvature. A two-level 15.4 cm diameter calibration target, visible for each camera through the same angle of 45 degrees relative to the pipe, was moved into the measurement plane to calibrate the SPIV system by means of a long, pipe-centered traverse mechanism. After calibration, the target was moved downstream into a parking position located behind the pipe outlet in order to avoid any flow disturbance. The flow was seeded with silver-coated hollow glass spheres, neutrally buoyant with a mean size of 17 microns, which accurately follow, as tracers, turbulent fluctuations of the flow field. All Reynolds number measurements were acquired with a sampling frequency of 15 Hz and a considerable number of acquisitions, approximately 20000 captured vector fields for each run, which result in approximately 308, 628, 829, 1259, and 1492 pipe radii passing through the measurement plane for Re = 5300, 12000, 17800, 24400 and 29000, respectively. Each Reynolds number measurement was acquired in subsets of 2000 snapshots which were separated by time intervals of several minutes. We, therefore, consider these subsets statistically independent and the overall statistics of each Reynolds number set hardly to be distorted by any kind of Very Large Scale Motions (VLSMs) [7]. Within the lower and upper bounds of the Reynolds numbers studied in this work, the turbulent statistics measured with our SPIV setup show very good agreement with data from DenToonder and Nieuwstadt [5] and Eggels et al. [2] and also clearly shows the aforementioned Reynolds number effect in the log region, as demonstrated in Fig. 2. Also, as demonstrated in Fig. 3, the cross-stream vector fields we obtain with our measurement system are well-suited for the detection of near-wall structures, both for stream-wise streaks and also for in-plane motions like quasi-streamwise vortices (at least four of them can be detected directly by eye in this exemplary snapshot of Re = 24414). ### Detection and visualization of coherent states We base our coherent state detection on the appearance of positive and negative velocity fluctuations of the streamwise velocity components in each snapshot. 
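As a side note, the entrance-length estimate of Eq. 1 is easy to check numerically before turning to the details of the detection procedure. The short Python sketch below evaluates \(L_{H}\) for the Reynolds numbers studied here and compares it with the 5.8 m distance to the observation section; the 6-inch diameter and the 5.8 m location are taken from the setup description above, and the snippet is meant purely as an illustration of the arithmetic, not as part of the measurement chain.

```python
# Entrance-length estimate from Eq. 1: L_H = 1.359 * D * Re**(1/4).
# D and the observation-section location are taken from the setup description.
D = 6 * 0.0254      # pipe diameter in metres (6 inches)
x_obs = 5.8         # distance from the pipe inlet to the observation section (m)

for Re in (5300, 12000, 17800, 24400, 29000):
    L_H = 1.359 * D * Re ** 0.25
    developed = "fully developed" if L_H < x_obs else "not yet developed"
    print(f"Re = {Re:5d}:  L_H = {L_H:.2f} m  ->  {developed} at x = {x_obs} m")
```

For Re = 29000 this reproduces the roughly 2.7 m quoted above, well upstream of the measurement plane.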
These elongated, meandering regions of opposed fluctuations are advected along the mean flow direction, as seen in Fig. 4 observed at a Reynolds number of Re = 5300. In a cross-stream slice, these fluctuations appear as an alternating pattern of positive and negative fluctuations as seen in Fig.5 (left), similar to the ones also observed by Dennis and Sogaro [18]. Nonetheless, due to the high number of vector fields obtained for each Reynolds number set, the state detection together with the subsequent wave number assignment of the corresponding snapshots require an automated procedure. In the following, we present a procedure to reduce the complex pattern of a 3-dimensional flow field to only one parame ter, i.e. the azimuthal wave number, to which each snapshot will be assigned subsequently. First, we define a reference point (\(r_{0}\), \(\theta_{0}\)) and introduce the spatial correlation function between the reference point and equally distributed points located along the \(r=r_{0}\) circumference with an azimuthal spacing of \(\Delta\theta\) (see Fig. 5 (right)) by means of Eq. 2: \[R_{uu}(r_{0}+\Delta r,\Delta\theta)=\frac{\langle u(r_{0}+\Delta r,\theta_{0}+ \Delta\theta)\ u(r_{0},\theta_{0})\rangle}{u_{rms}^{2}} \tag{2}\] with an azimuthal spacing of \(\Delta\theta\) resulting in 72 equally spaced azimuthal points. The square brackets indicate an azimuthal average over the initial angles \(\theta_{0}\). If we limit Eq. 2 to a fixed radius of interest, in this work \(r_{0}=0.78R\), it turns into an azimuthal correlation function which will be used for the state detection. Fig. 6 shows an arbitrary section of the azimuthal correlation over a length of 5 radii, here for the Figure 1: Experimental setup (not to scale) of the pipe rig with SPIV system. The flow direction is clockwise. Figure 3: Instantaneous vector field for the flow at Re = 24414. The color bar indicates the magnitude of the streamwise velocity component normalized by the bulk velocity. Figure 2: Streamwise velocity profile in inner units at a Reynolds number of 4928 and 29089 obtained with SPIV, compared with the results of Eggels et al. [2]. Reynolds number of 5300, from zero to \(\pi\). Note that in the reference point itself, \(\Delta_{\theta}=0\), the correlation is one, as by definition. Further, it is possible to observe how the azimuthal correlation function changes its number of peaks several times along this streamwise dimension. We obtain the corresponding azimuthal wave number by taking the highest value of the power of the Fast Fourier Transform (FFT) on the azimuthal correlation, which relates the flow field to a well-defined wavenumber-labeled state. In this way, all snapshots can be classified, according to their corresponding wavenumber subset. For the visualization of the spatial correlation not only above a single circumference line but on the entire cross-stream plane (\(C=C(r,\theta)\)), we expand the azimuthal correlation along a radial grid with a spacing of 1 mm and plot the corresponding iso-surfaces of positive and negative correlation. In order to take advantage of the entire data set of flow fields obtained by our measurements, we apply a conditional average procedure in a twofold manner: The subsets of the iso-surface cross-stream plane correlations are averaged with regard to their allocated wavenumber. By means of this averaging procedure, we expect the patterns to smooth out and obtain figures comparable to the contours in Fig. 
5 (right), in this case representative for the subset of azimuthal wave number \(k_{\theta}=4\). For wall-bounded turbulence [13] and coherent states [18] it is known that regions of negative streamwise fluctuations are often related to different in-plane motions than positive streamwise fluctuations, namely \(Q_{2}\) and \(Q_{4}\) quadrant motions (also known as ejections and sweeps [4]). Therefore, for the flow field averaging, we further bifurcate our sampling condition by dividing the wavenumber subsets into those snapshots with a positive and those with a negative streamwise velocity fluctuation in the initial reference point. To visualize the spatial distribution of the coherent states along the stream direction, Taylor's hypothesis [22] was applied. In the appendix, we present a flow diagram (Fig. 14) which illustrates the principal steps of the procedure for the detection and visualization of coherent states in a concise manner. ## III Results #### iii.0.1 Flow patterns of coherent states Fig. 7 presents examples of instantaneous snapshots of velocity field fluctuations at Re = 24400 assigned to the azimuthal wave numbers 2 to 7 by our detection method. Although small-scale fluctuations govern the background, the dominant pattern of streaks along the azimuth clearly resembles its respective wave number. Fig. 8 visualizes, as an example, the correlation contour levels of the coherent wave number states 2 to 7 obtained by the conditional averaging procedure for the Reynolds number of Re = 17800. The spatial correlation \(R_{uu}\) is visualized by iso-contours with respect to the reference point, whose location is marked by a black dot. The red level curves correspond to \(R_{uu}\) = 0.05 and 0.1, and the blue ones to the opposite sign. By means of conditional averaging, we visualized the coherent flow patterns in the cross-stream slice in Fig. 9 for the wave number state 4 of Re = 17800. All in-plane vectors were normalized to the same magnitude in order to improve the visualization, particularly of the vortex patterns. For all states, we were able to identify this alternating pattern of streamwise fluctuations, similar to the one observed by Dennis and Sogaro [18]. On average, the areas of negative streamwise fluctuations appear to extend more towards the pipe center than the positive. For all conditionally averaged vector fields, we clearly observe that regions of positive streamwise fluctuation are related to an in-plane movement towards the pipe wall, while negative regions of streamwise fluctuations show a movement in the opposite direction, towards the pipe centre. These strong radial motions are accompanied by pairs of weaker, counter-rotating vortices that are saddled symmetrically along the lateral sides of the fluctuation regions. They are related to the shear layer between the regions of opposed radial motions. Figure 5: Left: Instantaneous snapshot with an alternating pattern of streamwise velocity fluctuations. The red and blue isocontours correspond to velocities 1.5 per cent above and below the mean velocity profile, respectively. Right: reference point (\(r_{0}\), \(\theta_{0}\)) and azimuthal grid projected on an idealized pattern of spatial correlations for a wave number of 4. Figure 6: Streamwise extent of the azimuthal correlation over a length of 5 radii, here for the Reynolds number of 5300. Positive peaks indicate a correlation, negative peaks anti-correlation. Note that for reasons of symmetry we only plot along an azimuth from zero to \(\pi\). 
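To make the detection step of Sec. II.B concrete, the following Python sketch classifies one snapshot by its dominant azimuthal wave number: it assumes the streamwise fluctuation has already been interpolated onto 72 equally spaced azimuthal points on the circle \(r_{0}=0.78R\), builds the azimuthal correlation of Eq. 2, and takes the wave number with the largest FFT power. The function and variable names are illustrative and not taken from the actual processing code.

```python
import numpy as np

def azimuthal_wavenumber(u_theta, u_rms, k_max=25):
    """Assign a dominant azimuthal wave number to a single snapshot.

    u_theta: streamwise velocity fluctuations at 72 equally spaced azimuthal
             positions on the circle r0 = 0.78 R (1-D array).
    u_rms:   rms value used to normalise the correlation (Eq. 2).
    k_max:   largest wave number considered (an assumption of this sketch).
    """
    n = len(u_theta)
    # Azimuthal correlation R_uu(dtheta), averaged over the reference angles
    r_uu = np.array([np.mean(u_theta * np.roll(u_theta, -s)) for s in range(n)])
    r_uu /= u_rms ** 2
    # FFT of the correlation; the strongest non-zero mode labels the state
    power = np.abs(np.fft.rfft(r_uu)) ** 2
    return int(np.argmax(power[1:k_max + 1])) + 1

# Example: a synthetic four-streak pattern is classified as state k = 4
theta = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)
u = np.cos(4 * theta) + 0.2 * np.random.default_rng(0).standard_normal(72)
print(azimuthal_wavenumber(u, u.std()))
```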
The ability to observe well-defined in-plane patterns underlines the potential of conditional averaging to decipher the apparently chaotic nature of turbulent flow fields, having in mind that an unconditional average of all flow field samples would just zero out all vector components apart from the mean flow direction. The principal coherent patterns that we found to govern the cross-stream vector fields of both Reynolds numbers are illustrated in Fig. 9. The relation between the streamwise and radial velocity components coincides with earlier findings in the quadrant analysis of turbulent pipe flow [23], showing the dominance of \(Q_{2}\) and \(Q_{4}\) quadrant motions in the near-wall region. By the application of Taylor's hypothesis, Fig. 10 illustrates, as an example, the advection of wave number states and their corresponding spatial extent along the main flow direction over an arbitrary section of 20 pipe radii for \(\text{Re}=5300\). At first glance, several wave number structures with streamwise extent on the order of the pipe radius can be identified. Longer structures can be observed for the wavenumbers 2 to 6. The remaining wave number states, on the other hand, show structures of a more intermittent nature, including several one-snapshot observations. Because we were particularly interested in structures with a streamwise extent, in Fig. 11 we excluded the wave number states that were observed in only one snapshot. On these unstable wave number observations (hereafter called unstable remainders) we will take a closer look in the next section. #### iii.2.2 Statistical distribution of coherent states As emphasized in the foregoing, we applied a second conditional average on the allocated wave number vector fields with respect to the sign of the streamwise fluctuation in the reference point. The corresponding snapshot proportions are well-balanced for the states of all Reynolds numbers. We take this observation as an indicator that the number of samples allocated to each state is sufficient to consider our results as statistically converged for the ten wave number states we present. We first present the statistical weight distributions, i.e. the percentage contribution of the ten dominant states with respect to the total number of state-assigned vector fields in Fig. 12, without applying any threshold. We observe that for all Reynolds numbers the weight distributions show positive skewness towards lower states. Figure 7: Instantaneous snapshots of velocity field fluctuations assigned to azimuthal wave numbers 2 to 7 of Re = 24400. The red and blue iso-contours correspond to velocities 1.5 per cent above and below the mean velocity profile, respectively. For all Reynolds numbers, the most detected state was wave number 3, which is in agreement with the previous observations [18] for Re = 35000 and the most energetic azimuthal mode found using Proper Orthogonal Decomposition (POD) of turbulent pipe flow at Re = 24580 obtained with Direct Numerical Simulation (DNS) by Baltzer et al. [24]. Of the ten dominant wave numbers, state 10 showed the lowest statistical weight. We also detected wave number states above 10, but their statistical contribution was very low (below 1 per cent in all the sets) and might not show converged statistics. The highest wave numbers detected in each set were found to increase with growing Reynolds number, namely 13, 14, 16, 21 and 22 for the Reynolds numbers of 5300, 12000, 17800, 24400, and 29000, respectively. 
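The structure statistics discussed next rely on grouping consecutive snapshots of equal wave number into structures and discarding the one-snapshot observations (the unstable remainders). A minimal bookkeeping sketch of that step is given below; the sequence of per-snapshot wave numbers is assumed to come from the detection routine sketched above.

```python
from collections import Counter
from itertools import groupby

def structure_counts(snapshot_states, min_length=2):
    """Group consecutive equal wave numbers into structures.

    Runs shorter than min_length snapshots are counted separately as
    unstable remainders and excluded from the structure statistics.
    """
    runs = [(k, sum(1 for _ in grp)) for k, grp in groupby(snapshot_states)]
    structures = Counter(k for k, length in runs if length >= min_length)
    remainders = sum(1 for _, length in runs if length < min_length)
    return structures, remainders

# Small synthetic example of a per-snapshot state sequence
states = [3, 3, 3, 4, 2, 2, 5, 3, 3]
counts, n_remainders = structure_counts(states)
print(counts)        # Counter({3: 2, 2: 1})
print(n_remainders)  # 2 (the isolated k = 4 and k = 5 snapshots)
```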
Figure 13 shows the normalized distribution from the perspective of wave number structures with a streamwise extent. The unique feature of this presentation is that the statistical weight distribution is calculated with respect to the number of structures (independent of the number of snapshots it consists of) passing through the measurement plane, and not solely on the raw number of snapshots assigned to the wave number sets. Because we were interested in structures with a streamwise extent, we excluded the vector fields that were only observed in one single snapshot, namely the unstable remainders, from this representation. Nevertheless, the unstable remainders vector fields showed a significant weight contribution, namely 25.2, 20.9, 28.1, 33.3, and 35.3 per cent for the Reynolds number sets of 5300, 12000,17800, 24400, and 29000, respectively. With respected to the spatial resolution of the unstable remainders note that the advection velocity of the structures is increasing with the Reynolds number although the sampling rate was held constant (constrained by the maximum laser frequency) for all Reynolds number sets. This implies a different advected structure length between two snapshots for each Reynolds number, namely 0.028 pipe radii for the lowest Re of 5300 and 0.148 pipe radii for the highest Re of 29000. Comparing Figs. 12 and 13, we see that from the structure's perspective, the weight contribution of higher wave number structures is generally lower than observed from the viewpoint of snapshots. We interpret this as an indicator that higher wave number states present a more intermittent behavior, often being cut off as unstable remainder states in the structure representation. Another interesting statistical feature is that the distribution of wave number structures is reasonably well-described by a Poisson distribution with a mean value of \(\lambda=4\) (see Fig. 13). Based on the nature of Poisson distributions, this is a hint that the transition to a new wave number structure is independent of the present state and supports earlier assumptions of an un Figure 8: Organizational turbulent states with azimuthal wave numbers of 2 to 7 of Re =17800. The images show the spatial correlation function \(R_{uu}\), the red level curves correspond to \(R_{uu}\) = 0.05 and 0.1, and the blue ones to the opposite sign. The reference point is illustrated as a black dot. derlying Markovian description [7]. For both the snapshot's and the structure's perspective, we observe that the weight distribution of states is generally independent of the Reynolds number. On a closer perspective, we observe a very slight shift to the right with increasing Reynolds number, i.e., higher wave number states become more frequent. Schneider et al. [17] also observed that their weight distribution of states shifts to the right with increasing Reynolds number. Nevertheless, their work is focused on transitional pipe flow and also their window of observation was relatively small (Re = 2200, 2350, and 2500) for a strong statement on behalf of that matter. Furthermore, their state detection incorporates a cut-off threshold which is very likely the reason for the overall low weight contributions in the model. Figure 11: Streamwise extent of coherent states along the main flow direction in an arbitrary section 20 pipe radii at Re = 5300 without unstable remainders. Figure 12: Normalized distribution of dominant wave number states based on snapshot observations. 
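The spatial-resolution remark above can be reproduced approximately with a few lines of arithmetic: the distance advected between two snapshots follows from the bulk velocity, the pipe radius and the 15 Hz sampling rate. The kinematic viscosity below is an assumed value for water near room temperature, so the numbers only approximately match the 0.028 and 0.148 pipe radii quoted above.

```python
# Streamwise distance advected between two consecutive snapshots, in pipe
# radii, using U_bulk = Re * nu / D and dt = 1 / f_sampling.
nu = 0.9e-6          # kinematic viscosity of water (assumed, roughly 25 deg C)
D = 6 * 0.0254       # pipe diameter in metres
R = D / 2.0
dt = 1.0 / 15.0      # sampling interval at 15 Hz

for Re in (5300, 29000):
    u_bulk = Re * nu / D
    print(f"Re = {Re:5d}: {u_bulk * dt / R:.3f} pipe radii per snapshot")
```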
Figure 10: Streamwise extent of coherent states along the main flow direction in an arbitrary section 20 pipe radii at Re = 5300. Figure 9: Conditionally averaged vector field (left) and corresponding principal coherent patterns (right) related to regions of positive (red) and negative velocity fluctuations (blue) of wave number state 4 of Re = 17800. the occurrence statistics of their wave numbers. Even with their relatively big data sets (ca. 15000- 17000 vector fields) it is difficult to state if their state contributions are statistically converged, particularly for the less encountered states. The generally constant weight distribution of wave number states for all Reynolds number leads us to the statement that the aforementioned Reynolds number effect, i.e. the Reynolds number dependence of turbulent statistics in the range of moderate Reynolds numbers, is not reflected from the dynamical systems viewpoint of coherent states. ## IV Discussion We set up a 6-inch diameter pipe flow loop with a SPIV system to investigate a number of interesting open issues related to coherent states in turbulent pipe flows. A robust detection algorithm was developed which is not affected by the background fluctuation of the flow. With this setup, we were able to reveal the nature of these states by visualizing their inherent patterns and uncovering interesting statistical features. In this way, we closed the huge gap between the Reynolds numbers at which Schneider et al. [17] and Dennis and Sogaro [18] observed these structures. Our key observations are presented in the following: 1. For all investigated Reynolds numbers, 10 dominant states were identified, consisting of patterns of alternating streaks along the pipe's azimuth. We thereby confirm Dennis and Sogaro's [18] assertion, that coherent states are not phenomena of laminar-turbulent transition as assumed earlier, but govern also the dynamics of fully developed turbulent pipe flow. 2. For all investigated Reynolds numbers, the weight distribution of states shows probabilities with positive skewness towards lower states, with a most encountered azimuthal wave number of 3. The weight distribution of wave number structures is reasonably-well described by a Poisson distribution. This is a hint that the transition to a new wave number structure is independent of the present state. 3. The weight distribution of wave number states for all Reynolds number is very similar. We conclude that the Reynolds number effect, i.e. the Reynolds number dependence of turbulent statistics in the range of moderate Reynolds numbers, is not reflected from the dynamical systems viewpoint of coherent states. There is a lot of future work required in this new field of turbulence: From a phenomenological perspective, we are particularly interested if it is possible to uncover any Reynolds number dependencies. Therefore, more Reynolds number sets are currently being measured in order to increase the resolution within the range of moderate Reynolds number flow and thereby increase our sensibility to unveil possible tendencies. In parallel, we are currently processing the present data sets to present more statistical features regarding the streamwise organization of the wave number states, _inter alia_ the maximum and average length distributions, as well as their state recurrence timescales. 
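As a complement to key observation 2, the sketch below shows how the comparison between the observed structure distribution and a Poisson law with mean \(\lambda=4\) can be set up. The relative frequencies in the example dictionary are placeholders for illustration, not the measured values of Fig. 13.

```python
import math

def poisson_pmf(k, lam=4.0):
    """Poisson probability mass function with mean lam."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

# Placeholder relative frequencies of structures with wave numbers 1..10
observed = {1: 0.06, 2: 0.14, 3: 0.20, 4: 0.19, 5: 0.15,
            6: 0.11, 7: 0.07, 8: 0.04, 9: 0.02, 10: 0.02}

for k, f_obs in observed.items():
    print(f"k = {k:2d}:  observed {f_obs:.2f}   Poisson(4) {poisson_pmf(k):.2f}")
```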
We assume that coherent states, apart from their phenomenological importance for the understanding of the nature of turbulence [25], play a key role in some of the most relevant fields of fluid engineering, e.g. as contributors to the Reynolds stresses, as well as to the heat and species transport between the bulk and near-wall region. Coherent motions likely have a strong contribution to high particle concentrations close to the wall, namely turbophoresis, which causes scaling of pipe walls, one of the key issues to be tackled in particle-laden flows. Therefore, we are interested in mechanisms to control the coherent structures. For instance, we address how magnetic fields can influence coherent motions by turbulent dissipation from a theoretical, experimental and numerical perspective [26; 27; 28; 29], and suspect that damping of the coherent sweeps can reduce an important chain element of the process that transports scaling particles close to the wall region. ###### Acknowledgements. This research received financial support from the Brazilian National Council for Scientific and Technological Development (CNPq) and Petrobras. The authors gratefully acknowledge this support. ## Data Availability Statement The data that support the findings of this study are available from the corresponding author upon reasonable request. ## Appendix A Detection and visualization procedure In Fig. 14, we present a concise illustration of the methodology for the detection and visualization of the coherent states. First, in the detection procedure, the individual snapshot's wave number is obtained by an FFT analysis of a spatial correlation function along the pipe's azimuth. The snapshot's flow field is then allocated to its corresponding state bin. Then, in the visualization procedure, the coherent patterns are reconstructed by means of conditional averaging of the state-assigned flow fields.
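To complete the picture of the appendix procedure, a compact sketch of the visualization half is given below: correlation maps assigned to one wave number state are conditionally averaged, with the additional split on the sign of the streamwise fluctuation at the reference point described in Sec. II.B. Array names and shapes are assumptions made for illustration only.

```python
import numpy as np

def conditional_average(corr_maps, states, ref_sign, target_state):
    """Average cross-stream correlation maps assigned to one wave number state.

    corr_maps: array of shape (n_snapshots, n_r, n_theta) holding the spatial
               correlation C(r, theta) of each snapshot.
    states:    per-snapshot azimuthal wave number from the detection step.
    ref_sign:  per-snapshot sign (+1 / -1) of the streamwise fluctuation at
               the reference point, used as the second sampling condition.
    """
    states, ref_sign = np.asarray(states), np.asarray(ref_sign)
    averaged = {}
    for sign in (+1, -1):
        mask = (states == target_state) & (ref_sign == sign)
        averaged[sign] = corr_maps[mask].mean(axis=0) if mask.any() else None
    return averaged

# Example with random placeholder data for wave number state 4
rng = np.random.default_rng(1)
maps = rng.standard_normal((200, 40, 72))
result = conditional_average(maps, rng.integers(2, 8, 200),
                             rng.choice([-1, 1], 200), target_state=4)
print(result[+1].shape if result[+1] is not None else None)   # (40, 72)
```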
2302.08211
Stable-Limit Non-symmetric Macdonald Functions in Type A
We construct and study an explicit simultaneous $\mathscr{Y}$ eigenbasis of Ion and Wu's standard representation of the $^+$stable-limit double affine Hecke algebra for the limit Cherednik operators $\mathscr{Y}_i$. This basis arises as a generalization of Cherednik's non-symmetric Macdonald polynomials of type $GL_n$. We utilize links between $^+$stable-limit double affine Hecke algebra theory of Ion and Wu and the double Dyck path algebra of Carlsson and Mellit that arose in their proof of the Shuffle Conjecture. As a consequence, the spectral theory for the limit Cherednik operators is understood.
Milo Bechtloff Weising
2023-02-16T10:53:34Z
http://arxiv.org/abs/2302.08211v2
# Stable-Limit Non-symmetric Macdonald Functions in Type A ###### Abstract We construct and study an explicit simultaneous \(\mathcal{Y}\) eigenbasis of Ion and Wu's standard representation of the \({}^{+}\)stable-limit double affine Hecke algebra for the limit Cherednik operators \(\mathcal{Y}_{i}\). This basis arises as a generalization of Cherednik's non-symmetric Macdonald polynomials of type \(GL_{n}\). We utilize links between \({}^{+}\)stable-limit double affine Hecke algebra theory of Ion and Wu and the double Dyck path algebra of Carlsson and Mellit that arose in their proof of the Shuffle Conjecture. As a consequence, the spectral theory for the limit Cherednik operators is understood. **Keywords:** stable-limit, Macdonald polynomials, double affine Hecke algebra, double Dyck path algebra, Cherednik operators ## 1 Introduction This is a copy of the author's FPSAC 2023 submission. For the sake of satisfying the page limit for FPSAC most of the proofs are either only given as sketches or not given at all. The longer version with complete details will appear soon and possibly replace this version. The Shuffle Conjecture, now the Shuffle Theorem [2], is a combinatorial statement regarding the Frobenius character, \(\mathcal{F}_{R_{n}}\), of the diagonal coinvariant algebra \(R_{n}\) which generalizes the coinvariant algebra arising from the geometry of flag varieties. The following explicit formula is due to Haiman [5]: \[\mathcal{F}_{R_{n}}(X;q,t)=(-1)^{n}\nabla e_{n}[X]\] where the operator \(\nabla\) is an eigenoperator on symmetric functions prescribed by its action on the modified Macdonald symmetric functions as \[\nabla\widetilde{H}_{\mu}=\widetilde{H}_{\mu}[-1]\cdot\widetilde{H}_{\mu}.\] The original conjecture of Haglund, Haiman, Loehr, Remmel, and Ulyanov states the following: **Theorem 1** (Shuffle Theorem).: [5] \[(-1)^{n}\nabla e_{n}[X]=\sum_{\pi}\sum_{w\in WP_{\pi}}t^{\operatorname{area}( \pi)}q^{\operatorname{dinv}(\pi,w)}x_{w}.\] In the above, \(\pi\) ranges over the set of Dyck paths of length \(n\) and \(WP_{\pi}\) is the set of word parking functions corresponding to \(\pi\). The values \(area(\pi)\) and \(dinv(\pi,w)\) are certain statistics corresponding to \(\pi\) and \(w\in WP_{\pi}\). In [2], Carlsson and Mellit prove the Compositional Shuffle Conjecture, a generalization of the original Shuffle Conjecture. The authors construct and investigate a quiver path algebra, \(\mathbb{A}_{q,t}\), called the Double Dyck Path algebra. They construct a representation of \(\mathbb{A}_{q,t}\), called the standard representation, built on certain mixed symmetric and non-symmetric polynomial algebras with actions from Demazure-Lusztig operators, Hall-Littlewood creation operators, and plethysms. The Compositional Shuffle Conjecture falls out after a rich understanding of the standard representation is developed. Later analysis done by Carlsson, Gorsky, and Mellit [1] showed that in fact \(\mathbb{A}_{q,t}\) occurs naturally in the context of equivariant cohomology of Hilbert schemes. Recent work by Ion and Wu [6] has made progress in linking the work of Carlsson and Mellit on \(\mathbb{A}_{q,t}\) to the representation theory of double affine Hecke algebras. Ion and Wu introduce the \({}^{+}\)stable-limit double affine Hecke algebra \(\mathcal{H}^{+}\) along with a representation \(\mathcal{P}^{+}_{as}\) of \(\mathcal{H}^{+}\) from which one can recover the standard \(\mathbb{A}_{q,t}\) representation. 
The main obstruction in making a stable-limit theory for the double affine Hecke algebras is the lack of an inverse system of the double affine Hecke algebras in the traditional sense. Ion and Wu get around this obstruction by introducing a new notion of convergence (Defn. 6) for sequences of polynomials with increasing numbers of variables along with limit versions of the standard Cherednik operators defined by this convergence. Central to the study of the standard Cherednik operators are the non-symmetric Macdonald polynomials. The non-symmetric Macdonald polynomials in full generality were introduced first by Cherednik [3] in the context of proving the Macdonald constant-term conjecture. The introduction of the double affine Hecke algebra, along with the non-symmetric Macdonald polynomials by Cherednik, constituted a significant development in representation theory. They serve as a non-symmetric counterpart to the symmetric Macdonald polynomials introduced by Macdonald as a q,t-analog of Schur functions. Further, they give an orthogonal basis of the polynomial representation consisting of weight vectors for the Cherednik operators. In particular, the correct choice of symmetrization applied to a non-symmetric Macdonald polynomial will yield its symmetric counterpart. The type A symmetric Macdonald polynomials are a remarkable basis of symmetric polynomials simultaneously generalizing many other well studied bases which can be recovered by appropriate specializations of values for q and t. The aforementioned modified Macdonald functions \(\widetilde{H}_{\mu}\) can be obtained via a plethystic transformation from the symmetric Macdonald polynomials in sufficiently many variables. The spectral theory of non-symmetric Macdonald polynomials is well understood using the combinatorics of affine Weyl groups. It is natural to seek an asymptotic extension for the non-symmetric Macdonald polynomials following the methods of Ion and Wu. In particular, does the standard \(\mathcal{H}^{+}\) representation \(\mathcal{P}^{+}_{as}\) have a basis of weight vectors for the limit Cherednik operators \(\mathcal{Y}_{i}\)? The main result, Theorem 7, of this paper answers this question in the affirmative. The strategy for finding a basis of weight vectors for the limit Cherednik operators \(\mathcal{Y}_{i}\) is the following. First, we show that the non-symmetric Macdonald polynomials have stable-limits in the sense that if we start with a composition \(\mu\) and consider the compositions \(\mu*0^{m}\) for \(m\geq 0\) then the corresponding sequence of non-symmetric Macdonald polynomials \(E_{\mu*0^{m}}\) converges to an element \(\widetilde{E}_{\mu}\) of \(\mathcal{P}^{+}_{as}\). Next, we show that these limits of non-symmetric Macdonald polynomials are \(\mathcal{Y}\)-weight vectors. Importantly, the newly constructed set of \(\widetilde{E}_{\mu}\) do _not_ span \(\mathcal{P}^{+}_{as}\). To fill in these gaps, the lowering operators \(d_{-}\) from \(\mathbb{A}_{q,t}\) are used to create enough \(\mathcal{Y}\) weight vectors to span \(\mathcal{P}^{+}_{as}\). Finally, a symmetrization operator is used to show that the spanning set obtained from this process is actually a basis in Theorem 7. Lemma 1, Theorem 5, and Lemma 5 together give a description of the weights across all weight vectors in \(\mathcal{P}^{+}_{as}\). The author would like to thank the FPSAC referees who alerted the author to an unpublished work of Ion and Wu which independently determines the same explicit description of these eigenvalues. 
## 2 Definitions and Notation ### Double Affine Hecke Algebras in Type GL **Definition 1**.: Define the _double affine Hecke algebra_\(\mathcal{H}_{n}\) to be the \(\mathbb{Q}(q,t)\)-algebra generated by \(T_{1},\ldots,T_{n-1}\), \(X_{1}^{\pm 1},\ldots,X_{n}^{\pm 1}\), and \(Y_{1}^{\pm 1},\ldots,Y_{n}^{\pm 1}\) with the following relations: * \((T_{i}-1)(T_{i}+t)=0\), (iii) \(T_{i}Y_{i}T_{i}=tY_{i+1}\), \[T_{i}T_{i+1}T_{i}=T_{i+1}T_{i}T_{i+1}, T_{i}Y_{j}=Y_{j}T_{i}\text{, }i\notin\{j,j+1\},\] \[T_{i}T_{j}=T_{j}T_{i}\text{, }|i-j|>1\text{, }Y_{i}Y_{j}=Y_{j}Y_{i},\] * \(T_{i}^{-1}X_{i}T_{i}^{-1}=t^{-1}X_{i+1}\), (iv) \(Y_{1}T_{1}X_{1}=X_{2}Y_{1}T_{1}\), \[T_{i}X_{j}=X_{j}T_{i}\text{, }i\notin\{j,j+1\}\text{, }\] (v) \(Y_{1}X_{1}\cdots X_{n}=qX_{1}\cdots X_{n}Y_{1}\) \[X_{i}X_{j}=X_{j}X_{i}\text{,}\] Further, define the special element \(\omega_{n}\) by \[\omega_{n}:=T_{n-1}^{-1}\cdots T_{1}^{-1}Y_{1}^{-1}\] #### 2.1.1 Standard DAHA representation **Definition 2**.: Let \(\mathcal{P}_{n}=\mathbb{Q}(q,t)[x_{1}^{\pm 1},\ldots,x_{n}^{\pm 1}]\). The _standard representation of \(\mathcal{H}_{n}\)_ is given by the following action on \(\mathcal{P}_{n}\): * \(T_{i}f(x_{1},\ldots,x_{n})=s_{i}f(x_{1},\ldots,x_{n})+(1-t)x_{i}\frac{1-s_{i}}{ x_{i}-x_{i+1}}f(x_{1},\ldots,x_{n})\) * \(X_{i}f(x_{1},..,x_{n})=x_{i}f(x_{1},\ldots,x_{n})\) * \(\omega_{n}f(x_{1},\ldots,x_{n})=f(q^{-1}x_{n},x_{1},\ldots,x_{n-1})\) Here \(s_{i}\) denotes the operator that swaps the variables \(x_{i}\) and \(x_{i+1}\). Under this action the \(T_{i}\) operators are known as the _Demazure-Lusztig operators_. For q,t generic \(\mathcal{P}_{n}\) is known to be a faithful representation of \(\mathcal{H}_{n}\). The action of the elements \(Y_{1},\ldots,Y_{n}\in\mathcal{H}_{n}\) are called _Cherednik operators_. Set \(\mathcal{H}_{n}^{+}\) to be the positive part of \(\mathcal{H}_{n}\) i.e. the subalgebra generated by \(T_{1},\ldots,T_{n-1}\), \(X_{1},\ldots,X_{n}\), and \(Y_{1},\ldots,Y_{n}\) without allowing for inverses in the \(X\) and \(Y\) elements and set \(\mathcal{P}_{n}^{+}=\mathbb{Q}(q,t)[x_{1},\ldots,x_{n}]\). Importantly, \(\mathcal{P}_{n}^{+}\) is a \(\mathcal{H}_{n}^{+}\) submodule of \(\mathcal{P}_{n}\). #### 2.1.2 Non-symmetric Macdonald Polynomials and Symmetric Functions **Definition 3**.: The _non-symmetric Macdonald polynomials_ (for \(GL_{n}\)) are a family of Laurent polynomials \(E_{\mu}\in\mathcal{P}_{n}\) for \(\mu\in\mathbb{Z}^{n}\) uniquely determined by the following: * Triangularity: Each \(E_{\mu}\) has a monomial expansion of the form \(E_{\mu}=x^{\mu}+\sum_{\lambda<\mu}a_{\lambda}x^{\lambda}\) where \({}^{\prime\prime}<{}^{\prime\prime}\) denotes the Bruhat order for \(\mathbb{Z}^{n}\) * Weight Vector: Each \(E_{\mu}\) is a weight vector for the operators \(Y_{1},\ldots,Y_{n}\in\mathcal{H}_{n}\). The non-symmetric Macdonald polynomials are a \(Y\) weight basis for the \(\mathcal{H}_{n}\) standard representation \(\mathcal{P}_{n}\). For \(\mu\in\mathbb{Z}^{n}\), \(E_{\mu}\) is homogeneous with degree \(\mu_{1}+\cdots+\mu_{n}\). Further, the set of \(E_{\mu}\) corresponding to \(\mu\in\mathbb{Z}_{\geq 0}^{n}\) gives a basis for \(\mathcal{P}_{n}^{+}\). **Definition 4**.: In this paper, a _composition_ will refer to a finite tuple \(\mu=(\mu_{1},\ldots,\mu_{n})\) of non-negative integers. We allow for the empty composition \(\emptyset\) with no parts. The length of a composition \(\mu=(\mu_{1},\ldots,\mu_{n})\) is \(\ell(\mu)=n\) and the size of the composition is \(|\mu|=\mu_{1}+\ldots+\mu_{n}\). 
Given two compositions \(\mu=(\mu_{1},\ldots,\mu_{n})\) and \(\beta=(\beta_{1},\ldots,\beta_{m})\), define \(\mu*\beta=(\mu_{1},\ldots,\mu_{n},\beta_{1},\ldots,\beta_{m})\). A _partition_ is a composition \(\lambda=(\lambda_{1},\ldots,\lambda_{n})\) with \(\lambda_{1}\geq\ldots\geq\lambda_{n}\geq 1\). We denote \(sort(\mu)\) to be the partition obtained by ordering the nonzero elements of \(\mu\) in weakly decreasing order. Define the _ring of symmetric functions_\(\Lambda\) to be the inverse limit of the symmetric polynomial rings \(\mathbb{Q}(q,t)[x_{1},\ldots,x_{n}]^{S_{n}}\) with respect to the quotient maps sending \(x_{n}\to 0\). In this paper we use plethystic notation. For a complete introduction and explanation of plethysm we refer the reader to [7]. For example, if \(F\in\Lambda\) and \(\{t_{1},t_{2},\ldots\}\) is a set of independent variables, then we write \(F[t_{1}+t_{2}+\cdots]\) for the symmetric function given by F with variables in the set \(\{t_{1},t_{2},\ldots\}\). We will in a few instances use the notation \(\mathbb{1}(p)\) to denote the value \(1\) if the statement p is true and \(0\) otherwise. ### Stable-Limit DAHA of Ion and Wu **Definition 5**.: The _\({}^{+}\)stable-limit double affine Hecke algebra_ of Ion and Wu, \(\mathcal{H}^{+}\), is the algebra generated over \(\mathbb{Q}(q,t)\) by the elements \(T_{i},X_{i},Y_{i}\) for \(i\in\mathbb{N}\) satisfying the following relations: * The generators \(T_{i},X_{i}\) for \(i\in\mathbb{N}\) satisfy (i) and (ii) of Defn. 1. * The generators \(T_{i},Y_{i}\) for \(i\in\mathbb{N}\) satisfy (i) and (iii) of Defn. 1. * \(Y_{1}T_{1}X_{1}=X_{2}Y_{1}T_{1}\) We include Ion and Wu's full definition of convergence in Defn. 6 for the sake of completeness. A full understanding of convergence is not required to follow the rest of this paper. **Definition 6**.: [6] Let \(\mathcal{P}(k)^{+}:=\mathbb{Q}(q,t)[x_{1},\ldots,x_{k}]\otimes\Lambda[x_{k+1}+ x_{k+2}+\ldots]\). Define the _ring of almost symmetric functions_\(\mathcal{P}^{+}_{as}:=\bigcup_{k\geq 0}\mathcal{P}(k)^{+}\). Further, let \(\mathcal{P}^{+}_{\infty}\) denote the inverse limit of the rings \(\mathcal{P}^{+}_{k}\) with respect to the homomorphisms which send \(x_{k+1}\) to \(0\) at each step. Note \(\mathcal{P}^{+}_{as}\subset\mathcal{P}^{+}_{\infty}\). Define \(\rho:\mathcal{P}^{+}_{as}\to x_{1}\mathcal{P}^{+}_{as}\) to be the linear map defined by \(\rho(x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}F[x_{m}+x_{m+1}+\ldots])=\mathbb{1}(a_{ 1}>0)x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}F[x_{m}+x_{m+1}+\ldots]\) for \(F\in\Lambda\). Let \((f_{k})_{k\geq 1}\) be a sequence of polynomials with \(f_{k}\in\mathcal{P}^{+}_{k}\). Then the sequence \((f_{k})_{k\geq 1}\) is _convergent_ if there exist some N and auxiliary sequences \((h_{k})_{k\geq 1}\), \((g^{(i)}_{k})_{k\geq 1}\), and \((a^{(i)}_{k})_{k\geq 1}\) for \(1\leq i\leq N\) with \(h_{k},g^{(i)}_{k}\in\mathcal{P}^{+}_{k}\), \(a^{(i)}_{k}\in\mathbb{Q}(q,t)\) with the following properties: * For all k, \(f_{k}=h_{k}+\sum_{i=1}^{N}a^{(i)}_{k}g^{(i)}_{k}\). * The sequences \((h_{k})_{k\geq 1}\), \((g^{(i)}_{k})_{k\geq 1}\) for \(1\leq i\leq N\) converge in \(\mathcal{P}^{+}_{\infty}\) with limits \(h,g^{(i)}\) respectively. Further, \(g^{(i)}\in\mathcal{P}^{+}_{as}\). * The sequences \(a^{(i)}_{k}\) for \(1\leq i\leq N\) converge with respect to the t-adic topology on \(\mathbb{Q}(q,t)\) with limits \(a^{(i)}\) which are required to be in \(\mathbb{Q}(q,t)\). 
The sequence is said to have a limit given by \(\lim_{k}f_{k}=h+\sum_{i=1}^{N}a^{(i)}g^{(i)}\). Ion and Wu use their definition of convergence to define asymptotic versions of the Cherednik operators. **Theorem 2**.: [6] Consider the sequence of operators \(\widetilde{Y}_{1}^{(n)}:=t^{n}\rho\circ Y_{1}^{(n)}\) where \(Y_{1}^{(n)}\) is the operator coming from the action of \(Y_{1}\in\mathcal{H}_{n}^{+}\) on \(\mathcal{P}_{n}^{+}\). Let \(\pi_{n}:\mathcal{P}_{as}^{+}\to\mathcal{P}_{n}^{+}\) be the canonical projection and let \(f\in\mathcal{P}_{as}^{+}\). Then the sequence \((\widetilde{Y}_{1}^{(n)}\circ\pi_{n}(f))_{n\geq 1}\) is convergent with limit which is also almost symmetric. This yields a well-defined operator \(\mathcal{Y}_{1}:\mathcal{P}_{as}^{+}\to\mathcal{P}_{as}^{+}\) given by \(\mathcal{Y}_{1}(f):=\lim_{n}\widetilde{Y}_{1}^{(n)}\circ\pi_{n}(f)\). Further, the operator \(\mathcal{Y}_{1}\) along with the Demazure-Lusztig action of the \(T_{i}\)'s and multiplication by the \(X_{i}\)'s generate an \(\mathcal{H}^{+}\) action on \(\mathcal{P}_{as}^{+}\). ## 3 Stable-Limits of Non-symmetric Macdonald Polynomials Given a composition \(\mu\), consider the compositions \(\mu*0^{m}\) for \(m\geq 0\) and the corresponding sequence of non-symmetric Macdonald polynomials \((E_{\mu*0^{m}})_{m\geq 0}\). In order to prove the convergence of these sequences we use the following result of [4] giving an explicit combinatorial formula for the non-symmetric Macdonald polynomials. Note that the \(q,t\) conventions in [4] differ from those appearing in this paper. In the below theorem the appropriate translation \(q\to q^{-1}\) has been made. **Theorem 3**.: [4] For a composition \(\mu\) with \(\ell(\mu)=n\) the following holds: \[E_{\mu}=\sum_{\begin{subarray}{c}\sigma:\mu\to[n]\\ \text{non-attacking}\end{subarray}}X^{\sigma}q^{-maj(\hat{\sigma})}t^{ coinv(\hat{\sigma})}\prod_{\begin{subarray}{c}u\in dg^{\prime}(\mu)\\ \hat{\sigma}(u)\neq\hat{\sigma}(d(u))\end{subarray}}\left(\frac{1-t}{1-q^{-( \ell(u)+1)\hat{t}(a(u)+1)}}\right)\] The combinatorial description of non-symmetric Macdonald polynomials in the Haiman-Haglund-Loehr formula relies on the combinatorics of _non-attacking labellings_ of certain box diagrams corresponding to compositions. In the interest of space we refer the reader to [4] for all the notation used above such as \(\hat{\sigma}\), \(d\), \(a\), \(\ell\), \(maj\), and \(coinv\). We now show the convergence for the sequence \((E_{\mu*0^{m}})_{m\geq 0}\). The method used shows convergence and gives an explicit combinatorial formula for the limit functions. 
**Theorem 4**.: For a composition \(\mu\) with \(\ell(\mu)=n\) the sequence \((E_{\mu*0^{m}})_{m\geq 0}\) is convergent with limit \(\widetilde{E}_{\mu}\) in \(\mathcal{P}_{as}^{+}\) given by \[\widetilde{E}_{\mu}:=\sum_{\begin{subarray}{c}\lambda\text{ partition}\\ |\lambda|\leq|\mu|\end{subarray}}m_{\lambda}[x_{n+1}+\cdots]\sum_{ \begin{subarray}{c}\sigma:\mu*0^{\ell(\lambda)}\to\{1_{1},\ldots,n+\ell( \lambda)\}\\ \text{non-attacking}\\ |\sigma^{-1}(n+i)|=\lambda_{i}\end{subarray}}x_{1}^{|\sigma^{-1}(1)|}\cdots x _{n}^{|\sigma^{-1}(n)|}q^{-maj(\hat{\sigma})}t^{coinv(\hat{\sigma})}\widetilde {\Gamma}(\hat{\sigma})\] where \[\widetilde{\Gamma}(\hat{\sigma})=\prod_{\begin{subarray}{c}u\in dg^{\prime}( \mu*0^{\ell(\lambda)})\\ \hat{\sigma}(u)\neq\hat{\sigma}(d(u))\\ u\text{ not in row 1}\end{subarray}}\left(\frac{1-t}{1-q^{-(\ell(u)+1)\hat{t}(a(u)+1)}} \right)\prod_{\begin{subarray}{c}u\in dg^{\prime}(\mu*0^{\ell(\lambda)})\\ \hat{\sigma}(u)\neq\hat{\sigma}(d(u))\\ u\text{ in row 1}\end{subarray}}(1-t)\] Proof Sketch.: Start by using the HHL formula to expand \(E_{\mu*0^{m}}\) for \(m\geq 1\). Because \(E_{\mu*0^{m}}\) is symmetric in \(x_{n+1},\ldots,x_{n+m}\) we can expand relative to the monomial symmetric functions \(m_{\lambda}[x_{n+1}+\ldots+x_{n+m}]\). This is made explicit using the combinatorics of non-attacking labellings as per HHL. For sufficiently large \(m\geq|\mu|\) the \(\mathbb{Q}(q,t)[x_{1},\ldots,x_{n}]\)-coefficients of the \(m_{\lambda}[x_{n+1}+\ldots+x_{n+m}]\) stabilize to polynomials with coefficients that converge t-adically. **Remark**.: Note importantly, that for any composition \(\mu\) and \(m\geq 0\), by definition \(\widetilde{E}_{\mu*0^{m}}=\widetilde{E}_{\mu}\). #### 3.0.1 Example Here we list a few simple examples. * \(\widetilde{E}_{(1)}=x_{1}\) * \(\widetilde{E}_{(2,0)}=x_{1}^{2}+\frac{q^{-1}(1-t)}{1-q^{-1}t}x_{1}m_{1}[x_{2 }+x_{3}+\cdots]\) * \(\widetilde{E}_{(0,2)}=x_{2}^{2}+(1-t)x_{1}^{2}+\frac{1-q^{-1}t+q^{-1}}{1-q^{-1 }t}(1-t)x_{1}x_{2}+\left(\frac{q^{-1}(1-t)}{1-q^{-1}t}x_{2}+\frac{q^{-1}(1-t)^{ 2}}{1-q^{-1}t}x_{1}\right)m_{1}[x_{3}+\cdots]\) * \(\widetilde{E}_{(2,2)}=x_{1}^{2}x_{2}^{2}+\frac{q^{-1}(1-t)}{1-q^{-1}t}(x_{1}^{ 2}x_{2}+x_{1}x_{2}^{2})m_{1}[x_{3}+x_{4}+\cdots]+\left(\frac{q^{-2}(1-t)^{2}(1 +t)}{q^{-2}t^{3}-q^{-1}t^{2}-q^{-1}t+1}\right)x_{1}x_{2}m_{1,1}[x_{3}+x_{4}+ \cdots]\) ## 4 \(\mathcal{Y}\) Weight Basis of \(\mathcal{P}^{+}_{\text{as}}\) Given a family of commuting operators \(\{y_{i}:i\in I\}\) and a weight vector \(\nu\) we denote its weight by the function \(\alpha:I\to\mathbb{Q}(q,t)\) such that \(y_{i}\nu=\alpha(i)\nu.\) We sometimes denote \(\alpha\) as \((\alpha_{1},\alpha_{2},\ldots).\) ### The \(\widetilde{E}_{\mu}\) are \(\mathcal{Y}\) weight vectors In what follows, the classical spectral theory for non-symmetric Macdonald polynomials is used to demonstrate that the limit functions \(\widetilde{E}_{\mu}\) are \(\mathcal{Y}\) weight vectors. The below lemma is a simple application of this classical theory and of basic properties of the t-adic topology on \(\mathbb{Q}(q,t)\). **Lemma 1**.: For a composition \(\mu\) with \(\ell(\mu)=n\) define \(\alpha_{\mu}^{(m)}\) to be the weight of \(E_{\mu*0^{m}}\). Then in the \(t\)-adic topology on \(\mathbb{Q}(q,t)\) the sequence \(t^{n+m}\alpha_{\mu}^{(m)}(i)\) converges in \(\mathbb{m}\) to some \(\widetilde{\alpha}_{\mu}(i)\in\mathbb{Q}(q,t)\). 
In particular, \(\widetilde{\alpha}_{\mu}(i)=0\) for \(i>n\) and for \(1\leq i\leq n\) we have that \(\widetilde{\alpha}_{\mu}(i)=0\) exactly when \(\mu_{i}=0\). _Proof:_ Take \(\mu=(\mu_{1},\dots,\mu_{n})\). From classic double affine Hecke algebra theory we have \(\alpha_{\mu}^{(0)}(i)=q^{\mu_{i}}i^{1-\beta_{\mu}(i)}\) where \[\beta_{\mu}(i):=\#\{j:1\leq j\leq i\,\mu_{j}\leq\mu_{i}\}+\#\{j:i<j\leq n\,\mu_{i}>\mu_{j}\}.\] It follows then that \[t^{n+m}\alpha_{\mu}^{(m)}(i)=\begin{cases}q^{\mu_{i}}t^{n+m+1-(\beta_{\mu}(i)+ m\mathbb{1}(\mu_{i}\neq 0))}=t^{n}\alpha_{\mu}^{(0)}(i)&i\leq n,\mu_{i}\neq 0\\ q^{\mu_{i}}t^{n+m+1-(\beta_{\mu}(i)+m\mathbb{1}(\mu_{i}\neq 0))}=t^{n+m}\alpha_{ \mu}^{(0)}(i)&i\leq n,\mu_{i}=0\\ t^{n+m+1-(\#(\mu_{j}=0)+i-n)}=t^{\#(\mu_{j}\neq 0)}t^{m+1-(i-n)}&i>n\end{cases}\] Lastly, by limiting \(m\to\infty\) we get the result. For a composition \(\mu\) define the list of scalars \(\widetilde{\alpha}_{\mu}\) using the formula in Lemma 1 for \(\widetilde{\alpha}_{\mu}(i)\) for \(i\in\mathbb{N}\). We use Lemma 1 to show that certain denominators that occur in the proof of Lemma 2 below do not vanish in the limit as \(m\to\infty\). **Lemma 2**.: For \(\mu=(\mu_{1},\dots,\mu_{n})\) with \(\mu_{i}\neq 0\) for \(1\leq i\leq n\), \(\widetilde{E}_{\mu}\) is a \(\mathcal{Y}\)-weight vector with weight \(\widetilde{\alpha}_{\mu}\). Proof.: We spare the reader the direct calculation which uses the limit definition of the \(\mathcal{Y}_{r}\) operators and Prop. 6.21 from [6] which leads to \[\mathcal{Y}_{r}(\widetilde{E}_{\mu})=\widetilde{\alpha}_{\mu}(r)(T_{r-1}\cdots T _{1}\rho T_{1}^{-1}\cdots T_{r-1}^{-1})\widetilde{E}_{\mu}. \tag{4.1}\] We will show that the right side of (4.1) is \(\widetilde{\alpha}_{\mu}(r)\widetilde{E}_{\mu}\). As \(\widetilde{\alpha}_{\mu}(r)=0\) for \(r>n\) by Lemma 1, the lemma holds for \(r\leq n\). Now let us consider some fixed \(r\leq n\). Below we show that \(x_{1}|T_{1}^{-1}\cdots T_{r-1}^{-1}\widetilde{E}_{\mu}\) from which it follows that \[\rho(T_{1}^{-1}\cdots T_{r-1}^{-1}\widetilde{E}_{\mu})=T_{1}^{-1}\cdots T_{r-1 }^{-1}\widetilde{E}_{\mu}\] implying \[\mathcal{Y}_{r}(\widetilde{E}_{\mu}) =\widetilde{\alpha}_{\mu}(r)(T_{r-1}\cdots T_{1}\rho T_{1}^{-1} \cdots T_{r-1}^{-1})\widetilde{E}_{\mu}\] \[=\widetilde{\alpha}_{\mu}(r)(T_{r-1}\cdots T_{1}T_{1}^{-1}\cdots T _{r-1}^{-1})\widetilde{E}_{\mu}\] \[=\widetilde{\alpha}_{\mu}(r)\widetilde{E}_{\mu}\] as desired. To show that \(x_{1}|T_{1}^{-1}\cdots T_{r-1}^{-1}\widetilde{E}_{\mu}\) it suffices to show that for all \(m\geq 0\), \(x_{1}|T_{1}^{-1}\cdots T_{r-1}^{-1}E_{\mu*0^{m}}\). To this end fix \(m\geq 0\). We have that \[\alpha_{\mu}^{(m)}(r)E_{\mu*0^{m}} =Y_{r}^{(n+m)}(E_{\mu*0^{m}})\] \[=t^{-(r-1)}T_{r-1}\cdots T_{1}\omega_{n+m}^{-1}T_{n+m-1}^{-1} \cdots T_{r}^{-1}E_{\mu*0^{m}}.\] Since \(\alpha_{\mu}^{(m)}(r)\neq 0\) we can have \(\frac{1}{\alpha_{\mu}^{(m)}(r)}T_{1}^{-1}\cdots T_{r-1}^{-1}\) act on both sides to get \[T_{1}^{-1}\cdots T_{r-1}^{-1}E_{\mu*0^{m}}=\frac{t^{-(r-1)}}{\alpha_{\mu}^{(m)} (r)}\omega_{n+m}^{-1}T_{n+m-1}^{-1}\cdots T_{r}^{-1}E_{\mu*0^{m}}.\] By HHL any non-attacking labelling of \(\mu*0^{m}\) will have row 1 diagram labels given by \(\{1,2,\ldots,n\}\) so \(x_{1}\cdots x_{n}\) divides \(E_{\mu*0^{m}}\) so in particular \(x_{r}\) divides \(E_{\mu*0^{m}}\) for all \(m\geq 0\). 
Lastly, \[\omega_{n+m}^{-1}T_{n+m-1}^{-1}\cdots T_{r}^{-1}X_{r} =\omega_{n+m}^{-1}t^{-(n+m-r)}X_{n+m}T_{n+m-1}\cdots T_{r}\] \[=qt^{-(n+m-r)}X_{1}\omega_{n+m}^{-1}T_{n+m-1}\cdots T_{r}\] Thus \(x_{1}\) divides \(T_{1}^{-1}\cdots T_{r-1}^{-1}E_{\mu*0^{m}}\) for all \(m\geq 0\) showing the result. Now we consider the general situation where the composition \(\mu\) can have some parts which are 0. We can extend the above result, Lemma 2, by a straight-forward argument using intertwiner theory from the study of affine Hecke algebras. **Theorem 5**.: For all compositions \(\mu\), \(\widetilde{E}_{\mu}\) is a \(\mathcal{Y}\)-weight vector with weight \(\widetilde{\alpha}_{\mu}\). Proof Sketch: Lemma 2 shows that this statement holds for any composition with all parts nonzero. Further, every composition \(\mu\) can be written as a permutation of a composition of the form \(\nu*0^{m}\) for a partition \(\nu\) and some \(m\geq 0\). Hence, it suffices to show that for any composition \(\mu\), if \(\widetilde{E}_{\mu}\) satisfies the theorem then so will \(\widetilde{E}_{s_{i}(\mu)}\). This process is made rigorous by using induction on Bruhat order. Using the intertwiner operators from standard affine Hecke algebra theory, given by \(\varphi_{i}=T_{i}\mathcal{Y}_{i}-\mathcal{Y}_{i}T_{i}\), we only need to show that for any \(\mu\) with \(s_{i}(\mu)>\mu\) in Bruhat order, \[\varphi_{i}\widetilde{E}_{\mu}=(\widetilde{\alpha}_{\mu}(i)-\widetilde{\alpha }_{\mu}(i+1))\widetilde{E}_{s_{i}(\mu)}.\] Suppose the theorem holds for some \(\mu\) with \(\ell(\mu)=n\) and let \(1\leq i\leq n\) such that \(s_{i}(\mu)>\mu\). Then we have the following: \[\varphi_{i}\widetilde{E}_{\mu} =(T_{i}(\mathcal{Y}_{i}-\mathcal{Y}_{i+1})+(1-t)\mathcal{Y}_{i+1} )\widetilde{E}_{\mu}\] \[=(\widetilde{\alpha}_{\mu}(i)-\widetilde{\alpha}_{\mu}(i+1))T_{i }\widetilde{E}_{\mu}+(1-t)\widetilde{\alpha}_{\mu}(i+1)\widetilde{E}_{\mu}\] \[=\lim_{m}(t^{n+m}\alpha_{\mu}^{(m)}(i)-t^{n+m}\alpha_{\mu}^{(m)}( i+1))T_{i}E_{\mu*0^{m}}+(1-t)t^{n+m}\alpha_{\mu}^{(m)}(i+1)E_{\mu*0^{m}}\] \[=\lim_{m}(t^{n+m}\alpha_{\mu}^{(m)}(i)-t^{n+m}\alpha_{\mu}^{(m)}( i+1))E_{s_{i}(\mu)*0^{m}}\] \[=(\widetilde{\alpha}_{\mu}(i)-\widetilde{\alpha}_{\mu}(i+1)) \widetilde{E}_{s_{i}(\mu)}.\] We have shown in Theorem 5 there is an explicit collection of \(\mathcal{Y}\)-weight vectors \(\widetilde{E}_{\mu}\) in \(\mathcal{P}^{+}_{as}\) arising as the limits of non-symmetric Macdonald polynomials \(E_{\mu*0^{m}}\). Unfortunately, these \(\widetilde{E}_{\mu}\) do not span \(\mathcal{P}^{+}_{as}\). To see this note that one cannot write a non-constant symmetric function as a linear combination of the \(\widetilde{E}_{\mu}\). However, in the below work we build a full \(\mathcal{Y}\) weight basis. ### Constructing the Weight Basis To complete our construction of a full weight basis of \(\mathcal{P}^{+}_{as}\) one needs the \(\partial^{(k)}_{-}\) operators from Ion and Wu. These operators are, up to a change of variables and plethysm, the \(d_{-}\) operators from Carlson and Mellit's standard \(\mathbb{A}_{q,t}\) representation. 
**Definition 7**.: [6] Define the operator \(\partial^{(k)}_{-}:\mathcal{P}(k)^{+}\to\mathcal{P}(k-1)^{+}\) to be the \(\mathcal{P}^{+}_{k-1}\)-linear map which acts on elements of the form \(x_{k}^{n}F[x_{k+1}+x_{k+2}\cdots]\) for \(F\in\Lambda\) and \(n\geq 0\) as \[\partial^{(k)}_{-}(x_{k}^{n}F[x_{k+1}+x_{k+2}+\cdots])=\mathcal{B}_{n}(F)[x_{ k}+x_{k+1}+\cdots].\] Here the \(\mathcal{B}_{n}\) are the Jing operators which serve as creation operators for the Hall-Littlewood symmetric functions \(\mathcal{P}_{\lambda}\) given explicitly by the following plethystic formula: \[\mathcal{B}_{n}(F)[X]=\langle z^{n}\rangle F[X-z^{-1}]Exp[(1-t)zX].\] We refer the reader to [6] for a discussion on the Jing operators. Importantly, the \(\partial^{(k)}_{-}\) operators do not come from the \(\mathcal{H}^{+}\) action itself. Note that the \(\partial^{(k)}_{-}\) operators are homogeneous by construction. We require the following lemma. **Lemma 3**.: [6] The map \(\partial^{(n)}_{-}:\mathcal{P}(n)^{+}\to\mathcal{P}(n-1)^{+}\) is a projection onto \(\mathcal{P}(n-1)^{+}\) i.e. for \(f\in\mathcal{P}(n-1)^{+}\subset\mathcal{P}(n)^{+}\) we have that \(\partial^{(n)}_{-}(f)=f\). Lemma 3 shows that the following operator is well defined. **Definition 8**.: For \(f\in\mathcal{P}(n)^{+}\subset\mathcal{P}^{+}_{as}\) define \(\widetilde{\sigma}(f):=\partial^{(1)}_{-}\cdots\partial^{(n)}_{-}f\). Then \(\widetilde{\sigma}\) defines an operator \(\mathcal{P}^{+}_{as}\to\Lambda\) which we call the _stable-limit symmetrization operator_. For a partition \(\lambda\) define \(\mathcal{A}_{\lambda}=\widetilde{\sigma}(\widetilde{E}_{\lambda})\in\Lambda\). The \(\mathcal{A}_{\lambda}\) symmetric functions have many useful properties including, but not limited to, the following. **Theorem 6**.: The set \(\{\mathcal{A}_{\lambda}:\lambda\text{ is a partition}\}\) is a basis of \(\Lambda\). Proof Sketch.: The result follows after proving the stronger property that each \(A_{\lambda}\) has a unitriangular expansion with respect to dominance order into the Hall-Littlewood symmetric function basis. Stable-limit symmetrization behaves well with respect to permuting the defining composition \(\mu\) of each \(\widetilde{E}_{\mu}\). **Lemma 4**.: For any composition \(\mu\) there is some nonzero scalar \(\gamma_{\mu}\in\mathbb{Q}(q,t)\) such that \[\widetilde{\sigma}(\widetilde{E}_{\mu})=\gamma_{\mu}\mathcal{A}_{\text{sort}( \mu)}\] where \(\gamma_{\mu}=1\) when \(\mu\) is a partition. We can now construct a full \(\mathcal{Y}\)-weight basis of \(\mathcal{P}^{+}_{\text{as}}\). We parameterize this basis by pairs \((\mu|\lambda)\) for \(\mu\) a composition and \(\lambda\) a partition. **Definition 9**.: For \(\mu\) be a composition and \(\lambda\) a partition define the _stable-limit non-symmetric Macdonald function_ corresponding to \((\mu|\lambda)\) as \[\widetilde{E}_{(\mu|\lambda)}:=\partial_{-}^{(\ell(\mu)+1)}\cdots\partial_{- }^{(\ell(\mu)+\ell(\lambda))}\widetilde{E}_{\mu*\lambda}.\] **Remark**.: Note importantly \(\widetilde{E}_{(\mu|\lambda)}\in\mathcal{P}(\ell(\mu))^{+}\), \(\widetilde{\sigma}(\widetilde{E}_{(\mu|\lambda)})=\widetilde{\sigma}( \widetilde{E}_{\mu*\lambda})\), and \(\widetilde{E}_{(\mu|\lambda)}\) is homogeneous of degree \(|\mu|+|\lambda|\). Further, for any composition \(\mu\) and partition \(\lambda\) we have \(E_{(\mu|\varnothing)}=\widetilde{E}_{\mu}\) and \(\widetilde{E}_{(\varnothing|\lambda)}=\mathcal{A}_{\lambda}\). 
The following simple lemma shows that the stable-limit non-symmetric Macdonald functions \(\widetilde{E}_{(\mu|\lambda)}\) are \(\mathcal{Y}\)-weight vectors. **Lemma 5**.: Suppose \(f\in\mathcal{P}(k)^{+}\) is a \(\mathcal{Y}\)-weight vector with weight \((\alpha_{1},\ldots,\alpha_{k},0,0,\ldots)\). Then \(\partial_{-}^{(k)}f\in\mathcal{P}(k-1)^{+}\) is a \(\mathcal{Y}\)-weight vector with weight \((\alpha_{1},\ldots,\alpha_{k-1},0,0,\ldots)\). Proof Sketch.: We know that for \(g\in\mathcal{P}(k)^{+}\) and \(1\leq i\leq k-1\), \(\mathcal{Y}_{i}\partial_{-}^{(k)}g=\partial_{-}^{(k)}\mathcal{Y}_{i}g\), so \(\mathcal{Y}_{i}\partial_{-}^{(k)}f=\partial_{-}^{(k)}\mathcal{Y}_{i}f=\alpha_{i}\partial_{-}^{(k)}f.\) One can show that if \(i\geq k\) then \(\mathcal{Y}_{i}\) annihilates \(\mathcal{P}(k-1)^{+}\). Since \(\partial_{-}^{(k)}f\in\mathcal{P}(k-1)^{+}\), it follows that \(\mathcal{Y}_{i}\partial_{-}^{(k)}f=0\) for all \(i\geq k\). Here we give a few basic examples of stable-limit non-symmetric Macdonald functions expanded in the Hall-Littlewood basis \(\mathcal{P}_{\lambda}\) and their corresponding weights: * \(\widetilde{E}_{(\varnothing|2)}=\mathcal{P}_{2}[x_{1}+\cdots]+\frac{q^{-1}}{1-q^{-1}t}\mathcal{P}_{1,1}[x_{1}+\cdots]\) and has weight \((0,0,\ldots)\) * \(\widetilde{E}_{(0|2)}=\mathcal{P}_{2}[x_{2}+\cdots]+(1-t)x_{1}^{2}+\frac{q^{-1}}{1-q^{-1}t}\mathcal{P}_{1,1}[x_{2}+\cdots]+\frac{(1+q^{-1})(1-t)}{1-q^{-1}t}x_{1}\mathcal{P}_{1}[x_{2}+\cdots]\) and has weight \((0,q^{2}t,0,\ldots)\) * \(\widetilde{E}_{(1|1,1)}=x_{1}\mathcal{P}_{1,1}[x_{2}+\cdots]\) and has weight \((qt^{3},0,\ldots)\) Finally, we prove that the stable-limit non-symmetric Macdonald functions are a basis for \(\mathcal{P}^{+}_{\text{as}}\). **Theorem 7**.: (Main Theorem) The \(\widetilde{E}_{(\mu|\lambda)}\) are a \(\mathcal{Y}\)-weight basis for \(\mathcal{P}^{+}_{as}\). Proof Sketch.: As there are sufficiently many \(\widetilde{E}_{(\mu|\lambda)}\) in each graded component of every \(\mathcal{P}(k)^{+}\), it suffices to show that these functions are linearly independent. Obviously weight vectors in distinct weight spaces are linearly independent. Using Lemmas 2 and 5, we deduce that if \(\widetilde{E}_{(\mu_{1}|\lambda_{1})}\) and \(\widetilde{E}_{(\mu_{2}|\lambda_{2})}\) have the same weight then necessarily \(\mu_{1}=\mu_{2}\). Hence, we can restrict to the case where we have a dependence relation \[c_{1}\widetilde{E}_{(\mu|\lambda^{(1)})}+\cdots+c_{N}\widetilde{E}_{(\mu|\lambda^{(N)})}=0\] for \(\lambda^{(1)},\ldots,\lambda^{(N)}\) distinct partitions. By applying the stable-limit symmetrization operator we see that \[\widetilde{\sigma}(c_{1}\widetilde{E}_{(\mu|\lambda^{(1)})}+\cdots+c_{N}\widetilde{E}_{(\mu|\lambda^{(N)})})=\widetilde{\sigma}(c_{1}\widetilde{E}_{\mu*\lambda^{(1)}}+\cdots+c_{N}\widetilde{E}_{\mu*\lambda^{(N)}})=0.\] Now by Lemma 4, \(\widetilde{\sigma}(\widetilde{E}_{\mu*\lambda^{(i)}})=\gamma_{\mu*\lambda^{(i)}}\mathcal{A}_{\text{sort}(\mu*\lambda^{(i)})}\) with nonzero scalars \(\gamma_{\mu*\lambda^{(i)}}\), so \[0=c_{1}^{\prime}\mathcal{A}_{\text{sort}(\mu*\lambda^{(1)})}+\cdots+c_{N}^{\prime}\mathcal{A}_{\text{sort}(\mu*\lambda^{(N)})}.\] The partitions \(\lambda^{(i)}\) are distinct, so we know that the partitions \(\text{sort}(\mu*\lambda^{(i)})\) are distinct as well. By Theorem 6 the symmetric functions \(\mathcal{A}_{\text{sort}(\mu*\lambda^{(i)})}\) are linearly independent. Thus \(c_{i}^{\prime}=0\), implying \(c_{i}=0\) for all \(1\leq i\leq N\) as desired.
2310.11545
On Isospectral Integral Circulant Graphs
Understanding when two non-isomorphic graphs can have the same spectra is a classic problem that is still not completely understood, even for integral circulant graphs. We say that a natural number $N$ satisfies the \emph{integral spectral Ad\`{a}m property (ISAP)} if any two integral circulant graphs of order $N$ with the same spectra must be isomorphic. It seems to be open whether all $N$ satisfy the ISAP; M\"{o}nius and So showed that $N$ satisfies the ISAP if $N = p^k, pq^k,$ or $pqr$. We show that: (a) for any prime factorization structure $N = p_1^{a_1}\cdots p_k^{a_k}$, $N$ satisfies the ISAP for "most" values of the $p_i$; (b) $N=p^2q^n$ satisfy the ISAP if $p,q$ are odd and $(q-1) \nmid (p-1)^2(p+1)$; (c) all $N =p^2q^2$ satisfy the ISAP.
Yan X Zhang
2023-10-17T19:28:47Z
http://arxiv.org/abs/2310.11545v1
# On Isospectral Integral Circulant Graphs ###### Abstract Understanding when two non-isomorphic graphs can have the same spectra is a classic problem that is still not completely understood, even for integral circulant graphs. We say that a natural number \(N\) satisfies the _integral spectral Adam property (ISAP)_ if any two integral circulant graphs of order \(N\) with the same spectra must be isomorphic. It seems to be open whether all \(N\) satisfy the ISAP; Monius and So showed that \(N\) satisfies the ISAP if \(N=p^{k},pq^{k}\), or \(pqr\). We show that: (a) for any prime factorization structure \(N=p_{1}^{a_{1}}\cdots p_{k}^{a_{k}}\), \(N\) satisfies the ISAP for "most" values of the \(p_{i}\); (b) \(N=p^{2}q^{n}\) satisfy the ISAP if \(p,q\) are odd and \((q-1)\nmid(p-1)^{2}(p+1)\); (c) all \(N=p^{2}q^{2}\) satisfy the ISAP. ## 1 Introduction This work is primarily motivated by the following conjecture given by So in [10]: "There are exactly \(2^{\tau(N)-1}\) non-isospectral integral circulant graphs of order \(N\), where \(\tau(N)\) is the number of divisors of \(N\)." Monius and So [7]'s work proves this conjecture for: * \(N=p^{k}\), where \(p\) is a prime \(p\geq 2\); * \(N=pq^{k}\) or \(p^{2}q\) with primes \(q>p\geq 2\) and integer \(k\geq 1\); * \(N=pqr\) with primes \(r>q>p\geq 2\). Otherwise, the conjecture seems to be open. One way of looking at this conjecture is as an attempt to better understand integral circulant graphs. Circulant graphs of order \(N\) are defined by their _symbol_\(S\subset\mathbb{Z}/N\mathbb{Z}\), the set of column indices corresponding to nonzero elements of the first row of the graph's adjacency matrix. In an **integral** circulant graph, the symbol's information can be compressed into the _integral symbol_, which is a subset of \(\{d\colon 1<d<N,d|N\}\); the main idea is that different indices \(k\) in the symbol with the same \(\gcd(k,N)\) must occur together or not at all. Thus, there are \(2^{\tau(N)-1}\) possible integral symbols for integral circulant graphs. The authors of [7] show that for these \(N\), different integral symbols must have different spectra. Thus, there must be \(2^{\tau N-1}\) different spectra for these \(N\). In this light, we can also think of So's conjecture as a strengthening of a very similar result: **Theorem** (Klin and Kovacs [5]).: _There are exactly \(2^{\tau(N)-1}\) non-isomorphic integral circulant graphs of order \(N\)._ Another way of looking at this conjecture is as a variation on the "Adam property". As in [6], we say that a symbol \(S\subset\mathbb{Z}/N\mathbb{Z}\) has the _Adam property_ if it is isomorphic to another symbol \(T\) if and only if \(S\) and \(T\) are _proportional_ (that is, one can obtain one symbol to another by multiplying by a common element of \((\mathbb{Z}/N\mathbb{Z})^{*}\)). It is natural to say that \(N\in\mathbb{N}\) satisfies the Adam property if all symbols \(S\subset\mathbb{Z}/N\mathbb{Z}\) satisfy the _Adam_ property. Adam conjectured [1] that all natural numbers \(N\) satisfy the Adam property; that is, all pairs of isomorphic circulant graphs must have proportional symbols. Several classes of counterexamples were found, such as by Elspas and Turner for \(n=16\)[4] or Alspach [2] for broader classes of \(N\). However, the conjecture was also shown to be true for many \(N\); for example, Muzychuk proved that the conjecture holds for squarefree \(N\)[8] and double squarefree \(N\)[9]. 
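For concreteness, proportionality of symbols is easy to test by brute force over the units of \(\mathbb{Z}/N\mathbb{Z}\). The following short Python sketch is purely illustrative (the helper name is ours, not from the cited works):

```python
from math import gcd

def proportional(S, T, N):
    # Return True if T = a*S (mod N) for some unit a in (Z/NZ)^*.
    S, T = set(S), set(T)
    if len(S) != len(T):
        return False
    return any(gcd(a, N) == 1 and {(a * s) % N for s in S} == T
               for a in range(1, N))

# Example: {1, 7} and {3, 5} are proportional mod 8 (multiply by a = 3).
print(proportional({1, 7}, {3, 5}, 8))  # True
```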
One natural extension of the Adam property is the following: as in [6], we say that \(N\) satisfies the _spectral Adam property_ if two symbols \(S,T\subset\mathbb{Z}/N\mathbb{Z}\) have the same spectrum if and only if \(S\) and \(T\) are proportional. When we specialize to integral circulant graphs, proportionality is equivalent to equality, because multiplication by an element of \((\mathbb{Z}/N\mathbb{Z})^{*}\) fixes the greatest common divisor with \(N\). It is then natural to say that \(N\) satisfies the _integral spectral Adam property (ISAP)_ if two integral symbols have the same spectrum if and only if they are equal. Note that any \(N\) that satisfies the ISAP must have \(2^{\tau N-1}\) different spectra. In this light, we can reinterpret Monius and So's work as proving that \(N=p^{k},pq^{k},pqr\) satisfy the ISAP. After a quick review and some notation in Section 2, we start our work in Section 3, where we introduce some structure that will help us visualize and manipulate the spectra of integral circulant graphs. Our main contribution here is observing the structural simplicity of \(G_{N}(d)\) for \(N=p^{n}\) and then exploiting the multiplicative structure of the well-known \(\mu\) and \(\phi\) functions. In Section 4, we turn our goal of understanding the ISAP into looking for the existence of _nontrivial additive relations (NARs)_ that exist on products of \(\phi\). In Section 5 we give "weak but general" results that apply to many \(N\), but with many assumptions on \(N\). In Section 6, we prove the \(N=p^{2}q^{n_{2}}\) case under fewer assumptions. In Section 7 we give our main "narrow but strong" result that solves \(N=p^{2}q^{2}\) completely. We end with some remarks in Section 8. ## 2 Preliminaries ### Additive Relations and NARs Given a vector \(v\) with some index set \(S\), we use \(v[s]\) to denote the entry corresponding to \(s\in S\) in \(v\). For matrices \(M\), we use \(M[s,t]\) to denote the entry in row \(s\) and column \(t\). For a vector \(v\), we define an _additive relation on \(v\)_ to be a relation of the form \[\sum_{x\in S_{1}}v[x]=\sum_{x\in S_{2}}v[x]\] for distinct subsets \(S_{1},S_{2}\subset S\). Sometimes it will be more convenient to rewrite it as a single equation \[\sum_{i}a_{i}v[i]=0,\] where each \(a_{i}\) is in \(\{-1,0,1\}\) depending on whether \(v[i]\) appears only on the left, on both sides, or only on the right respectively. We call this the _one-sided version_ of the additive relation. Furthermore, 1. we say that an additive relation \(X\) is _nontrivial_ if \(S_{1}\neq S_{2}\). We abbreviate a nontrivial additive relation1 as _NAR_. Footnote 1: In our context it is important to be careful. For example, if our vector is \(v=[0,0,2]\) indexed by \([1,2,3]\), then \(v[0]=v[1]\) is a NAR because the indices are different, even if the values are the same. 2. if \(S_{1}\) and \(S_{2}\) are disjoint and nonempty, we call the (necessarily nontrivial) additive relation a _disjoint NAR_. Any NAR \(X\) on non-equal \(S_{1}\) and \(S_{2}\) creates a disjoint NAR \(d(X)\) if we remove the intersection \(S_{1}\cap S_{2}\). We call \(d(X)\) the _reduction_ of \(X\) and say that \(X\)_reduces_ to \(d(X)\). 3. we say that an additive relation \(X\)_involves (an index)_\(x\) (equivalently, \(x\)_is involved in_\(X\)) if \(x\) appears in the equation as \(x\in S_{1}\) or \(x\in S_{2}\). We similarly say that \(X\)_involves (a value)_\(y\) if \(v[x]=y\) for some index \(x\) involved in \(X\). We will often just say "involves" when the context is clear. 
Also, we say that an index (or value) is _involved nontrivially_ in a NAR \(X\) if the corresponding value appears on only one side of \(X\) (equivalently, the corresponding value is involved in the reduction \(d(X)\)). In [10], So defines a _super sequence_ to be a sequence of natural numbers \(a_{1}<a_{2}<\cdots<a_{k}\) such that for all \(s<k\), \(a_{s+1}>\sum_{i=1}^{s}a_{i}\). It is easy to observe that **Proposition 2.1**.: _There cannot exist NARs on a super sequence._ Indeed, after reducing to a disjoint NAR, the largest element involved would be strictly greater than the sum of all the other involved elements, which is impossible. ### A Review of Spectral Theory of Integral Circulant Graphs The material in this section can be found in [10] and [7]. A _circulant graph_\(CG_{N}(S)\) of order \(N\) is characterized by a _symbol_\(S\), which is a subset of \(\{d:1\leq d<N\}\) where \(i\in S\) if and only if \((N-i)\in S\). The graph is constructed by labeling the vertices \(0,\ldots,N-1\) and creating an edge \((i,j)\) if and only if \(i-j\in S\). We can then write down the _spectrum_ (eigenvalues) of \(CG_{N}(S)\) as the multiset \[Sp(CG_{N}(S))=\{\lambda_{0}(S),\lambda_{1}(S),...,\lambda_{N-1}(S)\}\] where for \(0\leq t<N\), \[\lambda_{t}(S)=\sum_{j\in S}\omega^{tj},\] with \(\omega=e^{2\pi i/N}\) a primitive \(N\)-th root of unity. An _integral circulant graph (ICG)_ is a circulant graph where the spectrum consists only of integers. So [10] showed that integral circulant graphs are characterized by symbols \(S\) where all the indices \(k\) with the same \(\gcd(k,N)\) must appear at the same time or not at all. In other words, we can define \(\tau(N)-1\)_basic integral symbols_\(\{G_{N}(d):d|N,d<N\}\), where \[G_{N}(d)=\{k:\gcd(N,k)=d\}\subset[N-1].\] (we use \([k]\) to denote the set \(\{1,2,\ldots,k\}\)) These \(\tau(N)-1\) basic integral symbols partition \([N-1]\). Then there are exactly \(2^{\tau(N)-1}\) integral circulant graphs of order \(N\), corresponding to the choice, for each basic integral symbol, of whether to include all of its values or none of them. We can then compute the spectrum by just adding the corresponding spectra for the basic integral symbols, because the matrices corresponding to different basic integral symbols pairwise commute. In other words, we can use \(ICG_{N}(D)\) to denote the integral circulant graph of order \(N\) with the _integral symbol_\(\cup_{d\in D}G_{N}(d)\) (formally, \(ICG_{N}(D)=CG_{N}(\cup_{d\in D}G_{N}(d))\)). Its eigenvalues can then be computed as \[\lambda_{t}(D)=\lambda_{t}(\cup_{d\in D}G_{N}(d))=\sum_{d\in D}\lambda_{t}(G_{N}(d)).\] The spectra of the basic integral symbols can then be computed with the Euler function \(\phi\) and Möbius function \(\mu\) as follows: for \(0\leq t<N\), \[\lambda_{t}(G_{N}(d))=\frac{\phi(N/d)}{\phi\left(\frac{N/d}{\gcd(t,N/d)}\right)}\mu\left(\frac{N/d}{\gcd(t,N/d)}\right). \tag{2.2}\] ## 3 Spectral Theory of Integral Circulant Graphs ### The Spectral Theory for \(N=p^{n}\) Let \(N=p^{n}\) where \(p\) is a prime. Using Equation 2.2, we can characterize the spectra of the basic integral symbols (here the divisors must be powers of \(p\)) as follows: \[\lambda_{t}(G_{N}(p^{n-\beta}))=\frac{\phi(p^{\beta})}{\phi\left(\frac{p^{\beta}}{\gcd(t,p^{\beta})}\right)}\mu\left(\frac{p^{\beta}}{\gcd(t,p^{\beta})}\right),\] where \(t\) indexes over \(1\leq t\leq N\) and \(\beta\) indexes over \(0\leq\beta\leq n\). When \(\beta\) is fixed, this assigns to each \(G_{N}(p^{n-\beta})\) an \(N=p^{n}\)-dimensional vector \(\lambda\) indexed by \(t\), which is the spectrum of \(G_{N}(p^{n-\beta})\) when viewed as a multiset.
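As a quick numerical check of Equation 2.2 (and of the additivity over \(D\)), the following minimal Python sketch computes the spectrum of \(ICG_{N}(D)\) both directly from the symbol and via the formula, and verifies that the two agree on a small example. It is a naive illustration we include for concreteness, not an efficient implementation; \(\phi\) and \(\mu\) are coded by hand.

```python
import cmath
from math import gcd

def phi(n):
    # Euler totient, naively.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def mu(n):
    # Mobius function by trial division (fine for tiny n).
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            result = -result
        d += 1
    return -result if n > 1 else result

def lam(N, d, t):
    # Equation (2.2): eigenvalue of the basic integral symbol G_N(d) at index t.
    m = (N // d) // gcd(t, N // d)
    return phi(N // d) // phi(m) * mu(m)

def spectrum_formula(N, D):
    return sorted(sum(lam(N, d, t) for d in D) for t in range(N))

def spectrum_direct(N, D):
    # lambda_t = sum_{j in S} omega^{t*j} with omega = exp(2*pi*i/N), S the integral symbol.
    S = {k for k in range(1, N) for d in D if gcd(k, N) == d}
    return sorted(round(sum(cmath.exp(2j * cmath.pi * t * j / N) for j in S).real)
                  for t in range(N))

N, D = 12, [1, 4]
print(spectrum_formula(N, D) == spectrum_direct(N, D))  # True
```

Sorting the two lists compares them as multisets, which is exactly the notion of spectrum used throughout.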
First, observe that for a fixed \(\beta\), \(\gcd(t,p^{\beta})\) only depends on \(\gamma\) where \(p^{\gamma}\|t\) (we use \(p^{k}\|t\) to denote that \(p^{k}|t\) and \(p^{k+1}\nmid t\)). This means it suffices to only consider \(t=p^{\gamma}\). We obtain \[\lambda_{p^{\gamma}}(G_{N}(p^{n-\beta}))=\frac{\phi(p^{\beta})}{\phi\left(\frac{p^{\beta}}{p^{\min(\beta,\gamma)}}\right)}\mu\left(\frac{p^{\beta}}{p^{\min(\beta,\gamma)}}\right).\] An equivalent formulation to the above computation is **Proposition 3.1**.: _The values that \(\lambda_{p^{\gamma}}(G_{N}(p^{n-\beta}))\) can take are:_ 1. \(1\) _if_ \(\beta=0\)_. Otherwise,_ 2. \(-p^{\beta-1}\) _if_ \(\beta=\gamma+1\)_._ 3. \(\phi(p^{\beta})=(p-1)p^{\beta-1}\) _if_ \(\gamma\geq\beta\)_._ 4. \(0\) _if_ \(\beta>\gamma+1\)_._ _There are exactly \(\phi(p^{\beta-\gamma})\) different \(t\) in \([N-1]\) such that \(\gcd(t,p^{\beta})=p^{\gamma}\)._ To encode this information, we can define an \((n+1)\times(n+1)\) matrix \(M(N)=M(p^{n})\) with rows labeled by \(\gamma\in\{0,1,\ldots,n\}\) and columns labeled by \(\beta\in\{0,1,\ldots,n\}\), where \[M(N)[\gamma,\beta]=\lambda_{p^{\gamma}}(G_{N}(p^{n-\beta})).\] (we use \(M[r,c]\) to denote the entry of \(M\) in row \(r\) and column \(c\)) \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \(\gamma\)\(\beta\) & \(0\) & \(1\) & \(2\) & \(3\) & \(4\) & \(\cdots\) & \(n\) & \\ \hline \(0\) & \(1\) & \(-1\) & \(0\) & \(0\) & \(0\) & \(\cdots\) & \(0\) & \(\times\phi(p^{n})\) \\ \hline \(1\) & \(1\) & \(p-1\) & \(-p\) & \(0\) & \(0\) & \(\cdots\) & \(0\) & \(\times\phi(p^{n-1})\) \\ \hline \(2\) & \(1\) & \(p-1\) & \((p-1)p\) & \(-p^{2}\) & \(0\) & \(\cdots\) & \(0\) & \(\times\phi(p^{n-2})\) \\ \hline \(3\) & \(1\) & \(p-1\) & \((p-1)p\) & \((p-1)p^{2}\) & \(-p^{3}\) & \(\cdots\) & \(0\) & \(\times\phi(p^{n-3})\) \\ \hline \(4\) & \(1\) & \(p-1\) & \((p-1)p\) & \((p-1)p^{2}\) & \((p-1)p^{3}\) & \(\cdots\) & \(0\) & \(\times\phi(p^{n-4})\) \\ \hline \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\ \hline \(n\) & \(1\) & \(\phi(p)\) & \(\phi(p^{2})\) & \(\phi(p^{3})\) & \(\phi(p^{4})\) & \(\cdots\) & \(\phi(p^{n})\) & \(\times\phi(1)=1\) \\ \hline \end{tabular} In \(M(N)\), we say that row \(i\) has _(row) multiplicity_\(\phi(p^{n-i})\), corresponding to the fact that the entries in that row appear \(\phi(p^{n-i})\) times in each spectrum vector. Formally, let the _extended form_ matrix \(\overline{M(N)}\) be the \(N\times(n+1)\) matrix where the \((n+1)\) columns correspond to the spectra of the \((n+1)\) basic integral symbols for \(N\). We call \(M(N)\) the _compact form_ of \(\overline{M(N)}\) since it stores the same information but with only \((n+1)\) rows, with row \(i\) appearing \(\phi(p^{n-i})\) times in \(\overline{M(N)}\). As a sanity check, \[\phi(1)+\phi(p)+\cdots+\phi(p^{n})=p^{n}=N,\] so the number of rows works out. **Example 3.2**.: As an example, take \(N=2^{3}\).
Then the compact form \(M(8)\) equals (with multiplicities on the right): \begin{tabular}{c|c|c|c|c|c} \hline \(\gamma\)\(\beta\) & \(0\) & \(1\) & \(2\) & \(3\) & \\ \hline \(0\) & \(1\) & \(-1\) & \(0\) & \(0\) & \(\times\)\(4\) \\ \hline \(1\) & \(1\) & \(1\) & \(-2\) & \(0\) & \(\times\)\(2\) \\ \hline \(2\) & \(1\) & \(1\) & \(2\) & \(-4\) & \(\times\)\(1\) \\ \hline \(3\) & \(1\) & \(1\) & \(2\) & \(4\) & \(\times\)\(1\) \\ \hline \end{tabular} We can write the extended form \(\overline{M(8)}\) as \begin{tabular}{c|c|c|c|c} \hline \(\gamma\)\(\beta\) & \(0\) & \(1\) & \(2\) & \(3\) \\ \hline \(0\) & \(1\) & \(-1\) & \(0\) & \(0\) \\ \hline \(0\) & \(1\) & \(-1\) & \(0\) & \(0\) \\ \hline \(0\) & \(1\) & \(-1\) & \(0\) & \(0\) \\ \hline \(0\) & \(1\) & \(-1\) & \(0\) & \(0\) \\ \hline \(1\) & \(1\) & \(1\) & \(-2\) & \(0\) \\ \hline \(2\) & \(1\) & \(1\) & \(2\) & \(-4\) \\ \hline \(3\) & \(1\) & \(1\) & \(2\) & \(4\) \\ \hline \end{tabular} The columns (except the leftmost) of \(\overline{M(8)}\) are the spectra of different \(G_{N}(d)\)'s. So each of the \(8\) subsets \(S\) of the \(\tau(8)-1=3\) columns on the right with \(\beta>0\) corresponds to a different ICG \(G\); summing the columns of \(\overline{M(8)}\) over \(S\) produces a vector containing the spectrum of \(G\). ### Using the Multiplicative Structure of \(\phi\) and \(\mu\) Let \(N=p^{n_{1}}q^{n_{2}}\), \(p^{\gamma_{1}}\|t\), and \(q^{\gamma_{2}}\|t\). Observe that \[\lambda_{t}(G_{N}(p^{n_{1}-\beta_{1}}q^{n_{2}-\beta_{2}}))=\lambda_{p^{\gamma_{ 1}}}(G_{p^{n_{1}}}(p^{n_{1}-\beta_{1}}))\lambda_{p^{\gamma_{2}}}(G_{p^{n_{2}}}( p^{n_{2}-\beta_{2}})).\] This is because the only terms that appear in \(\lambda\) are \(\phi\) and \(\mu\), which are multiplicative functions. As the row multiplicities are also just \(\phi\) functions, this means we can obtain \(M(N)\) by taking \(M(p^{n_{1}})\) and \(M(q^{n_{2}})\) and taking their tensor product! Formally, we define a matrix \(M(N)\) whose rows and columns are both labeled by \((m_{1},m_{2})\) where \(m_{1}\in\{0,\ldots,n_{1}\}\) and \(m_{2}\in\{0,\ldots,n_{2}\}\), and then construct the entry \[M(N)[(m_{1},m_{2}),(k_{1},k_{2})]=M(p^{n_{1}})[(m_{1},k_{1})]M(p^{n_{2}})[(m_ {2},k_{2})]\] where the row \((m_{1},m_{2})\) in \(M(N)\) has multiplicity \[\phi(m_{1},m_{2})=\phi(m_{1})\phi(m_{2}).\] **Example 3.3**.: As an example, with \[M(4)=\begin{bmatrix}1&-1&0&(\times 2)\\ 1&1&-2&(\times 1)\\ 1&1&2&(\times 1)\end{bmatrix};M(9)=\begin{bmatrix}1&-1&0&(\times 6)\\ 1&2&-3&(\times 2)\\ 1&2&6&(\times 1)\end{bmatrix},\] we can obtain \(M(36)\) by tensoring them to obtain a \(9\times 9\) matrix \[\begin{bmatrix}1&-1&0&-1&1&0&0&0&(\times 12)\\ 1&2&-3&-1&-2&3&0&0&(\times 4)\\ 1&2&6&-1&-2&-6&0&0&(\times 2)\\ 1&-1&0&1&1&0&-2&2&0&(\times 6)\\ 1&2&-3&1&2&-3&-2&-4&6&(\times 2)\\ 1&2&6&1&2&6&-2&-4&-12&(\times 1)\\ 1&-1&0&1&-1&0&2&-2&0&(\times 6)\\ 1&2&-3&1&2&-3&2&4&-6&(\times 2)\\ 1&2&6&1&2&6&2&4&12&(\times 1)\end{bmatrix}.\] The row multiplicities add up to \(36\), as expected. To summarize, for \(N=p_{1}^{n_{1}}p_{2}^{n_{2}}\cdots p_{r}^{n_{r}}\), we can find an \((n_{1}+1)\cdots(n_{r}+1)\times(n_{1}+1)\cdots(n_{r}+1)\) matrix \(M(N)\) with the rows and columns labeled by \((m_{1},\ldots,m_{r})\), where \(m_{i}\in\{0,\ldots,n_{i}\}\) for all \(i\). We call this common indexing set \(I(N)=\{0,1,\ldots,n_{1}\}\times\cdots\times\{0,1,\ldots,n_{r}\}\). For each \((m_{1},\ldots,m_{r})\), the numbers \(\phi(p_{1}^{m_{1}}\cdots p_{r}^{m_{r}})\) appear twice. 
On each column \((m_{1},\ldots,m_{r})\), they appear as the entry in the final row \((n_{1},\ldots,n_{r})\). On each row \((n_{1}-m_{1},\ldots,n_{r}-m_{r})\), they appear as the row's multiplicity in the \(N\times(n_{1}+1)\cdots(n_{r}+1)\) extended form matrix \(\overline{M(N)}\). We define \[\phi(m_{1},m_{2},\ldots,m_{r})\coloneqq\phi(p_{1}^{m_{1}}\cdots p_{r}^{m_{r}})\] and \[P(N)\coloneqq\{\phi(m_{1},\ldots,m_{r}):(m_{1},\ldots,m_{r})\in I(N)\},\] which we can consider to be a vector indexed by \(I(N)\). Cospectral Pairs and Nontrivial Additive Relations ### Cospectral Pairs For \(a=(m_{1},\ldots,m_{r})\in I(N)\) the index of some column of \(M(N)\), let \(v_{a}\) be the corresponding column of \(M(N)\) and \(\overline{v_{a}}\) be the corresponding column of \(\overline{M(N)}\). For any subset \(A\subset I(N)\) of the columns, we use \(v_{A}\) to mean \(\sum_{a\in A}v_{a}\), and similarly \(\overline{v_{A}}=\sum_{a\in A}\overline{v_{a}}\). **Proposition 4.1**.: \(N\) _does not satisfy the ISAP if and only if there exist \(2\) different subsets \(A\) and \(B\) of \(I(N)\backslash\{(0,\ldots,0\}\) and a permutation \(\rho\in S_{N}\) such that for all \(i\in[N]\), \((\overline{v_{A}})_{i}=(\overline{v_{B}})_{\rho(i)}\)._ Proof.: This is just a reformulation of Section 3; the \(2^{\tau(n)-1}\) subsets of the columns of \(\overline{M(N)}\) (except for the leftmost column with index \((0,\ldots,0)\)) generate the different possible spectra of integral circulant graphs with \(N\) vertices by summation. Two vectors represent the same spectra if and only if they equal under some permutation. **Remark 4.2**.: In Proposition 4.1, the statement holds even if we replace "\(I(N)\backslash\{(0,\ldots,0\}\)" by "\(I(N)\)." This is because the leftmost column in \(\overline{M(N)}\) has sum \(N\) (being the all 1's vector) while all other columns have sum \(0\). If \(\overline{v_{A}}=\overline{v_{B}}\) as multisets, the sums of their entries must be equal as well, which means they must either both contain the first column or both fail to contain the first column. If such \(A\), \(B\), \(\rho\) exist, we call them a _cospectral_ pair denoted by \(A\rightarrow_{\rho}B\), and say the \(\rho\)_connects_\(A\) to \(B\). We have therefore reduced the decision problem of finding if \(N\) satisfies the ISAP to the existence of cospectral pairs on \(I(N)\). We will soon see that this in turn reduces to the existence of certain additive relations on \(P(N)\). ### The Row NARs Induced by a Cospectral Pair Given a cospectral pair \(A\rightarrow_{\rho}B\), construct a bipartite graph \(\overline{G_{\rho}}\) on vertices \([N]\times\{0,1\}\), with the vertices \((*,0)\) on the left and \((*,1)\) on the right, such that there exists an edge \(((x,0),(y,1))\) if and only if \(\rho(x)=y\). We then construct a similar graph \(G_{\rho}\) on vertices \(I(N)\times\{0,1\}\) (also with the \((*,0)\) vertices on the left and the \((*,1)\) vertices on the right) such that there exists an edge \(((x,0),(y,1))\) if and only if there exists some \(x^{\prime},y^{\prime}\in[N]\) where row \(x^{\prime}\) (resp. \(y^{\prime}\)) in \(\overline{M(N)}\) is a copy of row \(x\) (resp. \(y\)) in \(M(N)\). If we index \(\overline{v_{A}}\) by the \((*,0)\) and \(\overline{v_{B}}\) by the \((0,*)\) in \(\overline{G_{\rho}}\), we can see that edges connect (some but not necessarily all) pairs of values of the \(\overline{v_{*}}\) with the same value. The same is true if we look at the compressed vectors \(v_{A}\) and \(v_{B}\). 
Consider a connected component in \(G_{\rho}\). We can write it as \(Y_{1}\cup Y_{2}\), where \(Y_{1}\) are the vertices on the left and \(Y_{2}\) are the vertices on the right. This lifts (given an edge \(((x,0),(y,1))\) in \(G_{\rho}\), take all the edges \(((x^{\prime},0),(y^{\prime},1))\) where \(x^{\prime}\) are copies of \(x\) and \(y^{\prime}\) are copies of \(y\) in \(\overline{G_{\rho}}\)) to some \(\overline{Y_{1}}\cup\overline{Y_{2}}\) in \(\overline{G_{\rho}}\), which must be a matching (a collection of disjoint edges) because \(\overline{G_{\rho}}\) is itself a matching. This means \[|\overline{Y_{1}}|=|\overline{Y_{2}}|.\] Every \((r,0)\) in \(Y_{1}\) accounts for \(w(r)\) vertices in \(\overline{Y_{1}}\), and similarly for \((r,1)\) in \(Y_{2}\). Therefore, our equality translates to an additive relation \(X_{1}\) of the form \[\sum_{(r,0)\in Y_{1}}w(r)=\sum_{(r,1)\in Y_{2}}w(r)\] on the row weights \(w(r)\in P(N)\). Suppose \(G_{\rho}\) has \(s\) connected components. Then iterating our process \(s\) times creates \(s\) additive relations \(X_{1},\ldots,X_{s}\) on \(P(N)\) such that each element of \(P(N)\) appears exactly once on the left and exactly once on the right among the \(X_{i}\). In this case, we say that \(A\to_{\rho}B\)_induces_ relations \(X_{1},\ldots,X_{s}\). Suppose that for some row \(r\) in \(M(N)\), all copies \(r^{\prime}\) of \(r\) in \(\overline{M(N)}\) satisfy \(\rho(r^{\prime})=r^{\prime}\). Then we say that \(\rho\)_fixes_\(r\) and call the corresponding trivial relation \(w(r)=w(r)\)_fixed_. We call a \(\rho\)_simplified_ if for all \(r\) where \(v_{A}[r]=v_{B}[r]\), \(\rho\) fixes \(r\). Then, **Proposition 4.3**.: _Suppose \(A\to_{\rho^{\prime}}B\) is a cospectral pair. Then there exists a simplified \(\rho\) such that:_ 1. _\(A\to_{\rho}B\) is also a cospectral pair._ 2. _\(A\to_{\rho}B\) induces \(s\) row relations \(X_{1},\ldots,X_{s}\), which are all either fixed or disjoint._ Proof.: Suppose there is some \(r\) such that \(v_{A}[r]=v_{B}[r]\). Then let \(Y_{1}\cup Y_{2}\) be the connected component in \(G_{\rho^{\prime}}\) containing \((r,0)\) and \((r,1)\). We can construct \(\rho\) from \(\rho^{\prime}\) by just letting \(\rho(x)=x\) for all copies \(x\) of \(r\), and re-mapping the other edges arbitrarily in \(\overline{Y_{1}}\cup\overline{Y_{2}}\) (which does not affect the validity of the cospectral pair, as all the vertices involved in this component correspond to the same value in \(v_{A}\) or \(v_{B}\)). As a result, we have created a connected component consisting of a single edge on the two vertices \(\{(r,0),(r,1)\}\) in \(G_{\rho}\), corresponding to a fixed relation \(w(r)=w(r)\). Repeating, the remaining non-fixed row relations must then be disjoint, as none of them can use both \((r,0)\) and \((r,1)\) for any \(r\). **From this point on, we always assume \(\rho\) is simplified.** We call the resulting disjoint relations the _row NARs induced by \(A\to_{\rho}B\)_. **Example 4.4**.: We give an example of the simplification process in Figure 1. A possible (compact form) pair of cospectral \(v_{A}\) and \(v_{B}\) connected by some \(\rho\) is shown in Equation 4.5. Figure 1: Left: A possible \(G_{\rho}\). These edges index the 36 edges in \(\overline{G_{\rho}}\). Right: after simplification. Any connected component involving two “matching” vertices will have had those two vertices isolated into a single component of their own.
\[\rho\left(\begin{bmatrix}\mathbf{0}&(\times 12)\\ 1&(\times 4)\\ 1&(\times 2)\\ \mathbf{1}&(\times 6)\\ 2&(\times 2)\\ 3&(\times 1)\\ 0&(\times 6)\\ 3&(\times 2)\\ 3&(\times 1)\end{bmatrix}\right)=\begin{bmatrix}\mathbf{0}&(\times 12)\\ 3&(\times 4)\\ 0&(\times 2)\\ \mathbf{1}&(\times 6)\\ 2&(\times 1)\\ 1&(\times 6)\\ 0&(\times 2)\\ 2&(\times 1)\end{bmatrix} \tag{4.5}\] The set of values appearing in these vectors is \(\{0,1,2,3\}\). It is possible to pick \(\rho\) such that \(G_{\rho}\) is as in Figure 1. There are \(4\) connected components, which induce \(4\) additive relations \[\mathbf{12}+6 =\mathbf{12}+2+2+2\] \[4+2+\mathbf{6} =6+\mathbf{6}\] \[2 =1+1\] \[1+2+1 =4\] over the row multiplicities. For two of these rows (which we marked in bold in Equations 4.5 and the relations above), the corresponding values in \(v_{A}\) and \(v_{B}\) equal, so we can remap \(\rho\) to be the identity on those rows. Now we have \(2\) (trivial) fixed relations and \(4\) disjoint row NARs \[6 =2+2+2\] \[4+2 =6\] \[2 =1+1\] \[1+2+1 =4\] \[12 =12\] \[6 =6\] corresponding to \(6\) connected components, \(2\) of which are horizontal edges. ### Consequences of Fixed Rows A cospectral pair \(A\to_{\rho}B\) (assuming a simplified \(\rho\)) induces some row NARs using the non-fixed rows, but the fixed rows give us information as well. First, if row \(r\) is fixed, then \(v_{A}[r]=v_{B}[r]\), so \[\sum_{c\in A}M[r,c]=\sum_{c\in B}M[r,c]\] gives a NAR on the values in the row. We call this the _column NAR (for row \(r\))_. This also holds for linear combinations of fixed rows (we skip the proof of this routine Lemma): **Lemma 4.6**.: _In a cospectral pair \(A\to_{\rho}B\). suppose that two rows \(r_{1}\) and \(r_{2}\) are both fixed. Then let \(\{\delta_{c}\coloneqq\alpha M(N)[r_{1},c]+\beta M(N)[r_{2},c]\}_{c\in I(N)}\) be a linear combination of the two rows. We must have_ \[\sum_{c\in A}\delta_{c}=\sum_{c\in B}\delta_{c}.\] Thus, it makes sense to talk about the column NAR for e.g. \(r_{1}+r_{2}\), where \(r_{1}\) and \(r_{2}\) are different rows. **Lemma 4.7**.: _Suppose \(v_{1},\ldots,v_{N}\) are linearly independent. Then let \(P=P_{1}\cup P_{2}\cup\cdots\cup P_{k}\) be a partition of \([N]\). Suppose we define \(v_{S}\), \(S\subset[N]\) to be \(\sum_{s\in S}v_{s}\), then \(V_{P_{1}},\ldots,V_{P_{k}}\) are linearly independent as well._ Proof.: Suppose \(\sum_{j}\alpha_{j}V_{P_{j}}=0\). Take any \(i\in[N]\). It only appears in one of the parts, without loss of generality \(P_{x}\). Since no other \(V_{P_{x^{\prime}}}\) with \(x^{\prime}\neq x\) contains a nontrivial multiple of \(v_{i}\), we must have \(\alpha_{x}=0\). Repeating the argument for all elements of \([N]\) shows that no nontrivial linear combination of the \(V_{P_{j}}\) can equal \(0\), so we are done. We say that a column is _matched_ if it is either in both \(A\) and \(B\) or neither. We will see some analogies between columns being matched and rows being fixed. **Proposition 4.8**.: _Let \(A\to_{\rho}B\) be a cospectral pair. Then:_ 1. _Row_ \((n_{1},\ldots,n_{r})\) _is fixed. Column_ \((0,\ldots,0)\) _is matched._ 2. _At least one column is not matched. At least one row is not fixed._ Proof.: Consider the last row \(R\) indexed by \((n_{1},\ldots,n_{r})\). This row has weight \(1\) and plays a special role; it contains the largest eigenvalues of the spectra corresponding to the columns. 
Since this property is stable under addition, we know that the corresponding value must equal in \(\overline{v_{A}}\) and \(\overline{v_{B}}\), so it is fixed. By construction of integral circulant graphs, neither \(A\) or \(B\) contains the first column, so it is matched. For the second part, we already know that \(A\neq B\), which implies the column relation is a NAR. It remains to show that not all the rows are fixed, which we prove with a character argument. If all the rows were fixed, we must have \(\overline{v_{A}}=\overline{v_{B}}\) as vectors. Recall from Section 2 that these vectors are sums of the columns of \(\overline{M(N)}\), which are themselves sums over vectors of the form \[z_{N,i}\coloneqq[1,\omega^{i},\omega^{2i}\ldots,\omega^{(N-1)i}],\] where \(\omega\) is the \(N\)-th root of unity. These vectors form characters for \(Z_{N}\to\mathbb{C}\), and so must be linearly independent (see e.g. Artin [3]). By Lemma 4.7, these vectors are also linearly independent, so having \(\overline{v_{A}}=\overline{v_{B}}\) implies \(A=B\), a contradiction. As an immediate consequence of the second part of Proposition 4.8, **Corollary 4.9**.: _If there is no NAR on \(P(N)\), then \(N\) satisfies the ISAP._ In [7], Monius and So's primary strategy was to show that \(P(N)\) for \(N=pq^{k}\), \(2<p<q\) and \(N=pqr\), \(2<p<q<r\) are both super sequences. Thus, we can rephrase their strategy as proving that no NARs exist for these \(N\) and then using Corollary 4.9. The rest of our paper explores further conditions beyond super sequences for when NARs cannot exist, which we then combine with observations about \(M(N)\) to eliminate possible counterexamples. We remark that it is not sufficient to **only** consider the nonexistence of NARs on \(P(N)\). In particular, [7] also proves that \(N=2q^{k}\) satisfies the ISAP, even though \(P(N)=\{1,1,(q-1),(q-1),\ldots,(q-1)q^{k-1},(q-1)q^{k-1}\}\) contains NARs (in particular, it contains repeated elements). General Results Given multisets \(S_{1},\ldots,S_{k}\), define \(\otimes_{i=1}^{k}S_{i}\) to be the multiset of \(|S_{1}|\times\cdots\times|S_{k}|\) numbers that are \(k\)-wise products coming from picking one element from each set. **Theorem 5.1**.: _Let \(N\) have the prime decomposition \(p_{1}^{n_{1}}\cdots p_{r}^{n_{r}}\). Suppose that:_ 1. _for_ _all___\(i\in[r]\)__, there exists no NAR on_ \[P\left(\frac{N}{p_{i}^{n_{i}}}\right)=\otimes_{j\neq i}\{1,(p_{j}-1),(p_{j}-1)p _{j},\ldots,(p_{j}-1)p_{j}^{n_{j}-1}\}\pmod{(p_{i}-1)};\] 2. _there_ _exists_ _an_ \(i\in[r]\) _such that there exists no NAR on_ \[\otimes_{j\neq i}\{1,p_{j},\ldots,p_{j}^{n_{j}-1}\}\pmod{p_{i}},\] _Then there exists no NAR on \(P(N)\). As a consequence, \(N\) satisfies the ISAP._ Proof.: Suppose we have a NAR \(R\) on \(P(N)\). Suppose at least one of the terms corresponds to \(\phi(m_{1},\ldots,m_{r})\) where \(m_{i}=0\) for some \(i\). Since all terms corresponding to \(m_{i}>1\) contains a factor of \((p_{i}-1)\), taking mod \((p_{i}-1)\) we obtain a NAR on just the terms with \(m_{i}=0\). This is exactly the set given in the first condition, so no such NAR exists. Therefore, \(R\) must only use the elements where all \(m_{i}\geq 1\). These are precisely terms in the product \(\otimes_{j=1}\{(p_{j}-1),(p_{j}-1)p_{j},\ldots,(p_{j}-1)p^{n_{j}-1}\}\). Since all the terms are divisible by \((p_{j}-i)\), there is a bijection between NARs on this set and NARs on \(\otimes_{j=1}\{1,p_{j},\ldots,p^{n_{j}-1}\}\). So we must have a corresponding \(R^{\prime}\) on the latter set. 
Furthermore, for any \(i\), if the minimum power of \(p_{i}\) that appears in any of the elements in \(R^{\prime}\) is \(m\), then dividing by \(p_{i}^{m}\) gives another NAR. This means we can further assume that for every \(i\), there must exist some element in \(R^{\prime}\) not divisible by \(p_{i}\). This means that taking \(\pmod{p_{i}}\) creates a NAR on just the elements not divisible by \(p_{i}\), which is \(\otimes_{j\neq i}\{1,p_{j},\ldots,p^{n_{j}-1}\}\pmod{p_{i}}\). In other words, if there exists an \(i\) such that there is no NAR on this set, we would obtain a contradiction. If we consider the \((n_{1},\ldots,n_{r})\) as fixed (all \(n_{i}\) roughly having size \(m\)) and consider random big primes (all \(p_{i}\) roughly having \(p\)), then the sets that appear in Theorem 5.1 are approximately uniformly random modulo \((p_{i}-1)\), so each condition is met with probability \(\approx 1-\frac{2^{m^{r-1}}}{p}\). This means as \(p\to\infty\) Theorem 5.1 gives a heuristic proof that our desired property holds for almost all \(N\) (of course, if we consider a different distribution then this heuristic does not hold; for starters, the theorem does not even work for any even numbers). We can obtain another such result by noticing that the \(p_{i}\) cannot be too far from one another: **Theorem 5.2**.: _Fix prime \(p_{1}\) and natural numbers \(n_{1},\ldots,n_{r}\). Then there are only possibly finitely many \(N\) of form \(N=p_{1}^{n_{1}}\cdots p_{r}^{n_{r}}\) that do not satisfy the ISAP._ Proof.: Let \(p_{2}>p_{1}^{n_{1}}+1\). We can check that the only terms in \(P(N)\) involving \(p_{1}\) and \(p_{2}\) form a super sequence when put in lexographic order sorted by the leading power of \(p_{2}\) and then \(p_{1}\): \[1,(p_{1}-1),p_{1}(p_{1}-1),\ldots,p_{1}^{n_{1}-1}(p_{1}-1),(p_{2}-1),(p_{2}-1) (p_{1}-1),(p_{2}-1)p_{1}(p_{1}-1),\ldots,\] because \[1+(p_{1}-1)+\cdots+p_{1}^{n_{1}-1}(p_{1}-1)=p_{1}^{n_{1}}<p_{2}-1\] and addition is otherwise dominated by the power of \(p_{2}\). Let the sum of all such terms be \(C\), and let \(p_{3}>C+1\). The same logic shows that all the terms involving only \(p_{1},p_{2},p_{3}\) can be arranged into a super sequence. Repeating the argument, we can conclude that as long as every \(p_{i}\) is sufficiently big compared to the previous \(p_{i}\), we can put all the elements of \(P(N)\) into a super sequence. Then Proposition 2.1 shows that \(N\) satisfies the ISAP. **Lemma 5.3**.: _In \(M(N)\), for each row index \((m_{1},\ldots,m_{i},\ldots,m_{r})\) where \(m_{i}<n_{i}\), the two rows with indices \((m_{1},\ldots,m_{i},\ldots,m_{r})\) and \((m_{1},\ldots,m_{i}+1,\ldots,m_{r})\) (which differ only in the \(i\)-th entry), have matching values on all coordinates except the columns \((*,m_{i}+1,*)\), where_ \[M(N)[(m_{1},\ldots,m_{i}+1,\ldots,m_{r}),(m_{1}^{\prime},\ldots, m_{i}+1,\ldots,m_{r}^{\prime})]\] \[-M(N)[(m_{1},\ldots,m_{i},\ldots,m_{r}),(m_{1}^{\prime},\ldots, m_{i}+1,\ldots,m_{r}^{\prime})]\] \[=p_{i}^{m_{i}+1}\prod_{j\neq i}M(p_{j}^{n_{j}})[m_{j},m_{j}^{ \prime}]\] _and the columns \((*,m_{i}+2,*)\), where_ \[M(N)[(m_{1},\ldots,m_{i}+1,\ldots,m_{r}),(m_{1}^{\prime},\ldots, m_{i}+2,\ldots,m_{r}^{\prime})]\] \[-M(N)[(m_{1},\ldots,m_{i},\ldots,m_{r}),(m_{1}^{\prime},\ldots,m_ {i}+2,\ldots,m_{r}^{\prime})]\] \[=-p_{i}^{m_{i}+1}\prod_{j\neq i}M(p_{j}^{n_{j}})[m_{j},m_{j}^{ \prime}];\] Proof.: Direct observation from \(M(N)\). **Proposition 5.4**.: _Let \(N\) have the prime decomposition \(p_{1}^{n_{1}}\cdots p_{r}^{n_{r}}\). 
Suppose that there exists a cospectral pair \(A\to_{\rho}B\) and some \(i\in[r]\) such that all \((n_{i}+1)\) rows indexed \((n_{1},\ldots,m_{i},\ldots,n_{r})\) are fixed. Then there exists a NAR on_ \[\otimes_{j\neq i}\{1,(p_{j}-1),(p_{j}-1)p_{j},\cdots,(p_{j}-1)p_{j}^{n_{j}-1}\}.\] Proof.: We prove by contradiction. Suppose no such NAR exists. Consider two of these fixed rows \((n_{1},\ldots,n_{i},\ldots,n_{r})\) and \((n_{1},\ldots,n_{i}-1,\ldots,n_{r})\). By Lemma 4.6, \(A\to_{\rho}B\) induces a column NAR \(X\) on the difference between these rows. By Lemma 5.3, they differ only in columns \((*,n_{i},*)\). For each such column \((m_{1},\ldots,n_{i},\ldots,m_{r})\), we have \[M(N)[(n_{1},\ldots,n_{i},\ldots,n_{r}),(m_{1},\ldots,n_{i}, \ldots,m_{r})]\] \[-M(N)[(n_{1},\ldots,n_{i}-1,\ldots,n_{r}),(m_{1},\ldots,n_{i}, \ldots,m_{r})]\] \[=p_{i}^{n_{i}}\prod_{j\neq i}M(p_{j}^{n_{j}})[n_{j},m_{j}]\] \[=p_{i}^{n_{i}}\prod_{j\neq i}(p_{j}-1)p_{j}^{m_{j}-1}.\] Equivalently, as we range over all the coordinates except for \(i\), the nonzero differences between our two rows form the multiset \[p_{i}^{n_{i}}\cdot\otimes_{j\neq i}\{1,(p_{j}-1),(p_{j}-1)p_{j},\ldots,(p_{j}-1 )p_{j}^{n_{j}-1}\}.\] By our assumption, no NAR exists on this set. Thus, none of their entries can be involved nontrivially in \(X\), so we can conclude that all columns of the form \((*,n_{i},*)\) are matched. Iterating, consider \((n_{1},\ldots,n_{i}-2,\ldots,n_{r})\) and \((n_{1},\ldots,n_{i}-1,\ldots,n_{r})\), which are again both fixed, so \(A\to_{\rho}B\) induces a column NAR \(X\) on their difference. Their coordinates only differ in columns \((*,n_{i}-1,*)\) and \((*,n_{i},*)\). As we just calculated, the multiset of differences on the columns in \((*,n_{i}-1,*)\) equals \[p_{i}^{n_{i}-1}\cdot\otimes_{j\neq i}\{1,(p_{j}-1),(p_{j}-1)p_{j},\ldots,(p_{j }-1)p_{j}^{n_{j}-1}\},\] which we assumed to be impossible. Since we have already shown that the columns \((*,n_{i},*)\) are matched, none of the columns \((*,n_{i}-1,*)\) can be involved nontrivially in \(X\) either, and thus they are also matched. Continuing this logic, all columns in \(I(N)\) must be matched, which is a contradiction. **Theorem 5.5**.: _Let \(N\) have the prime decomposition \(p_{1}^{n_{1}}\cdots p_{r}^{n_{r}}\). Suppose that there exists an \(i\in[r]\) such that:_ 1. _there exists no NAR on_ \[\{1,(p_{i}-1),(p_{i}-1)p_{i},\ldots,(p_{i}-1)p_{i}^{n_{i}-1}\}\pmod{\gcd(\{p_ {j}-1\}_{j\neq i})};\] 2. _there exists no NAR on_ \[\otimes_{j\neq i}\{1,(p_{j}-1),(p_{j}-1)p_{j},\ldots,(p_{j}-1)p_{j}^{n_{j}-1}\},\] _Then \(N\) satisfies the ISAP._ Proof.: We prove by contradiction and assume a cospectral pair \(A\to_{\rho}B\) exists. Let \(i\) be the number given by the Theorem assumption. The rows \((n_{1},\ldots,m_{i},\ldots,n_{r})\) where \(m_{i}\) ranges from \(0\) to \(n_{i}\) (while all the other coordinates are equal to their maxima \(n_{j}\)) have row multiplicities in \[J=\{1,(p_{i}-1),\ldots,(p_{i}-1)p_{i}^{n_{i}-1}\}.\] The other elements in \(P(N)\) are all divisible by at least one \((p_{j}-1)\) for some \(j\neq i\), so they are all divisible by \(\gcd(\{p_{j}-1\}_{j\neq i})\). Taking this modulus, the first condition enforces that none of the elements in \(J\) can be nontrivially involved in a NAR. As a result, their rows \((n_{1},\ldots,m_{i},\ldots,n_{r})\) must all be fixed. Proposition 5.4 then applies, meaning we must have a NAR on \[\otimes_{j\neq i}\{1,p_{j},\ldots,p_{j}^{n_{j}-1}\}\pmod{p_{i}}.\] As we do not, we have reached a contradiction. 
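For any concrete \(N\), hypotheses of this type are easy to confirm or refute by exhaustive search, since the relevant multisets are tiny. The following naive Python sketch (ours, for illustration only) looks for a disjoint NAR on an indexed multiset of weights:

```python
from itertools import product

def find_disjoint_nar(values):
    # Try every assignment of indices to the left side (+1), the right side (-1),
    # or unused (0); a disjoint NAR needs both sides nonempty and a zero signed sum.
    for signs in product((-1, 0, 1), repeat=len(values)):
        if 1 in signs and -1 in signs and sum(s * v for s, v in zip(signs, values)) == 0:
            return signs
    return None

# P(N) for N = 5^3 * 19^2 (the example discussed next): a NAR exists,
# e.g. (q-1)*q + (q-1) = (q-1)*(p-1)*p.
p, q = 5, 19
P = [a * b for a in (1, p - 1, (p - 1) * p, (p - 1) * p**2)
           for b in (1, q - 1, (q - 1) * q)]
print(find_disjoint_nar(P) is not None)  # True
```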
Unlike Theorem 5.1, Theorem 5.5 does **not** prove that there exists no NAR on \(P(N)\). For example, consider \(N=5^{3}19^{2}\). There is a NAR on \(P(N)\) because \((19-1)*19+(19-1)=(19-1)*(5-1)(5)\). However, as there does not exist a NAR on \(\{1,(5-1),(5-1)5,(5-1)5^{2}\}\pmod{19-1}\) and there does not exist a NAR on \(\{1,(19-1),(19-1)*19\}\), Theorem 5.5 still applies. This means we really are using additional structural properties of \(M(N)\) in addition to number theoretical properties of \(N\). Recall that [7] already proved the ISAP for \(p^{k}\), \(pq^{k}\), and \(pqr\), so the natural next step is \(N=p^{n_{1}}q^{n_{2}}\). One possible way to use Theorem 5.5 is: **Corollary 5.6**.: _Suppose \(N=p^{n_{1}}q^{n_{2}}\). If there exists no NAR on \(\{1,(p-1),(p-1)p,\ldots,(p-1)p^{n_{1}-1}\}\pmod{(q-1)}\), \(N\) satisfies the IAP._ Proof.: It suffices to check the second condition of Theorem 5.5. To see this, note that the multiset \(\otimes_{j\neq i}\{1,(p_{j}-1),(p_{j}-1)p_{j},\cdots,(p_{j}-1)p_{j}^{n_{j}-1}\}\) reduces to a single list, which cannot have a NAR because it is a super sequence. In the remaining sections, we give stronger results when one or both of the \(n_{i}\) equal \(2\). ## 6 \(N=p^{2}q^{n_{2}}\) When one of the coefficients equals \(2\), we can give a stronger statement than Corollary 5.6. The proof is similar to that of Theorem 5.5, except at various parts of it we use some alternative tactics. **Theorem 6.1**.: _Suppose \(N=p^{2}q^{n_{2}}\), where \(p,q\) are odd primes and \((q-1)\nmid(p-1)^{2}(p+1)\), then \(N\) satisfies the ISAP._ Proof.: We prove by contradiction, and assume that there exists some cospectral pair \(A\to_{\rho}B\). To start, recall that \[M(p^{2})=\left[\begin{array}{ccc|c}1&-1&0&\times p(p-1)\\ 1&(p-1)&-p&\times(p-1)\\ 1&(p-1)&p(p-1)&\times 1\end{array}\right]\] and \(M(q^{n_{2}})\) is some \((n_{2}+1)\times(n_{2}+1)\) matrix. Consider any row or column NAR \(X\) on \(P(N)\). Since row \((2,n_{2})\) is fixed and column \((0,0)\) is matched by Proposition 4.8, we can ignore the weight \(1\). Thus, the one-sided form of \(X\) can be written \[a_{0}(p-1)+a_{1}p(p-1)+a_{2}(q-1)+a_{3}(p-1)(q-1)+a_{4}(q-1)p(p-1)+\cdots=0.\] Suppose \(a_{0}\neq 0\). Without loss of generality \(a_{0}=1\). Looking at this equation mod \((q-1)\) gives \[(p-1)+a_{1}p(p-1)=0\pmod{(q-1)}.\] So this is only possible in \(3\) cases: * \(a_{1}=0\), so \((q-1)|(p-1)\), which violates \((p-1)^{2}\neq 0\pmod{q-1}\). * \(a_{1}=1\), so \((q-1)|(p-1)+p(p-1)\), which violates \(p^{2}-1\neq 0\pmod{q-1}\). * \(a_{1}=-1\): so \((q-1)|p(p-1)-(p-1)\), which again violates \((p-1)^{2}\neq 0\pmod{q-1}\). We have concluded \(a_{0}=0\). This means \((p-1)\) cannot be nontrivially involved in any of our row or column NARs, so row \((1,n_{2})\) is fixed and column \((1,0)\) is matched. Suppose row \((0,n_{2})\) is also fixed. Then we can apply Proposition 5.4 to prove that there exists a NAR on \[\otimes_{j\neq 1}\{1,(p_{j}-1),\ldots,(p_{j}-1)p_{j}^{n_{j}-1}\}=\{1,(q-1), \ldots,(q-1)q^{n_{2}-1}\},\] which is impossible since that is a super sequence. Thus, we know row \((0,n_{2})\) is not fixed. This means its weight \(p(p-1)\) is involved in a row NAR \[p(p-1)+a_{2}(q-1)+a_{3}(p-1)(q-1)+a_{4}(q-1)p(p-1)+\cdots=0,\] so \((q-1)|p(p-1)\). If \(\gcd(q-1,p)=1\), then we would have \((q-1)|(p-1)\), which is a contradiction since \((q-1)\nmid(p-1)^{2}.\) Therefore we must have \(q-1=kp\) where \(k|(p-1)\). 
In particular, \(k\neq 1\) (else \(q\) would be even; we are implicitly using here that \(p\) is odd), so \(q-1\geq 2p\). Recall that rows \((1,n_{2})\) and \((2,n_{2})\) are both fixed. Lemma 5.3 tells us that their corresponding entries are equal except for entries \((2,*)\), in which case they are negatives of each other. Let \(D_{+}\) be the columns \((i,j)\) where \(i<2\) (alternatively, where row \((1,n_{2})\) is positive) and \(D_{-}\) be the columns \((i,j)\) where \(i=2\) (alternatively, where row \((1,n_{2})\) is negative). We can observe that the sum of the two rows \(r_{s}\) has nonzero entries only on \(D_{+}\) and the difference \(r_{d}\) has nonzero entries only on \(D_{-}\). Furthermore, the nonzero entries are exactly twice their corresponding entries in row \((2,n_{2})\). As an example, if \(p=3\), \(q=7\), and \(n_{2}=2\), the rows \((1,n_{2})\) and \((2,n_{2})\), respectively, are \begin{tabular}{l|c c c c c c c c|c} \hline \((1,n_{2})\) & 1 & 6 & 42 & 2 & 12 & 84 & \(-6\) & \(-36\) & \(-252\) & \((\times 2)\) \\ \((2,n_{2})\) & 1 & 6 & 42 & 2 & 12 & 84 & 6 & 36 & 252 & \((\times 1)\) \\ \(r_{s}/2\) & 1 & 6 & 42 & 2 & 12 & 84 & 0 & 0 & 0 & \\ \(r_{d}/2\) & 0 & 0 & 0 & 0 & 0 & 0 & 6 & 36 & 252 & \\ \hline \end{tabular} Applying Lemma 4.6 to \(r_{s}/2\) and \(r_{d}/2\), we obtain that if the one-sided-form of the column NAR on \((2,n_{2})\) is \[X:\sum_{i\in D_{+}}a_{i}M(N)[(2,n_{2}),i]=0,\] we must also have individually that \[X_{+}:\sum_{i\in D_{+}}a_{i}M(N)[(2,n_{2}),i]=0\] and \[X_{-}:\sum_{i\in D_{-}}a_{i}M(N)[(2,n_{2}),i]=0,\] so the column NAR for \((2,n_{2})\) "splits" into a column NAR \(X_{+}\) on \(r_{s}/2\) and also a column NAR \(X_{-}\) on \(r_{d}/2\). Looking at \(r_{d}/2\), if any of the columns in \(D_{-}\) are involved nontrivially in \(X_{-}\), we must have a NAR on \[p(p-1),p(p-1)(q-1),p(p-1)(q-1)q,\ldots,p(p-1)(q-1)q^{n_{2}-1}.\] These are exactly \(p(p-1)\) times the elements in \(P(q^{k})\), which form a super sequence and thus cannot form NARs. Thus, all the columns in \(D_{-}\) are matched, and \(X=X_{+}\) only involves columns in \(D_{+}\). These have weights (excluding the first column with weight 1), \[\{(q-1),(q-1)q,(q-1)q^{2}\ldots,(q-1)q^{n_{2}-1}\}\] \[\cup \{(p-1),(p-1)(q-1),(p-1)(q-1)q,(p-1)(q-1)q^{2}\ldots,(p-1)(q-1)q^{ n_{2}-1}\}.\] We now use our earlier observation that \(q=kp+1\) for some \(k>1\). Put the above weights in the "interlaced" order: \[(p-1),(q-1),(p-1)(q-1),(q-1)q,(p-1)(q-1)q,(q-1)q^{2},\ldots\] We can check inductively that this is a super sequence (this trick also appears in [7]): 1. \((p-1)<(q-1)\) since \(q-1=kp\), \(k>1\). 2. \((p-1)+(q-1)<(p-1)(q-1)\) since \((p-1),(q-1)\) are both \(\geq 3\). 3. If we sum the first \(2t+1\) elements (where \(t\geq 1\)), we obtain \[(p-1)+(q-1)+\cdots+(q-1)q^{t-1}+(p-1)(q-1)q^{t-1}\] \[=(p-1)(q^{t})+(q-1)(q^{t}-1)/(q-1)\] \[=pq^{t}-1<(q-1)q^{t}.\] 4. If If we sum the first \(2t\) elements (where \(t\geq 2\)) we obtain \[(p-1)+(q-1)+\cdots+(p-1)(q-1)q^{t-1}+(q-1)q^{t}\] \[=pq^{t}-1+(q-1)q^{t}\] \[=(p+q-1)q^{t}-1\] \[<(p-1)(q-1)q^{t}.\] As a result, the existence of such an \(X\) is impossible, and we arrived at a contradiction. Compared to Corollary 5.6, this result assumes less because we allow for the case where row \((0,n_{2})\) were not fixed. ## 7 \(N=p^{2}q^{2}\) In this section, we prove that all \(N=p^{2}q^{2}\) satisfy the ISAP. Without loss of generality, we assume \(p<q\) for this entire section. We first compute \(M(N)\). 
Since the coordinates in \(I(N)\) are all single digits, for this section we will suppress the parentheses. For example, we use "\(21\)" as shorthand for \((2,1)\). The table follows, where we omit the unused leftmost column \(00\): \begin{tabular}{|c|c c c c c c c c|c|} \hline & \(01\) & \(02\) & \(10\) & \(11\) & \(12\) & \(20\) & \(21\) & \(22\) & mult. \\ \hline \(00\) & \(-1\) & \(0\) & \(-1\) & \(1\) & \(0\) & \(0\) & \(0\) & \(0\) & \(\times^{pq(p-1)}\cdot(q-1)\) \\ \hline \(01\) & \((q-1)\) & \(-q\) & \(-1\) & \(-(q-1)\) & \(q\) & \(0\) & \(0\) & \(0\) & \(\times^{p(p-1)}\cdot(q-1)\) \\ \hline \(02\) & \((q-1)\) & \(q(q-1)\) & \(-1\) & \(-(q-1)\) & \(-q(q-1)\) & \(0\) & \(0\) & \(0\) & \(\times^{p(p-1)}\cdot\cdot(q-1)\) \\ \hline \(10\) & \(-1\) & \(0\) & \(p-1\) & \(-(p-1)\) & \(0\) & \(-p\) & \(p\) & \(0\) & \(\times^{q(p-1)}\cdot(q-1)\) \\ \hline \(11\) & \((q-1)\) & \(-q\) & \(p-1\) & \(\begin{array}{c}(p-1)\cdot\\ (q-1)\end{array}\) & \(-(p-1)q\) & \(-p\) & \(-(q-1)p\) & \(pq\) & \(\times^{p(p-1)}\cdot(q-1)\) \\ \hline \(12\) & \((q-1)\) & \(q(q-1)\) & \(p-1\) & \(\begin{array}{c}(p-1)\cdot\\ (q-1)\end{array}\) & \(q(p-1)\) & \(q(p-1)\) & \(-p\) & \(-(q-1)p\) & \(-pq(q-1)\) & \(\times(p-1)\) \\ \hline \(20\) & \(-1\) & \(0\) & \((p-1)\) & \(-(p-1)\) & \(0\) & \(p(p-1)\) & \(-p(p-1)\) & \(0\) & \(\times q(q-1)\) \\ \hline \(21\) & \((q-1)\) & \(-q\) & \((p-1)\) & \(\begin{array}{c}(p-1)\cdot\\ (q-1)\end{array}\) & \(-(p-1)\) & \(-(p-1)q\) & \(p(p-1)\) & \(p(p-1)\cdot(q-1)\) & \(-qp(p-1)\) & \(\times(q-1)\) \\ \hline \(22\) & \((q-1)\) & \(q(q-1)\) & \((p-1)\) & \(\begin{array}{c}(p-1)\cdot\\ (q-1)\end{array}\) & \(q(q-1)\) & \(p(p-1)\) & \(p(p-1)\) & \(p(p-1)\cdot(q-1)\) & \(\times 1\) \\ \hline \end{tabular} \end{table} Table 1: The number of \((p-1)\) is \((p-1)(q-1)( **Lemma 7.1**.: _If \(p<11\), \(N=p^{2}q^{2}\), and \(p<q\), then \(N\) satisfies the ISAP._ Proof.: We can see this Lemma as an example of Theorem 5.2 "in practice." For any particular choice of \(p\), note that if \(q>p^{3}\), the sequence \[(p-1),p(p-1),(q-1),(p-1)(q-1),p(p-1)(q-1),q(q-1),q(p-1)(q-1),pq(p-1)(q-1)\] is an increasing super sequence, so all the rows must be fixed. This means for us to have a counterexample, for any fixed \(p\) we only have to search up to \(q=p^{3}\). We used a computer program to search the values of \(p\in\{2,3,5,7\}\) up to their corresponding upper bounds and found no counterexamples. A seemingly small but very useful consequence of Lemma 7.1 is we can assume both \(p\) and \(q\) are odd. In fact, since we will frequently implicitly use \((p-1)\neq 1\) to argue certain numbers are different, allowing \(p=2\) would make many of our arguments false. Our "precomputation" also allows us to prove an amusing (and necessary for later!) condition: **Lemma 7.2**.: _If \(N=p^{2}q^{2}\) and \(p\) and \(q\) are twin primes, then \(N\) satisfies the ISAP._ Proof.: In this situation, \((q-1)=p+1\) and \(q=p+2\). Because row 22 is fixed, it suffices to consider the weights of the other 8 rows. These have weights \[(q-1),q(q-1),(p-1),(q-1)(p-1),p(p-1),q(p-1)(q-1),p(p-1)(q-1),pq(p-1)(q-1).\] Substituting, we get that they have the weights \[(p+1),(p+2)(p+1),(p-1),(p+1)(p-1),p(p-1),(p+2)(p-1)(p+1),p(p-1)(p+1),p(p+2)(p- 1)(p+1).\] Note that the first two weights (corresponding to rows 21 and 20 respectively) are the only ones not immediately divisible by \((p-1)\), so their combined involvement in any (disjoint) row NAR must be divisible by \((p-1)\). When \(p>3\) we cannot have \((p-1)|p+1\). 
When \(p>5\) we cannot have \((p-1)|(p+1)^{2}\). When \(p>7\) we cannot have \((p-1)|(p+2)(p+1)\), and when \(p>9\) we cannot have \((p-1)|[(p+2)(p+1)+(p+1)]\), so since we know \(p\geq 11\) by Lemma 7.1, these 2 elements cannot be involved in a NAR. Their corresponding rows 21 and 20 thus must be fixed. As in the proof of Theorem 5.5, if the rows \(22,21,20\) are all fixed and \(\{1,(p-1),(p-1)p\}\) has no NAR (which is true since \(p>2\)), \(N\) satisfies the ISAP. **Lemma 7.3**.: _If \(N=p^{2}q^{2}\) with \(p<q\), then the values \(pq(p-1)(q-1)\) and \(q(q-1)(p-1)\) cannot be involved nontrivially in any NARs on \(P(N)\). As a consequence, if \(A\) and \(B\) are cospectral with this \(N\), rows \(00\) and \(10\) must be fixed and columns \(22\) and \(12\) must be matched._ Proof.: The sum of \(I(N)\) is \(p^{2}q^{2}\). By Lemma 7.1, \(p,q\) are odd. By elementary algebra, this implies \(2(p-1)(q-1)>pq\). Thus, \(pq(p-1)(q-1)>p^{2}q^{2}/2\), which implies that this single value is greater than the sum of all other values in \(I(N)\). This means if \(pq(p-1)(q-1)\) appears on one side of an additive relation on \(I(N)\), it must appear on the other as well, so it cannot be involved nontrivially in any NARs on \(I(N)\). Row 00 and column 22 correspond to weight \(pq(p-1)(q-1)\), so they must be fixed and matched respectively. We can now remove the value \(pq(p-1)(q-1)\) from our consideration and compare the second largest element \(q(q-1)(p-1)\) with the other values in \(P(N)\); consider \[q(q-1)(p-1)-q(q-1)-(q-1)-(p-1)-(p-1)(q-1)-p(p-1)-p(p-1)(q-1)\] \[=q(q-1)(p-2)-(qp-1)-qp(p-1)\] \[=q(q-1)(p-2)+1-qp^{2}\] \[\geq q[(p+3)(p-2)-p^{2}]+1,\] which is positive when \(p\geq 7\). The inequality in the last line used \(q\geq p+4\), which is true since \(p\) and \(q\) cannot be twin primes by Lemma 7.2. This means \(q(q-1)(p-1)\) cannot be involved nontrivially in any NAR on \(P(N)\) either, so the corresponding row \(10\) and column \(12\) must be fixed and matched respectively. We are now ready to prove our main result. **Theorem 7.4**.: _Let \(N=p^{2}q^{2}\) with \(p<q\). Then \(N\) satisfies the ISAP._ Proof.: We prove by contradiction. Suppose not; then there exist some cospectral \(A\) and \(B\). We know that rows \(22\) and \(00\) are fixed. By Lemma 4.6, we know that there exists a column NAR for the difference between these two rows \[\begin{bmatrix}01&02&10&11&20&21\\ \hline q&q(q-1)&0&q(p-1)&p(p-1)&p(p-1)(q-1)\end{bmatrix}.\] In the expression above, we omitted columns \(00\), \(12\), \(22\) by Lemma 7.3 because we only care about nontrivial contributions to this NAR. Taken mod \(q\), the entries become \[\begin{bmatrix}0&0&0&0&p(p-1)&-p(p-1)\end{bmatrix}\pmod{q}.\] Because \(q>p\), \(p(p-1)\neq 0\pmod{q}\). This means the last two columns \(20\) and \(21\) must appear together (in exactly one of \(A\) or \(B\)) or be matched. Looking at row \(10\) individually, which has entries \[\begin{bmatrix}01&02&10&11&20&21\\ \hline-1&0&(p-1)&-(p-1)&-p&p\end{bmatrix},\] this means the columns \(20\) and \(21\) contribute a net \(0\) to the column sums for \(A\) or \(B\) for this row. Therefore, column \(01\) must be matched, since there is no way (implicitly using \(p>3\)) to cancel out \(-1\) with just a single copy of \((p-1)\) and \(-(p-1)\). Thus, columns \(10\) and \(11\) must appear together (in exactly one of \(A\) or \(B\)) or be matched.
We can restate our observations about rows \(20,21,10,11\) as saying that we can write down NAR on \(3\) columns \(\{02,10+11,20+21\}\) where, for example, column \(10+11\) corresponds to the sum of the two corresponding column weights. We obtain \[\begin{bmatrix}02&10+11&20+21\\ \hline q(q-1)&q(p-1)&qp(p-1)\end{bmatrix},\] which we can divide out by \(q\) to obtain \((q-1),(p-1),p(p-1)\). If \((p-1)+p(p-1)=(q-1)\), then \(q=p^{2}\), a contradiction. This means this NAR must be \((q-1)+(p-1)=p(p-1)\), which means, without loss of generality, \(A\) has columns \(\{02,10,11\}\) and \(B\) has columns \(\{20,21\}\) (and some subset of the other columns appear in both). This means row \(01\) must be not fixed, because otherwise we would obtain the equation \[(-q)+(-1)+(-(q-1))=0,\] which is impossible. We also know that \(I(N)\) must induce at least one (disjoint) row NAR, where rows \(00,22,10\) are not involved because they are fixed. Take any such NAR \(X\) and write it in the one-sided form \[a_{21}(q-1)+a_{20}q(q-1)+a_{12}(p-1)+a_{11}(p-1)(q-1)+a_{02}p(p-1)+a_{01}p(p-1) (q-1)=0,\] where each \(a_{i}\) is in \(\{-1,0,1\}\) depending on if it appears on the left-hand-side, neither, or the right-hand-side of \(X\) respectively. Since the terms on \(a_{11}\) and \(a_{01}\) are divisible by \((p-1)(q-1)\), we must have (switching signs for the \(a_{i}\)'s if necessary) \[a_{12}(p-1)+a_{02}p(p-1)+a_{21}(q-1)+a_{20}q(q-1)=k(p-1)(q-1),\] for some \(k\in\mathbb{Z},k\geq 0\). The maximum possible sum of the \(4\) terms on the left is \((p-1)+p(p-1)+(q-1)+q(q-1)=(p+1)(p-1)+(q+1)(q-1)\). So we know that \(k\) is at most \[\frac{(p+1)(p-1)+(q+1)(q-1)}{(p-1)(q-1)} =\frac{p+1}{q-1}+\frac{q+1}{p-1}\] \[=\frac{p+1}{(p-1)^{2}}+\frac{(p-1)^{2}+2}{p-1}\] \[=(p-1)+\frac{p+1+2(p-1)}{(p-1)^{2}}\] \[<p.\] Since \(k\) is an integer, \(k\leq(p-1)\). Therefore, because \[k(p-1)(q-1)+a_{11}(p-1)(q-1)+a_{01}p(p-1)(q-1)=0,\] we can divide out to get \[k+a_{11}+a_{01}p=0.\] Because we showed earlier that row \(01\) is not fixed, we can assume we picked an \(X\) where \(a_{01}\neq 0\) (there should be exactly two such choices, from the construction of the row NARs). Since \((p-1)\geq k>0\), this means we must have \(a_{01}=-1\), \(a_{11}=1\), and \(k=(p-1)\). Therefore, \[(p-1)^{2}(q-1) =a_{12}(p-1)+a_{02}p(p-1)+a_{21}(q-1)+a_{20}q(q-1)\] \[(p-1)^{4} =a_{12}(p-1)+a_{02}p(p-1)+a_{21}(p-1)^{2}+a_{20}q(p-1)^{2}\] \[(p-1)^{3} =a_{12}+a_{02}p+a_{21}(p-1)+a_{20}q(p-1).\] It is clear that we need \(a_{20}=1\), else the right-hand-side is not big enough. Thus, we have concluded that if \(01\) is on one side of a row NAR, \(11\) and \(20\) must both be on the other side. Finally, let \(C\) be the set of columns that appear in both \(A\) and \(B\). This means \(v_{A}=v_{A^{\prime}}+v_{C}\) and \(v_{B}=v_{B^{\prime}}+v_{C}\), where \(A^{\prime}=\{02,10,11\}\) and \(B^{\prime}=\{20,21\}\). We can compute \[\begin{array}{c|c|c}&v_{A}^{\prime}&v_{B}^{\prime}\\ \hline 00&0&0\\ 01&-2(q-1)&0\\ 02&q(q-2)&0\\ 10&0&0\\ 11&(p-2)q&-pq\\ 12&q(p+q-2)&-pq\\ 20&0&0\\ 21&q(p-2)&pq(p-1)\\ 22&pq(p-1)&pq(p-1).\end{array}\] Recall that we just proved that there must be exactly \(2\) row NARs with \(a_{01}\neq 1\) and \(a_{11},a_{20}\) having the opposite sign as \(a_{01}\). By construction of the row NARs (recall that they encode which entries in \(v_{A}\) equal which entries in \(v_{B}\) via \(\rho\)), we obtain that \(v_{A}[01]=v_{B}[11]=v_{B}[20]\) and \(v_{B}[01]=v_{A}[11]=v_{A}[20]\). 
This implies \[0=v_{B}[11]-v_{B}[20]=(v_{B^{\prime}}[11]+v_{C}[11])-(v_{B^{\prime}}[20]+v_{C}[20]),\] so we know that \((v_{B^{\prime}}[20]-v_{B^{\prime}}[11])=(v_{C}[11]-v_{C}[20])\), which must also be equal to \((v_{A^{\prime}}[20]-v_{A^{\prime}}[11])\) by a symmetric argument. However, \((v_{A^{\prime}}[20]-v_{A^{\prime}}[11])=-(p-2)q\) and \((v_{B^{\prime}}[20]-v_{B^{\prime}}[11])=pq\), which gives a contradiction.

## 8 Conclusion

The main generalizable contributions of this paper are exploiting the multiplicative structure of \(M(N)\) to compactly represent spectra of ICGs and identifying row and column NARs induced by a cospectral pair; these techniques can be strengthened and reused in future work on ICGs. Both the \(N=p^{2}q^{2}\) and \(N=p^{2}q^{n_{2}}\) results involved nontrivial amounts of ad hoc tinkering that was hard for us to generalize. As an example of the underlying complexity, one of our main strategies was extracting information (in the form of column NARs) when we can prove or assume that rows are fixed. However, even in the "small" \(N=p^{2}q^{2}\) case, we have found plausible NARs involving every weight outside of \(\{1,q(p-1)(q-1),pq(p-1)(q-1)\}\) (although not simultaneously). This implies that it is a priori impossible to assume any rows are fixed outside of rows \(\{00,10,22\}\). This situation remains an obstacle for larger \(N\).

## Acknowledgments

We thank Wasin So for introducing us to this problem and for valuable discussions.
2302.06228
Unsupervised Detection of Behavioural Drifts with Dynamic Clustering and Trajectory Analysis
Real-time monitoring of human behaviours, especially in e-Health applications, has been an active area of research in the past decades. On top of IoT-based sensing environments, anomaly detection algorithms have been proposed for the early detection of abnormalities. Gradual change procedures, commonly referred to as drift anomalies, have received much less attention in the literature because they represent a much more challenging scenario than sudden temporary changes (point anomalies). In this paper, we propose, for the first time, a fully unsupervised real-time drift detection algorithm named DynAmo, which can identify drift periods as they are happening. DynAmo comprises a dynamic clustering component to capture the overall trends of monitored behaviours and a trajectory generation component, which extracts features from the densest cluster centroids. Finally, we apply an ensemble of divergence tests on sliding reference and detection windows to detect drift periods in the behavioural sequence.
Bardh Prenkaj, Paola Velardi
2023-02-13T10:02:20Z
http://arxiv.org/abs/2302.06228v2
# Unsupervised Detection of Behavioural Drifts with Dynamic Clustering and Trajectory Analysis

###### Abstract

Real-time monitoring of human behaviours, especially in e-Health applications, has been an active area of research in the past decades. On top of IoT-based sensing environments, anomaly detection algorithms have been proposed for the early detection of abnormalities. Gradual change procedures, commonly referred to as drift anomalies, have received much less attention in the literature because they represent a much more challenging scenario than sudden temporary changes (point anomalies). In this paper, we propose, for the first time, a fully unsupervised real-time drift detection algorithm named DynAmo, which can identify drift periods as they are happening. DynAmo comprises a dynamic clustering component to capture the overall trends of monitored behaviours and a trajectory generation component, which extracts features from the densest cluster centroids. Finally, we apply an ensemble of divergence tests on sliding reference and detection windows to detect drift periods in the behavioural sequence.

Anomaly detection, unsupervised detection, drift detection, behavioural changes, e-health, dynamic clustering.

## 1 Introduction

Behavioural changes are gradual processes that take place over a long period of time [1]. Gradual change procedures represent a conceptually systematic set of behaviours [2], widely analysed in many contexts, among which are patterns of decline in the elderly resulting from Alzheimer's and Parkinson's diseases [3], and personal or collective behaviour changes, such as stopping smoking, saving energy and losing weight [4]. Recently, real-time monitoring systems based on sensors offer an unprecedented opportunity to monitor human behaviour [5] unobtrusively. For example, environmental sensors and wearable devices are widely used in telemedicine applications to support doctors in preventing, treating, and improving health conditions [6]. On top of these systems, deep learning anomaly detection algorithms have been proposed to automatically identify and detect various behavioural changes, as surveyed in [7]. However, most models in the literature suffer from at least one of the following limitations: 1. they mostly concentrate on sudden, temporary changes, referred to as _point anomalies_ [8], rather than gradual changes (_drift anomalies_); 2. they need training on behavioural data, which might not be realistic since behaviours, and anomalies therein, are highly context- and person-dependent; 3. they fail to discover latent drift periods when the training (reference) set contains anomalous behaviours, a possibility that cannot be ruled out in real-world contexts. This paper presents DynAmo, short for **D**ynamic Drift **A**nomaly Detector, a fully unsupervised strategy for detecting gradual behavioural changes based on dynamic clustering and trajectory detection. The dynamic clustering component captures an overall trend of the time series representing a monitored behaviour (e.g., sleeping) and produces clusters for each monitoring interval (e.g., one day). The densest cluster in each interval becomes the input to the next component, a trajectory generator, which extracts features from the cluster centroids. Finally, DynAmo predicts the drift areas for each observed feature of the monitored action, for example, the duration and onset of sleep or the number of sleep interruptions.
Although the proposed strategy applies to general drift detection, this paper explicitly addresses a challenging scenario where the goal is unsupervised real-time detection of drift changes from sensor data sequences. This context is particularly relevant in telemedicine and continuous patient care [9]. We organise the rest of this paper as follows. Section 2 discusses the related work and provides an overview of the contribution this paper offers. Section 3 describes our strategy, ranging from the input modelling techniques to the drift detection mechanism. Section 4 lists synthetic and real-scenario datasets and illustrates their characteristics in the normal/anomalous period throughout the time series. Section 5 provides extensive experiments on DynAmo and SOTA methods. Finally, Section 6 concludes the paper.

## 2 Related Work

As we already remarked, while there is a vast literature on point anomaly detection [7], drift detection has received much less attention, also due to its increased complexity. As illustrated in Table I, drift anomaly detection systems are divided into batch and online detectors. According to Gemaque et al. [28], drift detection methods utilise a reference and a detection window. The former (usually) contains the normal1 event distribution, whereas the latter contains unseen data, possibly including sudden or drift anomalies. In _batch detection_ approaches, the reference window remains fixed in time, whereas the detection window slices through the trajectory of events. Batch detectors raise a drift anomaly when the distribution of the detection window differs from the reference window. Contrarily, the reference window of _online detectors_ is dynamically replaced by the detection window when their distributions differ by more than an established threshold. This window change renders online detectors adjustable to routine changes (e.g. seasonality shifts, permanent or temporary changes of lifestyle). As the detection window moves through the series, the reference window can generally slide one event at a time2. Footnote 1: In real-life contexts, this is not guaranteed: anomalies may occur at any time, including the initial monitoring period. The literature has contributed several _(semi)supervised_ solutions for drift anomaly detection, exploiting either batch [10, 16, 17, 20] or online approaches [11, 12, 21, 24, 27, 15]. Although they constitute a minority of the works, _unsupervised_ drift detectors have also been covered in the literature [11, 13, 14, 18, 19, 22, 23]. **Batch drift detectors:** Liu et al. [19] propose NN-DVI, a distribution-based approach that assumes that drifts are caused by regional density changes. The authors rely on three modules: (1) kNN-based space partitioning for data modelling; (2) a distance function to accumulate density discrepancies; and (3) a statistical significance test to determine the drifts. Li et al. [22] build models using random feature sampling and calculate their corresponding anomaly scores. They exploit an anomaly buffer based on a model dynamic adjustment algorithm to distinguish between true drifts and normal sequences incorrectly labelled as anomalous. Bashir et al. [16] propose a two-phase architecture. First, they train a classifier and collect data characterising each class. Second, they collect batches of data for each class and verify whether the instances of these classes differ from the data of the previous phase.
Sethi and Kantardzic [10] propose MD3 to monitor changes in the region of the classifier decision space where predictions are uncertain. The authors assume that a drift occurs when the density of this variation is higher than a specific threshold, similarly to [18]. Inspired by the limitation of [10], the same authors [17] propose MD3-EGM, a semi-supervised method based on ensembles. Lastly, Cerqueira et al. [26] propose STUDD, a semi-supervised teacher-student learning paradigm where drifts are detected according to the error of the student model. **Online drift detectors:** dos Reis et al. [11] propose IKS-bdd, an online form of the KS test with two sliding windows for drift detection. Koh [14] proposes a drift detector on transactional data streams. The method has two parts: local and global drift detection. The main idea behind detecting local drift is to compare two windows, \(W_{0}\) and \(W_{1}\), using the Hoeffding bound. When the sample mean difference between \(W_{0}\) and \(W_{1}\) is more than \(\delta\), a drift is signalled. For global drift detection, the author uses two decision trees for \(W_{0}\) and \(W_{1}\) and examines their disagreement. Lughofer et al. [15] use the Page-Hinkley test to detect changes. Chang et al. [21] propose KLCPD, a method of composite kernels that combine RBF kernels with injective functions. The authors parameterise the injective functions via RNNs to capture the temporal dynamics of complex time series. De Mello et al. [23] exploit the concept of stability by computing the divergence between the sliding reference and detection windows. Haque et al. [12] propose SAND, an ensemble of kNN classifiers. SAND predicts the label of an unknown example \(x^{\prime}\) by majority voting, and it stores the predicted class and the confidence scores. If the confidence values diverge from the beta distribution by exceeding a threshold, then SAND detects a drift. An adaptation procedure takes place by updating the ensemble with the true labels of the instances with low confidence scores. Similarly, Pinage et al. [24] propose a dynamic classifier based on an initial ensemble and a configurable drift detector guided by a pseudo-error rate to perform detections. When a certain number of the base classifiers in the ensemble indicate a drift, the validation set is updated using the new labelled samples; otherwise, the ensemble continues learning using the ground truth labels. Kim et al. [13] first train the model on labelled instances. Then, they monitor the differences in uncertainty for the instances in both windows. Before labelling the instances of the detection window, the model calculates a confidence interval for the uncertainty of the events in the reference window: if it exceeds the upper limit, a drift is signalled. Upon detecting a drift, the authors retrain the model on the true positive events in the detection window. Haug and Kasneci [25] propose ERICS, a model-agnostic framework that treats the parameters of a predictive model as random variables and detects concept drift as a change in the distribution of the optimal parameters. Lastly, to bridge the gap of invalid local attributions under drift conditions, Haug et al. [27] propose CDLEEDS, an adaptive hierarchical clustering approach capable of detecting local and global distributional drifts.
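To make the reference/detection-window scheme shared by most of the detectors above concrete, the sketch below shows a minimal fixed-reference batch detector built on the two-sample Kolmogorov-Smirnov test, in the spirit of the KS- and window-based methods just discussed; the window lengths, significance level, function names, and the simulated sleep-duration stream are illustrative assumptions rather than settings taken from any of the cited systems.

```python
import numpy as np
from scipy.stats import ks_2samp

def batch_ks_drift(series, ref_len=30, det_len=30, alpha=0.05):
    """Fixed-reference batch detector: compare a frozen reference window
    against a detection window that slides over the rest of the stream."""
    reference = series[:ref_len]
    flags = np.zeros(len(series), dtype=bool)
    for start in range(ref_len, len(series) - det_len + 1):
        detection = series[start:start + det_len]
        _, p_value = ks_2samp(reference, detection)
        if p_value < alpha:                      # distributions differ
            flags[start:start + det_len] = True  # mark the window as drifting
    return flags

# Illustrative stream: daily sleep duration (hours) with a gradual drift.
rng = np.random.default_rng(0)
normal = rng.normal(8.0, 0.5, 120)
drift = rng.normal(8.0, 0.5, 60) - np.linspace(0.0, 2.0, 60)  # sleep shortens
stream = np.concatenate([normal, drift])
print("first drifting days:", np.where(batch_ks_drift(stream))[0][:5])
```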
TABLE I: Summary of the surveyed drift detection methods (batch and online detectors), their window-handling structure (whole-batch vs. partial-batch), learning paradigm (supervised or unsupervised), and publication year.

### _Limitations of the works in the literature and open challenges_

The works in the literature suffer from several limitations, including the cold-start problem, specifically for batch-based detectors. In detail, batch-based detectors do not cope with incoming real-time data. Hence, these detectors suffer in critical scenarios such as continual remote
monitoring. Additionally, most of the methods listed in Table I require prior knowledge of the underlying distribution (data labels) to identify future drift periods correctly. Contrarily, while assuming known classes may be acceptable in non-critical scenarios, _one cannot assume to know a human's normal behaviour beforehand_, especially for patients, whose behaviour may be altered by their specific health conditions. For example, disturbed sleep with many interruptions may be the norm for a certain patient, for whom, instead, a gradual change may be represented by the lengthening of the period in which they stay in bed. Although the unsupervised methods proposed in the literature overcome this problem, they suffer from intra-window distributional changes and feature evolution. Therefore, while they might be able to correctly identify a drift occurring inside the detection window, _they cannot detect a drift starting inside the reference window_. This is a drawback since, in real-world scenarios, anomalies can occur anytime after monitoring starts. We argue that a deferred drift detector is a special case of the cold-start problem where the drift can only be identified in the detection window. This phenomenon worsens if the detector is batch-based or an online detector with a fixed reference window. Another common limitation is that drift detectors are usually specialised in detecting only some kinds of drift [29] (e.g., gradual or recurrent drifts). Thus, they are not suitable to cope with sudden spikes of distributional changes, since the width of the windows needs to be calibrated to handle the specific type of shift. A final open issue is the reproducibility and replicability of the experiments provided in the original papers, which is essential to compare and evaluate the merits and drawbacks of the different solutions. While some works do not publish their code online, others do not thoroughly explain the data processing, hyperparameter tuning, and evaluation used3. Footnote 3: Do the methods employ soft-margins to help in detecting drift periods? We refer to soft-margins as the area before and after the drift - in terms of a particular time unit - within which a prediction can still be considered correct/valid. Typically, soft-margins are used in scenarios where a prediction at an exact time unit (date) is not necessary.

### _Our contribution to the literature_

Considering the drawbacks of the SOTA described above, we provide the following contributions: 1. We propose a _fully unsupervised_ drift detection technique based on dynamic clustering and trajectory detection, which works independently of the input data distribution and of prior knowledge of anomaly types (see Section 3.5 and Algorithm 1). 2. We avoid the cold-start problem, frequently observed in the literature, by not reserving portions of the input to fine-tune the model to detect drifts. 3. DynAmo is agnostic to the various drift anomaly types (e.g. gradual and recurrent drift), which provides robustness w.r.t. other strategies in the literature. 4. DynAmo has an integrated backward lookup parameter \(\lambda\) that considers past events in a behavioural trajectory. DynAmo uses \(\lambda\) to check the evolution of the feature hyperboxes associated with monitored events and determine potential shifts within the same window (reference or detection) regardless of the distribution being anomalous or not (see Algorithm 2 and Section 5.2). 5.
DynAmo traces the trajectory of the densest cluster centroid for each sliding step, thus providing a visual and interpretable tool which gives domain non-specialists the ability to identify drifting trends in a two-dimensional space (see Sections 3.3 and 3.4). 6. To support the Open Science movement, we publish the code to our solution and provide easy steps for reproduction/replicability purposes of the experiments (see Section 5.1). ## 3 Methodology This section describes the proposed method for modelling and detecting anomalous time-periods in behavioural sequences. ### _Application Scenario and Summary Workflow_ We refer to a scenario in which an IoT environment is set to collect signals from a variety of ambient and wearable sensors to monitor specific behaviours such as daily activities (sleeping, eating, personal hygiene), vital signs (pressure, ECG), energy consumption (lighting, heating, use of household appliances), eating habits, smoking, physical activity, and more. We also assume that one or more sensors are set to monitor a specific behaviour (e.g., for sleep: pressure sensor in the bed, lighting, wearable sleep trackers), generating signals that are pre-processed and transformed into temporal sequences of discrete events (_data points_). The problem addressed in this Section is detecting gradual changes (drift anomalies) in the characterisation of data points, for example, changes in sleep quality. Note that addressing single behaviours does not mean that they are considered in isolation, since, as described hereafter, a behaviour is represented as a complex event, that may include contextual features such as, for the case of sleep, the activities performed before and after, or any breaks to go to the toilet. Figure 1 illustrates the steps of the proposed pipeline, summarised in the caption4. We begin by presenting a mathematical formalisation of the input model (see Section 3.2). As shown in the Figure, the crucial steps of drift detection are based on dynamic clustering [30] of the data points referring to a given observed behaviour, and next, on capturing the trend of the trajectory by building a stream of centroids that maximally comprehend the original series. Footnote 4: Notice that we enumerate the highlighted portions of the figure with the sections where we describe each component of the overall workflow. We give a brief description of how dynamic clustering works in Section 3.3. Trend capturing is crucial because it eliminates noisy data points (such as outliers) within a specific time window. This strategy allows us to discard data coming from the stream that do not contribute to the overall trend of the original behavioural sequence (see Section 3.4). Then, we explain our prediction framework (see Section 3.5). ### _Input modelling_ Before explaining the proposed method, we provide the reader with a brief formalisation of the behavioural sequences given in input. The following formalisation takes inspiration from the works in [31, 32]. The input modelling described hereafter applies to the context of multi-sensor monitoring of human behaviour in controlled environments, however, it can be easily extended to input data represented as multivariate temporal trajectories. In this context: 1. one or more sensors may concur to identify a specific event type \(i\) such as sleep, hygiene, or eating; 2. each event \(e_{i}\) of type \(i\) has its associated vector of features \(\Phi_{i}=\{\phi_{i,1},\phi_{i,2},\ldots,\phi_{i,|\Phi_{i}|}\}\); 3. 
events occurrences can be non-contiguous since the environment can contain blind spots out of sensor reach, or simply, there might be unobserved/unobservable behaviours; 4. events are non overlapping. These time series are composed of events of different types with associated beginning and end times. In other words, we consider discrete time series of events \(e_{i}^{b,f}\) with beginning time \(b\) and end time \(f\) such that \(b<f\), and type \(i\). Notice that, in our scenario, two events of any type5\(e_{i}^{b,f^{\prime}}\) and \(e_{i}^{b^{\prime\prime},f^{\prime\prime}}\) cannot overlap with one another: i.e., \(b^{\prime}\geq f^{\prime\prime}\ \vee\ b^{\prime\prime}\geq f^{\prime}\). Footnote 5: We use \(*\) to denote any type of events in our time series. Furthermore, each event \(e_{i}\) has its feature vector \(\Phi_{i}\), and two events of different types might have different features recorded and different dimensions of the feature space: i.e., given \(e_{i}\) and \(e_{j}\) s.t. \(i\neq j\), there is no constraint in the representation of \(\Phi_{i}\) and \(\Phi_{j}\). To generate a behavioural representation of the events of the same type, we group them together and order them according to the beginning time - see step 1. Thus, we can define \(\mathbb{X}_{i}\) as the sequence that encompasses the feature vectors of events \(e_{i}\) of type \(i\) - see step 2. Note that besides the feature vector \(\Phi_{i}\), \(\mathbb{X}_{i}\) also contains the beginning and end time of each \(e_{i}\). Hence, the temporal dimension of the events is preserved through the input transformation. For readability purposes, we omit the type \(i\) of the events from the notation, and assume that all the following formulas hold for any event type. Therefore, \(\mathbb{X}_{i}\) becomes \(\mathbb{X}\), and \(\Phi_{i}=\{\phi_{i,1},\ldots,\phi_{i,|\Phi_{i}|}\}\) becomes \(\Phi=\{\phi_{1},\ldots,\phi_{|\Phi|}\}\). ### _Capturing trends via dynamic clustering_ Unlike other approaches in drift anomaly detection, we use dynamic clustering, a method for tracking evolving environments (see steps 3-4 of Figure 1). We build on top of DyClee [33] and adapt it to create a trajectory of denser clusters used to classify, without any supervision, a sequence as anomalous or not. We refer the reader to Appendix C for a detailed description of the adaption of DyClee to our scenario. DyClee is a distance and density-based algorithm that handles non-convex and multi-density clustering, working incrementally in an unsupervised fashion. It exploits a two-staged algorithm to produce the final dense set of clusters. Fig. 1: Workflow of the proposed pipeline: i.e. input modelling (3.1) dynamic clustering (3.2), trajectory generation (3.3), and DynAmo (3.4). In **Step 1**, event vectors, generated in real-time during the signal pre-processing phase, are accumulated according to their type. In **Step 2**, a sequence \(\mathbb{X}_{i}\) is generated for each event type \(i\), where each row represents the feature vector of a new event of type \(i\) detected along the monitoring period. In **Steps 3** and **Step 4**, the dynamic clustering component captures an overall trend of the time series and produces clusters for each temporal interval \(\Delta\). The clusters in each interval become the input to the next component - trajectory generation (**Steps 5** and **Step 6**) - which extracts features from the densest cluster centroids. 
Finally, in **Steps 7** and **Step 9**, DynAmo predicts - in a fully unsupervised way - the drift areas (green boxes) for each feature of the events in the series, using an ensemble of divergence tests. For visualisation purposes, we report true positives, false negatives, and false positives only. In the first phase, it collects, processes, and compresses data samples in \(\mu\)-clusters, based on data point similarity according to the Manhattan distance. In the second phase, it performs another clustering pass over the density of each \(\mu\)-cluster calculated as the hypervolume of the bounding hyperbox that includes each hyper-dimensional cluster. To coordinate the clustering procedure, we employ a specific time interval \(\Delta\), - e.g., hourly, daily, weekly - that operates as a wake-up protocol over the incoming stream of events \(\mathbb{X}\). In detail, the first phase starts with an empty set of \(\mu\)-clusters. The first sample becomes the centre of the first \(\mu\)-cluster. The subsequent samples are grouped together according to Definition 3.1. **Definition 3.1**.: A \(\mu\)-cluster \(\mu C_{k}\) is reachable from \(\mathbb{X}\) at the \(j\)-th \(\Delta\) interval if \[L_{\infty}(\pi_{j}(\mathbb{X}),\vec{o}_{k})\equiv\max_{h}| \phi_{h}-o_{k,h}|<\frac{U_{h}}{2}\] \[\forall h\in[1,|\Phi|]\] where \(\pi_{j}(\mathbb{X})\) selects the events in \(\mathbb{X}\) that are situated within the temporal boundaries of the \(j\)-th \(\Delta\) interval, \(\vec{o}_{k}=\{o_{k,1},\ldots,o_{k,|\Phi|}\}\) is the origin of \(\mu C_{k}\), and \(U_{h}\) is the span of the \(h\)-th feature dimension (see Appendix C). Hence, the events in \(\pi_{j}(\mathbb{X})\) of the \(j\)-th \(\Delta\) interval are inserted in the closest _reachable_ cluster according to the Manhattan distance \[dist(\pi_{j}(\mathbb{X}),\mu C_{k})=\sum_{h=1}^{|\Phi|}|\phi_{h}-o_{k,h}|\] Having generated the \(\mu\)-clusters, we perform an additional step (step 4). In this step, we classify a \(\mu\)-cluster \(\mu C_{k}\) w.r.t. its density - see Equation 3 - as dense (\(\mathbb{D}\mu\)-cluster), semi-dense (\(\mathbb{S}\mu\)-cluster), or low-dense (\(\mathbb{O}\mu\)-cluster). An important feature of dynamic clustering is to monitor the trend changes in the distribution of \(\mu\)-clusters, and thus that of the incoming events. Recall that the dynamic clustering wakes up at every \(\Delta\) interval. To monitor temporal trend shifts, the algorithm employs a forgetting function \(g(j,\zeta(\mu C_{k}))\) based on the \(j\)-th \(\Delta\) interval and \(\zeta(\mu C_{k})\) is the latest time an event was associated to \(\mu C_{k}\). As suggested by the authors of the original paper, we use \(g(j,\zeta(\mu C_{k}))=e^{-0.02(t-\zeta(\mu C_{k}))}\) where \(t\) represents the wake-up time of DyClee in the \(j\)-th \(\Delta\) interval. The forgetting function takes care of those \(\mu\)-clusters that are no more significant in determining the trend shifts, thus, they do not contribute to forming new \(\mu\)-clusters. The output of the second step is a set of \(\mu\)-cluster groups where each \(\mu\)-cluster in a particular group is a \(\mathbb{D}\mu\)-cluster and all its surrounding \(\mu\)-clusters are either \(\mathbb{S}\mu\)- or \(\mathbb{D}\mu\)-clusters. A cluster can be defined according to Definitions 3.2 and 3.3. 
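To make the first clustering phase concrete, the following minimal sketch implements the reachability test of Definition 3.1, the Manhattan-distance assignment of events to \(\mu\)-clusters, and the forgetting function; the dictionary layout, the per-feature spans \(U_{h}\), and the toy sleep features are illustrative assumptions rather than the DyClee implementation.

```python
import numpy as np

def is_reachable(event, origin, spans):
    """Definition 3.1: the event reaches a mu-cluster if, on every feature h,
    |phi_h - o_h| < U_h / 2."""
    return bool(np.all(np.abs(event - origin) < spans / 2.0))

def assign_event(event, t, mu_clusters, spans):
    """First clustering phase: insert the event into the closest reachable
    mu-cluster (Manhattan distance), or open a new mu-cluster around it."""
    reachable = [c for c in mu_clusters if is_reachable(event, c["origin"], spans)]
    if not reachable:
        mu_clusters.append({"origin": event.astype(float), "n_events": 1, "last_seen": t})
        return
    closest = min(reachable, key=lambda c: np.abs(event - c["origin"]).sum())
    closest["n_events"] += 1
    closest["last_seen"] = t

def forgetting(t, last_seen):
    """Forgetting function g = exp(-0.02 (t - zeta)), which down-weights
    mu-clusters that have not received events recently."""
    return np.exp(-0.02 * (t - last_seen))

# Illustrative usage with two sleep features (duration in hours, interruptions).
spans = np.array([2.0, 10.0])          # assumed per-feature spans U_h
clusters = []
for day, event in enumerate(np.array([[8.1, 3.0], [7.9, 4.0], [5.0, 12.0]])):
    assign_event(event, day, clusters, spans)
print(len(clusters), "mu-clusters formed")
```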
**Definition 3.2**.: Two \(\mu\)-clusters \(\mu C_{k_{1}}\) and \(\mu C_{k_{n}}\) are connected if there is a chain of \(\mu\)-clusters \(\{\mu C_{k_{1}},\ldots,\mu C_{k_{n}}\}\) such that the hyperbox6 of \(\mu C_{k_{h}}\) overlaps with that of \(\mu C_{k_{h+1}}\)\(\forall h\in[1,n)\) in all but \(m\) dimensions. Footnote 6: According to [33], a hyperbox is defined as the minimum and maximum (feature span) values on each feature dimension. **Definition 3.3**.: \(\mathbb{C}_{j}=\{\mu C_{k_{1}},\ldots,\mu C_{k_{n}}\}\) is a (dynamic) cluster in the \(j\)-th \(\Delta\) interval if all the \(\mu\)-clusters belonging to \(\mathbb{C}_{j}\) are connected with one another. One of the drawbacks of DyClee is that low-density \(\mathbb{O}\mu\)-clusters are considered as groups formed of outliers in each of the \(\Delta\) intervals. However, \(\mathbb{O}\mu\)-clusters do not necessarily include an anomalous event, because its density is lower than the median and the average of the already-formed \(\mu\)-clusters. Moreover, the construction of \(\mathbb{O}\mu\)-clusters is capable of capturing only abrupt anomalous events (i.e. point anomalies) and not gradual trend shifts in the trajectory (i.e. drift anomalies). This is due to the fact that \(\mathbb{O}\mu\)-clusters can transit into being \(\mathbb{S}\mu\)-clusters in case their density exceeds the median or the average density of the other \(\mu\)-clusters formed throughout the first step of the algorithm. Therefore, the density of a \(\mu\)-cluster is a necessary but not sufficient criterion in determining anomalous clusters that continuously grow in time due to a drift happening in the original time series. ### _Trajectory generation_ To account for the drawback mentioned above in signalling a potential drift, we add a mechanism to identify trajectory Fig. 2: Generation of a behavioural trajectory according to the densest cluster produced for each day. The central part of the image illustrates a sleep patterns throughout the days of monitoring. For simplicity, we depict only two instead of all features (duration and begin time), and we postulate a drift on both. We divided the time series into three time-intervals: i.e. \([t_{0},t_{1})\) and \((t_{2},t_{3})\) depicting normality, and \([t_{1},t_{2}]\) depicting a potential shift of distributions. Notice that the red box is the ground truth where the drift occurs. In \([t_{0},t_{1})\) (lower-left) the densest cluster \(C_{1}^{\text{sleep}}\) gets traced, which, as shown, are contained in a specific region; in \([t_{1},t_{2}]\) (upper-central) we detect \(C_{2}^{\text{sleep}}\) whose trend is shifting towards the upper quadrant of the plot indicating a possible drift; in \((t_{2},t_{3})\) (lower-right) the trajectory is again contained within a specific region depicting a new stable state. trends from dynamic clusters (see steps 5-6). In detail, Figure 2 illustrates the process of trajectory generation. As an example, let's refer to a scenario in which we observe the sleep of elder patients, with \(\Delta=1\) day: the red area illustrates how a monitored patient begins sleeping earlier and longer than usual. For visualisation purposes, we divide the observation period7 into three portions: Footnote 7: In this example, we consider only the sleep event type, and two features: its duration and begin time over the days of monitoring. 
* \([t_{0},t_{1})\) and \((t_{2},t_{3}]\) depict the normality of the series, where the first interval precedes the drift and the second corresponds to the new behaviour after the drift; * \([t_{1},t_{2}]\) represents a potential shift of feature distributions. For each \(j\)-th \(\Delta\) interval (each day in this example), we trace the densest cluster to form a trend trajectory of the multivariate time series \(\mathbb{X}\). Knowing that a cluster \(\mathbb{C}_{j}\) comprises a set of \(\mu\)-clusters (see Definition 3.3), we obtain the densest cluster \[\mathcal{C}_{j}=\arg\max_{\mathbb{C}_{j}}\ \frac{\sum_{\mu C_{k}\in\mathbb{C}_{j}}\rho(\mu C_{k})}{|\mathbb{C}_{j}|}\quad\forall\mathbb{C}_{j}\ \text{in the $j$-th $\Delta$ interval} \tag{1}\] where \(\rho(\mu C_{k})\) calculates the density of \(\mu C_{k}\) as described in Equation 3. Visually, one can notice how the trace of the densest clusters in \([t_{0},t_{1})\) and \((t_{2},t_{3}]\) (the "normal" periods) remains within a specific region of the plot (see lower-left and lower-right subplots in Figure 2), while the trace of the densest cluster in \([t_{1},t_{2}]\) (see upper-central subplot) moves diagonally from one extreme to the other. We can exploit this trajectory to determine the periods of drift in the original series. Hence, we represent our trajectory as the centroids of the feature vectors of the events belonging to the densest cluster \(\mathcal{C}_{j}\). Here, we abuse the notation for better readability of Equation 2. To this end, let \(\{e^{1},\ldots,e^{|\mathcal{C}_{j}|}\}\) be the set of events that belong to cluster \(\mathcal{C}_{j}\), and let \(\Phi(k)\) be the feature vector of \(e^{k}\in\mathcal{C}_{j}\). The centroids of all features in the \(j\)-th \(\Delta\) interval are computed as follows: \[q_{j,h}=\frac{1}{|\mathcal{C}_{j}|}\cdot\sum_{k=1}^{|\mathcal{C}_{j}|}\Phi(k)_{h}\ \ \forall h\in[1,|\Phi|] \tag{2}\] where \(\Phi(k)_{h}\) accesses the \(h\)-th position of the feature vector \(\Phi(k)\). Finally, we denote with \[\mathbb{Q}=\begin{bmatrix}q_{1,1}&q_{1,2}&\ldots&q_{1,|\Phi|}\\ \vdots&\vdots&\ddots&\vdots\\ q_{j,1}&q_{j,2}&\ldots&q_{j,|\Phi|}\\ \vdots&\vdots&\ddots&\vdots\\ q_{n,1}&q_{n,2}&\ldots&q_{n,|\Phi|}\end{bmatrix}\] the overall trend trajectory, where \(n\) is the number of \(\Delta\) intervals extracted from the entire monitoring time.
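As a minimal sketch of how Equations 1 and 2 combine into the trend trajectory \(\mathbb{Q}\), the snippet below selects the densest cluster of each \(\Delta\) interval and stacks the corresponding centroids; the data layout (clusters as lists of \(\mu\)-cluster dictionaries carrying a precomputed density \(\rho\)) is an illustrative assumption.

```python
import numpy as np

def densest_cluster(clusters):
    """Eq. (1): pick the cluster whose mu-clusters have the highest mean density."""
    return max(clusters, key=lambda c: np.mean([mc["rho"] for mc in c]))

def interval_centroid(event_features):
    """Eq. (2): centroid q_j of the feature vectors of the events belonging to
    the densest cluster of the j-th Delta interval."""
    return np.asarray(event_features, dtype=float).mean(axis=0)

def trend_trajectory(per_interval_clusters):
    """Stack the per-interval centroids into the n x |Phi| trajectory Q."""
    rows = []
    for clusters in per_interval_clusters:        # one entry per Delta interval
        dense = densest_cluster(clusters)
        events = np.vstack([mc["events"] for mc in dense])
        rows.append(interval_centroid(events))
    return np.vstack(rows)

# Illustrative toy input: two days, the first with two candidate clusters.
day1 = [[{"rho": 0.9, "events": [[8.0, 3], [8.2, 2]]}],
        [{"rho": 0.2, "events": [[5.0, 9]]}]]
day2 = [[{"rho": 0.8, "events": [[7.8, 4], [7.9, 3]]}]]
Q = trend_trajectory([day1, day2])
print(Q)   # one centroid row per Delta interval
```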
In this way, the distributional change between the reference and the detection window is not only based on the current view of the hyperboxes, but rather, it exploits their evolution to check whether the feature dimensions of the hyperbox centroids (\(q_{j,h}\in\mathbb{Q}\ \forall h\in[1,|\Phi|]\ \forall j\in[1,n]\)) vary in time. Notice that the evolution of the \(\lambda\) hyperboxes can be tracked in several ways that we detail below. Therefore, we adopt an ensemble of trackers \(T=\{\tau_{1},\ldots,\tau_{|T|}\}\) allowing us to have several drift detection strategies contributing to a more robust labelling [34] of a particular window. Similarly, we rely on a set of divergence tests \(\Gamma=\{\gamma_{1},\ldots,\gamma_{|\Gamma|}\}\) that contribute in generating the divergence prediction for each base component in the ensemble. To put this into perspective, we have \(|T|\cdot|\Gamma|\) base predictions at each iteration. Using an ensemble of trackers/divergence tests allows DynAmo to interpret slight evolution shifts in the hyperbox dimensions of \(\mathbb{Q}\). In this way, DynAmo is able to _potentially identify drift periods as they are happening_. Finally, as for all ensemble mechanisms, we need to aggregate the base components' prediction. DynAmo is a fluid framework that accepts any consensus function (e.g., averaging the "base" predictions) that might incorporate a threshold \(\sigma\) in binarising the prediction. **Algorithms:** In detail, we track the evolution of the hyperbox feature space delineating the densest centroid \(q_{j,h}\in\mathbb{Q}\ \forall h\in[1,|\Phi|]\ \forall j\in[1,n]\). All the trackers are based on the feature changes in the centroid hyperboxes. More formally, let \(\pi_{x,y}(\mathbb{Q})\) be the selection of the hyperboxes in \(\mathbb{Q}\) that are within the \(x\)-th and \(y\)-th \(\Delta\) interval (i.e., \(\pi_{x,y}(\mathbb{Q})=\{q_{j,h}\ |\ q_{j,h}\in\mathbb{Q}\ \wedge\ x\leq j\leq y\ \forall h\in[1,|\Phi|]\}\)) such that \(x\leq y\)). Here, \(x\) and \(y\) represent two \(\Delta\) intervals in \(\mathbb{Q}\) that summarise the evolution time span of the hyperboxes. In this way, in each iteration, \(x\) and \(y\) move along the time axis, and DynAmo is able to reconstruct the hyperbox evolutions in all feature dimensions \(\phi_{h}\ \forall h\in[1,|\Phi|]\). In other words, DynAmo uses \(\pi_{x,y}(\mathbb{Q})\) to calculate _evolving criteria_ according to the trackers in \(T\) that are then fed to the detection strategies in \(\Gamma\). As mentioned above, we rely on an ensemble of trackers to reconstruct the evolution of the hyperboxes, and, then use it to detect drift anomalies. Here, we define four hyperbox trackers - i.e., \(|T|=4\): * _The girth of the hyperbox (\(\tau_{1}\))._ We indicate with \(\eta_{x,y}\) the product of all feature spans in the hyperbox. In other words, \(\eta_{x,y}=\prod_{\phi_{h}\in\Phi}\max_{\phi_{h}}\pi_{x,y}(\mathbb{Q})-\min_{ \phi_{h}}\pi_{x,y}(\mathbb{Q})\). * _The difference between two hyperboxes (\(\tau_{2}\))._ We measure the element-wise difference between two hyper boxes \(\pi_{x,y}(\mathbb{Q})\) and \(\pi_{x^{\prime},y^{\prime}}(\mathbb{Q})\) where \(x^{\prime}\geq y\). Hence, we denote with \(\max_{x,y}^{x,y^{\prime}}\in\mathbb{R}^{|\Phi|}\) and \(\min_{x,y}^{x^{\prime},y^{\prime}}\in\mathbb{R}^{|\Phi|}\) the vectors that represent the difference of the maximum and the minimum between the two hyperboxes, respectively. 
In other words, \(\max_{x,y}^{x,y^{\prime}}=\{\max_{\phi_{b}}\pi_{x,y}(\mathbb{Q})-\max_{\phi_{b }}\pi_{x^{\prime},y^{\prime}}(\mathbb{Q})\mid\forall\phi_{b}\in\Phi\}\), and \(\min_{x,y}^{x,y^{\prime}}=\{\min_{\phi_{b}}\pi_{x,y}(\mathbb{Q})-\min_{\phi_{b }}\pi_{x^{\prime},y^{\prime}}(\mathbb{Q})\mid\forall\phi_{b}\in\Phi\}\). We can denote with \(M_{x,y}^{x^{\prime},y^{\prime}}\in\mathbb{R}^{|\Phi|,2}\) the matrix that contains the two vectors, \(\max_{x,y}^{x,y^{\prime}}\) and \(\min_{x,y}^{x,y^{\prime}}\), in its columns. Here, it is interesting to notice that the sign illustrates shrinkage/expansion phenomena happening for feature \(\phi_{h}\in\Phi\) through time; meanwhile, its magnitude is represented by the difference value. * _The Frobenius norm of the girth of the hyperbox_ (\(\tau_{3}\)). Instead of multiplying the different feature span in the hyperboxes \(\pi_{x,y}(\mathbb{Q})\), we calculate the norm. In other words, \(\tilde{\eta}_{x,y}=\sqrt{\sum_{\phi_{h}\in\Phi}(\max_{\phi_{h}}\pi_{x,y}( \mathbb{Q})-\min_{\phi_{h}}\pi_{x,y}(\mathbb{Q}))^{2}}\). * _The Frobenius norm of the differences between the hyperboxes (\(\tau_{4}\))._ Similarly to the norm of the girth, \(\widetilde{\max}_{x,y}^{x^{\prime},y^{\prime}}=\sum_{\phi_{h}}(\max_{\phi_{h }}\pi_{x,y}(\mathbb{Q})\quad-\quad\max_{\phi_{h}}\pi_{x^{\prime},y^{\prime}}( \mathbb{Q}))^{2}\) and \(\widetilde{\min}_{x^{\prime},y^{\prime}}=\sum_{\phi_{h}}(\min_{\phi_{h}}\pi_{x,y}(\mathbb{Q})-\min_{\phi_{h}}\pi_{x^{\prime},y^{\prime}}(\mathbb{Q}))^{2}\). Hence, we denote with \(\widetilde{M}_{x,y}^{x^{\prime},y^{\prime}}=\{\widetilde{\max}_{x,y}^{x^{ \prime},y^{\prime}},\widetilde{\min}_{x,y}^{x^{\prime},y^{\prime}}\}\) the vector with the two norms calculated previously. We use the trackers above to build the evolution of the feature space in the two sliding windows (see Algorithm 1). The algorithm shows how the previous and current windows move along the time axis (i.e., the rows in \(\mathbb{Q}\)). Notice that we trace the evolution of the hyperbox feature space in an intra-window fashion (see lines 7-27) for all the trackers \(\tau\in T\). We use a sliding mechanism handled by the index \(j\) and the window moving step parameter \(\delta\). For each \(j\)-th \(\Delta\) interval, we use \(populate_{ref}/populate_{det}\) to populate the reference/detection window dictionaries with the hyperbox evolution traces as described above. Notice that \(\tau\).track(\(W_{prev},W_{curr}\)) tracks the hyperbox evolution - see the previous four trackers. In our scenario, \(\tau\) uses \(W_{curr}\) only when we are tracking the difference between two hyperboxes (see \(\tau_{2}\)) or the norm of the differences of the two hyperboxes (see \(\tau_{4}\)). However, our framework permits users to define ad-hoc trackers and include them as base components in the ensemble mechanism. The guard \(populate_{ref}/populate_{det}\) gets toggled once the reference/detection window reaches the window size of \(\lfloor\frac{\ell}{2}\rfloor\). After \(|R_{\tau}|=|D_{\tau}|=\lfloor\frac{L}{2}\rfloor\)\(\forall\tau\in T\), we can detect anomaly drifts (line 28-29). We rely on the **detect** subroutine (see Algorithm 2) to detect drifts according to the evolution of the tracked hyperboxes and the divergence strategies \(\gamma\in\Gamma\). The **detect** subroutine outputs a matrix \(\hat{Y}\in\{0,1\}^{|\Gamma|,|T|}\) s.t. 
each entry \(\hat{Y}[i,k]\) is the binary outcome of applying the divergence test \(\gamma_{i}\in\Gamma\) to the reference and detection traces \(R_{\tau_{k}}\) and \(D_{\tau_{k}}\) produced by tracker \(\tau_{k}\in T\).

```
0: \(\mathbb{Q}\in\mathbb{R}^{n,|\Phi|}\), \(\delta>0\), \(\lambda\geq 0\), \(4\leq\ell\leq\lfloor\frac{n}{2}\rfloor\), \(0<\sigma<1\), \(\Gamma=\{\gamma_{1},\ldots,\gamma_{|\Gamma|}\}\), \(T=\{\tau_{1},\ldots,\tau_{|T|}\}\)
1: \(\hat{y}\leftarrow 0^{n-\lambda}\) \(\triangleright\) initialise a zero vector of length \(n-\lambda\)
2: \(i,j,k\leftarrow 1,2,1\)
3: \(populate_{ref},populate_{det}\leftarrow true,false\)
4: \(R\leftarrow\{\}\) \(\triangleright\) dictionary that will contain the tracked hyperboxes for each tracker \(\tau\in T\) in the reference window
5: \(D\leftarrow\{\}\) \(\triangleright\) dictionary that will contain the tracked hyperboxes for each tracker \(\tau\in T\) in the detection window
6: \(x\leftarrow\max\{1,i-\lambda\}\)
7: \(W_{prev}\leftarrow\pi_{x,i}(\mathbb{Q})\)
8: while \(j<n\) do
9:   \(y\leftarrow\max\{\delta,j-\lambda\}\)
10:  \(W_{curr}\leftarrow\pi_{y,j}(\mathbb{Q})\)
11:  for \(\tau\in T\) do
12:    if \(populate_{ref}\) then
13:      \(R_{\tau}\leftarrow R_{\tau}\cup\tau\).track(\(W_{prev},W_{curr}\))
14:      if \(k=\lfloor\frac{\ell}{2}\rfloor\) then
15:        \(populate_{ref},populate_{det}\leftarrow false,true\)
16:        \(k\leftarrow 0\)
17:      end if
18:    else if \(populate_{det}\) then
19:      \(D_{\tau}\leftarrow D_{\tau}\cup\tau\).track(\(W_{prev},W_{curr}\))
20:      if \(k=\lfloor\frac{\ell}{2}\rfloor\) then
21:        \(populate_{det}\leftarrow false\)
22:        \(k\leftarrow 0\)
23:      end if
24:    else
25:      break
26:    end if
27:  end for
28:  if \(\neg(populate_{ref}\lor populate_{det})\) then
29:    \(\hat{Y}\leftarrow\) detect(\(R,D,\Gamma,T\))
30:    \(c\leftarrow\) consensus(\(\hat{Y},\sigma\))
31:    \(\hat{y}[j-\lfloor\frac{\ell}{2}\rfloor:j]=c\)
32:    if \(c=1\) then
33:      \(W_{prev}\leftarrow W_{curr}\)
34:    end if
35:    \(i\leftarrow j\)
36:    \(populate_{ref},populate_{det}\leftarrow true,false\)
37:    \(k\leftarrow 0\)
38:    \(R,D\leftarrow\emptyset,\emptyset\)
39:  end if
40:  \(j,k\leftarrow j+\delta,k+1\)
41: end while
42: return \(\hat{y}\)
```
**Algorithm 1** **DynAmo**: Ensembles of window-based trackers and drift checkers.

Additionally, we optimised the running time of the original reference-detection window schema proposed in [28] to be linear in terms of the trajectory dimension. In this way, our strategy makes a single pass and ensures correct tracking of the evolution of the feature space in the two windows. Lastly, our strategy is fully unsupervised, requiring only \(\ell\) data points to build the two windows for detection. If we assume that calculating maximums, minimums, norms, summations, and products for the trackers, and performing the consensus aggregation, can be done in constant time, then the time complexity of the overall algorithm is \(O(n|\Gamma||T|^{2})\). In detail, the \(O(n)\) is the loop through the \(\Delta\) intervals in \(\mathbb{Q}\) with a step of \(\delta\). Then we iterate through the tracker list, which costs \(O(|T|)\). Hence, without considering the **detect** subroutine, the time complexity amounts to \(O(n|T|)\). The **detect** subroutine has a time complexity of \(O(|\Gamma||T|)\). The space complexity of the entire algorithm is \(O(n|\Phi|+n+\ell|T|+|\Gamma||T|)=O(n|\Phi|)+O(|T|(\ell+|\Gamma|))\), where \(O(n|\Phi|)\) is the space required for \(\mathbb{Q}\); \(O(n)\) accounts for maintaining the \(\hat{y}\) vector; \(O(\ell|T|)\) accounts for saving \(R\) and \(D\); and \(O(|\Gamma||T|)\) is the space complexity of maintaining matrix \(\hat{Y}\) in the **detect** subroutine. Here, we assume that maintaining \(W_{ref}\) and \(W_{det}\) has a space complexity of \(O(1)\), since we can use pointers to the trend trajectory \(\mathbb{Q}\) and update them accordingly once a drift occurs.
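The four hyperbox trackers of the ensemble can be summarised in a short sketch; the function names and the toy trajectory are illustrative, and the sketch assumes that \(\pi_{x,y}(\mathbb{Q})\) simply selects the rows of \(\mathbb{Q}\) between the \(x\)-th and \(y\)-th interval.

```python
import numpy as np

def hyperbox(Q, x, y):
    """pi_{x,y}(Q): per-feature minimum and maximum over the selected rows."""
    window = Q[x:y + 1]
    return window.min(axis=0), window.max(axis=0)

def tau1_girth(Q, x, y):
    """tau_1: product of the feature spans of the hyperbox."""
    lo, hi = hyperbox(Q, x, y)
    return np.prod(hi - lo)

def tau2_difference(Q, x, y, xp, yp):
    """tau_2: element-wise differences of maxima and minima of two hyperboxes
    (the sign encodes shrinkage/expansion, the magnitude its size)."""
    lo_a, hi_a = hyperbox(Q, x, y)
    lo_b, hi_b = hyperbox(Q, xp, yp)
    return np.stack([hi_a - hi_b, lo_a - lo_b], axis=1)   # |Phi| x 2 matrix

def tau3_girth_norm(Q, x, y):
    """tau_3: norm of the feature spans instead of their product."""
    lo, hi = hyperbox(Q, x, y)
    return np.linalg.norm(hi - lo)

def tau4_difference_norm(Q, x, y, xp, yp):
    """tau_4: squared norms of the max- and min-differences of two hyperboxes."""
    lo_a, hi_a = hyperbox(Q, x, y)
    lo_b, hi_b = hyperbox(Q, xp, yp)
    return np.array([np.sum((hi_a - hi_b) ** 2), np.sum((lo_a - lo_b) ** 2)])

# Illustrative trajectory with 6 daily centroids and 2 features.
Q = np.array([[8.0, 3], [8.1, 2], [7.9, 3], [7.0, 6], [6.5, 8], [6.1, 9]], float)
print(tau1_girth(Q, 0, 2), tau1_girth(Q, 3, 5))   # girth before/after a shift
print(tau2_difference(Q, 0, 2, 3, 5))             # expansion per feature
```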
## 4 Datasets To the best of our knowledge, there are no works in the literature that evaluate on datasets containing behavioural trajectories annotated with drifts during the time of monitoring. Rather, the literature concentrates on activity recognition in smart home environments [35, 36, 37, 38]. However, we rely on the reproduction of ARAS [35], VanKastereen [38], and PolimiHouse as proposed in [39] to evaluate DynAmo. These datasets have a synthetically generated drift period, that we attach to the end of the normal period. Each day of simulation contains activities of daily living and the scheduling timetable (wake-up call) of each sensor. Each dataset comprises of 90 days of a virtual inhabitant's life and has drift periods compatible with dementia symptoms. Additionally, we use the E-Linus (EL) dataset consisting of daily routines of two patients with symptomatic senile social isolation disorders. We created this dataset by collecting activity data within an ambient assisted living environment for elder people, during an industry-driven project, as detailed in [40]. We note that, although two patients may seem a small number, we consider them as single datasets that present different anomaly types (duration, sequence, start time, daily frequency, etc.) on 6 different activities (sleep, hygiene, etc.). Moreover, the challenge here is to learn a model of normality and abnormality tailored to each patient's peculiarities, since the conditions of the two selected patients, and their habitual activities were very different, i.e., one with regular and the other with deregulated sleep patterns. Given the relatively short monitoring period, we artificially extend these sequences over longer periods, based on small realistic perturbations of the observed routines relying on the tool proposed in [41]. Furthermore, we injected various types of drifts by perturbating the features according to well-defined rules specified with the help of geriatricians participating in the E-Linus project. For each patient (ELP1 and ELP2), we generated two datasets: D, with perturbations on the sleep duration, and I, with perturbations on the number of sleep interruptions. We describe the feature processing in Appendix A. Table II illustrates the characteristics of the datasets. For the sake of space, we report only the characteristics of the _sleep_ event and two features, _Duration (D) and Interruptions (I)_. Notice how the first patient (P1) in EL has more regular sleep patterns than the second patient (P2) - see 3rd and 4th column. Additionally, we report the average duration of sleep and the interruption duration when the drift occurs (see the last two columns). We invite the reader to notice that the synthetic datasets clearly define an easier scenario to detect abnormal periods of activities. In particular, for PolimiHouse (PH) and ARAS (AS), the sleeping duration decrements noticeably and the interruptions take, approximately, twice as much as in the normal period. In VanKastareen (VK) the durations of interruptions and sleep increase, respectively, of \(\sim\!164\%\) and 14%. Therefore, for the synthetic datasets, we expect that approaches based on a fixed reference window will have an advantage over others because the drift period happens near the end of the monitoring time. Contrarily, for EL we expect that fixed-reference window approach to underperform w.r.t. sliding window approaches since the distribution of the sleeping patterns continues changing inside the drift period. 
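As an illustration of the perturbation-based drift injection used to extend the E-Linus sequences, the sketch below adds a linearly growing shift to a simulated sleep-duration series and records the ground-truth drift labels; the drift shape, magnitude, and starting day are illustrative choices and do not reproduce the clinical rules defined with the geriatricians.

```python
import numpy as np

def inject_gradual_drift(series, start, magnitude):
    """Illustrative drift injection: from 'start' onwards, the feature is
    shifted by a ramp growing linearly up to 'magnitude', mimicking a gradual
    behavioural change (e.g. progressively shorter sleep)."""
    drifted = series.astype(float).copy()
    ramp = np.linspace(0.0, magnitude, len(series) - start)
    drifted[start:] += ramp
    labels = np.zeros(len(series), dtype=int)
    labels[start:] = 1                      # ground-truth drift period
    return drifted, labels

rng = np.random.default_rng(7)
sleep_hours = rng.normal(7.5, 0.3, 1460)    # four years of daily sleep durations
series, y_true = inject_gradual_drift(sleep_hours, start=876, magnitude=-1.5)
print(series[:3], series[-3:], y_true.sum(), "drift days")
```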
## 5 Experiments

Here, we describe the experiments performed on all datasets and compare DynAmo with a number of state-of-the-art systems. Section 5.1 lists the compared methods and explains the experimental and hyperparameter settings to support reproducibility for future research. Section 5.2 describes the performances of the compared methods and provides detailed insights and limitations of each of them.

### _Compared methods, experimental setup, metrics, and hyperparameters_

**Compared methods**: We compare with baseline strategies such as Keep It Simple (KIS)8, BinSeg [42], BottomUp [43], PELT [44, 45], Window [46], IKSSW9, KernelCPD [47]. We also compare with the following state-of-the-art methods10: KLCPD [21], MD3 [10], MD3-EGM [17], STUDD [26], D3 [48], NN-DVI [19], ERICS [25], and CDLEEDS [27]. We refer the reader to Section 2 and Table I for a summary description of the compared methods, and to Appendix E for a detailed description. Footnote 9: A sliding window approach that extends IKS-bdd. Footnote 10: We searched for the implementation of all the papers in Table I; however, [12, 18, 24] are not replicable/reproducible, whereas [13, 14, 15, 16, 20, 22, 23] do not have a publicly available code repository. **Hyperparameters and reproducibility of DynAmo:** To maintain a fully unsupervised drift detection approach, we do not divide the input trajectory into sets for training, validation, and testing. Instead, we label the windows in an online fashion. We performed a Bayesian optimisation - see Appendix D for more details - for 100 trials and achieved the best performances for: * _EL_ by setting \(\lambda=25\), \(\delta=10\), \(\ell=30\), and \(\sigma=0.2666\); * _Synthetic datasets (PH, AS, VK)_ by setting \(\lambda=4\), \(\delta=4\), \(\ell=17\), and \(\sigma=0.3422\). We use two different drift detection criteria, i.e. \(\Gamma=\{\gamma_{1},\gamma_{2}\}\). Recall that we are examining one activity type at a time. Thus, \(\gamma_{1}(\mathbf{a},\mathbf{b})=\mathbb{I}\left[\sum_{i=2}^{\lfloor\frac{\ell}{2}\rfloor}\left|\mathbf{a}_{i}-\mathbf{a}_{i-1}\right|<\sum_{i=2}^{\lfloor\frac{\ell}{2}\rfloor}\left|\mathbf{b}_{i}-\mathbf{b}_{i-1}\right|\right]\) and \(\gamma_{2}(\mathbf{a},\mathbf{b})=\mathbb{I}\left[\neg\left(\mu(\mathbf{a})-\sigma(\mathbf{a})<\mu(\mathbf{b})<\mu(\mathbf{a})+\sigma(\mathbf{a})\right)\right]\), where \(\mu(\cdot)\) and \(\sigma(\cdot)\) calculate the mean and the standard deviation of the input vector, respectively. We set the consensus function to the average voting strategy: i.e. if the average exceeds \(\sigma\), then a drift is signalled; otherwise, it is considered part of the normal behaviour distribution. Finally, we use a daily \(\Delta\) interval to generate the input trajectory \(\mathbb{Q}\). **Fair comparison policy**: To make sure that all of the compared methods are in the same scenario - i.e., a situation in which it is not guaranteed that the reference (training) window represents normality, nor is it known whether any drifts start to appear during this window - we need to equalise the amount of "history" that each method can consider before making predictions. This phenomenon is even more pronounced with (semi)supervised strategies that need a minimum amount of data points reserved for training purposes. It is natural that we reserve only \(\lfloor\frac{\ell}{2}\rfloor\) data points for the compared methods for "training" purposes, such that they have the same view of the distributional changes as DynAmo has in each iteration.
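The two divergence criteria and the average-voting consensus just defined can be written compactly as follows; the traces \(\mathbf{a}\) and \(\mathbf{b}\) stand for the reference and detection traces produced by a single tracker, and the example values are illustrative.

```python
import numpy as np

def gamma_1(a, b):
    """Drift if the detection trace b varies more (sum of absolute consecutive
    differences) than the reference trace a."""
    return int(np.abs(np.diff(a)).sum() < np.abs(np.diff(b)).sum())

def gamma_2(a, b):
    """Drift if the mean of b leaves the one-standard-deviation band of a."""
    mu_a, sd_a = a.mean(), a.std()
    return int(not (mu_a - sd_a < b.mean() < mu_a + sd_a))

def consensus(votes, sigma=0.3422):
    """Average voting over the base predictions: signal a drift when the mean
    vote exceeds the threshold sigma."""
    return int(np.mean(votes) > sigma)

# Illustrative traces produced by one tracker on the two windows.
reference = np.array([1.0, 1.1, 0.9, 1.0, 1.05])
detection = np.array([1.2, 1.6, 2.1, 2.8, 3.4])
votes = [gamma_1(reference, detection), gamma_2(reference, detection)]
print(votes, "->", "drift" if consensus(votes) else "normal")
```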
Notice that in STUDD, MD3, MD3-EGM, and D3, this is inherently inhibited due to the fact that the underlying binary classifiers require to have at least one observation belonging to the anomalous class11. Therefore, these strategies cannot be fully aligned to fairly compare against the others. Besides setting the window width and the moving step after each iteration for all compared strategies, we leave the rest of the parameters unvaried12. We invite the reader to see Appendix F for more details on how to align each method for comparison purposes. Footnote 11: We reserve around 400 (\(\sim\)27\(\%\)) and 100 (\(\sim\)61\(\%\)) days of history, respectively, for EL and the synthetic datasets, to make sure that the underlying binary classifiers receive a distribution of anomalous events. Footnote 12: Note that the compared systems set the length of the observation window optimally for every single dataset, while we use the same value for all synthetic datasets. In addition, compared systems use much longer windows, order of 100 days, which impose a considerable constraint in real-life scenarios. **Evaluation metrics:** We use F1 scores to evaluate all the methods. For methods that output probabilities, we binarise the outcome according to 10 thresholds chosen from the uniform distribution \(\mathcal{U}_{a}^{b}\), we calculate the performances w.r.t. the ground truth, and, finally, report the average. Here \(a\) and \(b\) depict the minimum and the maximum probabilities in the predicted outcome, respectively. The code to replicate/reproduce with easy steps our solution and experiments is available online13. \begin{table} \begin{tabular}{l l|c c c c c c c} \hline \hline \multicolumn{2}{c|}{Datasets} & \multicolumn{3}{c}{Monitoring days} & \% of drift & Avg daily duration of sleep (\(\hbar\)) & Avg. daily duration of sleep interruptions (mins) & When does the patient go & Avg. duration of sleep during drift (\(\hbar\)) & Avg. interruptions of sleep during drift (\(\hbar\)) \\ \hline \multirow{3}{*}{Real} & ELP1 & D & 1,460 & 40.00\% & 8.96 \(\pm\) 1.24 & 9.22 \(\pm\) 3.76 & 22.04\(\pm\) 0.36 \(\pm\) 0.37.54 & 9.22 \(\pm\) 0.78 & 9.17 \(\pm\) 3.69 \\ & 1 & 1,460 & 40.00\% & 7.54 \(\pm\) 0.22 & 12.76 \(\pm\) 8.05 & 22.54\(\pm\) 0.090\(\pm\)0.01 & 7.52 \(\pm\) 0.25 & 15.19 \(\pm\) 6.61 \\ \cline{2-8} & ELP2 & D & 1,460 & 40.00\% & 8.46 \(\pm\) 1.51 & 22.90 \(\pm\) 1.82 & 21.45\(\pm\) 0.57\(\pm\) 0.57\(\pm\) 31 & 8.73 \(\pm\) 1.18 & 23.06 \(\pm\) 1.27 \\ & I & 1,460 & 40.00\% & 6.89 \(\pm\) 0.93 & 35.84 \(\pm\) 18.93 & 22.162\(\pm\) 0.043.56 & 6.86 \(\pm\) 0.95 & 39.61 \(\pm\) 16.82 \\ \hline \multirow{3}{*}{Synthetic} & PH & 170 & 47.08\% & 7.51 \(\pm\) 1.37 & 5.84 \(\pm\) 20.86 & 22.19\(\pm\) 0.50\(\pm\)32 & 7.08 \(\pm\) 1.72 & 10.86 \(\pm\) 28.75 \\ & AS & 171 & 47.37\% & 7.31 \(\pm\) 1.54 & 4.64 \(\pm\) 20.94 & 22.23\(\pm\) 0.03\(\pm\)95.54 & 6.58 \(\pm\) 1.54 & 7.71 \(\pm\) 28.86 \\ & VK & 151 & 39.23\% & 7.97 \(\pm\) 3.37 & 9.60 \(\pm\) 68.16 & 22.28\(\pm\) 19.04\(\pm\) 0.492.9 & 9.11 \(\pm\) 5.27 & 25.31 \(\pm\) 10.877 \\ \hline \hline \end{tabular} \end{table} TABLE II: The dataset characteristics for each monitoring scenario in E-Linus (obtained from real data with realistic perturbations) and the other synthetic datasets. For simplicity purposes, we analyse only the activity of sleep. 
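The threshold-averaged F1 protocol described above for probabilistic detectors can be sketched as follows; the random seed, the toy ground truth, and the anomaly scores are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import f1_score

def threshold_averaged_f1(y_true, scores, n_thresholds=10, seed=0):
    """Binarise probabilistic outputs with 10 thresholds drawn uniformly
    between the minimum and maximum predicted score, compute the F1 score
    against the ground truth for each threshold, and report the average."""
    rng = np.random.default_rng(seed)
    thresholds = rng.uniform(scores.min(), scores.max(), n_thresholds)
    f1s = [f1_score(y_true, (scores >= t).astype(int)) for t in thresholds]
    return float(np.mean(f1s))

# Illustrative ground truth and anomaly scores for a short monitoring period.
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
scores = np.array([0.1, 0.2, 0.15, 0.3, 0.7, 0.8, 0.65, 0.9])
print(round(threshold_averaged_f1(y_true, scores), 3))
```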
TABLE III: F1 scores of DynAmo and all compared methods on the synthetic and E-Linus datasets, together with the average \(\mu\)(F1) over all datasets. ### _Discussions_ **DynAmo outperforms the state of the art, with a 36.9% improvement in average F1 score over the second-best-performing method:** Table III shows the performances in terms of F1 scores for each dataset. As shown in column \(\mu\)(F1), on average DynAmo surpasses the second-best model CDLEEDS by 36.9%. We remark that CDLEEDS is one of the most recent approaches to drift detection. In the synthetic scenario, DynAmo can capture the normality of the trajectory and make predictions without having any knowledge of what an anomaly is. Only in the PH dataset does MD3 perform better, since its underlying supervised SVM classifier relies on explicit labelling of anomalies/normality. In the challenging and realistic scenario of EL, DynAmo has consistently good performance on both patients without being affected by their heterogeneous behavioural patterns. D3 and ERICS fail to perform because the normal behaviour of P2 is oscillatory (i.e., the routine is not stable). Besides CDLEEDS, other (semi)supervised approaches perform poorly because the "normality" is tainted with variable yet normal (for the patient) behavioural patterns, which are misleading. Additionally, in these scenarios, the ability of DynAmo to capture the evolution of the feature space within the two windows results in a boost in performance compared to CDLEEDS. Finally, DynAmo is the only detector that has consistently better performance than KIS without making any assumptions on the underlying data distribution. These characteristics make DynAmo suitable for complex and realistic domains like E-Linus. **DynAmo is robust to distributional changes within the drift period:** During the drift period, the patient behaviour continues to change until a new "normality" pattern is adopted. We want to demonstrate that DynAmo is robust against any setting of the reference and detection windows throughout the input trajectory. To this end, Figure 3 illustrates the difference in terms of F1 scores between DynAmo and CDLEEDS. On the x-axis, we vary the starting day of the monitoring14. Additionally, positive (blue) values indicate that DynAmo outperforms CDLEEDS, whereas negative (red) values show that CDLEEDS is better. Notice how DynAmo outperforms CDLEEDS when the beginning of the monitoring is within the drift period. 
Additionally, because the routine can change within the drift period, DynAmo can capture these distributional shifts by looking back in time for up to \(\lambda\) days. For P2, we argue that CDLEEDS has the upper hand during the first portion of the trajectory since it updates its training with each new behavioural profile. Contrarily, being unsupervised, DynAmo lacks training/updating upon receiving a new daily profile which can lead to potential misclassifications. For the synthetic datasets, it is interesting to notice how - in line with what is reported in Table II - for VK, both strategies are competitive until the near end of the behavioural sequence. In this case, neither DynAmo nor CDLEEDS are capable of capturing anomalous behaviours when the beginning day of monitoring is situated within the drift period. Contrarily, for PH and AS, DynAmo clearly outperforms CDLEEDS. Footnote 14: We vary the monitoring time from 0 to 1200 days with a step of 50 in EL; and, from 0 to 105 days with a step of 5 in the synthetic datasets. **More daily hyperboxes for testing the distributional shifts imply more confident detections:** Figure 4 depicts the contribution of the amount of daily hyperboxes \(\ell\) used to populate the two windows. Here, we set \(\ell\in[2,40]\) for EL and \(\ell\in[2,30]\) for the synthetic datasets, and leave the other hyperparameters unchanged. We report with a black circle the average performances for all datasets reached according to the Bayesian optimisation (see Appendix D). Additionally, a blue circle illustrates similar performances as the ones reported from the optimisation with a different choice of the window size \(\ell^{\prime}\). Red circles represent the choices of the window size which produce better average results than those reported in Table III leaving the other hyperparameters unchanged. This phenomenon is present specifically in the synthetic datasets which leads us to believe that the optimisation strategy reached a local minimum, pruning the trials that might have generated a better combination of hyperparameters with higher average F1 scores on the datasets. For the synthetic datasets, the F1 curve keeps increasing due to the homoscedasticity of the "normal" period in the behavioural sequence. Nevertheless, we need to cope with _real-world critical scenarios where the promptness of prediction is a crucial aspect and not much time is spent in training/updating the models_. Hence, without loss of generality, the fewer daily profiles needed to make predictions, the more robust the system is. We invite the reader to notice that \(\lfloor\frac{\ell}{2}\rfloor=15\) days are a reasonable amount of time to build the reference/detection window (i.e., only \(\sim 1\%\) of the total monitoring days) in E-Linus. Similarly for the synthetic datasets, although we can reach better F1 scores with larger \(\ell\) - see the red dots - we argue that coping with the cold start problem and capping the window size is more beneficial. **A short/medium term look-back has benefits in dealing with recurrent normal behaviour:** Figure 5 illustrates the contribution of the look-back amount \(\lambda\) used to trace the evolution of the feature hyperboxes within the same window. The circles have the same meaning as presented in the previous ablation study (see Figure 4), now with \(\lambda\) as the parameter of interest. Notice that in EL a large \(\lambda\) degrades the performances in almost all cases, besides ELP2-I. 
In particular, we argue that maintaining the history of the \(\lambda\) days prior to the current one might help obtain a complete overview of the hyperbox evolution in time, but it does not always represent the current behavioural situation due to outdated routine activities. ELP2-I presents a sawtooth-like trend, which leads us to believe that the noise in the daily routines does not permit DynAmo to build an effective evolutionary view of the feature hyperboxes in time (see Appendix B). Additionally, we can notice that \(\lambda\in[1,30]\) yields a substantial performance gain w.r.t. \(\lambda=0\) because daily routines tend to re-occur according to a specific seasonality trend. More specifically, \(\lambda\) allows DynAmo to learn this intrinsic and latent seasonality in the evolution of the feature hyperboxes (e.g., sleeping less in hot seasons). Contrarily, the synthetic datasets do not show a clear shared monotonicity of the F1 scores when varying \(\lambda\). However, it is interesting to notice that AS does not rely on \(\lambda\) to improve performance, which contradicts our intuition that looking backwards helps in detecting drift anomalies. Moreover, one can notice how looking too far back leads to worse performance than not looking back at all. For this reason, a trade-off between \(\lambda\) (past), \(\ell\) (current and future) and the length of the trajectory is necessary for real-world complex scenarios. ## 6 Conclusion This paper presented a drift anomaly detection framework for multivariate symbolic sequences, such as human behavioural patterns. Our approach, DynAmo, is based on dynamically clustering the monitored events of the same type with a selected frequency (e.g., days or weeks), and generating a trajectory by extracting features from the centroids of the densest clusters. Finally, DynAmo exploits an ensemble of trackers/divergence tests to predict a drift on the reference and detection windows. A notable feature of the proposed approach is that it is fully unsupervised and can detect drift periods as they are happening, even during the initial observation period. Furthermore, DynAmo copes with real-world critical scenarios where the promptness of prediction is crucial, since it can build a reference/detection behavioural model after only two weeks of observation. In the proposed comparative experiments, we have shown that DynAmo, on average, outperforms the second-best model by almost 40%. Furthermore, it is robust to distributional changes within the drift periods. A limitation of DynAmo is that it does not yet integrate point and drift anomalies into a single detection framework, although it can identify potential outliers as low-density peripheral clusters (see Section 3.3). We leave this extension to future studies.
2310.06039
Rescuing Gravitational-Reheating in Chaotic Inflation
We show, within the single-field inflationary paradigm, that a linear non-minimal interaction $\xi\,M_P\,\phi\,R$ between the inflaton field $\phi$ and the Ricci scalar $R$ can result in successful inflation that concludes with an efficient heating of the Universe via perturbative decays of the inflaton, aided entirely by gravity. Considering the inflaton field to oscillate in a quadratic potential, we find that $\mathcal{O}(10^{-1}) \lesssim \xi \lesssim \mathcal{O}(10^2)$ is required to satisfy the observational bounds from Cosmic Microwave Background (CMB) and Big Bang Nucleosynthesis (BBN). Interestingly, the upper bound on the non-minimal coupling guarantees a tensor-to-scalar ratio $r \gtrsim 10^{-4}$, within the range of current and future planned experiments. We also discuss implications of dark matter production, along with the potential generation of the matter-antimatter asymmetry resulting from inflaton decay, through the same gravity portal.
Basabendu Barman, Nicolás Bernal, Javier Rubio
2023-10-09T18:00:07Z
http://arxiv.org/abs/2310.06039v2
# Rescuing Gravitational-Reheating ###### Abstract We show, within the single-field inflationary paradigm, that a linear non-minimal interaction \(\xi\,M_{P}\,\phi\,R\) between the inflaton field \(\phi\) and the Ricci scalar \(R\) can result in successful inflation that concludes with an efficient heating of the Universe via perturbative decays of the inflaton, aided entirely by gravity. Considering the inflaton field to oscillate in a quadratic potential, we find that \(\mathcal{O}(10^{-1})\lesssim\xi\lesssim\mathcal{O}(10^{2})\) is required to satisfy the observational bounds from Cosmic Microwave Background (CMB) and Big Bang Nucleosynthesis (BBN). Interestingly, the upper bound on the non-minimal coupling guarantees a tensor-to-scalar ratio \(r\gtrsim 10^{-4}\), within the range of current and future planned experiments. We also discuss implications of dark matter production, along with the potential generation of the matter-antimatter asymmetry resulting from inflaton decay, through the same gravity portal. ###### Contents * 1 Introduction * 2 Inflation with a Linear Non-minimal Coupling * 3 Heating with a Linear Non-minimal Coupling * 3.1 Producing the standard model bath * 3.2 Dark matter from inflaton decay * 3.3 Leptogenesis from inflaton decay * 4 Conclusions * A Inflaton Interactions and Decays * B CP Asymmetry ## 1 Introduction Inflation serves as a well-established framework that harmoniously aligns with our empirical observations, offering elegant solutions to the puzzles within the hot Big Bang model [1; 2]. In its simplest avatar, the inflationary stage is driven by a slowly rolling scalar field whose energy density dominates the Universe at some early epoch and is eventually converted into a radiation bath, (re)heating the Universe and signaling the onset of radiation domination [3; 4; 5; 6]. Depending on the model under consideration, this relocation of energy can take place via perturbative decays [7; 8; 9] or involve highly nonlinear and non-perturbative effects such as parametric resonance [10; 11; 12; 13], tachyonic instabilities [14; 15; 16; 17; 18; 19], oscillon formation [20; 21; 22; 23; 24] and turbulent energy cascades [25; 26], in isolation or co-existence [27; 28]. As has recently been pointed out [29], an irreducible Planck suppressed coupling between all matter fields and gravity can lead to gravity-mediated heating, which has been named as "gravitational reheating" scenario. As shown in Refs. [29; 30; 31; 32], for an inflaton \(\phi\) oscillating in a monomial potential \(V(\phi)\propto\phi^{k}\), the minimal gravitational heating scenario, where a pair of inflaton condensate excitations scatters via massless gravitons into standard model (SM) particles (like the Higgs boson), requires \(k>9\). Interestingly enough, this bound can be relaxed to \(k>4\) if one introduces a non-minimal _quadratic_ coupling between gravity and the scalars of the theory. However, gravity-mediated gravitational heating through 2-to-2 scattering remains still not viable if the inflaton oscillates at the minimum of a quartic (\(k=4\)) or a quadratic (\(k=2\)) potential. In this work, we will explore a scenario where successful inflation, together with heating, can be achieved through the non-minimal _linear_ interaction \(\xi\,M_{P}\,\phi\,R\) between the inflaton field \(\phi\) and the Ricci scalar \(R\), where \(M_{P}\) is the reduced Planck mass and \(\xi\) the non-minimal coupling. 
In particular, we will assume the inflaton field to oscillate in a simple quadratic potential, showing explicitly that this setting can give rise to an adequate number of \(e\)-folds of inflation and to the onset of radiation domination prior to Big Bang Nucleosynthesis (BBN) [33; 34; 35; 36; 37; 38], such that the standard cosmological lore is not hampered. Interestingly, both the inflationary and heating dynamics are governed by a single free parameter \(\xi\). Moreover, the inadequacy of the SM in offering a viable dark matter (DM) candidate necessarily calls for fields beyond the SM. Measurements of anisotropies in the cosmic microwave background radiation (CMB) provide the most precise determination of the DM relic density, usually expressed as \(\Omega_{\rm DM}h^{2}\simeq 0.12\)[39], which any DM candidate must satisfy. Now, the irreducible gravitational interaction can lead to inevitable DM production (or production of _any_ particle in general), commonly known as "gravitational production" of DM.1 For instance, the production of purely gravitational DM due to the expansion of the Universe (which is the conventional gravitational production) has been extensively discussed in Refs. [67; 68; 69], production through the \(s\)-channel exchange of massless gravitons in, e.g., Refs. [70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80], and DM sourced from the decay of the inflaton in Refs. [81; 82; 83; 84]. Being a purely gravitational process, the corresponding DM yield is Planck suppressed and can only be dominant at high temperatures. Footnote 1: Purely gravitational production of particles beyond the SM can also emerge from Hawking radiation of evaporating primordial black holes; see, for example, Refs. [40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66]. Finally, the observed excess of baryons over antibaryons in the Universe is quantified in terms of the baryon-to-photon ratio \(\eta_{B}\simeq 6.2\times 10^{-10}\)[39], based on CMB measurements, which also agrees well with the BBN estimates [85]. Although it has all the necessary components, the SM does not satisfy the Sakharov conditions [86] necessary to generate the adequate asymmetry, demanding physics beyond the SM. An intriguing possibility to achieve baryogenesis (that is, the dynamical generation of the baryon asymmetry of the Universe (BAU)) is known as leptogenesis [87], where, instead of explicitly creating a baryon asymmetry, a lepton asymmetry is first produced and subsequently converted into a baryon asymmetry by the \((B+L)\)-violating electroweak sphaleron transitions [88]. In the present context, both the DM and the observed BAU (along with the light active neutrino masses) can be generated from decays of the inflaton, once beyond-the-SM fields are introduced.2 This also falls under the category of gravitational production, as in the absence of the non-minimal coupling to gravity, such a production channel ceases to exist. We emphasize that such a linear non-minimal coupling has not been widely discussed in the literature in the context of inflation and heating.3 Footnote 2: Gravitational production of DM, along with the BAU, has also been addressed in Refs. [89; 31]. Footnote 3: In the context of DM decay, this has been discussed in Refs. [90; 91; 92]. This paper is organized as follows. In Section 2 we discuss our model construction together with the inflationary framework. In Section 3 we elaborate on the heating scenario and investigate the production of DM and baryon asymmetry. 
Finally, we conclude in Section 4. ## 2 Inflation with a Linear Non-minimal Coupling Let us consider the following action for a secluded inflaton field \(\tilde{\phi}\) of mass \(m_{\phi}\), non-minimally coupled to gravity and without tree-level interactions to the SM states, \[S_{\phi}=\int d^{4}x\,\sqrt{-\tilde{g}}\left[\frac{1}{2}\,M_{P}^{2}\,F(\tilde {\phi})\,\tilde{g}^{\mu\nu}\,\widetilde{R}_{\mu\nu}(\widetilde{\Gamma})+\frac{ 1}{2}\,\tilde{g}^{\mu\nu}\,\partial_{\mu}\tilde{\phi}\,\partial_{\nu}\tilde{ \phi}-\frac{1}{2}\,m_{\phi}^{2}\,\tilde{\phi}^{2}\right]. \tag{1}\] Here we have adopted a mostly-plus convention for the metric \(\tilde{g}_{\mu\nu}\), \(M_{P}\simeq 2.44\times 10^{18}\) GeV stands for the reduced Planck mass, \(F(\tilde{\phi})\) is a general function of \(\tilde{\phi}/M_{P}\) admitting a Taylor expansion around unity, and \[\widetilde{R}_{\mu\nu}=\partial_{\sigma}\widetilde{\Gamma}^{\sigma}_{\ \mu\nu}- \partial_{\mu}\widetilde{\Gamma}^{\sigma}_{\ \sigma\nu}+\widetilde{\Gamma}^{\rho}_{\ \mu\nu} \widetilde{\Gamma}^{\sigma}_{\ \sigma\rho}-\widetilde{\Gamma}^{\rho}_{\ \sigma\nu} \widetilde{\Gamma}^{\sigma}_{\ \mu\rho} \tag{2}\] denotes the Ricci tensor constructed out of a connection \(\widetilde{\Gamma}^{\rho}{}_{\mu\nu}\), to be specified in what follows. Note that, for negligible non-minimal couplings \(F(\tilde{\phi})\simeq 1\), this action reduces to a particularly simple chaotic scenario, the seminal quadratic inflation model, where the only free parameter \(m_{\phi}\) is completely determined by the measured amplitude of the primordial power spectrum of density fluctuations. However, this simplified scenario is in conflict with the combined Planck and BICEP2/Keck bound on the tensor-to-scalar ratio, namely \(r<0.032\) at \(95\%\) CL [93]. Interestingly enough, this limitation is generically surpassed in the presence of sizable non-minimal couplings to gravity [94; 95; 96]. The inclusion of non-minimal couplings to gravity explicitly breaks the well-known degeneracy between metric and Palatini formulations, making it necessary to specify the properties of the connection \(\widetilde{\Gamma}^{\rho}{}_{\mu\nu}\) in order to completely define the theory under consideration. For the sake of simplicity and without lack of generality, we will assume in what follows a Palatini formulation of gravity where the connection \(\widetilde{\Gamma}^{\rho}{}_{\mu\nu}\) is taken to be arbitrary but torsion-free, i.e. \(\widetilde{\Gamma}^{\rho}{}_{\mu\nu}=\widetilde{\Gamma}^{\rho}{}_{\nu\mu}\). Compared to the most common metric approach, this formulation displays some interesting features. On the one hand, it does not require the introduction of the usual Gibbons-Hawking-York term to obtain the equations of motion [97]. On the other hand, since the metric and the connection are completely unrelated in Palatini gravity, the Ricci scalar remains invariant under Weyl transformations, simplifying the transitions among conformal frames and the analysis of the cosmological implications of the model, as we explicitly demonstrate in what follows. The nonlinearities associated with the non-minimal coupling in Eq. (1) can be transferred to the kinetic and potential sectors of the theory by performing a Weyl transformation \(\tilde{g}_{\mu\nu}=F^{-1}(\tilde{\phi})\,g_{\mu\nu}\), which, as anticipated, affects only the metric field and its determinant. 
The resulting action takes the form \[S_{\phi}=\int d^{4}x\,\sqrt{-g}\left[\frac{M_{P}^{2}}{2}\,R(\Gamma)+\frac{1}{2 \,F(\tilde{\phi})}\,g^{\mu\nu}\,\partial_{\mu}\tilde{\phi}\,\partial_{\nu} \tilde{\phi}-\frac{1}{2}\frac{m_{\phi}^{2}\,\tilde{\phi}^{2}}{F^{2}(\tilde{ \phi})}\right], \tag{3}\] with \(\Gamma\) identified now with the Levi-Civita connection, \[\Gamma^{\lambda}_{\alpha\beta}=\frac{1}{2}g^{\lambda\rho}\,\left(\partial_{ \alpha}g_{\beta\rho}+\partial_{\beta}g_{\rho\alpha}-\partial_{\rho}g_{\alpha \beta}\right)\,. \tag{4}\] The noncanonical kinetic term in Eq. (3) can be made canonical by performing an additional field redefinition \[\frac{d\phi}{d\tilde{\phi}}=\frac{1}{\sqrt{F}}\,. \tag{5}\] Assuming the Taylor expansion of \(F(\tilde{\phi})\) to admit a dominant linear term in the field regime of interest with positive coefficient \(\xi\),4 Footnote 4: The absolute value in this expression guarantees a positive definite graviton propagator at all field values, even in the absence of higher order corrections. \[F(\tilde{\phi})=1+\xi\,\frac{|\tilde{\phi}|}{M_{P}}+\mathcal{O}\left(\tilde{ \phi}/M_{P}\right)^{2}\,, \tag{6}\] the integration of Eq. (5) with boundary condition \(\phi(0)=0\) provides a relation \[|\phi|=\frac{2M_{P}}{\xi}\left(\sqrt{1+\xi\frac{|\tilde{\phi}|}{M_{P}}}-1 \right), \tag{7}\] which can be easily inverted, \[|\tilde{\phi}|=|\phi|\left(1+\frac{\xi}{4}\frac{|\phi|}{M_{P}}\right), \tag{8}\] to obtain a \(\phi\)-dependent action \[S_{\phi}=\int d^{4}x\,\sqrt{-g}\left[\frac{M_{P}^{2}}{2}\,R+\frac{1}{2}\,g^{\mu \nu}\,\partial_{\mu}\phi\,\partial_{\nu}\phi-V(\phi)\right], \tag{9}\] with effective potential \[V=\frac{1}{2}\,m_{\phi}^{2}\,|\phi|^{2}\,\frac{\left(1+\frac{\xi}{4}\frac{| \phi|}{M_{P}}\right)^{2}}{\left(1+\frac{\xi|\phi|}{2M_{P}}\right)^{4}}\,. \tag{10}\] Restricting ourselves to the asymptotic plateau-like region at large field values \(\phi>0\), and dropping consequently the absolute value in all the following expressions, we obtain the potential slow-roll (SR) parameters \[\epsilon_{V} \equiv\frac{M_{P}^{2}}{2}\,\left(\frac{V^{\prime}}{V}\right)^{2}=2 \left(\frac{M_{P}}{\phi}\right)^{2}\left(1+\frac{1}{2}\frac{\xi\phi}{M_{P}} \right)^{-2}\left(1+\frac{1}{4}\frac{\xi\phi}{M_{P}}\right)^{-2}, \tag{11}\] \[\eta_{V} \equiv M_{P}^{2}\,\frac{V^{\prime}}{V}=\left[1-\frac{3}{2}\frac{ \xi\phi}{M_{P}}-\frac{3}{8}\left(\frac{\xi\phi}{M_{P}}\right)^{2}\right] \epsilon_{V}\,, \tag{12}\] and the number of \(e\)-folds of inflation \[N=\frac{1}{M_{P}^{2}}\int_{\phi_{\rm end}}^{\phi_{*}}\frac{V}{V^{\prime}}\,d \phi=\frac{1}{4}\left(\frac{\phi}{M_{P}}\right)^{2}\left(1+\frac{\xi}{4}\frac{ \phi}{M_{P}}\right)^{2}\Bigg{|}_{\phi_{\rm end}}^{\phi_{*}}\, \tag{13}\] between the field value \(\phi_{*}\) at which the pivot scale \(k_{*}\) exited the horizon during inflation and the corresponding one at the very end of inflation, \[\phi_{\rm end}\simeq\frac{2^{3/2}\,M_{P}}{(\sqrt{2}\,\xi)^{2/3}}\left(1-\frac {(\sqrt{2}\,\xi)^{1/3}-1/3}{(\sqrt{2}\,\xi)^{2/3}}\right)\,, \tag{14}\] where SR is completely violated, that is \(\epsilon_{V}(\phi_{\rm end})\simeq 1\). Neglecting the small contribution of the lower limit in Eq. 
(13) and solving for \(\phi_{*}\), \[\phi_{*}(N)=\frac{2\,M_{P}}{\xi}\left(\sqrt{1+2\,\xi\,N^{1/2}}-1\right)\,, \tag{15}\] we can now express the SR parameters (11) and (12) as a function of the number of \(e\)-folds of inflation, \[\epsilon_{V}\simeq\frac{1}{4\,\xi\,N^{3/2}}+\mathcal{O}\left(\frac{1}{\xi^{2} N^{2}}\right),\hskip 28.452756pt\eta_{V}\simeq-\frac{3}{4\,N}+\frac{5}{8\, \xi\,N^{3/2}}+\mathcal{O}\left(\frac{1}{\xi^{2}N^{2}}\right)\,. \tag{16}\] This allows us to determine the amplitude of the primordial spectrum of curvature perturbations, the associated spectral tilt \(n_{s}=1+2\,\eta_{V}-6\,\epsilon_{V}\) and the tensor-to-scalar ratio \(r=16\,\epsilon_{V}\), \[\mathcal{P}\simeq\frac{m_{\phi}^{2}}{12\,\pi^{2}M_{P}^{2}}\left(\frac{N^{3/2}} {\xi}\right),\hskip 28.452756ptn_{s}=1-\frac{3}{2N}-\frac{1}{4\,\xi\,N^{3/2}}\,, \hskip 42.679134ptr=\frac{4}{\xi\,N^{3/2}}\,. \tag{17}\] The observed spectral amplitude \(\mathcal{P}\simeq 2.1\times 10^{-9}\)[98] determines the inflaton mass, \[m_{\phi}^{2}\simeq 12\,\pi^{2}\,\mathcal{P}\,M_{P}^{2}\,\left(\frac{\xi}{N^{3/2}} \right)\,, \tag{18}\] which, as shown in Fig. 1, turns out to exceed generically the unification scale \(m_{\phi}\sim\mathcal{O}(10^{13})\) GeV associated with quadratic chaotic models of inflation, with larger values of the non-minimal coupling \(\xi\) leading to larger inflaton masses. Figure 1: Dependence of the inflaton mass on the non-minimal coupling \(\xi\) for \(N=50\)\(e\)-folds. Requiring \(m_{\phi}\) to stay sub-Planckian results in an upper bound for \(\xi\), namely \[\xi\lesssim 1.5\times 10^{9}\left(\frac{N}{50}\right)^{3/2}\,. \tag{19}\] The compatibility of the spectral tilt and the tensor-to-scalar ratio with the CMB observations due to Planck [98] is displayed in Fig. 2. Nonvanishing values of \(\xi\) result generically in small tensor-to-scalar ratios, which, as anticipated, are fully compatible with Planck data for a fixed number of \(e\)-folds \(N=50\), a fiducial value to be assumed in what follows. In some cases, the predicted tensor-to-scalar ratios are also well within the reach of current or future planned experiments such as BICEP3 [99], LiteBIRD [100] and the Simons Observatory [101]. ## 3 Heating with a Linear Non-minimal Coupling In this section we explore the aftermath of the inflaton decay induced by the linear non-minimal coupling. We begin with the production of the SM bath from the decay of the inflaton field, which is necessary for heating. We then discuss the production of DM and the dynamical generation of the BAU, for which we introduce new physics states. Inevitably, in all cases the yield is proportional to the square of the non-minimal coupling. We emphasize that all our computations consider perturbative 2-body decay of the inflaton condensate, neglecting in particular non-perturbative production effects. This approximation is justified by the quadratic character of the inflationary potential around its minimum and the Planck-suppressed character of the interactions induced by the non-minimal coupling to gravity. The former aspects constitute in fact a remarkable difference from scenarios involving higher monomial potentials, _viz.,_\(V(\phi)\propto\phi^{k}\), where, despite what is usually assumed in the literature [29; 30; 31; 32], non-perturbative effects cannot be generically ignored, leading almost universally to a radiation-like equation-of-state parameter [18; 19; 102; 103; 104; 105; 106; 107; 108; 109; 110; 111; 112; 113; 114] in clear contrast to the value \(w=(k-2)/(k+2)\) obtained in the homogeneous approximation. 
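Before turning to the heating dynamics, the closed-form predictions collected in Eqs. (16)-(18) can be evaluated with a few lines of code; this is a minimal numerical illustration using only the leading-order expressions quoted above, with \(N=50\) and the quoted spectral amplitude as inputs.

```python
import numpy as np

M_P = 2.44e18          # reduced Planck mass [GeV]
P_zeta = 2.1e-9        # observed amplitude of the primordial spectrum

def observables(xi, N=50):
    # Leading-order slow-roll predictions, Eqs. (16)-(18).
    eps = 1.0 / (4.0 * xi * N**1.5)
    eta = -3.0 / (4.0 * N) + 5.0 / (8.0 * xi * N**1.5)
    n_s = 1.0 + 2.0 * eta - 6.0 * eps
    r = 16.0 * eps
    m_phi = np.sqrt(12.0 * np.pi**2 * P_zeta * xi / N**1.5) * M_P
    return n_s, r, m_phi

for xi in (0.1, 1.0, 10.0, 100.0):
    n_s, r, m_phi = observables(xi)
    print(f"xi = {xi:6.1f}:  n_s = {n_s:.4f},  r = {r:.2e},  m_phi = {m_phi:.2e} GeV")
```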
### Producing the standard model bath At the end of inflation, the potential energy of \(\phi\) becomes comparable to its kinetic energy counterpart, leading effectively to a Hubble parameter of order \(H_{I}\equiv H(\phi_{\rm end})\simeq V^{1/2}(\phi_{\rm end})/M_{P}\sim 10^{13}\) GeV, with a very mild dependence on the non-minimal coupling \(\xi\). In order to recover the usual hot Big Bang evolution, this large energy contribution must be transferred to the SM degrees of freedom, heating the Universe and ensuring the onset of radiation domination. For the sake of generality, we will assume that the inflaton field can also decay into new-physics (NP) states beyond the SM, with a suppressed branching fraction \(\mathcal{B}\). The evolution of the SM energy density (\(\rho_{R}\)) and the inflaton (\(n_{\phi}\)) and NP (\(n_{\rm np}\)) number densities can be tracked with the set of coupled Boltzmann differential equations [105; 106] \[\frac{dn_{\phi}}{dt}+3\,H\,n_{\phi}=-\Gamma_{\phi}\,n_{\phi}\,, \tag{1}\] \[\frac{d\rho_{R}}{dt}+4\,H\,\rho_{R}=(1-\mathcal{B})\,\Gamma_{\phi }\,n_{\phi}\,m_{\phi}\,,\] (2) \[\frac{dn_{\rm np}}{dt}+3\,H\,n_{\rm np}=2\,\mathcal{B}\,\Gamma_{ \phi}\,n_{\phi}\,, \tag{3}\] with \(\Gamma_{\phi}\) and \(\rho_{\phi}\equiv n_{\phi}\,m_{\phi}\) the total decay width and energy density of the nonrelativistic inflaton, respectively, and \[H\simeq\sqrt{\frac{\rho_{R}+n_{\phi}\,m_{\phi}}{3\,M_{P}^{2}}} \tag{4}\] the Hubble expansion rate. In the following, the new state will be identified with the DM or the RHN responsible for leptogenesis. The SM radiation energy density as a function of the SM bath temperature \(T\) is given by \[\rho_{R}(T)=\frac{\pi^{2}}{30}\,g_{\star}(T)\,T^{4}\,, \tag{5}\] where \(g_{\star}(T)\) corresponds to the SM relativistic degrees of freedom contributing to \(\rho_{R}\)[107]. Figure 2: Dependence of the tensor-to-scalar ratio \(r\) and the spectral tilt \(n_{s}\) on the non-minimal coupling \(\xi\) for \(N=50\)\(e\)-folds. For the sake of simplicity, we restrict ourselves to instantaneous thermalization within the SM sector. More precisely, we will assume that the interaction rate between SM particles significantly exceeds the inflaton decay rate [108]. The heating temperature \(T_{\rm rh}\) can be defined as the temperature of the SM bath at which the equality \(\Gamma_{\phi}=H(T_{\rm rh})\) occurs and corresponds to \[T_{\rm rh}^{2}=\frac{3}{\pi}\sqrt{\frac{10}{g_{\star}}}\,M_{P}\left(1-{\cal B }\right)\Gamma_{\phi}\,. \tag{10}\] As the inflaton decay is not instantaneous, the maximum temperature [109, 74, 110] \[T_{\rm max}^{4}=\frac{60}{\pi^{2}\,g_{\star}}\left(\frac{3}{8}\right)^{8/5} \left(1-{\cal B}\right)\,M_{P}^{2}\,\Gamma_{\phi}\,H_{I} \tag{11}\] reached by the SM bath can be much higher than \(T_{\rm rh}\). 
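A minimal numerical sketch of these heating equations (with \(\mathcal{B}=0\), a constant \(g_{\star}\), and illustrative values of \(m_{\phi}\), \(\Gamma_{\phi}\) and the inflaton energy density at the end of inflation) is given below; it integrates the inflaton and radiation equations in the scale factor and recovers the qualitative behaviour discussed next, namely a maximum temperature of order \(10^{13}\) GeV followed by a slow decrease during heating. The input values are assumptions chosen so that \(H_{I}\sim 10^{13}\) GeV, not fitted quantities.

```python
import numpy as np
from scipy.integrate import solve_ivp

M_P    = 2.44e18      # reduced Planck mass [GeV]
g_star = 106.75       # SM relativistic degrees of freedom (taken constant)
m_phi  = 2.0e14       # illustrative inflaton mass [GeV]
Gamma  = 5.0e3        # illustrative total decay width [GeV], i.e. xi ~ 1
rho_I  = 1.0e-2 * m_phi**2 * M_P**2   # assumed inflaton energy density at the end of inflation

def rhs(lna, y):
    # y = [n_phi, rho_R]; the Boltzmann equations rewritten in ln(a), with B = 0.
    n_phi, rho_R = np.maximum(y, 0.0)
    H = np.sqrt((rho_R + n_phi * m_phi) / (3.0 * M_P**2))
    return [-(3.0 + Gamma / H) * n_phi,
            -4.0 * rho_R + (Gamma / H) * n_phi * m_phi]

sol = solve_ivp(rhs, [0.0, 18.0], [rho_I / m_phi, 0.0],
                method="LSODA", rtol=1e-8, dense_output=True)

lna = np.linspace(0.0, 18.0, 400)
n_phi, rho_R = sol.sol(lna)
T = (30.0 * np.maximum(rho_R, 0.0) / (np.pi**2 * g_star))**0.25
print(f"T_max ~ {T.max():.2e} GeV, reached at ln(a/a_I) = {lna[T.argmax()]:.2f}")
```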
Assuming a perturbative decay of the inflaton field, the partial decay widths into final-state particles of mass \(m\), different spins, and a _single_ degree of freedom is \[\Gamma_{\phi\to ii}=\begin{cases}\frac{\xi^{2}}{32\,\pi}\,\frac{m_{\phi}^{3} }{M_{P}^{2}}\,\sqrt{1-4\,y^{2}}&\text{for scalars},\\ \frac{\xi^{2}}{32\,\pi}\,\frac{m_{\phi}^{3}}{M_{P}^{2}}\,y^{2}\, \left(1-4\,y^{2}\right)^{3/2}&\text{for fermions},\\ \frac{\xi^{2}}{128\,\pi}\,\frac{m_{\phi}^{3}}{M_{P}^{2}}\,\sqrt{1-4\,y^{2}}\, \left(1-4\,y^{2}+12\,y^{4}\right)&\text{for vectors}\,,\end{cases} \tag{12}\] with \(y\equiv m/m_{\phi}\). Details of the calculation are reported in Appendix A. At high temperatures before the electroweak symmetry breaking, the SM particles are massless and the SM contains 4 scalar, 24 vector and 90 fermionic degrees of freedom. Therefore, the total decay width into SM final states becomes \[\Gamma_{\phi}\simeq\frac{5}{16\pi}\,\xi^{2}\,\frac{m_{\phi}^{3}}{M_{P}^{2}}\,. \tag{13}\] Using this perturbative expression for \(\Gamma_{\phi}\) and the inflaton mass in Eq. (18), we get \[T_{\rm rh}\simeq 3.1\times 10^{12}\ {\rm GeV}\,\left(\frac{\xi}{10}\right)^{7/ 4}, \tag{14}\] and \[T_{\rm max}\simeq 5.4\times 10^{13}\ {\rm GeV}\left(\frac{\xi}{10}\right)^{5/ 6}, \tag{15}\] for \({\cal B}=0\). The evolution of the inflaton (red) and the SM radiation (blue) energy densities as a function of the cosmic scale factor \(a\) is displayed in the left panel of Fig. 3, for \(\Gamma_{\phi}\simeq 5\) TeV (\(\xi\simeq 1\)) and only considering decays into the SM, i.e., \({\cal B}=0\). The vertical dashed line corresponds to \(a=a_{\rm rh}\equiv a(T_{\rm rh})\). During heating, that is, in the range \(a_{I}<a<a_{\rm rh}\) (with \(a_{I}\) corresponding to the scale factor and the end of inflation / beginning of heating), \(\rho_{\phi}(a)\propto a^{-3}\) while \(\rho_{R}(a)\propto a^{-3/2}\), as it is not just a free radiation component but rather a one sourced from the decay of the inflaton field. In addition, in the right panel, the evolution of the SM temperature as a function of the scale factor is shown. Horizontal dashed lines correspond to \(T=T_{\rm max}\) and \(T=T_{\rm rh}\) and delimit the heating duration. At the beginning of heating, the bath temperature rapidly increases as a result of the non-instantaneous decay of the inflaton field, reaching a temperature \(T_{\rm max}\sim 8\times 10^{12}\) GeV. During heating, the temperature decreases as \(T(a)\propto a^{-3/8}\), until \(T_{\rm rh}\sim 6\times 10^{10}\) GeV. Once SM radiation dominates the energy density of the Universe, it becomes a free radiation fluid and its energy density drops to \(\rho_{R}(a)\propto a^{-4}\), corresponding to \(T(a)\propto a^{-1}\). Figure 4 shows the values for \(T_{\rm rh}\) and \(T_{\rm max}\) as a function of \(\Gamma_{\phi}\) (or \(\xi\) in the perturbative regime), assuming again \({\cal B}=0\). The region between \(T_{\rm max}>T>T_{\rm rh}\) corresponds to the heating era. The minimal value of the non-minimal coupling \(\xi\gtrsim 0.1\) comes, mainly, from Figure 4: Values of the heating temperature \(T_{\rm rh}\) and the maximum temperature \(T_{\rm max}\) as a function of the non-minimal coupling \(\xi\) for \({\cal B}=0\). The gray-shaded region (\(T_{\rm rh}\leq T\leq T_{\rm max}\)) corresponds to the heating epoch. Figure 3: Left: Evolution of the radiation (red) and inflaton (blue) energy densities with the scale factor. Right: Bath temperature as a function of the scale factor. 
In both panels, we fix \({\cal B}=0\) and \(\Gamma_{\phi}\simeq 5\times 10^{3}\) GeV, which, in the perturbative regime, corresponds to \(\xi\simeq 1\). the inflationary tensor-to-scalar ratio; see Figs. 2 and (17). Additionally, an upper bound \[\xi\lesssim 250 \tag{20}\] on the non-minimal coupling appears by demanding \(T_{\rm max}>T_{\rm rh}\) or equivalently \[\Gamma_{\phi}<\frac{2}{3}\left(\frac{3}{8}\right)^{8/5}\frac{H_{I}}{1-\mathcal{ B}}\,. \tag{21}\] This corresponds to a minimum bound \(r\gtrsim 10^{-4}\) on the tensor-to-scalar ratio, that is, within the reach of future and planned CMB experiments [99; 100; 101]. Above the red dotted line, corresponding to \(T=m_{\phi}/2\), the SM bath had an energy high enough to generate inflatons through inverse decays. However, because of the high inflaton number density during heating, this process is subdominant.5 Footnote 5: In this case, additionally to the decay term \(\Gamma_{\phi}\,n_{\phi}\) in Eqs. (20) to (21), the production out of the SM bath has to be included and therefore \(\Gamma_{\phi}\,n_{\phi}\to\Gamma_{\phi}\,(n_{\phi}-n_{\phi}^{\rm eq})\), where \(n_{\phi}^{\rm eq}(T)\) corresponds to the equilibrium number density of the inflaton. ### Dark matter from inflaton decay In this section, we discuss the prospect of DM production from the decay of the non-minimally coupled inflaton field. We consider the simple scenario in which the inflaton condensate decays into a pair of DM particles of arbitrary spin. The evolution of the DM number density \(n_{\rm dm}\) can be tracked by solving Eqs. (20) to (21), where \(n_{\rm np}\) becomes \(n_{\rm dm}\). Equation (21) can be conveniently rewritten in terms of the scale factor \(a\) and the comoving number density \(N\equiv n_{\rm dm}\,\times a^{3}\) as \[\frac{dN}{da}=2\,\mathcal{B}\,\frac{a^{2}\,\Gamma_{\phi}}{H(a)}\,n_{\phi}(a) \simeq 6\,\mathcal{B}\,\frac{M_{P}^{2}\,\Gamma_{\phi}^{2}}{m_{\phi}}\,a_{\rm rh }^{3/2}\,a^{1/2}\,, \tag{22}\] where the scaling of the inflaton number density \(n_{\phi}(a)\propto a^{-3}\) during heating was used, with \(a_{\rm rh}\equiv a(T_{\rm rh})\). It is interesting to note that, due to the nature of gravitational couplings, the branching fraction \(\mathcal{B}\) is not a free parameter and depends only on the spin (and mildly the mass) of the decaying particle. The branching of the inflaton field into a couple of DM particles (with a single degree of freedom) in the final state follows from Eqs. (19) and (20), and is given by \[\mathcal{B}\simeq\begin{cases}\frac{1}{11}-\frac{20}{121}\,y^{2}+\mathcal{O} [y^{4}]&\text{ for scalars,}\\ \frac{1}{10}\,y^{2}+\mathcal{O}[y^{4}]&\text{ for fermions,}\\ \frac{1}{41}-\frac{240}{1681}\,y^{2}+\mathcal{O}[y^{4}]&\text{ for vectors,} \end{cases} \tag{23}\] for \(m\ll m_{\phi}\). We emphasize that due to the democratic gravitational interaction strength, the branching ratio is independent of the non-minimal coupling. Next, we note that Eq. (22) admits an analytical solution \[N(a_{\rm rh})\simeq 4\,\mathcal{B}\,\frac{M_{P}^{2}\,\Gamma_{\phi}^{2}}{m_{\phi} }\,a_{\rm rh}^{3}\left[1-\left(\frac{T_{\rm rh}}{T_{\rm max}}\right)^{4} \right]\,, \tag{24}\] where we have assumed that there is no initial population of DM at the end of inflation, that is, \(n_{\rm dm}(a_{I})\simeq 0\). 
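As a cross-check of the spin-dependent branching fractions quoted above, the snippet below reconstructs \(\mathcal{B}\) directly from the partial widths of Eq. (12) and the total SM width (4 scalar, 90 fermionic and 24 vector degrees of freedom), and compares it with the small-mass expansions of Eq. (23); this is only an illustrative verification, with the common factor \(\xi^{2}m_{\phi}^{3}/M_{P}^{2}\) dropped since it cancels in the ratio.

```python
import numpy as np

def partial_width(spin, y):
    # Partial width for phi -> pair of particles with mass m = y * m_phi and a
    # single degree of freedom; the overall factor xi^2 m_phi^3 / M_P^2 is dropped.
    kin = np.sqrt(max(1.0 - 4.0 * y**2, 0.0))
    if spin == "scalar":
        return kin / (32.0 * np.pi)
    if spin == "fermion":
        return y**2 * kin**3 / (32.0 * np.pi)
    if spin == "vector":
        return kin * (1.0 - 4.0 * y**2 + 12.0 * y**4) / (128.0 * np.pi)
    raise ValueError(spin)

# Total width into the (effectively massless) SM: 4 scalar, 90 fermionic and
# 24 vector degrees of freedom; this reproduces Gamma_phi = 5 xi^2 m_phi^3 / (16 pi M_P^2).
Gamma_SM = (4 * partial_width("scalar", 0.0) + 90 * partial_width("fermion", 0.0)
            + 24 * partial_width("vector", 0.0))
print(f"Gamma_SM = {Gamma_SM:.5f} vs 5/(16 pi) = {5.0 / (16.0 * np.pi):.5f}")

small_mass = {"scalar": lambda y: 1 / 11 - 20 / 121 * y**2,
              "fermion": lambda y: y**2 / 10,
              "vector": lambda y: 1 / 41 - 240 / 1681 * y**2}

for spin, approx in small_mass.items():
    for y in (0.01, 0.1):
        B = partial_width(spin, y) / (Gamma_SM + partial_width(spin, y))
        print(f"{spin:7s} y = {y:4.2f}:  B = {B:.5f}  (expansion: {approx(y):.5f})")
```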
In addition, one can define the DM yield \(Y\equiv n_{\rm dm}/s\), where \[s(T)=\frac{2\pi^{2}}{45}\,g_{\ast s}(T)\,T^{3} \tag{3.17}\] is the SM entropy density, and \(g_{\ast s}(T)\) is the number of relativistic degrees of freedom contributing to the SM entropy [107]. The value of the DM yield at present corresponds to the value at the end of the heating and is given by \[Y(a_{\rm rh})\simeq\frac{N(a_{\rm rh})}{a_{\rm rh}^{3}\,s(T_{\rm rh})}\simeq 4 \,\mathcal{B}\,\frac{M_{P}^{2}\,\Gamma_{\phi}^{2}}{m_{\phi}\,s(T_{\rm rh})} \left[1-\left(\frac{T_{\rm rh}}{T_{\rm max}}\right)^{4}\right], \tag{3.18}\] which in the perturbative case corresponds to \[Y(a_{\rm rh})\simeq\frac{\xi}{g_{\ast s}(T_{\rm rh})^{1/4}}\,\sqrt{\frac{m_{ \phi}}{M_{P}}}\times\begin{cases}1&\text{for scalar DM},\\ \left(\frac{m_{\rm dm}}{m_{\phi}}\right)^{2}&\text{for fermionic DM},\\ \frac{1}{2}\sqrt{\frac{11}{41}}&\text{for vector DM},\end{cases} \tag{3.19}\] featuring a linear dependence on the nonminimal coupling \(\xi\). To match the entire observed abundance, the DM yield must be fixed so that \(m_{\rm dm}\,Y(a_{\rm rh})=\Omega_{\rm DM}h^{2}\frac{1}{s_{0}}\,\frac{\rho_{c} }{h^{2}}\simeq 4.3\times 10^{-10}\) GeV, where \(m_{\rm dm}\) is the DM mass, \(\rho_{c}\simeq 1.1\times 10^{-5}\)\(h^{2}\) GeV/cm\({}^{3}\) is the critical energy density, \(s_{0}\simeq 2.9\times 10^{3}\) cm\({}^{-3}\) is the entropy density at present, and \(\Omega_{\rm DM}h^{2}\simeq 0.12\)[98]. In Fig. 5, the values of \(\xi\) required to make up the entire DM relic abundance at present are shown, as a function of the mass of the DM, for different spins of the DM. Using Eq. (3.19), together with Eq. (2.18), one finds that for bosonic cases the yield \(Y(a_{\rm rh})\propto\xi^{5/4}\), while for the fermionic case, because of helicity suppression, \(Y(a_{\rm rh})\propto m_{\rm dm}^{2}\,\xi^{1/4}\). For the same reason, for fermionic DM, \(m_{\rm dm}\) varies very steeply with \(\xi\), as compared to bosonic cases. As a result, for a given \(\xi\), the fermionic DM needs to be heavier than the bosonic one in order to produce the right amount of relic. The DM mass required to fit the whole observed abundance is therefore \[m_{\rm dm}\simeq\begin{cases}5.3\times 10^{-6}\text{ GeV}\times\xi^{-5/4}&\text{ for scalar DM},\\ 2.7\times 10^{7}\text{ GeV}\times\xi^{-1/12}&\text{ for fermionic DM},\\ 1.9\times 10^{-5}\text{ GeV}\times\xi^{-5/4}&\text{ for vector DM},\end{cases} \tag{3.20}\] for \(g_{\ast s}(T_{\rm rh})=106.75\). In Fig. 5, we also show constraints on the viable parameter space from Lyman-\(\alpha\) flux power spectra on a warm DM mass [111, 112, 113, 114, 115, 116] that allows \(m_{\rm dm}\gtrsim 4\) keV [117, 118], on-shell decay of the inflaton that requires \(m_{\rm dm}<m_{\phi}/2\), and successful heating followed by inflation; cf. Fig. 4. Once these constraints are taken into account, the allowed mass range for scalar and vector DM turns out to be \(4\text{ keV}\lesssim m_{\rm dm}\lesssim 1\) MeV, while for fermionic DM \(m_{\rm dm}\simeq 10^{7}\) GeV. 
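For orientation, the closed-form estimates in Eq. (3.20) can be scanned over \(\xi\) together with the constraints quoted in the text (the Lyman-\(\alpha\) bound of roughly 4 keV and the kinematic requirement \(m_{\rm dm}<m_{\phi}/2\)); the chosen grid of \(\xi\) values is arbitrary and for illustration only.

```python
import numpy as np

M_P, P_zeta, N_e = 2.44e18, 2.1e-9, 50

def dm_mass(xi):
    # DM mass reproducing the observed relic abundance, Eq. (3.20), in GeV.
    return {"scalar": 5.3e-6 * xi**-1.25,
            "fermion": 2.7e7 * xi**(-1.0 / 12.0),
            "vector": 1.9e-5 * xi**-1.25}

for xi in (0.1, 1.0, 10.0, 100.0):
    m_phi = np.sqrt(12.0 * np.pi**2 * P_zeta * xi / N_e**1.5) * M_P
    for spin, m in dm_mass(xi).items():
        ok = (m > 4e-6) and (m < m_phi / 2.0)   # Lyman-alpha and kinematic cuts
        print(f"xi = {xi:6.1f}  {spin:7s}: m_dm = {m:9.2e} GeV  viable: {ok}")
```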
Before closing, we would like to mention that, apart from direct decay of the inflaton field into DM final states, pure gravitational production of DM unavoidably takes place from the 2-to-2 scattering of the bath particles via the \(s\)-channel exchange of massless gravitons; see, e.g., Refs. [70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80]. ### Leptogenesis from inflaton decay Aside from DM, the dynamical generation of the observed BAU demands the introduction of new physics. More precisely, neutrino masses and mixings require at least two (heavy) right-handed neutrino (RHN) states to realize the seesaw mechanism [120; 121; 122; 123]. One of these, if produced and kept out of equilibrium until its decay, can leave a nonzero lepton asymmetry. This asymmetry in the leptonic sector can eventually be converted into an asymmetry in the baryonic sector following the well-known mechanism of leptogenesis [124; 87; 125]. In the present context, such a framework can be realized by considering inflaton decays into a pair of RHNs, which subsequently undergo CP-violating out-of-equilibrium decays into an SM lepton and a Higgs. 
As a concrete example, we introduce SM gauge singlet RHNs \(N_{i}\) (with \(i=1,\,2,\,3\)) with an interaction Lagrangian density of the form \[\mathcal{L}\supset-\frac{1}{2}\,m_{N}\,\overline{N^{c}}\,N-y_{N}\,\overline{N }\,\widetilde{H}^{\dagger}\,L+\text{H.c.}\,, \tag{3.21}\] ignoring the generational indices, where \(L\) is the SM lepton doublet, \(H\) the SM complex Higgs doublet (\(\widetilde{H}\equiv i\,\sigma^{2}\,H^{\star}\), \(\sigma^{2}\) is the Pauli spin matrix), and \(y_{N}\) a Yukawa coupling. Additionally, \(m_{N_{i}}\) are the Majorana masses, assumed to be hierarchical \(m_{N_{1}}\ll m_{N_{2,3}}\). Note that the trilinear Yukawa term in Eq. (3.21) is responsible for generating light neutrino masses via the Type-I seesaw mechanism. Due to the democratic coupling of the inflaton field to all SM and NP fields, it inevitably decays into a pair of such RHNs, which subsequently undergo a CP-violating decay into a SM lepton and a Higgs doublet via their Yukawa interaction, producing a non-zero lepton asymmetry. The produced lepton asymmetry is eventually converted to baryon asymmetry via electroweak sphalerons. The final BAU can then be estimated via [125; 126] \[Y_{B}^{0}=\frac{28}{79}\,|\epsilon_{\Delta L}|\,Y_{N_{1}}(a_{\text{rh}})\,, \tag{3.22}\] Figure 5: Magnitude of the non-minimal coupling \(\xi\) needed to fit the entire observed DM abundance, for different DM spins. The red-shaded regions are disallowed by BBN (bottom), Lyman-\(\alpha\) (left), kinematical condition \(m_{\text{dm}}>m_{\phi}/2\) (right) and nonviable heating (top). where [30; 127] \[|\epsilon_{\Delta L}|\simeq\frac{3\,\delta_{\rm eff}}{16\,\pi}\,\frac{m_{N_{1}}\,m _{\nu,\rm max}}{v^{2}}\,, \tag{3.23}\] is the lepton asymmetry (see Appendix B for details), \(\langle H\rangle\equiv v\simeq 174\) GeV is the SM Higgs vacuum expectation value, \(\delta_{\rm eff}\) is the effective CP violating phase in the neutrino mass matrix with \(0\leq\delta_{\rm eff}\leq 1\), and we take \(m_{\nu,\rm max}\simeq 0.05\) eV as the heaviest active neutrino mass [126]. The RHN yield at the end of heating is given by the fermionic part of Eq. (3.19). For the decay of heavier RHNs \(N_{2,3}\), we consider lepton-number-violating interactions of \(N_{1}\) rapid enough to wash out the lepton-number asymmetry originated by the other two. Therefore, only the CP-violating asymmetry from the decay of \(N_{1}\) survives and is relevant for leptogenesis. To match the observed BAU at present, it is required to have \(Y_{B}^{0}\simeq 8.7\times 10^{-11}\)[39]. Away from the kinematical thresholds, this implies that \(m_{N_{1}}\propto\xi^{-1/8}\). In Fig. 6 we show with black lines the effective CP violating phase \(\delta_{\rm eff}\) required to fit the data, in the \([m_{N_{1}},\,\xi]\) plane. The contour shows a cutoff point at \(m_{N_{1}}\simeq m_{\phi}/2\), which is the kinematical threshold for 2-body decay (red area on the right). To avoid the washout of the produced asymmetry, one also needs to ensure that the production is nonthermal,6 which requires \(m_{N_{1}}>T_{\rm rh}\) (shaded red on top). Therefore, within the white area corresponding to \(5\times 10^{12}\) GeV \(\lesssim m_{N_{1}}\lesssim 5\times 10^{14}\) GeV and \(10^{-1}\lesssim\xi\lesssim 2\times 10^{2}\), the BAU can be reproduced successfully. 
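As a rough numerical illustration of Eqs. (3.22)-(3.23), the snippet below estimates the effective CP phase \(\delta_{\rm eff}\) needed to reproduce the observed \(Y_{B}^{0}\simeq 8.7\times 10^{-11}\), approximating the RHN yield by the fermionic branch of Eq. (3.19) with \(m_{N_{1}}\) in place of \(m_{\rm dm}\), as done in the text; the sampled \((\xi,m_{N_{1}})\) points are arbitrary examples, not a reproduction of Fig. 6.

```python
import numpy as np

M_P, P_zeta, N_e, g_star = 2.44e18, 2.1e-9, 50, 106.75
v, m_nu_max, Y_B_obs = 174.0, 0.05e-9, 8.7e-11   # GeV units throughout

def delta_eff_required(xi, m_N1):
    # Effective CP phase reproducing the observed BAU via Eqs. (3.22)-(3.23),
    # with the RHN yield taken from the fermionic branch of Eq. (3.19).
    m_phi = np.sqrt(12.0 * np.pi**2 * P_zeta * xi / N_e**1.5) * M_P
    if m_N1 >= m_phi / 2.0:
        return None                                # decay kinematically forbidden
    Y_N1 = xi / g_star**0.25 * np.sqrt(m_phi / M_P) * (m_N1 / m_phi)**2
    eps_per_delta = 3.0 / (16.0 * np.pi) * m_N1 * m_nu_max / v**2
    return Y_B_obs / (28.0 / 79.0 * eps_per_delta * Y_N1)

for xi, m_N1 in [(1.0, 1e13), (1.0, 3e13), (10.0, 1e14), (100.0, 5e14)]:
    d = delta_eff_required(xi, m_N1)
    print(f"xi = {xi:5.1f}, m_N1 = {m_N1:.1e} GeV -> delta_eff =",
          "kinematically forbidden" if d is None else f"{d:.2e}")
```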
Footnote 6: To realize non-thermal leptogenesis _during_ heating, we compute the thermalization rate \(\Gamma_{\rm th}\simeq y_{N}^{2}\,T/(8\,\pi)\), with \(y_{N}\simeq m_{\nu,\rm max}\,m_{N_{1}}/v^{2}\), and compare it with the Hubble rate. We find that \(\Gamma_{\rm th}\) stays below the Hubble rate \(H(T)\simeq(T/T_{\rm max})^{4}\)\(H_{I}\) during heating for \(\mathcal{O}(10^{-1})\lesssim\xi\lesssim\mathcal{O}(10^{2})\). ## 4 Conclusions The precise mechanism of heating after inflation remains largely unknown, opening up different possibilities of production of the Standard Model content, along with new physics species Figure 6: Contours corresponding to the observed BAU for different choices of CP-violating phases \(\delta_{\rm eff}\), as shown with different patterns. The shaded regions are prohibited from inflation [cf. Fig. 2], maximum temperature during heating [cf. Fig. 4], kinematical constraint on 2-body decay and non-thermal leptogenesis that requires \(m_{N_{1}}>T_{\rm rh}\). once cosmic inflation ends. In this paper we discuss one such possibility by considering a _linear_ coupling between the inflaton field and gravity. Such a non-minimal coupling triggers the decay of the inflaton condensate into pairs of all particles in the SM and beyond. Contrary to the widely discussed gravitational heating scenario, mediated by graviton exchange, in the present case one can have successful inflation together with heating for a _quadratic_ inflaton potential, i.e., the simplest chaotic inflation scenario. We find, in order to adhere to the Planck data and reheat the Universe prior to the onset of BBN, the non-minimal coupling needs to be \(\mathcal{O}(10^{-1})\lesssim\xi\lesssim\mathcal{O}(10^{2})\). We extend our discussion to the production of new-physics states. On the one hand, the whole observed dark-matter (DM) abundance can be successfully fitted for different spins. In particular, fermionic DM must have a mass \(m_{\rm dm}\sim 10^{7}\) GeV, while bosonic DM (scalar or vector) must be in the keV to MeV range. On the other hand, we also discuss the generation of baryon asymmetry of the Universe via nonthermal leptogenesis, due to the CP-violating decay of a heavy right-handed neutrino produced from the inflaton decay. The observed baryon asymmetry, along with the light neutrino masses via the type-I seesaw mechanism, can be produced from out-of-equilibrium decay of a heavy right-handed neutrino in the mass range \(10^{12}\) GeV \(\lesssim m_{N_{1}}\lesssim 10^{15}\) GeV. All in all, we have demonstrated that, contrary to the case of a quadratic non-minimal coupling of the inflaton to gravity discussed in the literature, a linear non-minimal coupling can give rise to successful inflation and efficient heating of the Universe in the case of a quadratic inflationary potential. Additionally, the gravitationally induced decay of the inflaton field can also source the whole observed DM abundance and baryon asymmetry of the Universe within simple particle physics frameworks. BB would like to acknowledge '\(\mathcal{C}\)osmo \(\mathcal{B}\)eer' (IFT-UW) members for all their help and support through thick and thin. NB received funding from the Spanish FEDER / MCIU-AEI under the grant FPA2017-84543-P. JR is supported by a Ramon y Cajal contract of the Spanish Ministry of Science and Innovation with Ref. RYC2020-028870-I. This work was supported by the MINECO (Spain) project PID2022-139841NB-I00 (AEI/FEDER, UE). 
## Appendix A Inflaton Interactions and Decays In order to compute the gravity-induced inflaton decays, it is important to couple the inflaton to SM (and possible NP) fields, and therefore, the total action reads \[S=S_{\phi}+S_{\rm SM}+S_{\rm np}\,, \tag{10}\] where \(S_{\phi}\) was defined in Eq. (9), the SM part is \[S_{\rm SM}= \int d^{4}x\,\sqrt{-g}\,\Bigg{[}-\frac{1}{4}\,g^{\mu\nu}\,g^{ \lambda\rho}\,\mathcal{V}^{(a)}_{\mu\lambda}\,\mathcal{V}^{(a)}_{\nu\rho}+ \frac{1}{F^{2}}\left(\mathcal{L}_{Y}-V(H)\right)\] \[\qquad\qquad\qquad+\frac{1}{F}\left|D_{\mu}H\right|^{2}+\frac{i} {F^{3/2}}\,\overline{f}\,\not{\partial}\,f+\frac{3\,i}{F^{2}}\,\overline{f}\, \big{(}\not{\partial}\,\Omega\big{)}\,f\Bigg{]}, \tag{11}\] where \(\mathcal{V}^{(a)}\) denotes the SM gauge bosons (Abelian and non-Abelian) and \(f\) stands for all SM fermions (quarks and leptons). The covariant derivative is defined as \(i\,(g_{1}/2)\,Y\,B_{\mu}\), where \(W_{\mu}\) and \(B_{\mu}\) are the \(SU(2)_{L}\) and \(U(1)_{Y}\) gauge bosons, respectively, with corresponding \(g_{2}\) and \(g_{1}\) gauge coupling strengths, \(Y\) is the hypercharge and \(\tau^{a}=\sigma^{a}/2\) are the Pauli matrices. The potential of the Higgs doublet \(H\) reads \(V(H)=-\mu_{H}^{2}\,|H|^{2}+\lambda_{H}\,|H|^{4}\). Additionally, the new physics sector is encoded in \(S_{\rm np}\), and may consist of a singlet scalar \(S\) with mass \(m_{S}\), a singlet Majorana neutrino \(N\) with mass \(m_{N}\), a Dirac fermion \(\psi\) with mass \(m_{\psi}\), or an Abelian gauge boson \(X_{\mu}\) with mass \(m_{X}\). In each case, the corresponding action in the Einstein frame reads \[S_{S}=\int d^{4}x\,\sqrt{-g}\,\left[\frac{1}{2\,F}\,g^{\mu\nu}\, \partial_{\mu}S\,\partial_{\nu}S-\frac{1}{F^{2}}\,V(S)\right] \text{for scalar}, \tag{10}\] \[S_{N}=\int d^{4}x\,\sqrt{-g}\left[\frac{i}{F^{3/2}}\,\overline{ N}\,\gamma^{\mu}\partial_{\mu}N-\frac{1}{2\,F^{2}}\,m_{N}\,\overline{N^{c}}\,N- \left(\frac{y_{N}}{F^{2}}\,\overline{N}\,\widetilde{H}^{\dagger}\,L+{\rm H.c} \right)\right] \text{for Majorana},\] (11) \[S_{\psi}=\int d^{4}x\,\sqrt{-g}\,\left[\frac{i}{F^{3/2}}\,\bar{ \psi}\,\gamma^{\mu}\partial_{\mu}\psi-\frac{1}{F^{2}}\,m_{\psi}\,\bar{\psi}\, \psi\right] \text{for Dirac},\] (12) \[S_{X}=\int d^{4}x\,\sqrt{-g}\,\left[-\frac{1}{4}\,X_{\mu\nu}\,X^ {\mu\nu}+\frac{m_{X}^{2}}{2\,F^{2}}\,X_{\mu}\,X^{\mu}\right] \text{for vector}. \tag{13}\] For the scalar \(S\) we disregard possible trilinear and quartic self-interactions, and also the mixing to the SM Higgs boson. In addition, for the vector \(X^{\mu}\) we ignore its kinetic mixing with the SM \(U(1)_{Y}\) gauge boson. Finally, if the new state is associated with DM, a \(\mathbb{Z}_{2}\) parity is imposed under which only DM is odd to make it stable. All relevant vertices were computed using LanHEP[128] and are summarized in Table 1. The corresponding partial decay widths computed using CalcHEP[129] are reported in Eq. (10). ## Appendix B CP Asymmetry The CP asymmetry generated from \(N_{1}\) decay is given by [125] \[\epsilon_{\Delta L}\equiv\frac{\Gamma_{N_{1}\to\ell_{i}\,H}-\Gamma_{N_{1}\to \tilde{\ell}_{i}\,H}}{\Gamma_{N_{1}\to\ell_{i}\,H}+\Gamma_{N_{1}\to\tilde{\ell }_{i}\,\tilde{H}}}\simeq\frac{1}{8\,\pi}\,\frac{1}{(y_{N}^{\dagger}\,y_{N})_{ 11}}\,\sum_{j=2,3}\text{Im}\left(y_{N}^{\dagger}\,y_{N}\right)_{1j}^{2}\times \mathcal{F}\left(\frac{M_{j}^{2}}{M_{1}^{2}}\right), \tag{14}\] where \[\mathcal{F}(x)\equiv\sqrt{x}\,\left[\frac{1}{1-x}+1-(1+x)\,\log\left(\frac{1+ x}{x}\right)\right]. 
\tag{15}\] \begin{table} \begin{tabular}{|c|c|} \hline Interaction & Vertex \\ \hline \hline \(\phi\,\varphi\,\varphi\) & \(\frac{\xi}{M_{P}}\left(p_{i}\times p_{j}+2\,m_{\varphi}^{2}\right)\) \\ \hline \(\phi\,\Psi\,\Psi\) & \(\frac{\xi}{2\,M_{P}}\left(4\,m_{\Psi}-3\,\not{p}_{i}\right)\) \\ \hline \(\phi\,V^{\mu}\,V^{\nu}\) & \(\frac{-2\,\xi}{M_{P}}\,m_{V}^{2}\,\eta^{\mu\nu}\) \\ \hline \end{tabular} \end{table} Table 1: Vertices for the inflaton-matter interactions, for scalars (\(\varphi\)), fermions (\(\Psi\)), and vectors (\(V\)). For \(x\gg 1\,,\mathcal{F}\simeq-3/\left(2\,\sqrt{x}\right)\), and Eq. (101) becomes \[\epsilon_{\Delta L}\simeq-\frac{3}{16\,\pi}\,\frac{1}{\left(y_{N}^{\dagger}\,y_{ N}\right)_{11}}\left[\mathrm{Im}\left(y_{N}^{\dagger}\,y_{N}\right)_{12}^{2}\frac{m_{ N_{1}}}{m_{N_{2}}}+\mathrm{Im}\left(y_{N}^{\dagger}\,y_{N}\right)_{13}^{2}\frac{m_{ N_{1}}}{m_{N_{3}}}\right]. \tag{103}\] If we consider \(\mathrm{Im}\left(y_{N}^{\dagger}\,y_{N}\right)_{13}^{2}\gg\mathrm{Im}\left(y_ {N}^{\dagger}\,y_{N}\right)_{12}^{2}\) and \(m_{N_{1}}\ll m_{N_{2,3}}\), then \[\epsilon_{\Delta L}\simeq-\frac{3\,\delta_{\mathrm{eff}}}{16\,\pi}\,\frac{ \left|(y_{N})_{13}\right|^{2}m_{N_{1}}}{m_{N_{3}}}\,, \tag{104}\] while the effective CP violating phase is given by \[\delta_{\mathrm{eff}}=\frac{1}{(y_{N})_{13}^{2}}\,\frac{\mathrm{Im}(y_{N}^{ \dagger}\,y_{N})_{13}^{2}}{(y_{N}^{\dagger}\,y_{N})_{11}}\,. \tag{105}\] In order to simultaneously generate the tiny active neutrino mass, one has to impose the seesaw relation \[m_{\nu_{3}}=\frac{\left|(y_{N})_{13}\right|^{2}v^{2}}{m_{N_{3}}}\,, \tag{106}\] that leads to \[\epsilon_{\Delta L}\simeq-\frac{3\,\delta_{\mathrm{eff}}}{16\,\pi}\,\frac{m_{ N_{1}}\,m_{\nu_{3}}}{v^{2}}\,. \tag{107}\] Instead, if \(\mathrm{Im}\left(y_{N}^{\dagger}\,y_{N}\right)_{13}^{2}\ll\mathrm{Im}\left(y_ {N}^{\dagger}\,y_{N}\right)_{12}^{2}\), the CP asymmetry parameter becomes \[\epsilon_{\Delta L}\simeq-\frac{3\,\delta_{\mathrm{eff}}}{16\,\pi}\,\frac{m_{ N_{1}}\,m_{\nu_{2}}}{v^{2}}\,. \tag{108}\] In general, one can then write \[\epsilon_{\Delta L}\simeq-\frac{3\,\delta_{\mathrm{eff}}}{16\,\pi}\,\frac{m_{ N_{1}}\,m_{\nu_{i}}}{v^{2}}\,, \tag{109}\] where \(i=2\,,3\) for normal hierarchy. On a similar note, the CP-asymmetry parameter can be obtained for the inverted hierarchy with \(i=1\,,2\). In either case, we consider \(m_{\nu_{i}}\) to be the heaviest active neutrino mass \(m_{\nu,\mathrm{max}}\) in Eq. (3.23).
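The hierarchical limit used above can be verified numerically in a couple of lines; this is just a sanity check of the loop function \(\mathcal{F}(x)\) against its large-\(x\) behaviour \(-3/(2\sqrt{x})\), with arbitrary test values of \(x\).

```python
import numpy as np

def F(x):
    # Loop function entering the CP asymmetry from right-handed neutrino decay.
    return np.sqrt(x) * (1.0 / (1.0 - x) + 1.0 - (1.0 + x) * np.log((1.0 + x) / x))

for x in (10.0, 100.0, 1.0e4, 1.0e6):
    print(f"x = {x:9.0e}:  F(x) = {F(x):+.6e},  -3/(2 sqrt(x)) = {-1.5 / np.sqrt(x):+.6e}")
```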
2303.06101
An Assessment of the Supremizer and Aggregation Methods of Stabilization for Reduced-Order Models
We explore the features of two methods of stabilization, aggregation and supremizer methods, for reduced-order modeling of parametrized optimal control problems. In both methods, the reduced basis spaces are augmented to guarantee stability. For the aggregation method, the reduced basis approximation spaces for the state and adjoint variables are augmented in such a way that the spaces are identical. For the supremizer method, the reduced basis approximation space for the state-control product space is augmented with the solutions of a supremizer equation. We implement both of these methods for solving several parametrized control problems and assess their performance. Results indicate that the number of reduced basis vectors needed to approximate the solution space to some tolerance with the supremizer method is much larger, possibly double, that for aggregation. There are also some cases where the supremizer method fails to produce a converged solution. We present results to compare the accuracy, efficiency, and computational costs associated with both methods of stabilization which suggest that stabilization by aggregation is a superior stabilization method for control problems.
Kayla D. Davie, Howard C. Elman
2023-03-10T17:36:25Z
http://arxiv.org/abs/2303.06101v2
# An Assessment of the Supremizer and Aggregation Methods of Stabilization for Reduced-Order Models ###### Abstract We explore the features of two methods of stabilization, aggregation and supremizer methods, for reduced-order modeling of parametrized optimal control problems. In both methods, the reduced basis spaces are augmented to guarantee stability. For the aggregation method, the reduced basis approximation spaces for the state and adjoint variables are augmented in such a way that the spaces are identical. For the supremizer method, the reduced basis approximation space for the state-control product space is augmented with the solutions of a supremizer equation. We implement both of these methods for solving several parametrized control problems and assess their performance. Results indicate that the number of reduced basis vectors needed to approximate the solution space to some tolerance with the supremizer method is much larger, possibly double, that for aggregation. There are also some cases where the supremizer method fails to produce a converged solution. We present results to compare the accuracy, efficiency, and computational costs associated with both methods of stabilization which suggest that stabilization by aggregation is a superior stabilization method for control problems. **Keywords.** reduced basis methods, model order reduction, saddle-point problems, PDE-constrained optimization, parametrized control problems, inf-sup condition ## 1 Introduction Consider the parametrized control problem subject to a constraint given by a partial differential equation (PDE) \[\min_{u,f}\ \ \mathcal{J}(u,f;\mu)\qquad\text{subject to}\qquad\mathcal{G}(u,f;\mu)=0. \tag{1.1}\] Here, \(\mathcal{J}\) is the cost functional to be minimized, \(u\) is the state variable, \(f\) is the control variable, \(\mu\) is a vector of parameters, and \(\mathcal{G}\) is the PDE constraint including boundary conditions. The algebraic system that results from the discretization of (1.1) is of large dimension and saddle-point form. Iterative Krylov subspace methods have been proposed for efficiently approximating the solution to deterministic versions of the problem [8, 9, 10, 14, 18]. In the parametrized setting, solutions to these problems are often required for a large number of parameters. This may be because a simulation is done to explore solutions for a large number of parameters, or to obtain statistical properties of solutions. Thus, this computationally expensive problem must be solved repeatedly, a major computational task. Reduced order modeling (ROM) is an efficient way to address this issue [11, 16]. This technique uses a reduced basis (RB) method to replace a problem of high dimension with one of reduced order without jeopardizing accuracy of the approximation. These reduced-order models require less storage and are significantly less computationally taxing than the "full-sized" version of the models. RB methods typically consist of two stages, referred to as an _offline_ stage and an _online_ stage. In the offline stage, the reduced basis is constructed. This stage requires obtaining a set of "snapshots" or "truth" approximations of the parametrized optimal control problem for various choices of parameters, consisting of high-dimensional discretizations of the solution that can be acquired using standard discretization methods such as the finite element method. 
This stage may be computationally expensive, but once the reduced basis is available, the online stage consists of simulation for multiple values of the parameter, which can be done at lower costs, so that the overhead of the offline computations is amortized over these online computations. For constrained problems such as (1.1), standard approaches for constructing reduced-order models require special treatment of the basis to ensure inf-sup stability of the reduced model. Two common ways of augmenting the reduced basis to satisfy the condition are through use of a supremizer [17, 20] and through aggregation [15]. Both approaches have been studied in depth and have been shown to produce reduced bases that are reliable. The goal of this paper is to study both of these methods of stabilization and compare the accuracy, efficiency, and computational costs associated with them. We look at two benchmark problems, parametrized versions of the diffusion control problem and the convection-diffusion control problem. Our main observation is that stabilization by aggregation results in a basis that requires significantly fewer snapshots with less work required to use the basis in the online stage. An outline of this paper is as follows. In Section 2, we will define the operators, matrix system, and spaces associated with the deterministic diffusion control problem, and we review the inf-sup condition for the deterministic problem. In Section 3, we discuss reduced-order modeling and describe the algorithm used to construct a reduced basis. In Section 4, we present the two different methods of stabilization and review how inf-sup stability is established for both RB systems. In Section 5, we present our numerical results for the two benchmark problems. ## 2 The Parametrized Optimal Control Problem Consider the parametrized diffusion control problem, \[\min_{u,f}\ \ \frac{1}{2}\big{\|}u(x,\mu)-\hat{u}(x,\mu)\big{\|}_{L_{2}(\Omega)}^{2}+\beta\big{\|}f(x,\mu)\big{\|}_{L_{2}(\Omega)}^{2} \tag{2.1}\] \[\text{subject to}\ \ -\bigtriangledown\cdot(\sigma(x,\mu)\bigtriangledown u(x,\mu))\ =\ f(x,\mu)\ \ \text{in}\ \ \Omega\times\Gamma,\] (2.2) \[\text{such that}\ \ \ \ \ \ \ \ u(x,\mu)=g(x)\ \ \ \ \text{on}\ \partial\Omega_{D}\times\Gamma,\] (2.3) \[\sigma(x,\mu)\frac{\partial u(x,\mu)}{\partial n}=0\ \ \ \ \text{on}\ \partial\Omega_{N}\times\Gamma. \tag{2.4}\] Here, \(\Omega\subset\mathbb{R}^{d}\) is the spatial domain, \(\mu\) is a vector of parameters, and \(\sigma(x,\mu)\) is a parameter-dependent diffusion coefficient. For each \(\mu\), \(\hat{u}(x,\mu)\) is a desired state and we seek a state, \(u(x,\mu)\), as close to \(\hat{u}(x,\mu)\) as possible (in the \(L_{2}\)-norm sense), and the control, \(f(x,\mu)\), can vary to achieve this. The regularization term (second term in the cost functional) is added to make the problem well-posed [9, 18]. This term involves a regularization parameter, \(\beta\), based on Tikhonov regularization theory [22], and is typically set to be \(\beta\approx 10^{-2}\). The solution variables are the state and control variables, \(u(x,\mu)\) and \(f(x,\mu)\) respectively, where \(\hat{u}(x,\mu)\) and \(g(x)\) are given. We will consider a two-dimensional spatial domain \(\Omega\) divided into \(N_{D}\) horizontal strips or subdomains with the \(k\)th subdomain denoted \(\Omega_{k}\).
The diffusion coefficient is taken to be piecewise constant on each subdomain \(\sigma(x,\mu)|_{\Omega_{k}}=\mu_{k},k=1:N_{D}\), giving a parameter space of dimension \(N_{D}\) depending on the \(N_{D}\)-dimensional parameter vector \(\mu=[\mu_{1},...,\mu_{N_{D}}]^{T}\). ### Definition of Operators and Spaces In this section, we introduce the operators and spaces for the parameterized diffusion control problem with fixed \(\mu\) and present the structure for solving a problem of this type at the continuous level. The goal is to find state and control solutions, \(u(x,\mu)\) and \(f(x,\mu)\) respectively, for some desired state \(\hat{u}(x,\mu)\) and Dirichlet boundary data \(g(x)\). The state space, \(U\), is a Hilbert space for state function \(u(x,\mu)\) equipped with inner product and induced norm \[(u_{1},u_{2})_{U}=\int_{\Omega}\bigtriangledown u_{1}(x,\mu)\cdot\bigtriangledown u _{2}(x,\mu)dx,\ \ ||u||=(u,u)^{1/2}_{U}, \tag{2.5}\] the \(H^{1}\) seminorm. The solution space for \(u(x,\mu)\), \(H^{1}_{E}\), has the Dirichlet condition built into its definition, and functions in the test space \(H^{1}_{E_{0}}\) are zero on the Dirichlet part of boundary \(\Omega_{D}\). The control space, \(F\), for control function \(f(x,\mu)\), is another Hilbert space equipped with inner product and induced norm \[(f_{1},f_{2})_{F}=\int_{\Omega}f_{1}(x,\mu)f_{2}(x,\mu)dx,\ \ ||f||=(f,f)^{1/2}_{F},\] i.e. \(F=L_{2}(\Omega)\). A standard approach to solving constrained optimization problems is to apply first-order conditions for stationarity to the Lagrangian functional \[\text{L:=}\frac{1}{2}\int_{\Omega}(u(x,\mu)-\hat{u}(x,\mu))^{2}dx+\beta\int_{ \Omega}f(x,\mu)^{2}dx+\lambda\Big{(}\int_{\Omega}\sigma(x,\mu)\bigtriangledown u (x,\mu)\cdot\bigtriangledown v(x,\mu)dx-\int_{\Omega}f(x,\mu)v(x,\mu)dx\Big{)}. \tag{2.6}\] This gives rise to an adjoint/Lagrange multiplier function, \(\lambda(x,\mu)\), and the adjoint/Lagrange multiplier space, \(Q\). The Lagrange multiplier satisfies [8] \[-\bigtriangledown^{2}\lambda(x,\mu)=-u(x,\mu)+\hat{u}(x,\mu),\] with homogeneous Dirichlet boundary conditions, so that \(\lambda(x,\mu)\in H^{1}_{E_{0}}\) as well. Thus, the Hilbert space \(Q\) is equipped with inner product and induced norm (2.5) and \(Q\) and \(U\) are equivalent. It will be convenient to refer to certain product spaces for the purpose of defining operators and establishing stability. These include the product between control and state spaces, \(X=F\times U\), with elements \(\bar{x}=(f(x,\mu),u(x,\mu))\in X\), and the product spaces \(U\times Q\), \(F\times Q\), and \(X\times Q\). For given \(\mu\), let \(a(\cdot,\cdot;\mu):U\times Q\to\mathbb{R}\) be the linear elliptic operator \[a(z,q;\mu)=\int_{\Omega}\sigma(x,\mu)\bigtriangledown z(x,\mu)\cdot\bigtriangledown q (x,\mu)dx.\] We assume that the operator \(a\) is coercive, i.e., there exists \(\alpha_{0}>0\) such that \(\alpha(\mu)=inf_{z\in U}\frac{a(z,z;\mu)}{||z||_{U}^{2}}\geq\alpha_{0}\) for all \(\mu\). The action of the control \(f(x,\mu)\) is represented by the operator \(c(\cdot,\cdot;\mu):F\times Q\to\mathbb{R}\), \[c(f,q;\mu)=\int_{\Omega}f(x,\mu)q(x,\mu)dx.\] The weak formulation of (2.1)-(2.4) is to find the minimizers \(u(\cdot,\mu)\in H^{1}_{E}(\Omega)\), \(f(\cdot,\mu)\in F\) as in (2.1) satisfying (2.3)-(2.4) subject to the weak form of the constraint \[\int_{\Omega}\sigma(x,\mu)\bigtriangledown u(x,\mu)\cdot\bigtriangledown v(x,\mu)dx=\int_{\Omega}f(x,\mu)v(x,\mu)dx\ \ \forall v\in H^{1}_{E_{0}}(\Omega). 
\tag{2.7}\] First-order stationarity of the Lagrangian (2.6) results in a saddle-point system. It is well known that to guarantee the existence, uniqueness and stability of solutions, saddle-point systems of this form must satisfy an inf-sup condition. Specifically, for the bilinear form \(\mathrm{B}(\cdot,\cdot;\mu):X\times Q\to\mathbb{R}\), given by \[\mathrm{B}(\bar{w},q;\mu)=a(z,q;\mu)-c(v,q;\mu)\] where \(\bar{w}=\begin{bmatrix}v\\ z\end{bmatrix}\in X=F\times U\), it must hold that there exists \(\beta_{0}>0\) such that \[\inf_{q\in Q}\sup_{\bar{w}\in X}\ \frac{\mathrm{B}(\bar{w},q;\mu)}{||\bar{w}||_{X}||q||_{Q}}\geq\beta_{0}.\] It is shown in [15] that this condition holds for the control problem (2.1)-(2.4). ### Discrete Forms and Matrix Systems For the discrete version of the parametrized control problem, let \(S^{h}_{0}\) be a finite-dimensional subspace of \(H^{1}_{E_{0}}\), and let \(S^{h}_{E}\) augment \(S^{h}_{0}\) with a finite set of basis functions used to impose Dirichlet boundary conditions. The Galerkin finite element method is chosen for discretization of the optimal control problem. The weak formulation of the constraint is then to find \(u_{h}(x,\mu)\in S_{E}^{h}\) such that \[\int_{\Omega}\sigma(x,\mu)\bigtriangledown u_{h}(x,\mu)\cdot\bigtriangledown v_{h}(x,\mu)dx=\int_{\Omega}f_{h}(x,\mu)v_{h}(x,\mu)dx\ \ \forall v_{h}\in S_{0}^{h}\subset H_{E_{0}}^{1}(\Omega). \tag{2.8}\] Here, \(S_{E}^{h}\) is the solution space and \(S_{0}^{h}\) is a vector space containing test functions. Let the basis for the test functions be denoted \(\{\phi_{1},...,\phi_{n}\}\), and assume this basis is extended by \(\phi_{n+1},...,\phi_{n+\partial n}\), which ensures that the Dirichlet boundary conditions hold at certain points on \(\partial\Omega_{D}\); see [3, pp. 195ff.] for additional discussion of this. The discrete analog of the weak version of the minimization problem (2.1)-(2.4) is as follows: \[\min_{u_{h},f_{h}}\ \frac{1}{2}\left\|u_{h}(x,\mu)-\hat{u}_{h}(x,\mu)\right\|_{2}^{2}+\beta\left\|f_{h}(x,\mu)\right\|_{2}^{2} \tag{2.9}\] \[\text{subject to}\ \int_{\Omega}\sigma(x,\mu)\bigtriangledown u_{h}(x,\mu)\cdot\bigtriangledown v_{h}(x,\mu)dx=\int_{\Omega}f_{h}(x,\mu)v_{h}(x,\mu)dx \tag{2.10}\] subject to boundary conditions for \(u_{h}\) as in (2.3)-(2.4). For discretization, we use equal-order \(Q_{1}\) (piecewise bilinear) finite elements for all three (state, control and Lagrange multiplier) spaces, which is shown to be div-stable in [15]. In the state space, the finite element approximation of \(u(x,\mu)\), \(u_{h}(x,\mu)\), is represented in terms of basis functions \(u_{h}(x,\mu)=\sum_{j=1}^{n}\mathbf{u}_{\mu,j}\phi_{j}+\sum_{j=n+1}^{n+\partial n}\mathbf{u}_{\mu,j}\phi_{j}\) such that \(u_{h}(x,\mu)\) is determined uniquely by the coefficient vector \(\mathbf{u}(\mu)=\mathbf{u}_{\mu}=(\mathbf{u}_{\mu,1},...,\mathbf{u}_{\mu,n+\partial n})^{T}\). The coefficient vector \(\mathbf{f}(\mu)=\mathbf{f}_{\mu}=(\mathbf{f}_{\mu,1},...,\mathbf{f}_{\mu,n})^{T}\) associated with the approximation \(f_{h}(x,\mu)\) of \(f(x,\mu)\) is defined similarly.
The cost functional can be rewritten in matrix notation as \[\min_{\mathbf{u}_{\mu},\mathbf{f}_{\mu}}\ \frac{1}{2}\mathbf{u}_{\mu}^{T} \mathcal{M}\mathbf{u}_{\mu}-\mathbf{u}_{\mu}^{T}\mathbf{b}_{\mu}+\beta\mathbf{ f}_{\mu}^{T}\mathcal{M}\mathbf{f}_{\mu}\] where \(\mathbf{f}(\mu)=\mathbf{f}_{\mu}=(\mathbf{f}_{\mu,1},...\mathbf{f}_{\mu,n})^{T}\), \(\mathbf{b}(\mu)=\mathbf{b}_{\mu}=\{\int\hat{u}(x,\mu)\phi_{i}\}_{i=1,...,n}\), and the mass matrix, \(\mathcal{M}\), is defined as \(\mathcal{M}=\{\int\phi_{i}\phi_{j}\}_{i,j=1,...,n}.\) The constraint can be expressed as \[\mathcal{K}(\mu)\mathbf{u}_{\mu}=\mathcal{M}\mathbf{f}_{\mu}+ \mathbf{d}_{\mu} \tag{2.11}\] such that \(\mathcal{K}(\mu)=[k_{ij}^{\mu}]\) and \([k_{ij}^{\mu}]=\int_{\Omega}\sigma(x,\mu)\bigtriangledown\phi_{j}\cdot \bigtriangledown\phi_{i}=\sum_{q}\mu_{q}\int_{\Omega_{q}}\bigtriangledown \phi_{j}\cdot\bigtriangledown\phi_{i}\). Note that \(\mathbf{d}(\mu)=\mathbf{d}_{\mu}\) contains terms from the boundary values of the discretized \(u(x,\mu)\), such that \(\mathbf{d}_{\mu}=\{\mathbf{d}_{\mu,i}\}_{i=1,...,n}\) with \(\mathbf{d}_{\mu,i}=\int_{\Omega}g\phi_{i}\ d\Omega-\sum_{j=n+1}^{n+n_{\partial }}\mathbf{u}_{\mu,j}\int_{\Omega}\sigma(x,\mu)\bigtriangledown\phi_{i}\cdot \bigtriangledown\phi_{j}\ d\Omega\). The discrete Lagrangian is then \[\text{L}:=\frac{1}{2}\mathbf{u}_{\mu}^{T}\mathcal{M}\mathbf{u}_{ \mu}-\mathbf{u}_{\mu}^{T}\mathbf{b}_{\mu}+\beta\mathbf{f}_{\mu}^{T}\mathcal{M }\mathbf{f}_{\mu}+\mathbf{\lambda}_{\mu}^{T}(\mathcal{K}(\mu)\mathbf{u}_{\mu}- \mathcal{M}\mathbf{f}_{\mu}-\mathbf{d}_{\mu})\] where \(\mathbf{\lambda}_{\mu}\) is a vector of Lagrange multipliers associated with finite element approximation of \(\lambda(x,\mu)\), \(\lambda_{h}(x,\mu)\). Applying first-order conditions for stationarity gives a set of three coupled equations, a block system \[\begin{bmatrix}2\beta\mathcal{M}&0&-\mathcal{M}\\ 0&\mathcal{M}&\mathcal{K}(\mu)^{T}\\ -\mathcal{M}&\mathcal{K}(\mu)&0\end{bmatrix}\begin{bmatrix}\mathbf{f}_{\mu}\\ \mathbf{u}_{\mu}\\ \mathbf{\lambda}_{\mu}\end{bmatrix}=\begin{bmatrix}0\\ \mathbf{b}_{\mu}\\ \mathbf{d}_{\mu}\end{bmatrix}. \tag{2.12}\] The block \(3\times 3\) coefficient matrix \(\mathcal{G}(\mu)\) can also be represented in compact form, \[\mathcal{G}(\mu)=\begin{bmatrix}\mathcal{A}&\mathcal{B}(\mu)^{T}\\ \mathcal{B}(\mu)&0\end{bmatrix}, \tag{2.13}\] where \(\mathcal{B}(\mu)=[-\mathcal{M},\mathcal{K}(\mu)]\), and the matrix system can be denoted \(\mathcal{G}(\mu)\mathbf{v}(\mu)=\mathbf{b}(\mu)\). The discrete spaces are defined as follows. The state space, \(U_{h}\), has inner product \(\langle u_{h}(x,\mu),v_{h}(x,\mu)\rangle_{U_{h}}=\langle\mathcal{K}\mathbf{u} _{\mu},\mathbf{v}_{\mu}\rangle=\mathbf{v}_{\mu}^{T}\mathcal{K}\mathbf{u}_{\mu}\) and induced norm \(||u_{h}||_{U_{h}}=\langle\mathcal{K}\mathbf{u}_{\mu},\mathbf{u}_{\mu}\rangle^ {1/2}\) where \(\mathcal{K}\) is defined like \(\mathcal{K}(\mu)\) above with \(\sigma\equiv 1\). 
The control space, \(F_{h}\), is equipped with inner product \(\langle f_{h}(x,\mu),v_{h}(x,\mu)\rangle_{F_{h}}=\langle\mathcal{M}\mathbf{f} _{\mu},\mathbf{v}_{\mu}\rangle=\mathbf{v}_{\mu}^{T}\mathcal{M}\mathbf{f}_{\mu}\) and induced norm \(||f_{h}(x,\mu)||_{F_{h}}=\langle\mathcal{M}\mathbf{f}_{\mu},\mathbf{f}_{\mu} \rangle^{1/2}.\) The Lagrange multiplier space \(Q_{h}\) is equipped with inner product \(\langle q_{h}(x,\mu),p_{h}(x,\mu)\rangle_{Q_{h}}=\langle\mathcal{K}\mathbf{q} _{\mu},\mathbf{p}_{\mu}\rangle=\mathbf{p}_{\mu}^{T}\mathcal{K}\mathbf{q}_{\mu}\) and induced norm \(||\lambda_{h}(x,\mu)||_{Q_{h}}=\langle\mathcal{K}\mathbf{\lambda}_{\mu}, \mathbf{\lambda}_{\mu}\rangle^{1/2}.\) The state-control space, \(X_{h}=F_{h}\times U_{h}\), has elements \(\bar{x}_{h}\) with associated coefficient vector \(\bar{\mathbf{x}}(\mu)=\begin{bmatrix}\mathbf{f}_{\mu}\\ \mathbf{u}_{\mu}\end{bmatrix}\). Because the matrix system (2.12) is of saddle point form (2.13), satisfaction of the inf-sup condition is required to guarantee stability. The discrete inf-sup condition requires that there exists \(\beta_{0}>0\) such that \[\min_{\mathbf{q}\in Q_{h}}\ \max_{\bar{\mathbf{w}}=\begin{bmatrix}\mathbf{v} \\ \mathbf{z}\end{bmatrix}\in X_{h}}\frac{\langle\mathcal{K}(\mu)\mathbf{z}, \mathbf{q}\rangle-\langle\mathcal{M}\mathbf{v},\mathbf{q}\rangle}{||\bar{ \mathbf{w}}||_{X}||\mathbf{q}||_{Q}}\geq\beta_{0},\] as proven in [15]. Recall also [8] that this stability bound is equivalent to the bound on the Rayleigh quotient \[\frac{\langle\mathcal{B}(\mu)\mathcal{A}^{-1}\mathcal{B}(\mu)^{T}\mathbf{q}, \mathbf{q}\rangle^{1/2}}{\langle\mathcal{K}\mathbf{q},\mathbf{q}\rangle^{1/2 }}\geq\beta_{0}. \tag{2.14}\] ## 3 Reduced-Order Modeling ROM uses a relatively small set of full-order discrete solutions \((\mathbf{f}_{\mu},\mathbf{u}_{\mu})\), called "snapshots," for various values of the parameter(s) \(\mu\), to approximate the solution to the problem for other values of the parameter by projecting on the space spanned by a subset of these snapshots, where it is assumed that the solution manifold can be approximated well by a subset of snapshots [12, 15, 19]. Computations are separated into an "offline" step in which the reduced basis is constructed, and an "online" step in which simulations are done. We outline this process here, with additional details given in Section 4. In the offline step, full solutions \((\mathbf{f}_{\mu},\mathbf{u}_{\mu})\) are computed for multiple values of the parameter vector \(\mu\), and these solutions are used to generate the reduced basis approximation spaces using a greedy algorithm [2, 4]. Let \(T\) be a set of \(N_{max}\)_training parameters_, where we assume that \(N_{max}\) is large enough so that the span of the solution set \(\left\{\begin{bmatrix}\mathbf{f}_{\mu}\\ \mathbf{u}_{\mu}\end{bmatrix}\middle|\mu\in T\right\}\) represents a good approximation of the entire solution space. Starting with a single parameter from \(T\) and corresponding snapshot, an initial reduced basis approximation space consists of the single snapshot. At each step of the greedy algorithm for building the reduced basis, an approximation to the full solution, \(\mathbf{v}_{r}(\mu)\approx\mathbf{v}(\mu)\), from the reduced space is computed for each \(\mu\) in the training set, and for the parameter \(\mu\) for which an error indicator \(\eta(\mathbf{v}_{r}(\mu))\) is maximal, the snapshot, full solution \(\mathbf{v}(\mu)\), is added to the reduced basis. 
This continues until the values of the error indicator for all approximations from the reduced space are less than some prescribed tolerance for all parameters in \(T\). The resulting reduced basis approximation space is spanned by \(N\) snapshots corresponding to some parameters \(\mu^{(n)},1\leq n\leq N\) where \(N\leq N_{max}\). A generic statement of the greedy algorithm, for finding a subspace \(V_{i}\) of \(V_{h}\equiv F_{h}\times U_{h}\times Q_{h}\), is given in Algorithm 1. At each stage, approximate solutions \(\mathbf{v}_{r}(\mu)\) in the (current) reduced space \(V_{i-1}\) are computed for all parameters \(\mu\) in \(T\), together with the error indicator \(\eta(\mathbf{v}_{r}(\mu))\). For the parameter with the largest value of \(\eta(\mathbf{v}_{r}(\mu))\), \(\mu^{(i)}\), the full-system solution \(\mathbf{v}(\mu^{(i)})\) is computed and the reduced basis is augmented with this solution. ``` Randomly sample \(N_{max}\) parameters \(\mu\). Let \(T=\{\mu^{(k)}\}_{1}^{N_{max}}\). Choose \(\mu^{(1)}\). Solve the full system for \(\mathbf{v}(\mu^{1})\). Set \(V_{1}=\{\mathbf{v}(\mu^{1})\}\). Set \(N=1\). Set \(\eta^{*}=\infty\). while\(\eta^{*}>\) tolerance do for\(i=1:N_{max}\)do Solve reduced system for \(\tilde{\mathbf{v}}_{r}(\mu^{(i)})\), compute \(\mathbf{v}_{r}(\mu^{(i)})=\mathcal{Q}\tilde{\mathbf{v}}_{r}(\mu^{(i)})\). Compute \(\eta_{i}\equiv\eta(\mathbf{v}_{r}(\mu^{(i)}))\). endfor Find \(\eta^{*}=\max_{i}\eta_{i},\mu^{*}=\arg\max_{i}\eta(\mathbf{v}_{r}(\mu^{(i)}))\). if\(\eta^{*}>\) tolerance then Set \(N=N+1\). Solve the full system for \(\mathbf{v}(\mu^{*})\). Update \(V_{N}=\text{span}(V_{N-1}\bigcup\{\mathbf{v}(\mu^{*})\})\). endif endwhile ``` **Algorithm 1** Greedy sampling algorithm for constructing the reduced basis space \(V_{N}\) We comment on several things that need further discussion. First, we will consider two ways to specify the reduced system required for Algorithm 1: \[\text{Galerkin projection: }[\mathcal{Q}^{T}\mathcal{G}(\mu) \mathcal{Q}]\tilde{\mathbf{v}}_{r}(\mu)=\mathcal{Q}^{T}\mathbf{b}(\mu) \tag{3.1}\] \[\text{Petrov-Galerkin projection: }[(\mathcal{G}(\mu) \mathcal{Q})^{T}(\mathcal{G}(\mu)\mathcal{Q})]\tilde{\mathbf{v}}_{r}(\mu)=( \mathcal{G}(\mu)\mathcal{Q})^{T}\mathbf{b}(\mu) \tag{3.2}\] We will discuss the details and impact of these choices in Section 5. Second, as given, Algorithm 1 does not address the issue of inf-sup stability, which is not automatically satisfied using only snapshots. We will elaborate on this in Section 4. The basis for the reduced space will ultimately be represented by the columns of a matrix \(\mathcal{Q}\) such that \(\mathcal{Q}\tilde{\mathbf{v}}_{r}=\mathbf{v}_{r}\approx\mathbf{v}\). 
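As a point of reference, the loop in Algorithm 1 together with the Galerkin projection (3.1) can be sketched in a few lines; the version below works with dense NumPy matrices and, for readability, collects all snapshots into a single orthonormal basis rather than the block-diagonal structure discussed next. The helper names (`assemble_G`, `rhs`) and the dense-matrix setting are illustrative assumptions, not the implementation used for the experiments in Section 5.

```python
import numpy as np

def assemble_G(mu, M, K_blocks, beta):
    """Block KKT matrix (2.12), with affine dependence K(mu) = sum_q mu_q K_q."""
    K = sum(m * Kq for m, Kq in zip(mu, K_blocks))
    Z = np.zeros_like(M)
    return np.block([[2 * beta * M, Z, -M],
                     [Z,            M,  K.T],
                     [-M,           K,  Z]])

def greedy_rb(train_set, M, K_blocks, beta, rhs, tol=1e-7):
    """Greedy sampling (Algorithm 1) with the Galerkin reduced solve (3.1)."""
    v0 = np.linalg.solve(assemble_G(train_set[0], M, K_blocks, beta), rhs(train_set[0]))
    Q = v0[:, None] / np.linalg.norm(v0)
    while True:
        eta = []
        for mu in train_set:
            G, b = assemble_G(mu, M, K_blocks, beta), rhs(mu)
            v_r = Q @ np.linalg.solve(Q.T @ G @ Q, Q.T @ b)   # Galerkin projection
            eta.append(np.linalg.norm(G @ v_r - b) / np.linalg.norm(b))
        i_star = int(np.argmax(eta))
        if eta[i_star] <= tol:
            return Q
        v_new = np.linalg.solve(assemble_G(train_set[i_star], M, K_blocks, beta),
                                rhs(train_set[i_star]))
        Q, _ = np.linalg.qr(np.column_stack([Q, v_new]))      # Gram-Schmidt via QR
```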
\(\mathcal{Q}\) has block-diagonal structure consisting of components for the state, control and adjoint variables: \[\mathcal{Q}=\begin{bmatrix}\mathcal{Q}_{f}&&\\ &\mathcal{Q}_{u}&\\ &&\mathcal{Q}_{\lambda}\end{bmatrix}\quad\text{or}\quad\mathcal{Q}=\begin{bmatrix} \mathcal{Q}_{\tilde{x}}&\\ &&\mathcal{Q}_{\lambda}\end{bmatrix} \tag{3.3}\] The Galerkin formulation then results in one of the following scenarios: \[\begin{bmatrix}\mathcal{Q}_{f}^{T}&&\\ &\mathcal{Q}_{u}^{T}&\\ &&\mathcal{Q}_{\lambda}^{T}\end{bmatrix}\begin{bmatrix}2\beta\mathcal{M}&0&- \mathcal{M}\\ 0&\mathcal{M}&\mathcal{K}(\mu)^{T}\\ -\mathcal{M}&\mathcal{K}(\mu)&0\end{bmatrix}\begin{bmatrix}\mathcal{Q}_{f}&&\\ &\mathcal{Q}_{u}&\\ &&\mathcal{Q}_{\lambda}\end{bmatrix}=\begin{bmatrix}\mathcal{Q}_{f}^{T}(2 \beta\mathcal{M})\mathcal{Q}_{f}&0&-\mathcal{Q}_{f}^{T}\mathcal{M}\mathcal{Q}_ {\lambda}\\ 0&\mathcal{Q}_{u}^{T}\mathcal{M}\mathcal{Q}_{u}&\mathcal{Q}_{u}^{T}\mathcal{K}( \mu)^{T}\mathcal{Q}_{\lambda}\\ -\mathcal{Q}_{\lambda}^{T}\mathcal{M}\mathcal{Q}_{f}&\mathcal{Q}_{\lambda}^{T} \mathcal{K}(\mu)\mathcal{Q}_{u}&0\end{bmatrix} \tag{3.4}\] \[\begin{bmatrix}\mathcal{Q}_{\bar{x}}^{T}&&\\ &\mathcal{Q}_{\lambda}^{T}\end{bmatrix}\begin{bmatrix}\mathcal{A}&\mathcal{B}( \mu)^{T}\\ \mathcal{B}(\mu)&0\end{bmatrix}\begin{bmatrix}\mathcal{Q}_{\bar{x}}&&\\ &\mathcal{Q}_{\lambda}\end{bmatrix}=\begin{bmatrix}\mathcal{Q}_{\bar{x}}^{T} \mathcal{A}\mathcal{Q}_{\bar{x}}&\mathcal{Q}_{\bar{x}}^{T}\mathcal{B}(\mu)^{T }\mathcal{Q}_{\lambda}\\ \mathcal{Q}_{\lambda}^{T}\mathcal{B}(\mu)\mathcal{Q}_{\bar{x}}&0\end{bmatrix} \tag{3.5}\] In the online step, simulations are carried out and the matrix \(\mathcal{Q}\) projects the full problem onto the reduced space using one of the two methods used to specify the reduced system, Galerkin or Petrov-Galerkin projection (3.1)-(3.2). ## 4 Stabilization The discussion above describes the construction of reduced spaces \(X_{N}=F_{N}\times U_{N}\) and \(Q_{N}\), which (so far) give rise to matrices \(\mathcal{Q}_{u}=[\mathbf{u}(\mu^{1}),...,\mathbf{u}(\mu^{N})]\), and \(\mathcal{Q}_{f}\), \(\mathcal{Q}_{\bar{x}}\) and \(\mathcal{Q}_{\lambda}\) defined analogously. A question remains about inf-sup stability of the saddle point systems (3.5) and (3.4), i.e., whether \[\inf_{q\in Q_{N}}\sup_{\bar{w}\in X_{N}}\frac{\mathrm{B}(\bar{w},q;\mu)}{|| \bar{w}||_{X}||q||_{Q}}=\min_{\mathbf{q}}\max_{\bar{\mathbf{w}}}\frac{\langle \mathcal{B}(\mu)\bar{\mathbf{w}},\mathbf{q}\rangle}{||\bar{\mathbf{w}}||_{X} ||\mathbf{q}||_{Q}}\geq\beta_{0} \tag{4.1}\] Note that \(X_{N}=\)span\(\{\bar{x}(\mu^{(k)})\}\) where, for homogeneous Dirichlet boundary conditions, \(\bar{\mathbf{x}}(\mu^{(k)})=\begin{bmatrix}\mathbf{f}_{\mu^{(k)}}\\ \mathbf{u}_{\mu^{(k)}}\end{bmatrix}\) satisfies the weak constraint (2.11) with \(\mathbf{d}_{\mu}=0\). Consequently, the numerator in (4.1) is \(0\) for all \(\bar{w}\in X_{N}\) and the reduced problem is not inf-sup stable. We now discuss two techniques designed to address this by enriching \(X_{N}\) and/or \(Q_{N}\). ### Stabilization by Supremizer Let \(\mathcal{Y}=[\bar{\mathbf{x}}(\mu^{(1)}),...,\bar{\mathbf{x}}(\mu^{(N)})]\) and \(\mathcal{L}=[\boldsymbol{\lambda}(\mu^{(1)}),...,\boldsymbol{\lambda}(\mu^{(N)})]\), where \(\{\bar{\mathbf{x}}(\mu^{(j)})\}\) and \(\{\boldsymbol{\lambda}(\mu^{(j)})\}\) are the basis vectors in \(V_{N}=X_{N}\times Q_{N}\) constructed by Algorithm 1. Following [1], we describe two ways to construct supremizers to enrich the space determined by \(\mathcal{Y}\). 
The first of these, called an "exact supremizer" in [1], produces what is needed but not in a practical way. Let \(\mu\) be a parameter arising in an online simulation, and let \(\mathbf{r}_{j}\) be the solution of \(\mathcal{A}\mathbf{r}_{j}=\mathcal{B}(\mu)^{T}\boldsymbol{\lambda}_{j}\), \(j=1,...,N\). (It can be shown that \(\mathbf{r}_{j}=\arg\max_{\bar{\mathbf{w}}}\frac{\langle\mathcal{B}(\mu)\bar{ \mathbf{w}},\boldsymbol{\lambda}_{j}\rangle}{||\bar{\mathbf{w}}||_{X}}\), whence the name _supremizer_.) Let \(\mathcal{R}=[\mathbf{r}_{1},...,\mathbf{r}_{N}]\), and let the enriched reduced state space be defined as the span of \([\mathcal{Y},\mathcal{R}]\). The operators satisfy \(\mathcal{A}\mathcal{R}=\mathcal{B}(\mu)^{T}\mathcal{L}\), and any member of the enriched space has the form \(\bar{\mathbf{w}}=\mathcal{Y}\xi+\mathcal{R}\omega\). Therefore, on the enriched reduced spaces, we seek a lower bound on \[\min_{\mathbf{q}\in\mathrm{range}(\mathcal{L})}\max_{\bar{\mathbf{w}}\in \mathrm{range}[\mathcal{Y},\mathcal{R}]}\frac{\langle\mathcal{B}(\mu)\bar{ \mathbf{w}},\mathbf{q}\rangle}{||\bar{\mathbf{w}}||_{X}||\mathbf{q}||_{Q}}= \min_{\alpha}\max_{\xi,\omega}\frac{\langle\mathcal{B}(\mu)[\mathcal{Y}\xi+ \mathcal{R}\omega],\mathcal{L}\alpha\rangle}{\langle\mathcal{A}[\mathcal{Y}\xi+ \mathcal{R}\omega],[\mathcal{Y}\xi+\mathcal{R}\omega]\rangle^{1/2}\langle \mathcal{L}\alpha,\mathcal{L}\alpha\rangle^{1/2}}\] where \(\mathbf{q}=\mathcal{L}\alpha\). But \[\max_{\xi,\omega}\frac{\langle\mathcal{B}(\mu)[\mathcal{Y}\xi+ \mathcal{R}\omega],\mathcal{L}\alpha\rangle}{\langle\mathcal{A}[\mathcal{Y}\xi+ \mathcal{R}\omega],[\mathcal{Y}\xi+\mathcal{R}\omega]\rangle^{\frac{1}{2}}} \geq\max_{\omega}\frac{\langle\mathcal{B}(\mu)\mathcal{R} \omega,\mathcal{L}\alpha\rangle}{\langle\mathcal{A}\mathcal{R}\omega,\mathcal{R }\omega\rangle^{1/2}}\quad\quad(\xi=0)\] \[=\max_{\omega}\frac{\langle\omega,\mathcal{R}^{T}\mathcal{B}(\mu )^{T}\mathcal{L}\alpha\rangle}{\langle(\mathcal{R}^{T}\mathcal{A}\mathcal{R}) ^{1/2}\omega,(\mathcal{R}^{T}\mathcal{A}\mathcal{R})^{1/2}\omega\rangle^{1/2}}\] \[=\max_{\theta}\frac{\langle\theta,(\mathcal{R}^{T}\mathcal{A} \mathcal{R})^{-1/2}\mathcal{R}^{T}\mathcal{B}(\mu)^{T}\mathcal{L}\alpha\rangle }{||\theta||}\qquad\text{where}\quad\theta=(\mathcal{R}^{T}\mathcal{A} \mathcal{R})^{-1/2}\omega\] \[=||(\mathcal{R}^{T}\mathcal{A}\mathcal{R})^{-1/2}R^{T}\mathcal{B }(\mu)^{T}\mathcal{L}\alpha||=\langle(\mathcal{R}^{T}\mathcal{A}\mathcal{R}) \alpha,\alpha\rangle^{1/2},\] where the last inequality follows from the fact that \(\mathcal{B}(\mu)^{T}\mathcal{L}=\mathcal{A}\mathcal{R}\). 
Thus, \[\max_{\bar{\mathbf{w}}=\mathcal{Y}\xi+\mathcal{R}\omega}\frac{ \langle\mathcal{B}(\mu)\bar{\mathbf{w}},\mathbf{q}\rangle}{||\bar{\mathbf{w}} ||_{X}||\mathbf{q}||_{Q}}\geq\frac{\langle(\mathcal{R}^{T}\mathcal{A}\mathcal{R })\alpha,\alpha\rangle^{1/2}}{\langle\mathcal{K}\mathcal{L}\alpha,\mathcal{L} \alpha\rangle^{1/2}}=\Big{[}\frac{\langle(\mathcal{R}^{T}\mathcal{A}\mathcal{R })\alpha,\alpha\rangle^{1/2}}{\langle\mathcal{A}^{-1}\mathcal{B}(\mu)^{T} \mathcal{L}\alpha,\mathcal{B}(\mu)^{T}\mathcal{L}\alpha\rangle^{1/2}}\Big{]} \Big{[}\frac{\langle\mathcal{A}^{-1}\mathcal{B}(\mu)^{T}\mathcal{L}\alpha, \mathcal{B}(\mu)^{T}\mathcal{L}\alpha\rangle^{1/2}}{\langle\mathcal{K} \mathcal{L}\alpha,\mathcal{L}\alpha\rangle^{1/2}}\Big{]}.\] The first of these factors is \[\frac{\langle(\mathcal{R}^{T}\mathcal{A}\mathcal{R})\alpha,\alpha\rangle^{1/2} }{\langle\mathcal{A}^{-1}\mathcal{B}(\mu)^{T}\mathcal{L}\alpha,\mathcal{B}( \mu)^{T}\mathcal{L}\alpha\rangle^{1/2}}=\frac{\langle(\mathcal{R}^{T}\mathcal{ A}\mathcal{R})\alpha,\alpha\rangle^{1/2}}{\langle(\mathcal{R}^{T}\mathcal{A} \mathcal{R})\alpha,\alpha\rangle^{1/2}}=1.\] From inequality (2.14), the second factor is bounded below by \(\beta_{0}\). Thus, this version of the supremizer produces a div-stable reduced basis. For reduced-basis methods to be practical, the reduced basis should be constructed in the offline step. The method just described does not meet this requirement, as the supremizers depend on the parameter \(\mu\) used in the online simulation and a new enriched reduced basis must be constructed for each new parameter. A practical variant constructs a set of supremizers \(\{\mathbf{r}_{j}\}\) that satisfy \(\mathcal{A}\mathbf{r}_{j}=\mathcal{B}(\mu^{(j)})^{T}\boldsymbol{\lambda}_{j}\), where \(\{\mu^{(j)}\}\) is the set of parameters chosen during the search, Algorithm 1. This computation can be done in the offline step. The resulting quantities satisfy \[\mathcal{A}\mathcal{R}=[\mathcal{B}(\mu^{(1)})^{T}\boldsymbol{\lambda}_{1},...,\mathcal{B}(\mu^{(N)})^{T}\boldsymbol{\lambda}_{N}],\] but there is no longer a relation of the form \(\mathcal{A}\mathcal{R}=B^{T}\mathcal{L}\). Consequently, the argument above establishing inf-sup stability of the reduced spaces is not applicable. This approach is described in [1] as constructing _approximate supremizers_, and it is the method we explore in experiments. Let \(\mathcal{Q}_{sup}\) denote the block-diagonal reduced basis matrix with augmentation by supremizer. \(\mathcal{Q}_{sup}\) has the form in the right side of (3.3). As the basis is constructed, at each step that a snapshot is added, \(\mathcal{Q}_{\lambda}\) is updated with snapshot \(\boldsymbol{\lambda}_{\mu}\) and \(\mathcal{Q}_{\bar{x}}\) is updated with snapshot \(\bar{\mathbf{x}}_{\mu}\) and supremizer \(\mathbf{r}_{\mu}=\mathcal{A}^{-1}\mathcal{B}(\mu)^{T}\boldsymbol{\lambda}_{\mu}\). Thus, the matrix \(\mathcal{Q}_{\lambda}\) from the naive RB spaces is left unaugmented and \(\mathcal{Q}_{\bar{x}}\) is augmented so that its range is span \(\{\bar{\mathbf{x}}_{\mu^{1}},\mathbf{r}_{\mu^{1}},...,\bar{\mathbf{x}}_{\mu^{N }},\mathbf{r}_{\mu^{N}}\}\). For computations, we use a Gram-Schmidt process so that both \(\mathcal{Q}_{\bar{x}}\) and \(\mathcal{Q}_{\lambda}\) are forced to have orthonormal columns. The matrix \(\mathcal{Q}_{\bar{x}}\) has twice as many columns and rows as \(\mathcal{Q}_{\lambda}\), and the entire matrix \(\mathcal{Q}_{sup}\) has \(3N\) columns, where \(N\) is the number of snapshots used in Algorithm 1. 
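In code, the approximate-supremizer enrichment just described amounts to one linear solve per snapshot. The sketch below assumes dense snapshot arrays (columns of \(\mathcal{Y}\) and \(\mathcal{L}\)) and illustrative helper names; it is not the MATLAB/IFISS implementation used later.

```python
import numpy as np

def enrich_with_supremizers(X_snaps, L_snaps, A, B_of_mu, mus):
    """Approximate supremizers: r_j = A^{-1} B(mu_j)^T lambda_j, appended to the
    state-control snapshots; the adjoint block keeps the plain snapshots."""
    R = np.column_stack([np.linalg.solve(A, B_of_mu(mu).T @ lam)
                         for mu, lam in zip(mus, L_snaps.T)])
    Q_x, _   = np.linalg.qr(np.column_stack([X_snaps, R]))   # 2N columns
    Q_lam, _ = np.linalg.qr(L_snaps)                          # N columns
    return Q_x, Q_lam
```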
Stabilizing in this manner has been considered for both the parameterized optimal control problem as well as for stabilizing PDEs themselves. In [5], stabilization by supremizer was employed for solving the Stokes control problem. In [7], this method of stabilization is used for stabilizing the Navier-Stokes equations. ### Stabilization by Aggregation Another method of enriching the spaces to ensure inf-sup stability of the RB approximation spaces is to augment by aggregation [15]. Because equivalence of \(U\) and \(Q\) ensured inf-sup stability at the continuous and discrete levels, both \(U_{N}\) and \(Q_{N}\) are enriched so that they are equivalent and are each updated with both \(\mathbf{u}(\mu)\) and \(\boldsymbol{\lambda}(\mu)\)) at each step the RB spaces are updated. That is, the updated spaces and matrices determined by Algorithm 1 are given by the following rules: * Let \(Z_{N}=\) span \(\{u_{h}(\mu^{(n)}),\lambda_{h}(\mu^{(n)}),n=1,...,N\}\) * state: \(U_{N}=Z_{N}\) such that \(\text{range}(\mathcal{Q}_{u})=\text{range}([\mathbf{u}_{\mu^{(1)}}, \boldsymbol{\lambda}_{\mu^{(1)}},...,\mathbf{u}_{\mu^{(N)}},\boldsymbol{ \lambda}_{\mu^{(N)}}])\) * control: \(F_{N}=\) span \(\{f(\mu^{(n)}),n=1,...,N\}\) such that \(\text{range}(\mathcal{Q}_{f})=\text{range}([\mathbf{f}_{\mu^{(1)}},..., \mathbf{f}_{\mu^{(N)}}])\) * adjoint/Lagrange multiplier: \(Q_{N}=Z_{N}\) such that \(\mathcal{Q}_{\lambda}=\mathcal{Q}_{u}\) Inf-sup stability is established as follows [15]; \[\min_{\mathbf{q}\in Q_{N}}\ \max_{(\mathbf{v},\mathbf{z})\in F_{N} \times U_{N}}\ \frac{\langle\mathcal{K}(\mu)\mathbf{z},\mathbf{q} \rangle-\langle\mathcal{M}\mathbf{v},\mathbf{q}\rangle}{((2\beta\mathcal{M} \mathbf{v},\mathbf{v})+\langle\mathcal{M}\mathbf{z},\mathbf{z}\rangle)^{1/2} ||\mathbf{q}||_{Q}}\geq\min_{\mathbf{q}\in Q_{N}}\ \max_{(\mathbf{0},\mathbf{z})\in F_{N} \times U_{N}}\ \frac{\langle\mathcal{K}(\mu)\mathbf{z},\mathbf{q}\rangle}{ \langle\mathcal{M}\mathbf{z},\mathbf{z}\rangle^{1/2}||\mathbf{q}||_{Q}}\\ \geq\min_{\mathbf{q}\in Q_{N}}\ \frac{\langle\mathcal{K}(\mu) \mathbf{q},\mathbf{q}\rangle}{\langle\mathcal{M}\mathbf{q},\mathbf{q} \rangle^{1/2}||\mathbf{q}||_{Q}}\geq\min_{\mathbf{q}}\ \ c_{\Omega}\ \frac{\langle\mathcal{K}(\mu)\mathbf{q},\mathbf{q}\rangle}{||\mathbf{q}||_{Q} ^{2}}\geq c_{\Omega}\ \tilde{\alpha}_{0},\] where \(c_{\Omega}\), \(\tilde{\alpha}_{0}\) come from the Poincare inequality and coercivity of \(a(\cdot,\cdot;\mu)\), respectively. Note that this argument depends on the assumption that \(Q_{N}=U_{N}\). The resulting reduced basis is fully constructed in the offline step. Let \(\mathcal{Q}_{agg}\) denote the block-diagonal reduced basis matrix with augmentation by aggregation, which has the structure shown in the left side of (3.3). The RB matrix \(\mathcal{Q}_{f}\) from the definition of naive RB spaces is left unaugmented. The RB matrices \(\mathcal{Q}_{u}\) and \(\mathcal{Q}_{\lambda}\) are both augmented to ensure \(\mathcal{Q}_{u}=\mathcal{Q}_{\lambda}\). It follows that \(\mathcal{Q}_{u}\) and \(\mathcal{Q}_{\lambda}\) each have twice as many columns as \(\mathcal{Q}_{f}\) and the total number of columns in \(\mathcal{Q}_{agg}\) is \(5N\). For computations, as for the supremizer, we use a Gram-Schmidt process so that \(\mathcal{Q}_{u}\), \(\mathcal{Q}_{f}\) and \(\mathcal{Q}_{\lambda}\) have orthonormal columns. Stabilization by aggregation, also referred to as integration, has been considered strictly for using reduced order modeling to solve parameterized optimal control problems. 
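In code, the aggregated spaces described above are assembled as follows; this is a minimal sketch with dense snapshot arrays and illustrative names, not the MATLAB/IFISS setup used for the experiments in Section 5.

```python
import numpy as np

def enrich_by_aggregation(U_snaps, F_snaps, L_snaps):
    """Aggregation: state and adjoint bases are both built from the combined
    state/adjoint snapshots, so that Q_u = Q_lambda; the control block is unchanged."""
    Z      = np.column_stack([U_snaps, L_snaps])   # span{u(mu_n), lambda(mu_n)}
    Q_u, _ = np.linalg.qr(Z)                       # 2N columns
    Q_f, _ = np.linalg.qr(F_snaps)                 # N columns
    return Q_f, Q_u, Q_u.copy()                    # (Q_f, Q_u, Q_lambda) with Q_lambda = Q_u
```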
In [13, 15], this method is used for solving the parameterized elliptic optimal control problems, specifically parameterized diffusion and convection-diffusion control problems. In [23], aggregation is used along with supremizers for reduced velocity and pressure spaces when solving optimal flow control pipeline problems. ## 5 Numerical Results In this section, we present experimental results with the two stabilization methods. We study two benchmark problems, the parametrized diffusion control problem (2.1)-(2.4) and a parametrized convection-diffusion control problem. For the diffusion problem, the spatial domain is \(\Omega=[0,1]^{2}\), with Dirichlet boundary \(\Omega_{D}:=[0,1]\times\{1\}\) and Neumann boundary \(\Omega_{N}:=\Omega\setminus\Omega_{D}\). The desired or target state is uniformly equal to one everywhere, \(\hat{u}(x,\mu)=1\), and \(g(x)=0\). Thus, the target is inconsistent with the Dirichlet boundary condition. The state \(u(x,\mu)\) cannot match the target on the Dirichlet part of the boundary and the control \(f(x,\mu)\) will require more energy to produce a good approximation of the target state. The domain \(D\) is divided into \(N_{D}\) equal-sized horizontal subdomains, where the stratified domain consists of \(N_{D}\) horizontal strips, as shown in Figure 1 for \(N_{D}=3\). The diffusion coefficient is piecewise constant on each subdomain \(\sigma(x,\mu)|_{\Omega_{k}}=\mu_{k},k=1:N_{D}\), where the parameters \(\mu=[\mu_{1},...,\mu_{N_{D}}]^{T}\) are taken to be independently and uniformly distributed random variables in \(\Gamma:=[0.01,1]^{N_{D}}\). The training set (of size \(N_{max}\)) is used in Algorithm 1 to build snapshots that form the basis. The greedy search is implemented with error indicator \[\eta(\mathbf{v}_{r})=\frac{||\mathcal{G}(\mu)\mathbf{v}_{r}-\mathbf{b}||_{2}}{ ||\mathbf{b}||_{2}}\] (where \(\mathbf{v}_{r}\) refers to any element in \(V_{h}\)) and is carried out until \(\eta(\mathbf{v}_{r}(\mu))\) is less than some desired tolerance for all \(\mu\in T\). The basis is tested in the online stage using a set of parameters not in the training set \(T\). In all experiments, we used \(N_{max}=2000\) randomly chosen training points, regularization parameter \(\beta=10^{-2}\) in (2.1), and spatial discretization consisting of piecewise bilinear finite elements on a uniform grid with \((2^{nc}+1)\times(2^{nc}+1)\) elements. Computations were done on a Macbook Pro with an Intel 2.2 GHz i7 processor and 16 GB RAM, using MATLAB R2022a, or on a Dell Precision 7820 Tower with an Intel Xeon Silver 1102.1 GHz processor and 64 GB RAM, using MATLAB R2018b. Finite element matrices for the full models were generated using IFISS 3.6 [21]. We begin by examining how training, i.e., Algorithm 1, behaves for the two methods of stabilization, using both Galerkin and Petrov-Galerkin formulations of the reduced problem. Figure 2 shows the maximum relative error indicator for the supremizer over the training set \(T\) as \(\mathcal{Q}_{sup}\) is constructed, for \(nc=4\) and \(N_{D}=3\). The tolerance for the greedy algorithm is \(10^{-7}\). Figure 3 shows the analogous results for aggregation. Note that the Petrov-Galerkin formulation (3.2) corresponds to the normal equations associated with the (residual) error indicator \(\eta(\mu)\), so that as the reduced basis grows, the error indicator monotonically decreases. This is not true for Galerkin formulation. 
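The monotone decrease reflects the nesting of the search spaces: for any fixed parameter, the least-squares residual minimized by the Petrov-Galerkin system (3.2) cannot increase when columns are added to \(\mathcal{Q}\). A quick illustration with random data (purely for intuition; the matrices are unrelated to the benchmark problem):

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((40, 40))
b = rng.standard_normal(40)
Q = np.linalg.qr(rng.standard_normal((40, 1)))[0]
for _ in range(6):
    # Petrov-Galerkin reduced solve = least squares over range(Q):
    coef, *_ = np.linalg.lstsq(G @ Q, b, rcond=None)
    print(np.linalg.norm(G @ (Q @ coef) - b) / np.linalg.norm(b))
    # enlarging the basis cannot increase this minimal residual
    Q = np.linalg.qr(np.column_stack([Q, rng.standard_normal(40)]))[0]
```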
It can be seen from Figure 2 that with the given error indicator, the Petrov-Galerkin method is more effective than the Galerkin method when the supremizer is used for stabilization. In contrast, the behavior of the two reduced models is closer when aggregation is used. However, the behaviors of the two reduction strategies are very similar (and are close to identical for certain values of \(N\)) as the maximum error indicator values approach the desired tolerance.

Figure 1: Subdivision of the spatial domain \(\Omega\) for the parameterized diffusion control problem when \(N_{D}=3\). In this case, \(\mu\in\mathbb{R}^{3}\).

Similar results are shown in Figures 4 and 5 for a finer spatial mesh. For the supremizer, the Petrov-Galerkin formulation produces better maximum relative error indicator values for most values of \(N\). However, there is one significant difference between Figures 2 and 4. On the finer mesh (Figure 4), the Petrov-Galerkin formulation fails to meet the tolerance (of \(10^{-7}\)), whereas the Galerkin formulation succeeds. For aggregation, there is better behavior of the Petrov-Galerkin formulation initially and very similar behavior of the two formulations as the tolerance is reached. Both formulations are shown to be convergent in Figure 5, although we note that on a finer mesh (\(nc=7\)), the Petrov-Galerkin formulation also failed. This brings about a question of robustness of the Petrov-Galerkin formulation. Figures 6 and 7 show the maximum condition numbers of the reduced matrices \(\mathcal{Q}^{T}\mathcal{G}(\mu)\mathcal{Q}\) (Galerkin) and \((\mathcal{G}(\mu)\mathcal{Q})^{T}(\mathcal{G}(\mu)\mathcal{Q})\) (Petrov-Galerkin) over the training set \(T\). The latter matrices become severely ill-conditioned as the search proceeds, and we attribute the failure of the Petrov-Galerkin formulation to produce a suitable reduced basis to this ill-conditioning. Note that this is a common issue in linear algebra: a matrix \(\mathcal{A}^{T}\mathcal{A}\) associated with the normal equations has condition number equal to the square of the condition number of \(\mathcal{A}\). The trends seen in Figures 6 and 7, for the two stabilization methods, are consistent as the basis size varies. In Table 1, results for the maximum condition number of the reduced systems over the training set for all \(N\) are given for both formulations for various spatial meshes. Both the supremizer and aggregation lead to lack of convergence of the greedy algorithm for the Petrov-Galerkin formulation when \(nc=7\). The Galerkin formulation never fails to converge.

Figure 4: Maximum relative error indicator over the training set as \(\mathcal{Q}_{sup}\) is being built with both a Galerkin (G) and Petrov-Galerkin (PG) solve for the parameterized diffusion control problem. The number of basis vectors is \(N\). Here, the stopping tolerance for the greedy algorithm is \(10^{-7}\), the spatial discretization has \((2^{nc}+1)\times(2^{nc}+1)\) elements where \(nc=6\), and the number of subdomains \(\Omega_{k}\subset\Omega\) is \(N_{D}=3\). The PG reduction fails to converge.

Figure 5: Maximum relative error indicator over the training set as \(\mathcal{Q}_{agg}\) is being built with both a Galerkin (G) and Petrov-Galerkin (PG) solve for the parameterized diffusion control problem. The number of basis vectors is \(N\). Here, the stopping tolerance for the greedy algorithm is \(10^{-7}\), the spatial discretization has \((2^{nc}+1)\times(2^{nc}+1)\) elements where \(nc=6\), and the number of subdomains \(\Omega_{k}\subset\Omega\) is \(N_{D}=3\).
Figure 6: Maximum condition number of the reduced system \(\mathcal{Q}^{T}\mathcal{G}(\mu)\mathcal{Q}\) in the Galerkin (G) case and \((\mathcal{G}(\mu)\mathcal{Q})^{T}(\mathcal{G}(\mu)\mathcal{Q})\) in the Petrov-Galerkin (PG) case over all parameters in the training set as \(\mathcal{Q}_{sup}\) is being built for the parameterized diffusion control problem. The number of basis vectors is \(N\). Here, the stopping tolerance for the greedy algorithm is \(10^{-7}\), the spatial discretization has \((2^{nc}+1)\times(2^{nc}+1)\) elements where \(nc=6\), and the number of subdomains \(\Omega_{k}\subset\Omega\) is \(N_{D}=3\). Figure 7: Maximum condition number of the reduced system \(\mathcal{Q}^{T}\mathcal{G}(\mu)\mathcal{Q}\) in the Galerkin (G) case and \((\mathcal{G}(\mu)\mathcal{Q})^{T}(\mathcal{G}(\mu)\mathcal{Q})\) in the Petrov-Galerkin (PG) case over all parameters in the training set as \(\mathcal{Q}_{agg}\) is built for the parameterized diffusion control problem. The number of basis vectors is \(N\). Here, the stopping tolerance for the greedy algorithm is \(10^{-7}\), the spatial discretization has \((2^{nc}+1)\times(2^{nc}+1)\) elements where \(nc=6\), and the number of subdomains \(\Omega_{k}\subset\Omega\) is \(N_{D}=3\). meshes with fixed \(N_{D}=3\) and both Galerkin and Petrov-Galerkin formulations. Similar results are given for \(N_{D}=10\) in Table 3 where only results for the Galerkin formulation are shown. We summarize the trends observed in these experiments, as follows: * The number of snapshots, as well as the size of the reduced basis, is smaller for \(\mathcal{Q}_{agg}\) than for \(\mathcal{Q}_{sup}\). * The Petrov-Galerkin formulation is less robust than Galerkin formulation, in the sense that the greedy search failed to reach the stopping tolerance for finer spatial meshes. * The sizes of the reduced bases tend toward asymptotic limits as the spatial mesh is refined. For example, this limit is approximately \(N=25\) snapshots (reduced matrix of order \(125\)) for \(\mathcal{Q}_{agg}\) with Galerkin search. * The sizes of the reduced bases are larger for the larger number of parameters \(N_{D}=10\). (Compare Tables 2 and 3.1) Footnote 1: For certain tests in the online stage, shown in these tables, the relative error indicator values produced over the testing set of parameters were slightly greater than the prescribed tolerance. This can be explained by the size of the training set chosen. Increasing the number of training parameters to \(3000\) resulted in maximum relative error indicator values below the tolerance in all cases. 
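Before turning to the second benchmark, we note that the squaring effect behind the conditioning results in Table 1 and Figures 6 and 7 is easy to reproduce in isolation; the snippet below (with an arbitrary random matrix, unrelated to the control problem) simply confirms that forming the normal equations squares the condition number:

```python
import numpy as np

A = np.random.default_rng(1).standard_normal((200, 20))
print(np.linalg.cond(A))          # kappa(A)
print(np.linalg.cond(A.T @ A))    # = kappa(A)**2, up to roundoff
```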
For the second benchmark problem, we consider a variant of the Graetz \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{Galerkin} & \multicolumn{2}{c|}{Petrov-Galerkin} \\ \hline \(nc\) & \(\mathcal{Q}_{sup}\) & \(\mathcal{Q}_{agg}\) & \(\mathcal{Q}_{sup}\) & \(\mathcal{Q}_{agg}\) \\ \hline 3 & 4.7e+04 & 4.7e+04 & 2.3e+09 & 2.2e+09 \\ \hline 4 & 2.0e+05 & 2.0e+05 & 2.2e+10 & 3.8e+10 \\ \hline 5 & 6.0e+05 & 5.5e+05 & 2.5e+11 & 3.0e+11 \\ \hline 6 & 2.0e+06 & 1.7e+06 & - & 3.0e+12 \\ \hline 7 & 6.1e+06 & 5.0e+06 & - & - \\ \hline \end{tabular} \end{table} Table 1: Maximum condition number of the reduced system \(\mathcal{Q}^{T}\mathcal{G}(\mu)\mathcal{Q}\) in the Galerkin case and \((\mathcal{G}(\mu)\mathcal{Q})^{T}(\mathcal{G}(\mu)\mathcal{Q})\) in the Petrov-Galerkin case over all parameters in the training set and all steps of the greedy algorithm for the parameterized diffusion control problem as \(nc\) is refined, where the spatial mesh has \((2^{nc}+1)\times(2^{nc}+1)\) discrete elements, tolerance is \(10^{-7}\), \(N_{max}=2000\) and \(N_{D}=3\). Empty cells represent lack of convergence to desired tolerance. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{4}{c|}{Galerkin Projection} & \multicolumn{4}{c|}{Petrov-Galerkin Projection} \\ \hline & \multicolumn{2}{c|}{\(N\)} & \multicolumn{2}{c|}{Columns} & \multicolumn{2}{c|}{Maximum Relative} & \multicolumn{2}{c|}{\(N\)} & \multicolumn{2}{c|}{Columns} & \multicolumn{2}{c|}{Maximum Relative} \\ & \multicolumn{2}{c|}{Error Indicator} & & & & & & & & & & Error Indicator \\ \hline \(nc\) & \(Q_{sup}\) & \(Q_{agg}\) & \(Q_{sup}\) & \(Q_{agg}\) & \(Q_{sup}\) & \(Q_{agg}\) & \(Q_{sup}\) & \(Q_{agg}\) & \(Q_{sup}\) & \(Q_{agg}\) \\ \hline 3 & 15 & 8 & 45 & 40 & 4.4e–14 & 4.2e–14 & 15 & 8 & 45 & 40 & 1.8e–10 & 4.3e–12 \\ \hline 4 & 31 & 16 & 93 & 80 & 6.6e–12 & 2.0e–13 & 30 & 16 & 90 & 80 & 1.1e–07 & 4.0e–11 \\ \hline 5 & 57 & 23 & 171 & 115 & 8.7e–08 & 6.9e–08 & 48 & 23 & 144 & 115 & 3.1e–08 & 7.0e–08 \\ \hline 6 & 64 & 24 & 192 & 120 & 3.5e–07 & 4.8e–08 & - & 25 & - & 125 & - & 4.2e–08 \\ \hline 7 & 67 & 25 & 201 & 125 & 2.8e–07 & 4.8e–08 & - & - & - & - & - & - \\ \hline \end{tabular} \end{table} Table 2: Comparison of number of snapshots \(N\), number of columns in the reduced basis, and maximum relative error indicator over the verification set for \(\mathcal{Q}_{sup}\) and \(\mathcal{Q}_{agg}\) with Galerkin and Petrov-Galerkin formulations for the parameterized diffusion control problem. Here, the verification set has \(500\) parameters, the domain \(\Omega\) has \(N_{D}=3\) subdomains, the spatial mesh has \((2^{nc}+1)\times(2^{nc}+1)\) discrete elements, \(N_{max}=2000\) and the stopping tolerance for the greedy algorithm is \(10^{-7}\). Empty cells correspond to cases where the greedy search failed to reach this tolerance. 
presented in [15]: find state \(u\) and control \(f\) such that \[\min_{u,f}\ \frac{1}{2}\big{\|}u(x,\mu)-\hat{u}(x,\mu)\big{\|}_{L_{2}( \Omega)}^{2}+\frac{\beta}{2}\big{\|}f(x,\mu)\big{\|}_{L_{2}(\Omega)}^{2}\] \[\text{subject to }\quad-\mu_{1}\bigtriangleup u(x,\mu)+\mathbf{w} \cdot\bigtriangledown u(x,\mu)\ =\ f(x,\mu)\ \text{ in }\ \Omega\times\Gamma,\] \[\text{such that }\qquad\qquad u(x,\mu)=1\quad\text{ on }\partial \Omega_{D_{1}}\times\Gamma.\] \[u(x,\mu)=2\quad\text{ on }\partial\Omega_{D_{2}}\times\Gamma.\] \[\mu_{1}\frac{\partial u(x,\mu)}{\partial n}=0\quad\text{ on } \partial\Omega_{N}\times\Gamma.\] Here, \(\Omega=[0,1]^{2}\subset\mathbb{R}^{2}\) is the spatial domain (shown in Figure 8) subdivided into \(\Omega_{1}=[0,1]\times[0,0.3]\), \(\Omega_{2}=[0,1]\times(0.3,1]\). The parameter vector \(\mu\in\Gamma:=[\frac{1}{20},\frac{1}{3}]\times[0.5,1.5]\times[1.5,2.5]\in \mathbb{R}^{3}\) is associated with the diffusion coefficient and the desired state \(\hat{u}\) such that \(\mu_{1}\) is the diffusion coefficient, \(\hat{u}=\mu_{2}\) in \(\Omega_{1}=[0,1]\times[0,0.3]\) and \(\hat{u}=\mu_{3}\) in \(\Omega_{2}=[0,1]\times[0.3,1]\). Here, \(\mathbf{w}=[x_{2}(1-x_{2}),0]^{T}\). The performance of the reduced basis in the online stage is reported for the parameterized convection-diffusion benchmark problem. Table 4 is the analog of Table 2 for the parameterized diffusion control problem. As before, stabilizing with aggregation produces a smaller reduced basis, making it more efficient than stabilizing with the supremizer in all cases. In all cases, the basis sizes tend to asymptotic limits but the overall size of the bases are smaller than seen for diffusion control. For this example, the \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{N} & \multicolumn{2}{c|}{Columns} & \multicolumn{2}{c|}{Maximum Relative Error Indicator} \\ \hline \(nc\) & \(\mathcal{Q}_{sup}\) & \(\mathcal{Q}_{agg}\) & \(\mathcal{Q}_{sup}\) & \(\mathcal{Q}_{agg}\) & \(\mathcal{Q}_{sup}\) & \(\mathcal{Q}_{agg}\) \\ \hline 3 & 15 & 8 & 45 & 40 & 3.9e–13 & 3.9e–14 \\ \hline 4 & 31 & 16 & 93 & 80 & 5.4e–12 & 2.1e–13 \\ \hline 5 & 63 & 32 & 189 & 160 & 1.3e–10 & 1.0e–12 \\ \hline 6 & 126 & 52 & 372 & 260 & 2.2e–08 & 4.6e–08 \\ \hline 7 & 173 & 56 & 519 & 280 & 4.6e–07 & 7.9e–08 \\ \hline \end{tabular} \end{table} Table 3: Comparison of columns and maximum relative error indicator over the verification set for \(\mathcal{Q}_{sup}\) and \(\mathcal{Q}_{agg}\) with Galerkin projection for the parameterized diffusion control problem. Here, the verification set had 500 parameters, the domain \(\Omega\) has \(N_{D}=10\) subdomains, the spatial mesh has \((2^{nc}+1)\times(2^{nc}+1)\) discrete elements, \(N_{max}=2000\) and the stopping tolerance for the greedy algorithm is \(10^{-7}\). Figure 8: Picture of the spatial domain \(\Omega\) and subdomains \(\Omega_{1}\) and \(\Omega_{2}\) for the convection-diffusion control problem. ## 6 Concluding Remarks Stabilization of reduced basis models to solve optimal control problems can be handled in multiple ways. When these models are implemented with block diagonal reduced basis matrices, enrichment of the reduced basis spaces is required to ensure well-posedness. Two ways of handling this enrichment and stabilizing the reduced bases are stabilization by aggregation and stabilization using the supremizer function. While both are suitable for ensuring stability, augmenting by aggregation is a superior choice in numerous ways. 
In particular, we showed that for several examples, aggregation leads to smaller reduced bases than the supremizer, and it is also more robust with respect to convergence. We also note that this study considers these approaches for one class of problems, arising from optimal control with PDE constraints. Reduced basis methods are also useful in other settings, for example, for parametrized versions of models of computational fluid dynamics with an incompressibility constraint, such as the Stokes and Navier-Stokes equations. One drawback to augmentation by aggregation is that it has only been implemented for stabilizing reduced order models for optimal control problems, whereas augmenting using the supremizer function has proven useful for solving PDEs like the Stokes equations as well. We will consider these issues in a follow-on study [6].
2304.04969
Poisson Equation and Application to Multi-Scale SDEs with State-Dependent Switching
In this paper, we study the averaging principle and central limit theorem for multi-scale stochastic differential equations with state-dependent switching. To accomplish this, we first study the Poisson equation associated with a Markov chain and the regularity of its solutions. As applications of the results on the Poisson equations, we prove three averaging principle results and two central limit theorems results. The first averaging principle result is a strong convergence of order $1/2$ of the slow component $X^{\varepsilon}$ in the space $C([0,T],\mathbb{R}^n)$. The second averaging principle result is a weak convergence of $X^{\varepsilon}$ in $C([0,T],\mathbb{R}^n)$. The third averaging principle result is a weak convergence of order $1$ of $X^{\varepsilon}_t$ in $\mathbb{R}^n$ for any fixed $t\ge 0$. The first central limit theorem type result is a weak convergence of $(X^{\varepsilon}-\bar{X})/\sqrt{\varepsilon}$ in $C([0,T],\mathbb{R}^n)$, where $\bar{X}$ is the solution of the averaged equation. The second central limit theorem type result is a weak convergence of order $1/2$ of $(X^{\varepsilon}_t-\bar{X}_t)/\sqrt{\varepsilon}$ in $\mathbb{R}^n$ for fixed $t\ge 0$. Several examples are given to show that all the achieved orders are optimal.
Xiaobin Sun, Yingchao Xie
2023-04-11T04:30:50Z
http://arxiv.org/abs/2304.04969v4
# The Poisson equation and application to multi-scale Sdes with state-dependent switching ###### Abstract. This paper study the Poisson equation associated with a Markov chain. By investigating the differentiability of the corresponding transition probability matrix with respect to parameters, we establish the regularity of the Poisson equation solution. As an application, we further study the averaging principle for a class of multi-scale stochastic differential equations with state-dependent switching, ultimately achieving an optimal strong convergence order of \(1/2\). Key words and phrases:Poisson equation; Stochastic differential equations with state-dependent switching; Averaging principle; Strong convergence rate 2000 Mathematics Subject Classification: Primary 60H10 ## 1. Introduction In this paper, we study the Poisson equation on a countable space \(\mathbb{S}=\{1,2,\ldots,m_{0}\}\) with \(m_{0}\leqslant\infty\), that is \[-Q(x)\Phi(x,\cdot)(i)=F(x,i),\quad x\in\mathbb{R}^{n},i\in\mathbb{S},\] where \(Q(x)\) is the generator of a Markov chain \(\{\alpha_{t}^{x}\}_{t\geqslant 0}\) taking values in \(\mathbb{S}\). Assume \(F(x,i)\) satisfies the "_central condition_", that is \[\sum_{i\in\mathbb{S}}F(x,i)\mu_{i}^{x}=0,\quad\forall x\in\mathbb{R}^{n},\] where \(\mu^{x}\) is the unique invariant measure of Markov chain \(\{\alpha_{t}^{x}\}_{t\geqslant 0}\). Under some suitable conditions, the above Poisson equation admits a solution \[\Phi(x,i)=\int_{0}^{\infty}\mathbb{E}F(x,\alpha_{t}^{x,i})dt,\] where \(\{\alpha_{t}^{x,i}\}_{t\geqslant 0}\) means the Markov chain \(\{\alpha_{t}^{x}\}_{t\geqslant 0}\) with initial value \(\alpha_{0}^{x}=i\). In recent decades, there has been intensive investigation into the Poisson equation. It has been confirmed as a powerful technique that has successfully been used to obtain optimal convergence orders in both strong and weak senses, as well as for diffusion approximation and central limit type theorems in various stochastic systems. For example, Pardoux and Veretennikov [17] conducted research on the regularity of solutions to the following Poisson equation: \[-\mathscr{L}(x)\Phi(x,\cdot)(y)=b(x,y),\] where \(b\) satisfies the similar "_central condition_" and \[\mathscr{L}(x)h(x,\cdot)(y)=\sum_{i,j}a_{ij}(x,y)\frac{\partial^{2}h(x,y)}{ \partial_{y_{i}}\partial_{y_{j}}}+\sum_{i}f_{i}(x,y)\frac{\partial h(x,y)}{ \partial_{y_{i}}}.\] The obtained results were further applied to a study on the diffusion approximation for two-scaled diffusion processes. More interesting results on this topic can be found in [5, 16, 21, 25, 26] and related references. Note that the literature mentioned above mainly focuses on the Poisson equation associated with a diffusion process, and thus some techniques (such as taking advantage of properties of transition probability density) used there do not work in our discrete setting. Therefore, the first purpose of this paper is to establish the regularity of the solution \(\Phi(x,i)\) of the Poisson equation associated with a Markov chain. Specifically, under suitable conditions on \(Q(x)\) and \(F(x,i)\), we will study the first derivative \(\partial_{x}\Phi(x,i)\) and second derivative \(\partial_{x}^{2}\Phi(x,i)\) of \(\Phi(x,i)\) with respect to \(x\). 
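To make the finite-state setting above concrete, the following is a minimal numerical sketch for a fixed \(x\) (with an arbitrary illustrative generator, not tied to any particular model): the invariant measure is the left null vector of \(Q\), the Poisson equation \(-Q\Phi=F\) is a singular but consistent linear system once \(F\) is centered, and its solution (defined up to an additive constant) matches the probabilistic representation \(\Phi(i)=\int_{0}^{\infty}\mathbb{E}F(\alpha_{t}^{i})\,dt\).

```python
import numpy as np
from scipy.linalg import expm, null_space

# Illustrative generator of a 3-state chain (off-diagonals >= 0, rows sum to zero).
Q = np.array([[-1.0,  0.6,  0.4],
              [ 0.5, -0.9,  0.4],
              [ 0.3,  0.7, -1.0]])

mu = null_space(Q.T)[:, 0]          # invariant measure: mu Q = 0
mu = mu / mu.sum()

F = np.array([1.0, -2.0, 0.5])
F = F - mu @ F                      # enforce the centering condition sum_i F(i) mu_i = 0

Phi = -np.linalg.pinv(Q) @ F        # one solution of -Q Phi = F (unique up to a constant)

# Probabilistic representation: Phi(i) = int_0^infty E[ F(alpha_t) | alpha_0 = i ] dt.
ts = np.linspace(0.0, 60.0, 6001)
vals = np.array([expm(Q * t) @ F for t in ts])
dt = ts[1] - ts[0]
Phi_int = dt * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])
print(np.abs((Phi - Phi.mean()) - (Phi_int - Phi_int.mean())).max())  # small (quadrature error)
```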
As an application, we will further study the averaging principle for the following stochastic system: \[dX_{t}^{\varepsilon}=b(X_{t}^{\varepsilon},\alpha_{t}^{\varepsilon})dt+ \sigma(X_{t}^{\varepsilon})dW_{t},\quad(X_{0}^{\varepsilon},\alpha_{0}^{ \varepsilon})=(x,\alpha)\in\mathbb{R}^{n}\times\mathbb{S}, \tag{1.1}\] where \(\varepsilon\) is a small and positive parameter, \(\{W_{t}\}_{t\geq 0}\) is a standard \(d\)-dimensional Brownian motion on \((\Omega,\mathscr{F},\mathbb{P})\), and \(\{\alpha_{t}^{\varepsilon}\,,t\geq 0\}\) is a right-continuous \(\mathbb{S}\)-valued Markov chain on a complete probability space\((\Omega,\mathscr{F},\mathbb{P})\) described by \[\mathbb{P}\left(\alpha_{t+\Delta}^{\varepsilon}=j|\alpha_{t}^{\varepsilon}=i,X_{s}^{\varepsilon},\alpha_{s}^{\varepsilon},s\leqslant t\right)=\left\{ \begin{array}{ll}\varepsilon^{-1}q_{ij}(X_{t}^{\varepsilon})\Delta+o( \Delta),\quad i\neq j,\\ 1+\varepsilon^{-1}q_{ii}(X_{t}^{\varepsilon})\Delta+o(\Delta),\quad i=j, \end{array}\right. \tag{1.2}\] and \(Q(x)=(q_{ij}(x))_{1\leq i,j\leq m_{0}}\) is a Borel measurable and conservative matrix. The measurable maps \(b=b(x,i)\), \(\sigma=\sigma(x)\) are given: \[b:\mathbb{R}^{n}\times\mathbb{S}\longrightarrow\mathbb{R}^{n},\quad\sigma: \mathbb{R}^{n}\longrightarrow\mathbb{R}^{n}\times\mathbb{R}^{d}.\] The stochastic system described above is known as stochastic differential equations (SDEs) with state-dependent switching. In these models, the switching process is typically modeled using a Markov chain, which describes how the system transitions between different states, each of which is associated with a different set of SDEs which are characterized by random fluctuations or noise that are typically modeled using Brownian motion. SDEs with state-dependent switching are commonly employed in various fields, including physics, biology, finance, engineering, control, and optimization. For instance, they can serve as models for financial markets, where the switching process may depend on the current state of the market or on the behavior of other traders. Similarly, they can be applied to illustrate the dynamics of biological systems, where the switching process may rely on the concentration of certain molecules or the activity of specific genes. Due to the presence of both continuous dynamics and discrete events, such systems are capable of representing complex systems and their inherent uncertainty and randomness in the environment. The well-posedness of the solutions, existence and uniqueness of the invariant measure, stability, numerical approximation, and other significant properties have been thoroughly analyzed in several references. Interested readers can refer to [9, 12, 14, 22, 23, 24, 29, 30, 33, 34] and their cited works for further insight into this topic. Note that the current model (1.1) involves a parameter \(\varepsilon>0\), which characterizes the ratio of time scales between the slow component \(X^{\varepsilon}\) and fast component \(\alpha^{\varepsilon}\). Such systems are commonly known as multi-scale or slow-fast systems, and find wide applications in fields such as nonlinear oscillations, chemical kinetics, biology, and climate dynamics, see e.g. [7, 18]. Relevant mathematical methods include averaging and homogenization, for which see e.g. [4, 28]. The averaging principle describes the behavior of \(X^{\varepsilon}\) in (1.1) and (1.2) as \(\varepsilon\) approaches zero. 
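As a rough numerical illustration of the two time scales in (1.1)-(1.2) (not part of the analysis below), the following Python sketch simulates a one-dimensional toy version of the system with two switching states; the coefficients \(b\), \(\sigma\) and the generator \(Q(x)\) are hypothetical choices, and the switching is implemented through its infinitesimal description on a small time step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-dimensional example (n = d = 1, S = {0, 1}); all coefficients are
# illustrative choices, not taken from the paper.
def b(x, i):
    return -x + (1.0 if i == 0 else -1.0)

def sigma(x):
    return 0.5

def Q(x):
    q01 = 1.0 + x**2          # rate 0 -> 1, state dependent
    q10 = 2.0 / (1.0 + x**2)  # rate 1 -> 0, state dependent
    return np.array([[-q01, q01], [q10, -q10]])

def simulate(eps, T=1.0, dt=1e-4, x0=0.3, a0=0):
    """Euler-Maruyama for X^eps; on each step the chain alpha^eps jumps i -> j
    with probability (q_ij(X)/eps) * dt, cf. (1.2).  Requires dt much smaller than eps."""
    x, alpha = x0, a0
    for _ in range(int(T / dt)):
        # switch the fast component
        rates = Q(x)[alpha] / eps
        if rng.random() < -rates[alpha] * dt:   # total jump probability on [t, t+dt]
            alpha = 1 - alpha                   # only one other state here
        # advance the slow component
        x = x + b(x, alpha) * dt + sigma(x) * np.sqrt(dt) * rng.normal()
    return x

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, simulate(eps))
```

As \(\varepsilon\) decreases, the chain switches increasingly fast along the slow trajectory, which is the regime in which the averaged equation discussed below becomes relevant.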
In the state-independent situation, that is, when the switching process \(\alpha^{\varepsilon}\) does not depend on \(X^{\varepsilon}\), there are many results on the averaging principle (e.g., [2, 19, 20, 31]). However, there seem to be few results for the "fully coupled" case in the current model (1.1). Faggionato et al. [8] studied its averaging principle for \(\sigma\equiv 0\) in terms of convergence in probability, while Mao and Shao [13] investigated its averaging principle for a countably infinite state space \(\mathbb{S}\). However, these references did not achieve the optimal convergence order in the strong sense. Therefore, the second aim of this paper is to demonstrate that \(X^{\varepsilon}\) strongly converges to the solution \(\bar{X}\) of the corresponding averaged equation as \(\varepsilon\) approaches zero. Specifically, we prove that for any \(p\geqslant 2\) and \(T>0\), there exists a \(C_{p,T}>0\) that depends on \(p\) and \(T\) such that \[\mathbb{E}\left(\sup_{t\in[0,T]}\left|X^{\varepsilon}_{t}-\bar{X}_{t}\right|^{p}\right)\leq C_{p,T}(1+|x|^{4p})\varepsilon^{p/2}. \tag{1.3}\] The above outcome implies that the convergence order is \(1/2\), which is optimal (see [10, Example 3.4]). The optimal convergence rate is critical for diffusion approximation and central limit type theorems. For example, when \(\sigma\equiv 0\), Pakdaman et al. [15] used an asymptotic expansion method to study its averaging principle and central limit theorem. However, we will use a Poisson equation method to prove (1.3), which differs from the literature mentioned above. For the convenience of readers, we give a brief overview of the Poisson equation technique. The difference between \(X^{\varepsilon}_{t}\) and \(\bar{X}_{t}\) can usually be controlled by \[\left|\int_{0}^{t}b(X^{\varepsilon}_{s},\alpha^{\varepsilon}_{s})-\bar{b}(X^{\varepsilon}_{s})ds\right|.\] The key idea is to replace the term \(b(X^{\varepsilon}_{s},\alpha^{\varepsilon}_{s})-\bar{b}(X^{\varepsilon}_{s})\) with \(-Q(X^{\varepsilon}_{s})\Phi(X^{\varepsilon}_{s},\cdot)(\alpha^{\varepsilon}_{s})\). Since \(b(x,y)-\bar{b}(x)\) satisfies the "_central condition_", it is natural to consider the following Poisson equation: \[-Q(x)\Phi(x,\cdot)(i)=b(x,i)-\bar{b}(x).\] Applying Itô's formula to \(\Phi(X^{\varepsilon}_{t},\alpha^{\varepsilon}_{t})\), one can express the term \[\int_{0}^{t}Q(X^{\varepsilon}_{s})\Phi(X^{\varepsilon}_{s},\cdot)(\alpha^{\varepsilon}_{s})ds\] as a function of the solution \(\Phi\) of the Poisson equation (see (3.23) below). Therefore, the remaining work is devoted to studying the regularity estimates of the solution \(\Phi\). Although the first author and his collaborators [10] recently obtained the optimal strong convergence order \(1/2\) in the state-independent case, studying the regularity of the solution of the Poisson equation with a parameter is much more complicated in the "fully coupled" case. The remainder of the paper is structured as follows. In section 2, we establish the regularity estimates for the solution of the Poisson equation. As an application, we prove the averaging principle in the strong sense and obtain the optimal convergence rate of \(1/2\) in section 3. Throughout this paper, we use \(C\), \(C_{T}\), and \(C_{p,T}\) to represent constants whose values may vary from line to line. We use \(C_{T}\) and \(C_{p,T}\) to emphasize that the constants depend on \(T\) and on \(p,T\), respectively. ## 2. 
Poisson equation associated to a Markov chain We use \(|\cdot|\) and \(\|\cdot\|\) to represent the standard Euclidean vector norm and matrix norm, respectively. Specifically, for \(x=(x_{k})_{1\leqslant k\leqslant n}\in\mathbb{R}^{n}\) and \(\sigma=(\sigma_{kl})_{1\leqslant k\leqslant n,1\leqslant l\leqslant d}\in \mathbb{R}^{n}\times\mathbb{R}^{d}\), \[|x|:=\left(\sum_{k=1}^{n}|x_{k}|^{2}\right)^{1/2},\quad\|\sigma\|:=\left(\sum_{ k=1}^{n}\sum_{l=1}^{d}|\sigma_{kl}|^{2}\right)^{1/2}.\] Denote \(\mathbb{S}=\{1,2,\ldots,m_{0}\}\) with \(m_{0}\leqslant\infty\). For \(l_{1},l_{2}\in\mathbb{N}_{+}\), let \(\mathscr{B}_{b}(\mathbb{S},\mathbb{R}^{l_{1}}\otimes\mathbb{R}^{l_{2}})\) be the space of all map \(f(i):\mathbb{S}\to\mathbb{R}^{l_{1}}\otimes\mathbb{R}^{l_{2}}\) satisfying \(\|f\|_{\infty}:=\sup_{i\in\mathbb{S}}\|f(i)\|<\infty\). For a matrix \(M=(m_{ij})_{i,j\in\mathbb{S}}\), denote \(\|M\|_{\ell}:=\sup_{i\in\mathbb{S}}\sum_{j\in\mathbb{S}}|m_{ij}|\). Let \(C^{2}(\mathbb{R}^{n})\) be the space of all \(\mathbb{R}\)-valued continuous functions \(\varphi(x)\) on \(\mathbb{R}^{n}\) such that its first-order derivative (also known as gradient) \(\partial_{x}\varphi(x)=(\partial_{x_{k}}\varphi(x))_{1\leqslant k\leqslant n}\) and second-order derivative (also known as Hessian) \(\partial_{x}^{2}\varphi(x)=(\partial_{x_{k}}\partial_{x_{l}}\varphi(x))_{1 \leqslant k,l\leqslant n}\) are continuous. Let \(C^{2}_{b}(\mathbb{R}^{n}\times\mathbb{S})\) be the space of all maps \(\varphi(x,i):\mathbb{R}^{n}\times\mathbb{S}\to\mathbb{R}\) satisfying \(\varphi(\cdot,i)\in C^{2}(\mathbb{R}^{n})\) for any \(i\in\mathbb{S}\) as well as \(\partial_{x}\varphi(x,\cdot)\in\mathscr{B}_{b}(\mathbb{S},\mathbb{R}^{n})\) and \(\partial_{x}^{2}\varphi(x,\cdot)\in\mathscr{B}_{b}(\mathbb{S},\mathbb{R}^{n} \otimes\mathbb{R}^{n})\) for any \(x\in\mathbb{R}^{n}\). Let \(C^{2}_{b}(\mathbb{R}^{n}\times\mathbb{S},\mathbb{R}^{n})\) be the space of all maps \(\varphi(x,i):\mathbb{R}^{n}\times\mathbb{S}\to\mathbb{R}^{n}\) satisfying all components \(\{\varphi^{l}\}_{1\leqslant l\leqslant n}\) belong \(C^{2}_{b}(\mathbb{R}^{n}\times\mathbb{S})\). if \(\varphi\in C^{2}_{b}(\mathbb{R}^{n}\times\mathbb{S},\mathbb{R}^{n})\), we denote \(\|\partial_{x}\varphi(x,\cdot)\|_{\infty}=\sum_{k=1}^{n}\|\partial_{x_{k}} \varphi(x,\cdot)\|_{\infty}\) and \(\|\partial_{x}^{2}\varphi(x,\cdot)\|_{\infty}=\sum_{k,l=1}^{n}\|\partial_{x_{k }}\partial_{x_{l}}\varphi(x,\cdot)\|_{\infty}\). 
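As a small concrete illustration (toy matrices only, not objects from the paper), these norms can be evaluated as follows.

```python
import numpy as np

# Toy examples with n = 3, d = 2, m_0 = 3.
x = np.array([1.0, -2.0, 2.0])
sig = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])                 # an n x d matrix
M = np.array([[-1.5, 1.0, 0.5], [2.0, -3.0, 1.0], [0.2, 0.3, -0.5]])  # a matrix indexed by S x S

euclid = np.sqrt(np.sum(x**2))             # |x|
frobenius = np.sqrt(np.sum(sig**2))        # ||sigma||
row_sum = np.max(np.sum(np.abs(M), axis=1))  # ||M||_l = sup_i sum_j |m_ij|

print(euclid, frobenius, row_sum)
```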
The total variation distance between two probability measures \(\mu\) and \(\nu\) on \(\mathbb{S}\) is defined as follows: \[\|\mu-\nu\|_{\rm var}:=2\sup_{A\in\mathscr{B}(\mathbb{S})}|\mu(A)-\nu(A)|=\sup_{\|f\|_{\infty}\leqslant 1}|\mu(f)-\nu(f)|.\] We assume the following conditions on \(Q(x)\): **A1**.: _Suppose that \(Q(x)=(q_{ij}(x))_{i,j\in\mathbb{S}}\) is conservative, i.e., for any \(x\in\mathbb{R}^{n}\),_ \[q_{ij}(x)\geqslant 0\quad\text{for all}\quad i\neq j\in\mathbb{S},\] \[\sum_{j\in\mathbb{S}}q_{ij}(x)=0\quad\text{for all}\quad i\in\mathbb{S}.\] _Moreover, \(q_{ij}(x)\in C^{2}(\mathbb{R}^{n})\) for any \(i,j\in\mathbb{S}\), and_ \[\sup_{x\in\mathbb{R}^{n}}\left[\sum_{k=1}^{n}\|\partial_{x_{k}}Q(x)\|_{\ell}+\sum_{k,l=1}^{n}\|\partial_{x_{k}}\partial_{x_{l}}Q(x)\|_{\ell}\right]<\infty, \tag{2.1}\] _where \(\partial_{x_{k}}Q(x):=(\partial_{x_{k}}q_{ij}(x))_{i,j\in\mathbb{S}}\) and \(\partial_{x_{k}}\partial_{x_{l}}Q(x):=(\partial_{x_{k}}\partial_{x_{l}}q_{ij}(x))_{i,j\in\mathbb{S}}\)._ **A2**.: _Suppose \(Q(x)\) is irreducible, that is, for any \(x\in\mathbb{R}^{n}\), the system of equations_ \[\mu^{x}Q(x)=0,\quad\text{with}\quad\sum_{i\in\mathbb{S}}\mu_{i}^{x}=1\] _has a unique solution \(\mu^{x}=(\mu_{1}^{x},\mu_{2}^{x},\ldots,\mu_{m_{0}}^{x})\) satisfying \(\mu_{i}^{x}>0\) for all \(i\in\mathbb{S}\). Moreover, \(P^{x}(t)\) is exponentially ergodic uniformly in \(x\), i.e., there exist positive constants \(C,\lambda\) such that for any \(x\in\mathbb{R}^{n}\) and \(t>0\),_ \[\sup_{i\in\mathbb{S}}\|p_{i\cdot}^{x}(t)-\mu^{x}\|_{\text{var}}\leqslant Ce^{-\lambda t}, \tag{2.2}\] _where \(P^{x}(t):=(p_{ij}^{x}(t))_{i,j\in\mathbb{S}}\) is the transition probability matrix associated with the generator \(Q(x)\)._ We will now investigate the differentiability of \(P^{x}(t)\) with respect to \(x\), which is crucial in examining the regularity of the solution of the Poisson equation. **Proposition 2.1**.: _Suppose that assumptions A1 and A2 hold. Then \(P^{x}(t)\) is twice differentiable with respect to \(x\) and its partial derivatives are uniformly bounded, that is, for any \(f:\mathbb{S}\to\mathbb{R}\) with \(\|f\|_{\infty}\leqslant 1\) and \(k,l\in\{1,2,\ldots,n\}\),_ \[\sup_{x\in\mathbb{R}^{n},t\geqslant 0,i\in\mathbb{S}}\left[|\partial_{x_{k}}P^{x}(t)f(i)|+|\partial_{x_{k}}\partial_{x_{l}}P^{x}(t)f(i)|\right]<\infty. \tag{2.3}\] _Moreover, there exists \(C>0\) such that for any \(i,i^{\prime}\in\mathbb{S}\) and \(t\geqslant 0\),_ \[\sup_{x\in\mathbb{R}^{n}}|\partial_{x_{k}}P^{x}(t)f(i)-\partial_{x_{k}}P^{x}(t)f(i^{\prime})|\leqslant Ce^{-\lambda t}t, \tag{2.4}\] \[\sup_{x\in\mathbb{R}^{n}}|\partial_{x_{k}}\partial_{x_{l}}P^{x}(t)f(i)-\partial_{x_{k}}\partial_{x_{l}}P^{x}(t)f(i^{\prime})|\leqslant Ce^{-\lambda t}(1+t^{2}). \tag{2.5}\] Proof.: By the integration by parts formula for the semigroups \(P^{x}(t)\) and \(P^{y}(t)\) (see [6, Theorem 13.40]), we have for any \(f:\mathbb{S}\to\mathbb{R}\) with \(\|f\|_{\infty}\leqslant 1\), \(x,y\in\mathbb{R}^{n}\) and \(t\geqslant 0\), \[P^{y}(t)f-P^{x}(t)f=\int_{0}^{t}P^{y}(s)[Q(y)-Q(x)]P^{x}(t-s)fds.\] Note that \(q_{ij}(x)\in C^{2}_{b}(\mathbb{R}^{n})\), thus for any \(k\in\{1,2,\ldots,n\}\) and \(i\in\mathbb{S}\), \[\partial_{x_{k}}P^{x}(t)f(i)=\int_{0}^{t}P^{x}(s)\left[\partial_{x_{k}}Q(x)\right]P^{x}(t-s)f(i)ds. 
\tag{2.6}\] Then by the condition (2.1), we can obtain \[|\partial_{x_{k}}P^{x}(t)f(i)| = \left|\int_{0}^{t}P^{x}(s)\left[\partial_{x_{k}}Q(x)\right][P^{x}(t-s)(f-\mu^{x}(f))]\left(i\right)ds\right|\] \[\leqslant \int_{0}^{t}\sup_{j\in\mathbb{S}}|\partial_{x_{k}}Q(x)\left[P^{x}(t-s)(f-\mu^{x}(f))\right](j)|ds\] \[\leqslant \|\partial_{x_{k}}Q(x)\|_{\ell}\int_{0}^{t}\sup_{j\in\mathbb{S}}|P^{x}(t-s)f(j)-\mu^{x}(f)|ds\] \[\leqslant C\int_{0}^{t}\sup_{j\in\mathbb{S}}\|p^{x}_{j\cdot}(t-s)-\mu^{x}\|_{\mathrm{var}}ds\] \[\leqslant C\int_{0}^{t}e^{-\lambda(t-s)}ds\leqslant\frac{C}{\lambda},\] and for any \(i,i^{\prime}\in\mathbb{S}\), \[|\partial_{x_{k}}P^{x}(t)f(i)-\partial_{x_{k}}P^{x}(t)f(i^{\prime})| \tag{2.7}\] \[= \Big{|}\int_{0}^{t}\left\{P^{x}(s)\left[\partial_{x_{k}}Q(x)\right]\left[P^{x}(t-s)(f-\mu^{x}(f))\right](i)\right.\] \[\qquad\quad\left.-P^{x}(s)\left[\partial_{x_{k}}Q(x)\right]\left[P^{x}(t-s)(f-\mu^{x}(f))\right](i^{\prime})\right\}ds\Big{|}\] \[\leqslant \int_{0}^{t}\|p^{x}_{i\cdot}(s)-p^{x}_{i^{\prime}\cdot}(s)\|_{\mathrm{var}}\sup_{j\in\mathbb{S}}|\partial_{x_{k}}Q(x)\left[P^{x}(t-s)(f-\mu^{x}(f))\right](j)|ds\] \[\leqslant \|\partial_{x_{k}}Q(x)\|_{\ell}\int_{0}^{t}\|p^{x}_{i\cdot}(s)-p^{x}_{i^{\prime}\cdot}(s)\|_{\mathrm{var}}\sup_{j\in\mathbb{S}}\|p^{x}_{j\cdot}(t-s)-\mu^{x}\|_{\mathrm{var}}ds\] \[\leqslant C\int_{0}^{t}e^{-\lambda s}e^{-\lambda(t-s)}ds\leqslant Ce^{-\lambda t}t.\] Similarly, by (2.6) we have for any \(k,l\in\{1,2,\ldots,n\}\), \[\partial_{x_{k}}\partial_{x_{l}}P^{x}(t)f(i)= \int_{0}^{t}\partial_{x_{l}}P^{x}(s)\left[\partial_{x_{k}}Q(x)\right]P^{x}(t-s)f(i)ds \tag{2.8}\] \[+\int_{0}^{t}P^{x}(s)\left[\partial_{x_{k}}\partial_{x_{l}}Q(x)\right]P^{x}(t-s)f(i)ds\] \[+\int_{0}^{t}P^{x}(s)\left[\partial_{x_{k}}Q(x)\right]\partial_{x_{l}}P^{x}(t-s)f(i)ds.\] Thus using the condition (2.1), (2.2) and (2.7), we can obtain \[\left|\partial_{x_{k}}\partial_{x_{l}}P^{x}(t)f(i)\right|\leqslant C\int_{0}^{t}\sup_{j\in\mathbb{S}}\left|\left[\partial_{x_{k}}Q(x)\right]\left[P^{x}(t-s)(f-\mu^{x}(f))\right](j)\right|ds\] \[+C\int_{0}^{t}\sup_{j\in\mathbb{S}}\left|\left[\partial_{x_{k}}\partial_{x_{l}}Q(x)\right]\left[P^{x}(t-s)(f-\mu^{x}(f))\right](j)\right|ds\] \[+C\int_{0}^{t}\sup_{j\in\mathbb{S}}\left|\left[\partial_{x_{k}}Q(x)\right]\left[\partial_{x_{l}}P^{x}(t-s)f-\partial_{x_{l}}P^{x}(t-s)f(i)\right](j)\right|ds\] \[\leqslant C\int_{0}^{t}\sup_{j\in\mathbb{S}}\left\|p_{j\cdot}^{x}(t-s)-\mu^{x}\right\|_{\text{var}}ds\] \[+C\int_{0}^{t}\sup_{j\in\mathbb{S}}\left|\partial_{x_{l}}P^{x}(t-s)f(j)-\partial_{x_{l}}P^{x}(t-s)f(i)\right|ds\] \[\leqslant C\int_{0}^{t}e^{-\lambda s}(1+s)ds\leqslant\frac{2C}{\lambda}\] and for any \(i,i^{\prime}\in\mathbb{S}\), \[\left|\partial_{x_{k}}\partial_{x_{l}}P^{x}(t)f(i)-\partial_{x_{k}}\partial_{x_{l}}P^{x}(t)f(i^{\prime})\right|\] \[\leqslant C\int_{0}^{t}se^{-\lambda s}\sup_{j\in\mathbb{S}}\left|\left[\partial_{x_{k}}Q(x)\right]\left[P^{x}(t-s)(f-\mu^{x}(f))\right](j)\right|ds\] \[+C\int_{0}^{t}\left\|p_{i\cdot}^{x}(s)-p_{i^{\prime}\cdot}^{x}(s)\right\|_{\text{var}}\sup_{j\in\mathbb{S}}\left|\left[\partial_{x_{k}}Q(x)\right]\left[\partial_{x_{l}}P^{x}(t-s)f-\partial_{x_{l}}P^{x}(t-s)f(i)\right](j)\right|ds\] \[\leqslant C\int_{0}^{t}se^{-\lambda s}\sup_{j\in\mathbb{S}}\left\|p_{j\cdot}^{x}(t-s)-\mu^{x}\right\|_{\text{var}}ds\] \[+C\int_{0}^{t}\left\|p_{i\cdot}^{x}(s)-p_{i^{\prime}\cdot}^{x}(s)\right\|_{\text{var}}\sup_{j\in\mathbb{S}}\left\|p_{j\cdot}^{x}(t-s)-\mu^{x}\right\|_{\text{var}}ds\] 
\[+C\int_{0}^{t}\left\|p_{i\cdot}^{x}(s)-p_{i^{\prime}\cdot}^{x}(s) \right\|_{\text{var}}\sup_{j\in\mathbb{S}}\left|\partial_{x_{l}}P^{x}(t-s)f(j )-\partial_{x_{l}}P^{x}(t-s)f(i)\right|ds\] \[\leqslant C\int_{0}^{t}(1+s)e^{-\lambda s}e^{-\lambda(t-s)}\left[1+(t-s) \right]ds\leqslant Ce^{-\lambda t}\left(1+t^{2}\right).\] The proof is complete. For any \(F\in C_{b}^{2}(\mathbb{R}^{n}\times\mathbb{S},\mathbb{R}^{n})\) satisfying the "_central condition_", that is \[\sum_{i\in\mathbb{S}}F(x,i)\mu_{i}^{x}=0,\quad\forall x\in\mathbb{R}^{n} \tag{2.9}\] and matrix \(Q(x)=(q_{ij}(x))_{i,j\in\mathbb{S}}\) satisfies assumptions A1 and A2. Considering the following Poisson equation on \(\mathbb{S}\): \[-Q(x)\Phi(x,\cdot)(i)=F(x,i), \tag{2.10}\] which is equivalent to \[-Q(x)\Phi^{l}(x,\cdot)(i)=F^{l}(x,i),\quad l=1,2,\ldots,n.\] where \(\Phi(x,i)=(\Phi^{1}(x,i),\ldots,\Phi^{n}(x,i))\) and \(F(x,i)=(F^{1}(x,i),\ldots,F^{n}(x,i))\). Now, we state our first main result about the well-posedness of the solution of equation (2.10) and its regularity estimates. **Proposition 2.2**.: _For any \(F\in C^{2}_{b}(\mathbb{R}^{n}\times\mathbb{S},\mathbb{R}^{n})\) satisfying the "central condition" (2.9). Define_ \[\Phi(x,i)=\int_{0}^{\infty}\mathbb{E}F(x,\alpha^{x,i}_{t})dt, \tag{2.11}\] _where \(\{\alpha^{x,i}_{t}\}_{t\geqslant 0}\) is the unique \(\mathbb{S}\)-valued Markov chain generated by generator \(Q(x)\) with initial value \(\alpha^{x,i}_{0}=i\in\mathbb{S}\). Then \(\Phi(x,i)\) is a solution of the Poisson equation (2.10). Moreover, there exists a constant \(C>0\) such that for any \(x\in\mathbb{R}^{n}\),_ \[\|\Phi(x,\cdot)\|_{\infty}\leq C\|F(x,\cdot)\|_{\infty}, \tag{2.12}\] \[\|\partial_{x}\Phi(x,\cdot)\|_{\infty}\leq C\left[\|F(x,\cdot)\|_ {\infty}+\|\partial_{x}F(x,\cdot)\|_{\infty}\right],\] (2.13) \[\|\partial_{x}^{2}\Phi(x,\cdot)\|_{\infty}\leq C\left[\|F(x,\cdot )\|_{\infty}+\|\partial_{x}F(x,\cdot)\|_{\infty}+\|\partial_{x}^{2}F(x,\cdot) \|_{\infty}\right]. \tag{2.14}\] Proof.: The detailed proofs are divided into three steps. **Step 1:** In this step, we prove that \(\Phi(x,i)\) is a solution of the Poisson equation (2.10), it is sufficient to show that for any \(l=1,2,\ldots,n\), \[\lim_{s\to 0}\frac{P^{x}(s)\Phi^{l}(x,\cdot)(i)-\Phi^{l}(x,i)}{s}=-F^{l}(x,i),\] where \(P^{x}(t)\) is the corresponding transition probability matrix of \(Q(x)\). 
In fact, by the Chapman-Kolmogorov equation, it follows \[P^{x}(s)\Phi^{l}(x,\cdot)(i)-\Phi^{l}(x,i)= \sum_{j\in\mathbb{S}}p^{x}_{ij}(s)\Phi^{l}(x,j)-\Phi^{l}(x,i)\] \[= \sum_{j\in\mathbb{S}}p^{x}_{ij}(s)\int_{0}^{\infty}\sum_{k\in \mathbb{S}}F^{l}(x,k)p^{x}_{jk}(t)dt-\sum_{k\in\mathbb{S}}\int_{0}^{\infty}F^{ l}(x,k)p^{x}_{ik}(t)dt\] \[= \sum_{k\in\mathbb{S}}\int_{0}^{\infty}F^{l}(x,k)p^{x}_{ik}(t+s) dt-\sum_{k\in\mathbb{S}}\int_{0}^{\infty}F^{l}(x,k)p^{x}_{ik}(t)dt\] \[= \sum_{k\in\mathbb{S}}\int_{s}^{\infty}F^{l}(x,k)p^{x}_{ik}(t)dt- \sum_{k\in\mathbb{S}}\int_{0}^{\infty}F^{l}(x,k)p^{x}_{ik}(t)dt\] \[= -\sum_{k\in\mathbb{S}}\int_{0}^{s}F^{l}(x,k)p^{x}_{ik}(t)dt,\] which implies \[\lim_{s\to 0}\frac{P^{x}(s)\Phi^{l}(x,\cdot)(i)-\Phi^{l}(x,i)}{s}=-\lim_{s \to 0}\sum_{k\in\mathbb{S}}F^{l}(x,k)p^{x}_{ik}(s)=-F^{l}(x,i).\] By (2.9) and (2.2), we have \[|\Phi(x,i)| \leqslant \int_{0}^{\infty}|\mathbb{E}F(x,\alpha_{t}^{x,i})|dt\] \[= \int_{0}^{\infty}|P_{t}^{x}F(x,\cdot)(i)-\mu^{x}(F(x,\cdot))|\,dt\] \[\leqslant \int_{0}^{\infty}\|F(x,\cdot)\|_{\infty}\|p_{i}^{x}(t)-\mu^{x}\|_ {\rm var}dt\] \[\leqslant C\|F(x,\cdot)\|_{\infty}\int_{0}^{\infty}e^{-\lambda t}dt\] \[\leqslant \frac{C}{\lambda}\|F(x,\cdot)\|_{\infty}.\] Now, we define \[\tilde{F}_{t_{0}}(x,i,t):=\hat{F}(x,i,t)-\hat{F}(x,i,t+t_{0}),\] where \(\hat{F}(x,i,t):=\mathbb{E}F(x,\alpha_{t}^{x,i})\). The ergodicity condition (2.2) implies \[\lim_{t_{0}\to+\infty}\tilde{F}_{t_{0}}(x,i,t)=\mathbb{E}F(x,\alpha_{t}^{x,i}).\] In order to prove (2.13) and (2.14), it is sufficient to prove that there exists \(C>0\) such that for any \(t_{0}>0\), \(t>0\), \(x\in\mathbb{R}^{n}\), \(k,l\in\{1,2,\ldots,n\}\) and \(i\in\mathbb{S}\), \[|\partial_{x_{k}}\tilde{F}_{t_{0}}(x,i,t)|\leq C\left[\|F(x,\cdot)\|_{\infty} +\|\partial_{x}F(x,\cdot)\|_{\infty}\right]e^{-\lambda t}t, \tag{2.15}\] \[|\partial_{x_{k}}\partial_{x_{l}}\tilde{F}_{t_{0}}(x,i,t)|\leq C\left[\|F(x, \cdot)\|_{\infty}+\|\partial_{x}F(x,\cdot)\|_{\infty}+\|\partial_{x}^{2}F(x, \cdot)\|_{\infty}\right]e^{-\lambda t}(1\lor t^{2}), \tag{2.16}\] which will be proved in Step 2 and Step 3, respectively. **Step 2:** In this step, we indent to prove (2.15). 
By the Markov property, \[\tilde{F}_{t_{0}}(x,i,t) = \hat{F}(x,i,t)-\mathbb{E}F(x,\alpha_{t+t_{0}}^{x,i}) \tag{2.17}\] \[= \hat{F}(x,i,t)-\mathbb{E}\Big{[}\mathbb{E}[F(x,\alpha_{t+t_{0}}^ {x,i})|\mathscr{F}_{t_{0}}]\Big{]}\] \[= \hat{F}(x,i,t)-\mathbb{E}\hat{F}(x,\alpha_{t_{0}}^{x,i},t),\] which implies for any \(k\in\{1,2,\ldots,n\}\), \[\partial_{x_{k}}\tilde{F}_{t_{0}}(x,i,t) = \partial_{x_{k}}\hat{F}(x,i,t)-\mathbb{E}\partial_{x_{k}}\hat{F}( x,\alpha_{t_{0}}^{x,i},t)\] \[-\left\{\partial_{x_{k}}\mathbb{E}\left[\hat{F}(z,\alpha_{t_{0}}^ {x,i},t)\right]\right\}|_{z=x}.\] On one hand, note that \[\partial_{x_{k}}\hat{F}(x,i,t)=\partial_{x_{k}}\mathbb{E}F(x,\alpha_{t}^{x,i}) =\mathbb{E}\partial_{x_{k}}F(x,\alpha_{t}^{x,i})+\left\{\partial_{x_{k}} \mathbb{E}F(z,\alpha_{t}^{x,i})\right\}|_{z=x}.\] Then for any \(i,j\in\mathbb{S}\), \[|\partial_{x_{k}}\hat{F}(x,i,t)-\partial_{x_{k}}\hat{F}(x,j,t)| \leqslant |\mathbb{E}\partial_{x_{k}}F(x,\alpha_{t}^{x,i})-\mathbb{E} \partial_{x_{k}}F(x,\alpha_{t}^{x,j})|\] \[+\left|\partial_{x_{k}}\mathbb{E}F(z,\alpha_{t}^{x,i})|_{z=x}- \partial_{x_{k}}\mathbb{E}F(z,\alpha_{t}^{x,j})|_{z=x}\right|.\] Then by (2.4) and (2.2), we have \[|\partial_{x_{k}}\hat{F}(x,i,t)-\partial_{x_{k}}\hat{F}(x,j,t)| \leqslant \|\partial_{x}F(x,\cdot)\|_{\infty}\|p_{i}^{x}(t)-p_{j}^{x}(t)\|_ {\rm var}\] \[+|\partial_{x_{k}}P^{x}(t)F(z,\cdot)(i)|_{z=x}-\partial_{x_{k}}P^ {x}(t)F(z,\cdot)(j)|_{z=x}\] \[\leqslant C\|\partial_{x}F(x,\cdot)\|_{\infty}e^{-\lambda t}. \tag{2.21}\] Finally, Combining (2.17), (2.19) and (2.21), it is easy to see (2.15) holds. **Step 3:** In this step, we indent to prove (2.16). Recall that (2.17), then the chain rule yields for any \(k,l\in\{1,2,\ldots,n\}\), \[\partial_{x_{k}}\partial_{x_{l}}\tilde{F}_{t_{0}}(x,i,t)= \left[\partial_{x_{k}}\partial_{x_{l}}\hat{F}(x,i,t)-\mathbb{E} \partial_{x_{k}}\partial_{x_{l}}\hat{F}(x,\alpha_{t_{0}}^{x,i},t)\right]\] \[-\left\{\partial_{x_{l}}\mathbb{E}[\partial_{x_{k}}\hat{F}(z, \alpha_{t_{0}}^{x,i},t)]\right\}|_{z=x}\] \[-\partial_{x_{l}}\left\{\partial_{x_{k}}\mathbb{E}\left[\hat{F}( z,\alpha_{t_{0}}^{x,i},t)\right]|_{z=x}\right\}\] \[=: \sum_{i=1}^{3}J_{i}.\] (i) For the term \(J_{1}\). 
Note that \[\partial_{x_{k}}\partial_{x_{l}}\hat{F}(x,i,t)= \left.\mathbb{E}\partial_{x_{k}}\partial_{x_{l}}F(x,\alpha_{t}^{ x,i})+\left\{\partial_{x_{l}}\mathbb{E}\left[\partial_{x_{k}}F(z,\alpha_{t}^{ x,i})\right]\right\}|_{z=x}\right.\] \[\left.+\partial_{x_{l}}\left[\partial_{x_{k}}\mathbb{E}F(z, \alpha_{t}^{x,i})|_{z=x}\right],\] which implies for any \(i,j\in\mathbb{S}\), \[|\partial_{x_{k}}\partial_{x_{l}}\hat{F}(x,i,t)-\partial_{x_{k}} \partial_{x_{l}}\hat{F}(x,j,t)| \tag{2.22}\] \[\leqslant \left.\left|\mathbb{E}\partial_{x_{k}}\partial_{x_{l}}F(x, \alpha_{t}^{x,i})-\mathbb{E}\partial_{x_{k}}\partial_{x_{l}}F(x,\alpha_{t}^{ x,j})\right|\right.\] \[\left.+\left|\left\{\partial_{x_{l}}\mathbb{E}\left[\partial_{x_ {k}}F(z,\alpha_{t}^{x,i})\right]\right\}\right.\right|_{z=x}-\left\{\partial_{ x_{l}}\mathbb{E}\left[\partial_{x_{k}}F(z,\alpha_{t}^{x,j})\right]\right\}|_{z=x}\right|\] \[+\left|\partial_{x_{l}}\left[\partial_{x_{k}}\mathbb{E}F(z, \alpha_{t}^{x,i})|_{z=x}\right]-\partial_{x_{l}}\left[\partial_{x_{k}}\mathbb{ E}F(z,\alpha_{t}^{x,j})|_{z=x}\right]\right|\] \[=: J_{11}+J_{12}+J_{13}.\] By the ergodicity condition (2.2), \[J_{11} \leqslant \|\partial_{x_{k}}\partial_{x_{l}}F(x,\cdot)\|_{\infty}\|p_{i}^{ x}(t)-p_{j}^{x}(t)\|_{\rm var} \tag{2.23}\] \[\leqslant C\|\partial_{x}^{2}F(x,\cdot)\|_{\infty}e^{-\lambda t}.\] Using (2.4) we have \[J_{12} \leqslant \|\partial_{x_{k}}F(x,\cdot)\|_{\infty}e^{-\lambda t}t \tag{2.24}\] \[\leqslant C\|\partial_{x}F(x,\cdot)\|_{\infty}e^{-\lambda t}t.\] Applying (2.4) and (2.5), we obtain \[J_{13}\leqslant |\partial_{x_{k}}P^{x}(t)\partial_{x_{l}}F(x,\cdot)(i)-\partial_{x_{k }}P^{x}(t)\partial_{x_{l}}F(x,\cdot)(j)| \tag{2.25}\] \[+\,|\partial_{x_{l}}\partial_{x_{k}}P^{x}(t)F(x,\cdot)(i)- \partial_{x_{l}}\partial_{x_{k}}P^{x}(t)F(x,\cdot)(j)|\] \[\leqslant C\|\partial_{x_{l}}F(x,\cdot)\|_{\infty}e^{-\lambda t}t+C\|F(x, \cdot)\|_{\infty}e^{-\lambda t}(1\lor t^{2}).\] Thus by (2.23)-(2.25), we obtain \[|J_{1}|\leqslant C\left[\|F(x,\cdot)\|_{\infty}+\|\partial_{x}F(x,\cdot)\|_{ \infty}+\|\partial_{x}^{2}F(x,\cdot)\|_{\infty}\right]e^{-\lambda t}(1\lor t ^{2}). \tag{2.26}\] (ii) For the term \(J_{2}\). 
By any coupling process \((\tilde{\alpha}_{t_{0}}^{x,i},\tilde{\alpha}_{t_{0}}^{y,i})\) of \((\alpha_{t_{0}}^{x,i},\alpha_{t_{0}}^{y,i})\), using (2.18), we have for any \(x,y\in\mathbb{R}^{n}\), \[|\mathbb{E}[\partial_{x_{k}}\hat{F}(z,\alpha_{t_{0}}^{x,i},t)]- \mathbb{E}[\partial_{x_{k}}\hat{F}(z,\alpha_{t_{0}}^{y,i},t)]|\] \[= |\mathbb{E}[\partial_{x_{k}}\hat{F}(z,\tilde{\alpha}_{t_{0}}^{x,i },t)]-\mathbb{E}[\partial_{x_{k}}\hat{F}(z,\tilde{\alpha}_{t_{0}}^{y,i},t)]|\] \[\leqslant C\left[\|F(z,\cdot)\|_{\infty}+\|\partial_{x}F(z,\cdot)\|_{ \infty}\right]e^{-\lambda t}(1\lor t)\mathbb{P}(\tilde{\alpha}_{t_{0}}^{x,i} \neq\tilde{\alpha}_{t_{0}}^{y,i}),\] thus we get \[|\mathbb{E}[\partial_{x_{k}}\hat{F}(z,\alpha_{t_{0}}^{x,i},t)]- \mathbb{E}[\partial_{x_{k}}\hat{F}(z,\alpha_{t_{0}}^{y,i},t)]|\] \[\leqslant C\left[\|F(z,\cdot)\|_{\infty}+\|\partial_{x}F(z,\cdot)\|_{ \infty}\right]e^{-\lambda t}(1\lor t)\mathbb{W}(p_{i}^{x}(t_{0}),p_{i}^{y}(t_ {0})),\] where \[\mathbb{W}(\mu,\nu):=\inf_{\pi}\sum_{i=1}^{m_{0}}\sum_{j=1}^{m_{0}}1_{\{i\neq j \}}\pi_{ij}\] here the infimum is taken over all coupling measures \(\pi=(\pi_{ij})\) on \(\mathbb{S}\times\mathbb{S}\) of \(\mu=(\mu_{1},\ldots,\mu_{m_{0}})\) and \(\nu=(\nu_{1},\ldots,\nu_{m_{0}})\), i.e., for any \(i\in\{1,\ldots,m_{0}\}\), \[\mu_{i}=\sum_{k=1}^{m_{0}}\pi_{ik}\quad\text{and}\quad\nu_{j}=\sum_{k=1}^{m_{0 }}\pi_{kj}.\] Refer to [27, Page 36], \[\mathbb{W}(\mu,\nu)=\frac{1}{2}\|\mu-\nu\|_{\rm var}.\] Then using (2.3), we get \[|\mathbb{E}[\partial_{x_{k}}\hat{F}(z,\alpha_{t_{0}}^{x,i},t)]- \mathbb{E}[\partial_{x_{k}}\hat{F}(z,\alpha_{t_{0}}^{y,i},t)]|\] \[\leqslant C\left[\|F(z,\cdot)\|_{\infty}+\|\partial_{x}F(z,\cdot)\|_{ \infty}\right]e^{-\lambda t}(1\lor t)\|p_{i:}^{x}(t_{0})-p_{i:}^{y}(t_{0})\|_{ \rm var}\] \[\leqslant C\left[\|F(z,\cdot)\|_{\infty}+\|\partial_{x}F(z,\cdot)\|_{ \infty}\right]e^{-\lambda t}(1\lor t)|x-y|,\] which implies that \[|J_{2}|\leqslant C\left[\|F(x,\cdot)\|_{\infty}+\|\partial_{x}F(x,\cdot)\|_{ \infty}\right]e^{-\lambda t}(1\lor t). \tag{2.27}\] (iii) For the term \(J_{3}\). note that \[\partial_{x_{l}}\left\{\partial_{x_{k}}\mathbb{E}\left[\hat{F}(z,\alpha_{t_{0}}^{x,i},t)\right]|_{z=x}\right\}= \left\{\partial_{x_{k}}\mathbb{E}\left[\partial_{x_{l}}\hat{F}(z, \alpha_{t_{0}}^{x,i},t)\right]\right\}|_{z=x}\] \[+\left\{\partial_{x_{k}}\partial_{x_{l}}\mathbb{E}\left[\hat{F}(z,\alpha_{t_{0}}^{x,i},t)\right]\right\}|_{z=x}=:J_{31}+J_{32}.\] By the same argument as in the estimating term \(J_{2}\), it is easy to see \[|J_{31}|\leqslant C\left[\|F(x,\cdot)\|_{\infty}+\|\partial_{x}F(x,\cdot)\|_{ \infty}\right]e^{-\lambda t}(1\lor t). \tag{2.28}\] By (2.5) and (2.20), we have \[|J_{32}| =\,\left|\partial_{x_{k}}\partial_{x_{l}}P^{x}(t_{0})\hat{F}(x, \cdot,t)\right|\] \[\leqslant\,C\|\hat{F}(x,\cdot,t)\|_{\infty}\] \[\leqslant\,C\|F(x,\cdot)\|_{\infty}e^{-\lambda t}.\] Hence, (2.28) and (2.29) yield that \[|J_{3}|\leqslant C\left[\|F(x,\cdot)\|_{\infty}+\|\partial_{x}F(x, \cdot)\|_{\infty}\right]e^{-\lambda t}(1\lor t). \tag{2.29}\] Finally, by (2.26), (2.27) and (2.29), we get (2.16). The proof is complete. **Remark 2.3**.: Assuming the "_central condition_" (2.9), the representation (2.11) is a solution of the Poisson equation (2.10). However, this solution is not usually unique, but it is enough for our purpose. In fact, if the solution \(\Phi\) also satisfies the "_central condition_", i.e., \(\sum_{i=1}^{m_{0}}\Phi(x,i)\mu_{i}^{x}=0\) for any \(x\in\mathbb{R}^{n}\), then the solution is unique (cf. 
[15, Lemma 4.2.6]). **Remark 2.4**.: In the situation that Markov chain \(\{\alpha_{t}^{x,i}\}_{t\geqslant 0}\) is independent of \(x\), that is, when the generator \(Q(x)\) is constant matrix and denoted by \(Q\), the same results can be easily obtained through computation, as shown in [10]. However in current situation, i.e., the "full dependent" case, the proof is much more complicated. ## 3. Averaging principle for stochastic system (1.1) and (1.2) In this section, we study the averaging principle of the stochastic system (1.1) and (1.2) as an application of Poisson equation. To do so, we assume the following conditions on the coefficients \(b(x,i)\), \(\sigma\) and \(Q(x)\). **B1**.: _Suppose that there exists \(C>0\) such that all \(i,j\in\mathbb{S}\), \(x,y\in\mathbb{R}^{n}\),_ \[|b(x,i)-b(y,j)|\leq C|x-y|+C1_{i\neq j}, \tag{3.1}\] \[\|\sigma(x)-\sigma(y)\|\leq C|x-y|. \tag{3.2}\] **B2**.: _Suppose that \(Q(x)\) satisfies the assumptions_ **A1** _and_ **A2**_. Moreover,_ \[K(x):=\sum_{i\in\mathbb{S}}\sum_{j\in\mathbb{S}\setminus\{i\}}q_ {ij}(x)\leqslant C(1+|x|),\quad\forall x\in\mathbb{R}^{n}. \tag{3.3}\] **Remark 3.1**.: Under the assumptions **B1** and **B2**, (1.1) and (1.2) admits a unique solution (see e.g. [23]), and it is easy to see that the coefficients \(b\) and \(\sigma\) exhibit linear growth, satisfying \[\sup_{i\in\mathbb{S}}|b(x,i)|\leqslant C(1+|x|),\quad\|\sigma(x) \|\leqslant C(1+|x|). \tag{3.4}\] Note that condition (3.1) is stronger than the following classical condition, which requires \[|b(x,i)-b(y,i)|\leq C|x-y|. \tag{3.5}\] Condition (3.1) implies the global Lipschitz continuity of \(b(x,\cdot)\) with respect to discrete distance, which is used to establish the global Lipschitz continuity of the averaged coefficient \(\bar{b}\). If \(b\) is uniformly bounded, conditions (3.1) and (3.5) are equivalent. **Remark 3.2**.: It is well-known (cf. [3]) that the process \(\alpha^{\varepsilon}\) can be described as follows. We introduce the function \(g:\mathbb{R}^{n}\times\mathbb{S}\times[0,\infty)\to\mathbb{R}\), defined by \[g^{\varepsilon}(x,i,z)=\sum_{j\in\mathbb{S}\setminus i}(j-i)1_{z\in\triangle_{ ij}^{\varepsilon}(x)},\quad i\in\mathbb{S}\,,\] where \(\triangle_{ij}^{\varepsilon}(x)\) are the consecutive (with respect to the lexicographic ordering on \(\mathbb{S}\times\mathbb{S}\)) left-closed, right-open intervals of \(\mathbb{R}_{+}\), each having length \(q_{ij}(x)\varepsilon^{-1}\), with \(\Delta_{12}^{\varepsilon}=[0,\varepsilon^{-1}q_{12}(x))\). Then, equation (1.2) can also be written as \[d\alpha_{t}^{\varepsilon}=\int_{[0,\infty)}g^{\varepsilon}(X_{t-}^{ \varepsilon},\alpha_{t-}^{\varepsilon},z)N(dt,dz), \tag{3.6}\] where \(N(dt,dz)\) is a Poisson random measure defined on \(\Omega\times\mathcal{B}(\mathbb{R}_{+})\times\mathcal{B}(\mathbb{R}_{+})\) with Lebesgue measure as its intensity measure, and \(N(dt,dz)\) is independent of \(W\). ### A priori estimates of \(X_{t}^{\varepsilon}\) and \(\bar{X}_{t}\) In this subsection, we first give the following estimate of the solution \(X_{t}^{\varepsilon}\), since its proof follows the same argument as in [12, (4.56)], we omit the detailed proof here. **Lemma 3.3**.: _For any \(T>0\) and \(p>0\), there exists \(C_{p,T}>0\) such that_ \[\mathbb{E}\left(\sup_{0\leqslant t\leqslant T}|X_{t}^{\varepsilon}|^{p} \right)\leq C_{p,T}(1+|x|^{p}). 
\tag{3.7}\] We consider the following averaged equation: \[\left\{\begin{array}{l}d\bar{X}_{t}=\bar{b}(\bar{X}_{t})dt+\sigma(\bar{X}_{t })dW_{t},\\ \bar{X}_{0}=x\in\mathbb{R}^{n},\end{array}\right. \tag{3.8}\] where \(\bar{b}(x):=\sum_{j\in\mathbb{S}}b(x,j)\mu_{j}^{x}\). The existence and uniqueness of the solution \(\{\bar{X}_{t}\}_{t\geqslant 0}\) and its priori estimate are stated in the following Lemma. **Lemma 3.4**.: _Equation (3.8) admits a unique solution \(\{\bar{X}_{t}\}_{t\geqslant 0}\). Moreover, for any \(T>0\) and \(p>0\), there exists \(C_{p,T}>0\) such that_ \[\mathbb{E}\left(\sup_{0\leqslant t\leqslant T}|\bar{X}_{t}|^{p}\right)\leq C _{p,T}(1+|x|^{p}). \tag{3.9}\] Proof.: It is sufficient to prove \(\bar{b}\) is Lipschitz continuous. Moreover, (3.9) holds by the same argument in Lemma 3.3. In fact, for any \(x_{1},x_{2}\in\mathbb{R}^{n}\), we have for any \(t\geqslant 0\), \[|\bar{b}(x_{1})-\bar{b}(x_{2})|= |\mu^{x_{1}}(b(x_{1},\cdot))-\mu^{x_{2}}(b(x_{2},\cdot))|\] \[\leq \big{|}\mu^{x_{1}}(b(x_{1},\cdot))-\mathbb{E}b(x_{1},\alpha_{t}^ {x_{1},i})\big{|}+\big{|}\mu^{x_{2}}(b(x_{2},\cdot))-\mathbb{E}b(x_{2},\alpha _{t}^{x_{2},i})\big{|}\] \[+\big{|}\mathbb{E}b(x_{1},\alpha_{t}^{x_{1},i})-\mathbb{E}b(x_{2},\alpha_{t}^{x_{2},i})\big{|}\] \[\leq \|b(x_{1},\cdot)\|_{\infty}\|p_{i}^{x_{1}}-\mu^{x_{1}}\|_{\rm var }+\|b(x_{2},\cdot)\|_{\infty}\|p_{i}^{x_{2}}-\mu^{x_{2}}\|_{\rm var}\] \[+\big{|}\mathbb{E}b(x_{1},\alpha_{t}^{x_{1},i})-\mathbb{E}b(x_{2},\alpha_{t}^{x_{2},i})\big{|}\] \[\leq C(1+|x|)e^{-\lambda t}+\big{|}\mathbb{E}b(x_{1},\alpha_{t}^{x_{1},i})-\mathbb{E}b(x_{2},\alpha_{t}^{x_{2},i})\big{|}\,.\] Using condition (3.1), by any coupling process \((\tilde{\alpha}_{t}^{x_{1},i},\tilde{\alpha}_{t}^{x_{2},i})\) of \((\alpha_{t}^{x,i},\alpha_{t}^{y,i})\), we have \[\big{|}\mathbb{E}b(x_{1},\alpha_{t}^{x_{1},i})-\mathbb{E}b(x_{2},\alpha_{t}^{x_ {2},i})\big{|}= \big{|}\mathbb{E}b(x_{1},\tilde{\alpha}_{t}^{x_{1},i})-\mathbb{E}b(x_{2}, \tilde{\alpha}_{t}^{x_{2},i})\big{|}\] \[\leq C|x_{1}-x_{2}|+C\mathbb{P}(\tilde{\alpha}_{t}^{x_{1},i}\neq\tilde{\alpha}_{t }^{x_{2},i}).\] Then by the same argument as used in **Step 3** in Proposition 2.2, we have \[\left|\mathbb{E}b(x_{1},\alpha_{t}^{x_{1},i})-\mathbb{E}b(x_{2}, \alpha_{t}^{x_{2},i})\right|\leq C|x_{1}-x_{2}|,\] Letting \(t\to\infty\), by ergodicity condition (2.2), it follows \[|\bar{b}(x_{1})-\bar{b}(x_{2})|\leq C|x_{1}-x_{2}|.\] The proof is complete. ### Strong averaging principle Please note that the coefficient \(b(x,i)\) is only Lipschitz continuous, which is not sufficiently smooth to apply Proposition 2.2. Therefore, a mollifying approximation argument is being considered here and will be used in the proof. To do this, assume \(F:\mathbb{R}^{n}\times\mathbb{S}\to\mathbb{R}^{n}\) satisfies \[|F(x,i)-F(y,j)|\leqslant C|x-y|+C1_{i\neq j},\quad\forall x,y\in\mathbb{R}^{n},i,j\in\mathbb{S}. \tag{3.10}\] Let \(\rho:\mathbb{R}^{n}\to[0,1]\) be a smooth function such that any \(k\in\mathbb{N}_{+}\), \[\int_{\mathbb{R}^{n}}\rho(z)dz=1,\quad\int_{\mathbb{R}^{n}}|z|^{k}\,\rho(z)dz \leq C_{k},\quad\left|\nabla^{k}\rho(z)\right|\leq C_{k}\rho(z). \tag{3.11}\] Define a mollifying approximation of \(F\) as follows: \[F_{k}(x,i):=\int_{\mathbb{R}^{n}}F(x-z,i)\rho^{k}(z)dz, \tag{3.12}\] where \(\rho^{k}(z):=k^{n}\rho(kz)\). 
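As a sanity check on this construction (purely illustrative, and using a Gaussian kernel as a convenient smooth stand-in for \(\rho\), which does not satisfy the third condition in (3.11) exactly), the following Python sketch mollifies the Lipschitz function \(F(x)=|x|\) in one dimension and tracks how the approximation error and the second derivative scale with \(k\), in line with the bounds collected in Lemma 3.5 below.

```python
import numpy as np

# One-dimensional mollification F_k(x) = int F(x - z) rho^k(z) dz, rho^k(z) = k rho(kz).
# Gaussian kernel used as a smooth stand-in for rho; F(x) = |x| is Lipschitz but not C^2.
zs = np.linspace(-10.0, 10.0, 20001)
dz = zs[1] - zs[0]
rho = np.exp(-zs**2 / 2.0) / np.sqrt(2.0 * np.pi)

def F(x):
    return np.abs(x)

def F_k(x, k):
    # After the substitution u = k z:  F_k(x) = int F(x - u / k) rho(u) du
    return np.sum(F(x - zs / k) * rho) * dz

xs = np.linspace(-2.0, 2.0, 2001)
for k in (5, 10, 20, 40):
    vals = np.array([F_k(x, k) for x in xs])
    sup_err = np.max(np.abs(vals - F(xs)))                             # decays roughly like 1/k
    second = np.max(np.abs(np.gradient(np.gradient(vals, xs), xs)))    # grows roughly like k
    print(k, sup_err, second)
```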
The averaged coefficient of \(F_{k}\) is defined by \[\bar{F}_{k}(x):=\sum_{i\in\mathbb{S}}F_{k}(x,i)\mu_{i}^{x}.\] With the notations above, we have the following result: **Lemma 3.5**.: _Assume that \(F\) satisfies (3.10) and *central condition * (2.9). Then for any \(k\in\mathbb{N}_{+}\) and \(x\in\mathbb{R}^{n}\), we have_ \[\|F(x,\cdot)-F_{k}(x,\cdot)\|_{\infty}\leqslant Ck^{-1},\quad\| \partial_{x}F_{k}(x,\cdot)\|_{\infty}\leqslant C,\quad\|\partial_{x}^{2}F_{k} (x,\cdot)\|_{\infty}\leqslant Ck; \tag{3.13}\] \[\|F_{k}(x,\cdot)\|_{\infty}\leqslant C(1+|x|),\quad|\bar{F}_{k}(x )|\leqslant C(1+|x|)k^{-1};\] (3.14) \[\|\partial_{x}\bar{F}_{k}(x)\|\leqslant C(1+|x|),\quad\|\partial_ {x}^{2}\bar{F}_{k}(x)\|\leqslant C(1+|x|)k, \tag{3.15}\] _where \(C>0\) is a constant independent of \(k\) and \(x\)._ Proof.: Since the estimates in (3.13) have been proved in [10, Lemma 3.1]. It is sufficient to prove the estimates in (3.14) and (3.15). Note that by (3.10), it is easy to prove \[\|F(x,\cdot)\|_{\infty}\leqslant C(1+|x|),\quad\forall x\in\mathbb{S}. \tag{3.16}\] Then we have for any \(x\in\mathbb{S}\), \[\|F_{k}(x,\cdot)\|_{\infty}\leqslant\|F(x,\cdot)-F_{k}(x,\cdot)\|_{\infty}+ \|F(x,\cdot)\|_{\infty}\leqslant C(1+|x|).\] By (3.16) and \(F\) satisfies the *_central condition_* (2.9), we get \[|\bar{F}_{k}(x)|=\ \left|\int_{\mathbb{R}^{n}}\sum_{i\in\mathbb{S}}F(x-z,i)( \mu_{i}^{x}-\mu_{i}^{x-z})\rho^{k}(z)dz\right|\] \[\leqslant C_{p}\mathbb{E}\left\{\sup_{0\leqslant t\leqslant T}\left|\int_{0}^{t} \left[\varrho(X_{s}^{\varepsilon})-\sigma(\bar{X}_{s})\right]dW_{s}\right|^{p}\right\}\] \[\leqslant C_{p}\mathbb{E}\left\{\sup_{0\leqslant t\leqslant T}\left|\int_{0}^ {t}\left[b(X_{s}^{\varepsilon},\alpha_{s}^{\varepsilon})-\bar{b}(X_{s}^{ \varepsilon})\right]ds\right|^{p}\right\}\] \[+C_{p,T}\int_{0}^{T}\mathbb{E}|X_{s}^{\varepsilon}-\bar{X}_{s}|^{ p}ds.\] Then by Gronwall's inequality, it follows \[\mathbb{E}\left(\sup_{0\leqslant t\leqslant T}|X_{t}^{\varepsilon}-\bar{X}_{t}|^{ p}\right)\leq\ C_{p,T}\mathbb{E}\left\{\sup_{0\leqslant t\leqslant T}\left|\int_{0}^{t} \left[b(X_{s}^{\varepsilon},\alpha_{s}^{\varepsilon})-\bar{b}(X_{s}^{ \varepsilon})\right]ds\right|^{p}\right\}. \tag{3.18}\] In order to deal with the right term in (3.18), we define \[F(x,i):=b(x,i)-\bar{b}(x).\] Obviously, \(F\) satisfies the Lipschitz continuous property (3.10), thus using Lemma 3.5, there exist sequences \(\{F_{k}\}_{k\geqslant 1}\) and \(\{\bar{F}_{k}\}_{k\geqslant 1}\) satisfy (3.13)- (3.15). 
Then we get \[\mathbb{E}\left(\sup_{0\leqslant t\leqslant T}|X_{t}^{\varepsilon} -\bar{X}_{t}|^{p}\right)\leq C_{p,T}\mathbb{E}\left[\sup_{0\leqslant t\leqslant T} \left|\int_{0}^{t}\left[F(X_{s}^{\varepsilon},\alpha_{s}^{\varepsilon})-F_{k} (X_{s}^{\varepsilon},\alpha_{s}^{\varepsilon})+\bar{F}_{k}(X_{s}^{ \varepsilon})\right]ds\right|^{p}\right]\] \[+C_{p,T}\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}\left|\int _{0}^{t}\left[F_{k}(X_{s}^{\varepsilon},\alpha_{s}^{\varepsilon})-\bar{F}_{k} (X_{s}^{\varepsilon})\right]ds\right|^{p}\right]\] \[\leq C_{p,T}(1+|x|^{p})k^{-p}+C_{p,T}\mathbb{E}\left[\sup_{0\leqslant t \leqslant T}\left|\int_{0}^{t}\left[F_{k}(X_{s}^{\varepsilon},\alpha_{s}^{ \varepsilon})-\bar{F}_{k}(X_{s}^{\varepsilon})\right]ds\right|^{p}\right].\] Note that \(F_{k}-\bar{F}_{k}\) belongs \(C_{b}^{2}(\mathbb{R}^{n}\times\mathbb{S},\mathbb{R}^{n})\) and satisfies the "_central condition_" (2.9), then we define \[\Phi_{k}(x,i)=\int_{0}^{\infty}\left[\mathbb{E}F_{k}(x,\alpha_{t}^{x,i})-\bar{ F}_{k}(x)\right]dt.\] By Proposition 2.2, \(\Phi_{k}=(\Phi_{k}^{1},\Phi_{k}^{2},\ldots,\Phi_{k}^{n})\) solves the following Poisson equation \[-Q(x)\Phi_{k}(x,\cdot)(i)=F_{k}(x,i)-\bar{F}_{k}(x). \tag{3.19}\] Moreover, using (3.13)-(3.15), we get the following estimates: \[\|\Phi_{k}(x,\cdot)\|_{\infty}\leq C\left(1+|x|\right), \tag{3.20}\] \[\|\partial_{x}\Phi_{k}(x,\cdot)\|_{\infty}\leq C\left(1+|x|\right),\] (3.21) \[\|\partial_{x}^{2}\Phi_{k}(x,\cdot)\|_{\infty}\leq C\left(1+|x| \right)k. \tag{3.22}\] Applying Ito's formula, see e.g. [33, (2.7)], we obtain \[\Phi_{k}(X_{t}^{\varepsilon},\alpha_{t}^{\varepsilon})= \Phi_{k}(x,\alpha)+\int_{0}^{t}\partial_{x}\Phi_{k}(X_{s}^{ \varepsilon},\alpha_{s}^{\varepsilon})\cdot b(X_{s}^{\varepsilon},\alpha_{s} ^{\varepsilon})ds+\int_{0}^{t}\partial_{x}\Phi_{k}(X_{s}^{\varepsilon},\alpha_ {s}^{\varepsilon})\cdot\sigma(X_{s}^{\varepsilon})dW_{s}\] \[+\frac{1}{\varepsilon}\int_{0}^{t}Q(X_{s}^{\varepsilon})\Phi_{k}( X_{s}^{\varepsilon},\cdot)(\alpha_{s}^{\varepsilon})ds+\frac{1}{2}\int_{0}^{t} \mathrm{Tr}\Big{[}\partial_{x}^{2}\Phi_{k}(X_{s}^{\varepsilon},\alpha_{s}^{ \varepsilon})\cdot\left(\sigma\sigma^{*}\right)(X_{s}^{\varepsilon})\Big{]}ds\] \[+\int_{0}^{t}\!\int_{[0,\infty)}\Big{[}\Phi_{k}(X_{s-}^{ \varepsilon},\alpha_{s-}^{\varepsilon}+g^{\varepsilon}(X_{s-}^{\varepsilon}, \alpha_{s-}^{\varepsilon},z))-\Phi_{k}(X_{s-}^{\varepsilon},\alpha_{s-}^{ \varepsilon})\Big{]}\tilde{N}(ds,dz),\] where \[\mathrm{Tr}\Big{[}\partial_{x}^{2}\Phi_{k}(x,i)\cdot\left(\sigma\sigma^{*} \right)(x)\Big{]}:=\left(\mathrm{Tr}\Big{[}\partial_{x}^{2}\Phi_{k}^{1}(x,i) \cdot\left(\sigma\sigma^{*}\right)(x)\Big{]},\ldots,\mathrm{Tr}\Big{[} \partial_{x}^{2}\Phi_{k}^{n}(x,i)\cdot\left(\sigma\sigma^{*}\right)(x)\Big{]} \right).\] Therefore, it is easy to see \[-\int_{0}^{t}Q(X_{s}^{\varepsilon})\Phi_{k}(X_{s}^{\varepsilon},\cdot)(\alpha_{s }^{\varepsilon})ds=\varepsilon\Bigg{[}\Phi_{k}(x,\alpha)-\Phi_{k}(X_{t}^{ \varepsilon},\alpha_{t}^{\varepsilon})\] \[+\int_{0}^{t}\partial_{x}\Phi_{k}(X_{s}^{\varepsilon},\alpha_{s}^{ \varepsilon})\cdot b(X_{s}^{\varepsilon},\alpha_{s}^{\varepsilon})ds+\int_{0}^ {t}\partial_{x}\Phi_{k}(X_{s}^{\varepsilon},\alpha_{s}^{\varepsilon})\cdot \sigma(X_{s}^{\varepsilon})dW_{s}\Bigg{]}\] \[+\varepsilon\int_{0}^{t}\frac{1}{2}\mathrm{Tr}\Big{[}\partial_{x}^{2} \Phi_{k}(X_{s}^{\varepsilon},\alpha_{s}^{\varepsilon})\cdot(\sigma\sigma^{*}) \left(X_{s}^{\varepsilon}\right)\Big{]}ds\] \[+\varepsilon\int_{0}^{t}\int_{[0,\infty)}\Big{[}\Phi_{k}(X_{s-}^ 
{\varepsilon},\alpha_{s-}^{\varepsilon}+g^{\varepsilon}(X_{s-}^{\varepsilon}, \alpha_{s-}^{\varepsilon},z))-\Phi_{k}(X_{s-}^{\varepsilon},\alpha_{s-}^{ \varepsilon})\Big{]}\tilde{N}(ds,dz)\] \[=:\sum_{i=1}^{3}V_{i}^{\varepsilon}(t). \tag{3.23}\] According to (3.19) and (3.23), we get \[\mathbb{E}\left(\sup_{0\leqslant t\leqslant T}|X_{t}^{\varepsilon }-\bar{X}_{t}|^{p}\right)\leq C_{p,T}(1+|x|^{p})k^{-p}+C_{p,T}\mathbb{E}\left[\sup_{0 \leqslant t\leqslant T}\left|\int_{0}^{t}Q(X_{s}^{\varepsilon})\Phi_{k}(X_{s }^{\varepsilon},\cdot)(\alpha_{s}^{\varepsilon})ds\right|^{p}\right]\] \[\leq C_{p,T}(1+|x|^{p})k^{-p}+C_{p,T}\sum_{i=1}^{3}\mathbb{E}\left[ \sup_{0\leqslant t\leqslant T}|V_{i}^{\varepsilon}(t)|^{p}\right]. \tag{3.24}\] By Burkholder-Davis-Gundy's inequality and (3.20)-(3.22), we get \[\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}|V_{1}^{ \varepsilon}(t)|^{p}\right]\leq \varepsilon^{p}C_{p,T}\left[1+|x|^{2p}+\mathbb{E}\left(\sup_{0 \leqslant t\leqslant T}|X_{t}^{\varepsilon}|^{2p}\right)\right]\] \[\leq C_{p,T}\varepsilon^{p}\Big{(}1+|x|^{2p}\Big{)} \tag{3.25}\] and \[\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}|V_{2}^{ \varepsilon}(t)|^{p}\right]\leq C_{p,T}\varepsilon^{p}\mathbb{E}\int_{0}^{T}\sum_{l=1}^{ n}\|\partial_{x}^{2}\Phi_{k}^{l}(X_{s}^{\varepsilon},\alpha_{s}^{\varepsilon}) \|^{p}\|\sigma\sigma^{*}(X_{s}^{\varepsilon})\|^{p}ds\] \[\leq C_{p,T}k^{p}\varepsilon^{p}\mathbb{E}\int_{0}^{T}(1+|X_{s}^{ \varepsilon}|^{3p})ds\] \[\leq C_{p,T}k^{p}\varepsilon^{p}(1+|x|^{3p}). \tag{3.26}\] Using Kunita's first inequality (see [1, Theorem 4.4.23]), (3.20) and condition (3.3), we have for any \(p\geqslant 2\), \[\mathbb{E}\left[\sup_{0\leqslant t\leqslant T}|V_{3}^{\varepsilon} (t)|^{p}\right]\] \[\leq C_{p}\varepsilon^{p}\mathbb{E}\left[\int_{0}^{T}\int_{[0,\infty )}|\Phi_{k}(X_{s}^{\varepsilon},\alpha_{s}^{\varepsilon}+g^{\varepsilon}(X_{s }^{\varepsilon},\alpha_{s}^{\varepsilon},z))-\Phi_{k}(X_{s}^{\varepsilon}, \alpha_{s}^{\varepsilon})|^{2}dzds\right]^{p/2}\] \[+C_{p}\varepsilon^{p}\mathbb{E}\int_{0}^{T}\int_{[0,\infty)}| \Phi_{k}(X_{s}^{\varepsilon},\alpha_{s}^{\varepsilon}+g^{\varepsilon}(X_{s} ^{\varepsilon},\alpha_{s}^{\varepsilon},z))-\Phi_{k}(X_{s}^{\varepsilon}, \alpha_{s}^{\varepsilon})|^{p}dzds\] \[\leq C_{p}\varepsilon^{p}\mathbb{E}\left[\int_{0}^{T}(1+|X_{s}^{ \varepsilon}|^{2})\int_{[0,K(X_{s}^{\varepsilon})\varepsilon^{-1}]}dzds \right]^{p/2}+C_{p}\varepsilon^{p}\mathbb{E}\int_{0}^{T}(1+|X_{s}^{ \varepsilon}|^{p})\int_{[0,K(X_{\xi})\varepsilon^{-1}]}dzds\] \[\leq C_{p}\varepsilon^{p/2}\mathbb{E}\left[\int_{0}^{T}(1+|X_{s}^{ \varepsilon}|^{2})(1+|X_{s}^{\varepsilon}|)ds\right]^{p/2}+C_{p}\varepsilon^{p /2}\mathbb{E}\int_{0}^{T}(1+|X_{s}^{\varepsilon}|^{p})(1+|X_{s}^{\varepsilon }|)ds\] \[\leq C_{p,T}(1+|x|^{3p})\varepsilon^{p/2}. \tag{3.27}\] Hence, by (3.25)-(3.27), we obtain \[\mathbb{E}\left(\sup_{0\leqslant t\leqslant T}|X_{t}^{\varepsilon }-\bar{X}_{t}|^{p}\right)\leq C_{p,T}(1+|x|^{3p})\left(k^{-p}+k^{p}\varepsilon ^{p}+\varepsilon^{p/2}\right).\] Finally, it is easy to see (3.17) holds by taking \(k=\left[\varepsilon^{-1/2}\right]\), where \([s]\) means the integer parts of \(s\). The proof is complete. **Remark 3.7**.: The estimate (3.17) indicates a strong convergence order of \(1/2\), which is optimal (refer to [10, Example 3.4]). It is worth noting that the diffusion coefficient \(\sigma(X_{t}^{\varepsilon})\) is independent of the Markov chain \(\alpha_{t}^{\varepsilon}\). 
Otherwise, the strong convergence may fail (see a counter-example in [11, section 4.1]). **Acknowledgment**. This work is supported by the National Natural Science Foundation of China (Nos. 12271219, 11931004, 12090010, 1209001), the QingLan Project of Jiangsu Province and the Priority Academic Program Development of Jiangsu Higher Education Institutions.
2301.08755
Tidal Distortions as a Bottleneck on Constraining Exoplanet Compositions
Improvements in the number of confirmed planets and the precision of observations imply a need to better understand subtle effects that may bias interpretations of exoplanet observations. One such effect is the distortion of a short period planet by its host star, affecting its derived density. We extend the work of Burton et al., Correia, and others, using a gravitational potential formulation to a sample of nearly 200 planets with periods less than 3 days. We find five planets exhibiting density variations of >10% and as many as 20 planets with deviations >5%. We derive an analytic approximation for this deviation as a function of the orbital period, transit depth, and mass ratio between the planet and host star, allowing for rapid determination of such tidal effects. We find that current density error bars are typically larger than tidal deviations but that reducing the uncertainty on transit depth and radial velocity (RV) amplitude by a factor of 3 causes tidal effects to dominate density errors (>50%) in >40% of planets in our sample, implying that in the near future upgraded observational precision will cause shape deviations to become a bottleneck with regards to analysis of exoplanet compositions. These two parameters are found to dominate uncertainties compared to errors on stellar mass and radius. We identify a group of eight planets (including WASP-19 b, HAT-P-7 b, and WASP-12 b) for which current density uncertainties are as much as 4x smaller than the potential shift due to tides, implying a possible 4-sigma bias on their density estimates.
David Berardo, Julien de Wit
2023-01-20T19:00:00Z
http://arxiv.org/abs/2301.08755v2
# Tidal Distortions as a Bottleneck on Constraining Exoplanet Compositions ###### Abstract Improvements in the number of confirmed planets and the precision of observations imply a need to better understand subtle effects which may bias interpretations of exoplanet observations. One such effect is the distortion of a short period planet by its host star, affecting its derived density. We extend the work of Burton et al. (2014); Correia (2014) and others, using a gravitational potential formulation to a sample of nearly 200 planets with periods less than three days. We find five planets exhibiting density variations of \(>10\%\), and as many as twenty planets with deviations \(>5\%\). We derive an analytic approximation for this deviation as a function of the orbital period, transit depth, and mass ratio between the planet and host star, allowing for rapid determination of such tidal effects. We find that current density error-bars are typically larger than tidal deviations, but that reducing the uncertainty on transit depth and RV amplitude by a factor of three causes tidal effects to dominate density errors (\(>50\%\)) in \(>\)40% of planets in our sample, implying that in the near future upgraded observational precision will cause shape deviations to become a bottleneck with regard to the analysis of exoplanet compositions. These two parameters are found to dominate uncertainties compared to errors on stellar mass and radius. We identify a group of eight planets (including WASP-19 b, HAT-P-7 b, and WASP-12 b) for which current density uncertainties are as much as four times smaller than the potential shift due to tides, implying a possible 4\(\sigma\) bias on their density estimates. ## 1 Introduction As the list of confirmed exoplanets grows we continuously expand the sampled space of known planetary parameters. Categories of planets such as those with ultra-short orbital periods have gone from containing a handful of planets to hundreds of planets thanks to missions such as _Kepler_ (Borucki et al., 2010) and _TESS_ (Ricker et al., 2014). In addition to this increase in population, the precision of instruments has continued to reach new heights, reducing the uncertainty in quantities such as transit depth or planetary mass. This trend will accelerate further with the next generation of observatories and instruments such as JWST and PLATO (Heras et al., 2020), as well as high precision RV instruments such as CARMENES (Reiners et al., 2018) and ESPRESSO (Schmidt et al., 2021). This increase in both the size and quality of our sample implies that subtle effects which in the past were either too small to be detectable or which affected a single digit number of planets may no longer be disregarded. An example of this behaviour is the 'Transit Light Source' effect (Rackham et al., 2018), in which variability of the stellar surface causes biases in atmospheric characterisation by mimicking or muting effects which produce similar results, acting as a bottleneck towards properly understanding a planet's atmosphere. The focus of this work is on effects which alter the shape of an exoplanet, which is often considered to be a perfect sphere such as in the commonly used models of Mandel & Agol (2002), implemented in the widely used batman package (Kreidberg, 2015). For short period planets close to their host star, one such effect is tidal distortion, which can cause a planet to bulge out towards its host star (Leconte et al., 2011). 
This effect in particular has the potential to introduce a significant bias on the density of a planet since its sky projection remains close to a perfect circle. When considering for example a planet which is deformed due to rotation causing its equator to bulge, its projection becomes elliptical (Seager & Hui, 2002; Barnes & Fortney, 2003). In this case, subtle differences in the shape of ingress / egress of the transit lightcurve may be used to break the degeneracy between a spherical and oblate planet (Carter & Winn, 2010; Berardo & de Wit, 2022). For tidally deformed planets, phase curve observations which observe the planet from different directions could in principle determine these so-called 'ellipsoidal variations' through lightcurve deviations (Correia, 2014; Kreidberg et al., 2018); however, full phase curve observations require a significant amount of observing time to obtain, and at high precision there is likely to be a significant amount of degeneracy between the orbit, shape, and brightness distribution of a planet (de Wit et al., 2012). Tidal distortions imply an underestimate of the volume of a planet, which in turn implies an overestimate of its bulk density. Theoretical considerations of this effect have previously been studied in Leconte et al. (2011). This effect has already been considered, primarily in the work of Burton et al. (2014), which calculated the magnitude of the distortion and the degree to which it altered the density measurement for a sample of just over 30 planets. Additionally, Correia (2014) expanded on this work using a more detailed model to derive an analytic expression for the change in density as a function of distance to the host star. In this work we aim to expand on these efforts in several ways. Our primary effort is to increase the sample of planets analysed using a gravitational potential model, which has been found to provide similar results to more complicated structural models. In the time since these previous studies were published, roughly 6x as many planets have now been found to be in the space of parameters which are susceptible to tidal distortion effects (i.e. planets with orbital periods below three days on circular orbits). In section 2 we briefly outline the theory of tidal deformation and describe our method for calculating the effects of tidal interactions, and thus altered planetary densities. In section 3 we first highlight our sample of planets to be analysed, followed by the results of our analyses. We highlight trends as a function of various system parameters and derive an approximation which accurately describes the changes in density without the need for a full simulation. In section 4 we first highlight the biases that may be introduced when attempting to retrieve the interior composition of a planet using mass-radius relations under the assumption of being perfectly spherical. We then compare the changes in density to current density uncertainties, and we also analyse the relative contributions to these uncertainties from five underlying parameters. This allows us to determine how upcoming improvements in quantities such as planet mass and stellar parameters will affect the ability to ignore such effects, for example through extreme precision radial velocity efforts (Crass et al., 2021). ## 2 Calculating the density of a tidally deformed planet ### Physical description of scenario To model the shape of the planet, we follow a similar methodology to that of Burton et al. 
(2014), where the surface of the planet is assumed to be on a gravitational equipotential. The value of the gravitational potential generated by a rotating planet and its host star is calculated using the Roche approximation (Chandrasekhar, 1987): \[\Phi_{1} =-\frac{GM_{1}}{\left((x+a)^{2}+y^{2}+z^{2}\right)^{1/2}} \tag{1}\] \[\Phi_{2} =-\frac{GM_{2}}{\left(x^{2}+y^{2}+z^{2}\right)^{1/2}} \tag{2}\] \[\Phi_{3} =-\frac{1}{2}\Omega^{2}\left[(x+\mu_{1}a)^{2}+y^{2}\right] \tag{3}\] where \(G\) is the gravitational constant, \(M_{1}\) is the mass of the host star, \(M_{2}\) is the mass of the planet, \(a\) is the separation between the host star and planet (i.e. the semi-major axis of a circular orbit), \(\mu_{1}=M_{1}/(M_{1}+M_{2})\) and \(\Omega=2\pi/P\) where \(P\) is the orbital period of the planet. Figure 1: An illustration of the process by which the surface of the sphere is constructed. Starting from an icosahedron on the left, triangular faces are continually subdivided. Finally, the points are normalized to generate a uniformly sampled sphere. The coordinate system is such that the origin is placed at the center of the planet. The x coordinate points along the line connecting the centers of mass of the two bodies, the z axis points along the orbital plane in the direction of motion of the planet, and the y axis points normal to the orbital plane. In order to use such an approximation to model the distortion of a planet's surface, we assume the planet is both tidally locked as well as on a non-eccentric orbit. As we shall see in later sections, the effect of the distortion is strongest for low period planets (p \(<\) 3 days) which are most likely to be tidally locked and be on circular orbits (Barnes, 2017). ### Calculating the volume of a deformed planet We first calculate the surface of a deformed planet and then 'measure' its volume in order to determine the amount by which its density is altered. In order to generate the surface of our planet, we first construct a geodesic icosahedron as an approximation of a sphere. This is an object commonly used in computer graphics and 3D rendering software which has the benefit of having its points uniformly spread out across its surface. We begin with the vertices of an icosahedron and then iteratively subdivide each of its faces into smaller triangles (as shown in figure 1). After the last round of subdivisions we normalize the length of each vertex from the origin to generate a tiled sphere. This process leaves us with a collection of triangular faces which allows us to calculate two necessary quantities, the total projected surface area visible to an observer as well as the enclosed volume of each tetrahedron generated by the origin and any given triangular face. An additional benefit of this method is that we can adjust the number of iterations in order to achieve any level of precision we desire. We find that after 5 subdivisions the calculated volume of our icosphere differs from that of a perfect sphere by only 0.05%, while the calculated projected area varies by only 0.03%. We use this as a benchmark for the accuracy of our method and fix all further calculations to 5 subdivisions, which gives us a surface of 10242 triangular tiles. We next scale each vertex radially until all points have the same gravitational potential, which requires us to pick a value of the equipotential \(\Phi\). We choose \(\Phi\) such that the projected surface area matches the observed transit depth, similar to what is done in Burton et al. (2014). 
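The construction just described is straightforward to reproduce; the short Python sketch below (an independent illustrative re-implementation, not the authors' code) builds the subdivided icosahedron and measures the enclosed volume from origin-centred tetrahedra and the projected area from the face area vectors, and can be used to check that after five subdivisions both quantities lie within a few hundredths of a percent of the exact sphere values. The equipotential-scaling step itself is omitted here.

```python
import numpy as np

# Icosphere: start from an icosahedron, repeatedly subdivide each triangular face,
# and push new vertices back onto the unit sphere.
phi = (1.0 + np.sqrt(5.0)) / 2.0
verts = np.array([[-1, phi, 0], [1, phi, 0], [-1, -phi, 0], [1, -phi, 0],
                  [0, -1, phi], [0, 1, phi], [0, -1, -phi], [0, 1, -phi],
                  [phi, 0, -1], [phi, 0, 1], [-phi, 0, -1], [-phi, 0, 1]], float)
verts /= np.linalg.norm(verts, axis=1, keepdims=True)
faces = [(0,11,5),(0,5,1),(0,1,7),(0,7,10),(0,10,11),(1,5,9),(5,11,4),(11,10,2),
         (10,7,6),(7,1,8),(3,9,4),(3,4,2),(3,2,6),(3,6,8),(3,8,9),(4,9,5),
         (2,4,11),(6,2,10),(8,6,7),(9,8,1)]

def subdivide(verts, faces):
    verts = list(map(tuple, verts))
    cache, new_faces = {}, []
    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:
            m = (np.array(verts[i]) + np.array(verts[j])) / 2.0
            m /= np.linalg.norm(m)                    # project back onto the unit sphere
            cache[key] = len(verts); verts.append(tuple(m))
        return cache[key]
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return np.array(verts), new_faces

for _ in range(5):
    verts, faces = subdivide(verts, faces)

tri = verts[np.array(faces)]                          # shape (n_faces, 3, 3)
# Volume: sum of tetrahedra spanned by the origin and each face (the surface is
# star-shaped about the origin, so unsigned volumes can be summed directly).
volume = np.sum(np.abs(np.einsum('ij,ij->i', tri[:, 0],
                                 np.cross(tri[:, 1], tri[:, 2])))) / 6.0
# Projected area seen along z: every silhouette element is covered by one front-facing
# and one back-facing triangle, so half the summed |z| components of the area vectors.
area_vec = 0.5 * np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
proj_area = 0.5 * np.sum(np.abs(area_vec[:, 2]))

print(len(verts), len(faces))                         # vertex and face counts
print(1.0 - volume / (4.0 * np.pi / 3.0))             # fractional volume deficit vs unit sphere
print(1.0 - proj_area / np.pi)                        # fractional projected-area deficit
```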
We first evaluate the equipotential function for a range of radii centered on the spherical planet radius. For each value of \(\Phi\) generated this way, we then calculate the radius of each vertex using a least squares regression in order to find the surface of constant potential. For this surface, we then calculate the projected planet area. This gives us a mapping between gravitational potential and transit depth, which we use to select the value of \(\Phi\) which corresponds to any depth value of our choosing. The result of this process is shown in figure 2, where we have calculated the deformation of WASP-19 b (Hebb et al., 2009) using the described process. This example highlights the potential for tidal deformation to alter a planet's measured density. In the left panel we see a significant deviation from a pure sphere, as the planet is pulled towards its host star. However, in the right panel we see that the observer-projected shape of the planet remains nearly perfectly circular.

Figure 2: This figure shows two views of the surface of WASP-19 b. Points in black show the spherical planet which matches the observed transit depth, while points in red show the surface generated by fitting for an equipotential while also matching the observed transit depth. On the left we see a top down view of the orbital plane. On the right we see the view along the line of sight between the centers of mass of the planet and star.

## 3 Density variations of confirmed planets

### Planet Sample

We begin with the full list of confirmed planets found in the exoplanet archive (NASA Exoplanet Archive, 2019), which currently contains just over 5000 exoplanets. As mentioned in the previous section, as well as motivated by the results of Burton et al. (2014), we focus our efforts on short period planets, specifically planets with orbital periods of less than 3 days. We do also analyze planets with periods in the range of 3-5 days, but those were found to have negligible tidal distortion effects, consistent with expectations. We additionally focus only on planets which have reported mass values. In principle, relative variations in density can be measured based on just changes in planet volume, which is the focus of this work. However, we also consider the magnitude of such a difference relative to the uncertainty in the measured density, for which a mass value (along with an error-bar) is required. We also restrict the sample to planets with eccentricity values below e = 0.05. This leaves us with a final sample of 196 planets, just over 6 times larger than the sample of planets used in Burton et al. (2014).

### Density Variation Results

We apply the process described in section 2.2 to each of the planets in our sample. For each planet, we calculate its volume under tidal deformation that produces a depth value which matches the median reported value to within 0.1%, in order to minimize differences caused by truncation or any other numerical effects. All analyses in this section use these values in order to compare the spherical and tidal planet densities.

#### 3.2.1 Absolute changes in density & trends

We first look at the percent difference in the density of each planet under the assumptions of being perfectly spherical or tidally deformed

\[\frac{\Delta\rho}{\rho_{sph}}=\frac{\rho_{sph}-\rho_{tide}}{\rho_{sph}} \tag{4}\]

where we calculate the value of \(\rho_{sph}\) ourselves using the reported values of mass, depth, and stellar radius.
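As a small illustration of how the spherical reference density in equation 4 follows from the reported quantities, a minimal Python sketch (the helper names are ours, not from the original analysis) might look like the following.

```python
import numpy as np

R_SUN = 6.957e8     # m
M_JUP = 1.898e27    # kg

def rho_spherical(mass_kg, depth, r_star_m):
    """Bulk density of a spherical planet whose projected area matches the transit depth."""
    r_p = np.sqrt(depth) * r_star_m            # depth = (Rp / Rs)^2 for a circular silhouette
    return mass_kg / (4.0 / 3.0 * np.pi * r_p ** 3)

def delta_rho_fraction(rho_sph, rho_tide):
    """Equation 4: fractional density change between the spherical and tidal models."""
    return (rho_sph - rho_tide) / rho_sph

# Illustrative (made-up) values: a 1 M_J planet with a 1% deep transit around a solar-size star.
print(rho_spherical(1.0 * M_JUP, 0.01, 1.0 * R_SUN))   # ~1.3e3 kg m^-3, close to Jupiter's bulk density
```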
This is done to ensure a fair comparison in order to accurately represent the amount by which density can shift due to changes in volume. As we will see in the next section, this quantity is often comparable to the density uncertainty, which is set by the underlying uncertainties from transit depth and RV semi-amplitude measurements used to calculate density. Using the reported value of planet density thus runs the risk of including measurement uncertainty (depending on how density is reported, which varies between analyses) when at this stage we only wish to determine intrinsic differences. Thus we ensure that both of our measurements correspond to identical values of depth and planet mass.

The results of this are shown in figure 3. We show the variation as a function of orbital period, where we note that the variation decreases as period increases. This is not surprising given the factor of \(1/p^{2}\) which appears in the potential equation, and acts as an additional confirmation that our code is accurately calculating planet deformations (a similar trend with fewer planets was also seen in Burton et al. (2014)). We truncate the plot at an upper limit of \(p=3\) days, but note that we calculated the deformation out to a period of 5 days and found that the trend continued, in particular the upper envelope which flattens out at a maximal deviation of \(\sim 2\%\). We find that for planets with orbital periods below 1.5 days, the tidal density may deviate by as much as 15% compared to the density which comes from assuming a perfectly spherical planet.

Figure 3: Relative change in the density of planets with orbital periods less than three days. The curves show the functional dependence of equation 8 for representative values of planet mass and radius.

#### 3.2.2 Functional Approximation of Density Variations

The scatter in figure 3 implies that orbital period is not the sole factor in determining density variations, which is also apparent from the additional terms in equation 1. We attempt to derive a functional form of the variation in density by comparing the full tidal potential to that of an isolated spherically symmetric body, given by \(\Phi_{sph}=-GM_{p}/r\). We first assume that points along the surface of a tidally distorted planet are at a similar distance to the center of the planet as for a non-distorted planet, i.e. no part of the planet is distorted by a factor of, say, two or more. Thus in the Roche approximation we may replace quantities such as \(x^{2}+y^{2}+z^{2}\) with \(r_{p}^{2}\), where \(r_{p}=\sqrt{\delta}R_{s}\), \(\delta\) is the observed transit depth and \(R_{s}\) is the stellar radius. We also assume that \(a\gg R_{p}\) (in our sample we always have at least \(a/R_{p}>10\)), and that the stellar mass is much larger than the planetary mass (for our sample we always have \(M_{s}/M_{p}>10^{2}\)). Under these assumptions the three terms from equation 1 become:

\[\Phi_{1}\sim-\frac{GM_{s}}{a},\ \Phi_{2}\sim-\frac{GM_{p}}{r_{p}},\ \Phi_{3}\sim-\frac{1}{2}\Omega^{2}a^{2} \tag{5}\]

which we then combine and scale by \(\Phi_{sph}\) to get

\[\frac{\Phi_{sph}-\Phi_{tide}}{\Phi_{sph}}=\frac{3}{2}\frac{M_{p}}{M_{s}}\frac{r_{p}}{a} \tag{6}\]

where we've used Kepler's 3rd law to combine the orbital period and semi-major axis terms. We now have an equation for the change in gravitational potential, which must be converted to a change in density.
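As a quick numerical sanity check on the term-by-term approximations in equation 5 (not part of the analysis above; the system parameters below are illustrative, roughly hot-Jupiter-like), one can compare each full Roche term from equations 1-3 with its approximate form at the sub-stellar point of the planetary surface; the fractional differences come out at the level of \(r_{p}/a\), as assumed.

```python
import numpy as np

G = 6.674e-11                            # m^3 kg^-1 s^-2
M_s, M_p = 1.989e30, 1.898e27            # illustrative stellar / planetary masses (kg)
a, r_p = 3.7e9, 9.3e7                    # illustrative separation and planet radius (m)
mu1 = M_s / (M_s + M_p)
omega = np.sqrt(G * (M_s + M_p) / a**3)  # circular-orbit angular frequency (Kepler's 3rd law)

x, y, z = -r_p, 0.0, 0.0                 # sub-stellar point (the star sits at x = -a)
phi1 = -G * M_s / np.sqrt((x + a)**2 + y**2 + z**2)
phi2 = -G * M_p / np.sqrt(x**2 + y**2 + z**2)
phi3 = -0.5 * omega**2 * ((x + mu1 * a)**2 + y**2)

phi1_approx = -G * M_s / a               # equation 5, term by term
phi2_approx = -G * M_p / r_p
phi3_approx = -0.5 * omega**2 * a**2

for full, approx in [(phi1, phi1_approx), (phi2, phi2_approx), (phi3, phi3_approx)]:
    print(full, approx, abs(1 - approx / full))   # fractional differences of order r_p / a
```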
Given that the potential is treated as a radial 1D function, a reasonable assumption might be that the scaling term \((r_{p}/a)\) needs to be cubed in order to obtain a relationship for density. To confirm this, we parameterize the change in density as

\[\frac{\Delta\rho}{\rho_{sph}}=\alpha\left(\frac{M_{p}}{M_{s}}\right)^{\beta}\left(\frac{r_{p}}{a}\right)^{\gamma} \tag{7}\]

and fit for \(\alpha\), \(\beta\), \(\gamma\) against the calculated values of \(\Delta_{\rho}/\rho\). We do indeed find that \(\gamma\sim 3\), as well as \(\alpha\sim 2\) and \(\beta\sim 1\). We present the final effect on the change in density (having re-converted to orbital period) as

\[\frac{\Delta\rho}{\rho_{sph}}=0.01428\left(\frac{P}{\rm day}\right)^{-2}\left(\frac{R_{p}}{\rm R_{J}}\right)^{3}\left(\frac{M_{p}}{\rm M_{J}}\right)^{-1} \tag{8}\]

We plot this function for representative values of planet radius and planet mass in our sample in figure 3, where we find good agreement, particularly in the upper envelope of the data points, which closely follows an inverse square dependence on the period. We additionally compare this analytic description of the change in density directly to the values calculated in section 3.2.1 and show the results in figure 4. We find that the bulk of the data points follow a linear relationship with a slope of \(\sim\)2, although we do still note a certain amount of scatter above the line. Planets which deviate significantly from the trend tend to have smaller masses (closer to being Earth sized). This implies that one or more of the assumptions we have made in deriving this relationship break down for sufficiently low mass planets. At the scales involved, our approximation deviates by at most a factor of ten from the true relative density change. This implies an underestimate of the true volume change by at most 10%, which in turn corresponds to an error on the linear scale of the planet of \(1.1^{1/3}-1\sim 3\%\). The true deviation is almost always larger than our functional approximation. Thus equation 8 represents a fairly robust metric to determine if a planet may be susceptible to tidal deformations, without needing to run a full gravitational potential calculation.

Figure 4: We show here agreement between the functional form of the density perturbation we derive in section 3.2.2 (x-axis) and the values calculated in section 3.2.1 (y-axis). The black line shows a linear relationship with a slope of 2 passing through the origin. The coloring represents the mass of each planet on a log scale.

A similar metric was derived in Correia (2014) (eq. 27), using a different approach considering the Love number and fluid displacement of an exoplanet (Love 1911). The result they obtain is similar in that it is proportional to the ratio of planet to stellar mass, as well as to the third power of planet size to orbital semi-major axis. While we find a constant scaling factor of two, they obtain a scaling factor of \(7h_{f}/4\), where \(h_{f}\) is the fluid second Love number. Estimating \(h_{f}\) using the Darwin-Radau relation (Bourda and Capitaine, 2004) and a value of \(\sim 0.27\) for the moment of inertia of Jupiter (Ni, 2018) gives a prefactor of 2.5. This difference of 25% in estimated tidal density is well below the measurement uncertainty on planet density, and using either equation would indicate whether or not the density of the planet may be significantly different from that of a spherical planet.
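Equation 8 is simple enough to evaluate directly; for instance, a minimal helper (our naming, not from the original work) could be:

```python
def approx_delta_rho(period_days, radius_rjup, mass_mjup):
    """Equation 8: approximate fractional density change due to tidal deformation."""
    return 0.01428 * period_days ** -2 * radius_rjup ** 3 / mass_mjup

# Illustrative example: a hypothetical 1-day hot Jupiter with Rp = 1.3 R_J and Mp = 1 M_J
print(approx_delta_rho(1.0, 1.3, 1.0))   # ~0.03, i.e. a ~3% shift in the inferred density
```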
An additional consideration of Correia (2014) is the effect of inclination on the derived density, which introduces a correction term in their equation 27 proportional to \(\cos^{2}i\). We find that for the planets in our sample, the effect of this correction term is at most 2% for a handful of planets, and more typically well below a 1% correction. Thus in our analysis we have chosen to neglect the effects of inclination, in order to provide a simple framework which still captures the bulk of the deviations. Even when considering the maximum inclination that would still allow a transit to occur, the correction term is at most 2% for the shortest period planets in our sample, and for most planets is much less than 1%.

## 4 Discussion

In the previous section we considered the absolute changes in density a planet may experience under the effects of tidal forces. We now focus on contextualizing these results with regards to measurement accuracy, and biases that may be introduced in measuring planet compositions to high precision.

### Uncertainty in Mass-Radius Relations

During transit a tidally distorted planet will still have a nearly circular projection while having a larger than expected volume, as shown in figure 2. The implication of this is that a spherical transiting planet could have the same density as a deformed planet with a smaller projected area, due to the 'hidden' extra volume. Thus when considering mass-radius composition curves, there is in fact a degeneracy wherein a single curve could actually correspond to a range of projected radii, which we show in figure 5.

Figure 5: Mass-Radius relationships along with data points (and error-bars) for the sample of planets considered in this work. The left and right panels refer to low and high mass planets respectively. Black curves in the left figure are taken from Zeng et al. (2019). Colored bands represent variations of these curves by up to 4.7% in radius, corresponding to density variations of up to 15%. The right hand panel shows a similar phenomenon for high mass planets, where we show constant-density relations corresponding to solar system gas giant densities, as well as a planet with half the density of Saturn and a planet with a density of \(0.2g/cm^{3}\) representative of 'super puffs'. The planets highlighted in red are those mentioned in Table 2 whose density uncertainty is less than the deviations caused by tides.

The implication of this is that even if a planet had no error whatsoever on its transit depth, there would still remain uncertainty on its composition due to a lack of knowledge of its shape, becoming a bottleneck when attempting to measure planetary compositions to high precision. We separate our sample of planets into low mass (Earth-sized) and high mass (Jupiter-sized) planets, and for each we show a selection of various composition curves. For the low mass planets we show curves taken from Zeng et al. (2019), for a range of iron fractions as well as an Earth-like composition and a planet with a 25% water composition. For gas giant planets, we show a range of densities corresponding to the solar system gas giants, as well as a lower density of \(0.2g/cm^{3}\) as a representative value of large planets with low densities, so called 'super-puffs' (Masuda, 2014; Lopez and Fortney, 2014). For each curve, we plot a range of values (the colored regions) corresponding to a radius difference of \(\sim 5\%\), which corresponds to a maximal density variation of \(\sim 15\%\).
This represents the range of projected radii which could all correspond to the same density. The effect of these considerations is that, for example, a composition of 20% iron and one of pure rock become a near-continuous region of parameter space, and a planet such as Kepler-10 b, which we note in the next section has a relatively low measurement error, could now be equally described by either model. We additionally see planets which fall between models of 25% water and one of pure rock. While their own uncertainties make the distinction clear, with one model being two or three standard deviations away, it becomes much less obvious which model is correct once the additional uncertainty from shape variations (colored bands) is considered.

### Uncertainty of Density Measurements

In the previous section we considered the limiting case of perfect transit depth and mass-radius knowledge and their effect on compositional analysis. We now focus on current measurement errors, how they compare to changes induced by tidal variations, and how upcoming improvements in the precision of the quantities used to calculate density, namely transit depth, stellar radius, and planet mass (which itself depends on the stellar mass, RV semi-amplitude, and orbital period), will in turn affect the uncertainty on density.

Figure 6: This figure illustrates the relative contribution of five underlying factors to the derived measurement error of a planet's density. The x-axis represents a minimum amount that a given parameter contributes to the overall uncertainty on density. Solid colored lines represent directly observable quantities (period, transit depth and RV amplitude), while dashed colored lines refer to model-dependent quantities (stellar mass and radius). The black line ('shape') shows the ratio of tidally-induced variation to measurement error (\(\Delta_{\rho}/\sigma_{\rho}\)). The grey line shows a similar value, after having artificially reduced the overall uncertainty on density by a factor of three to highlight the effect of future measurement improvements.

For a function \(f\) which depends on independent variables \(x_{i}\), we can write the uncertainty of \(f\) (denoted \(\sigma_{f}\)) as:

\[\sigma_{f}^{2}=\sum_{i}\left(\frac{\partial f}{\partial x_{i}}\sigma_{x_{i}}\right)^{2} \tag{9}\]

which for a density calculated using planet mass (\(M_{p}\)), transit depth (\(\delta\)) and stellar radius (\(R_{s}\)) becomes

\[\sigma_{\rho}=\sqrt{\left(\frac{\rho}{M_{p}}\sigma_{M_{p}}\right)^{2}+\left(\frac{3}{2}\frac{\rho}{\delta}\sigma_{\delta}\right)^{2}+\left(3\frac{\rho}{R_{s}}\sigma_{R_{s}}\right)^{2}} \tag{10}\]

We note that most of the planets in our sample have reported values for their density along with an error-bar in their entries in the exoplanet archive. We additionally calculate the uncertainty ourselves using equation 10 and the reported uncertainties for the involved quantities, and find a good agreement between the two values.
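Equation 10 is a standard quadrature sum and can be reproduced in a few lines (a hedged sketch; the function and variable names below are ours):

```python
import numpy as np

def sigma_rho(rho, m_p, sig_mp, depth, sig_depth, r_s, sig_rs):
    """Equation 10: density uncertainty from planet mass, transit depth and stellar radius."""
    terms = np.array([
        rho / m_p * sig_mp,             # mass term:   rho scales with M_p
        1.5 * rho / depth * sig_depth,  # depth term:  rho scales with depth^(-3/2)
        3.0 * rho / r_s * sig_rs,       # radius term: rho scales with R_s^(-3)
    ])
    return np.sqrt(np.sum(terms ** 2))
```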
An additional consideration is that the planet mass itself is dependent on the radial velocity semi-amplitude (K), stellar mass (\(M_{s}\)), and orbital period (p), which allows us to write the uncertainty on the planet mass as:

\[\sigma_{M_{p}}=\sqrt{\left(\frac{M_{p}}{K}\sigma_{K}\right)^{2}+\left(\frac{3}{2}\frac{M_{p}}{M_{s}}\sigma_{M_{s}}\right)^{2}+\left(\frac{1}{3}\frac{M_{p}}{P}\sigma_{P}\right)^{2}} \tag{11}\]

The benefit of calculating the uncertainty directly in this way is that we are then able to compare the relative contribution of each term to the overall uncertainty. We quantify the relative contribution of a variable \(x_{i}\) as:

\[\left(\frac{\partial\rho}{\partial x_{i}}\sigma_{x_{i}}\right)^{2}/\sigma_{\rho}^{2} \tag{12}\]

such that the sum of the contributions of each variable is 100%. The results of this breakdown are shown in figure 6, where we illustrate how often a given parameter contributes a minimum amount to the uncertainty. We find, for example, that in our sample of planets the orbital period never contributes more than 0.0001% relative to the other parameters, which is unsurprising given that the orbital period of a transiting planet is typically measured to extremely high precision. For the remaining four parameters we can categorise them as being either measurement dependent (transit depth and RV amplitude) or model dependent (stellar mass and stellar radius). We find that it is the measurement parameters which more often contribute the largest amount of uncertainty, with the RV amplitude alone contributing at least 60% of the relative uncertainty for \(\sim\)20% of the planets in our sample, and in some cases it even contributes almost the entirety of the uncertainty. Transit depth similarly can contribute as much as 90% in some cases, whereas the model dependent parameters never contribute more than 80% of the relative uncertainty.

In table 1 we show the ranked breakdown of error contributions, in order of largest to smallest contributor (regardless of which parameter it comes from). We find, for example, that for a given planet the largest source of uncertainty always contributes at least 31% of the error and potentially the entire uncertainty, with a median contribution of 59%. This indicates that for most planets there is a single parameter which contributes more than half of the density uncertainty. When we consider the overall uncertainty on the spherical density, we find that for most planets the calculated difference between the spherical and tidal density (\(\Delta_{\rho}\)) is smaller than the uncertainty (\(\sigma_{\rho}\)), illustrated by the black line in figure 6 which shows the ratio between the two. This implies that with current data precision, assuming a planet to be spherical in most cases does not introduce a significant statistical bias, but may be causing density uncertainties to be underestimated. Given this result, we can then ask by how much the error on planet density needs to be reduced before we have \(\sigma_{\rho}=\rho_{sphere}-\rho_{tidal}\), which we show in figure 7. We see that the peak lies around an improvement of roughly 3-10x, although for many planets the required improvement is much smaller. For planets where the radial velocity amplitude or transit depth are the largest contributing factor, this implies that with a reduction in their uncertainties by 3-10x, tidal effects on density will begin to become relevant and the planet can no longer safely be assumed to be spherical.
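Implementing equations 11 and 12 as written (again a hypothetical sketch with our own function names) makes the per-parameter breakdown shown in figure 6 straightforward to reproduce for any planet:

```python
import numpy as np

def sigma_mp(m_p, k, sig_k, m_s, sig_ms, period, sig_p):
    """Equation 11, as written above: planet-mass uncertainty from K, stellar mass and period."""
    terms = np.array([
        m_p / k * sig_k,
        1.5 * m_p / m_s * sig_ms,
        m_p / (3.0 * period) * sig_p,
    ])
    return np.sqrt(np.sum(terms ** 2))

def relative_contributions(partial_sigma_terms):
    """Equation 12: fraction of the total density variance contributed by each parameter."""
    sq = np.square(np.asarray(partial_sigma_terms, dtype=float))
    return sq / sq.sum()
```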
This is shown by the grey curve in figure 6, where we reduce the density error by a factor of three and show the relative value of tidal effects, which is comparable to \(\sim\) 50% of the measurement error on density for \(>35\%\) of planets.

| Error Contributor | Min % | Max % | Median % |
| --- | --- | --- | --- |
| 1 | 31 | 100 | 59 |
| 2 | 0 | 45 | 25 |
| 3 | 0 | 28 | 9 |
| 4 | 0 | 18 | 3 |

Table 1: Summary of the ranked contributions to density error across all planets, where 1 = largest contributing factor and 4 = smallest. We find that the largest source of error (regardless of which underlying parameter it comes from) comprises anywhere from 31%-100% of the uncertainty on a planet's density, with a median value of 59%.

We note that there is a small sample of planets for which current measurement errors are in fact less than the calculated deviation on their densities due to tidal effects (the highlighted orange part of Figure 7). We report these planets in Table 2, sorted by the multiplicative factor by which tidal deviations outweigh measurement uncertainties. In the most extreme case, we find that for WASP-19 b this is almost a factor of four, implying that the reported uncertainty on its density is significantly underestimated. Again we note that the grey curve of figure 6 shows that after reducing the total error on density by a factor of three, we find \(\sim 20\%\) of our sample, or almost 40 planets, for which tidal variations on density would become larger than measurement errors.

## 5 Conclusions

Using a gravitational potential framework to determine the shape of a planet under the influence of tidal distortions, we have expanded on the work of Burton et al. (2014) and calculated the amount by which such effects may bias the estimated density of exoplanets with orbital periods of less than three days. In comparison to an assumption of being perfectly spherical, tidal effects serve to increase the perceived volume of an exoplanet (and thus decrease its density) by an amount of up to 15% for the shortest period planets, which agrees with the values reported in Burton et al. (2014) and Leconte et al. (2011), which reported their results as a change in effective planet radius. Similarly, Akinsanmi et al. (2019) considered a framework of a planet with ellipsoidal variations and how these would affect its lightcurves, and identified many of the same planets as we do in table 2 as being those which would exhibit the strongest signal of shape deformation.

We quantify this change more precisely in terms of the semi-major axis, planet-to-star mass ratio, and planet radius, for which we are able to derive a robust relationship (eq. 8). This allows for a rapid estimate of the magnitude of such variations, and whether or not an analysis of a planet's density (and thus its internal composition) will be significantly biased by assuming the planet to be perfectly spherical. In Correia (2014) a similar expression was derived through an alternate analytic consideration, including the effect of inclination as well as the fluid Love number of the planet. We find the inclination effect to alter the density perturbation by at most 2% for the planets in our sample, although in most cases the effect is much less than 1%. The additional consideration of the fluid response of the planet implies potential variations of \(\sim 20\%\) between our results (i.e.
a 15% density perturbation could change by a factor of 0.8-1.2), however this is strongly affected by uncertainty in the Love number. A more detailed analysis of the fluid response of planets in Wahl et al. (2021) identified WASP-12 b, WASP-103 b, and WASP-121 b as those with the potential for the greatest variation in tidal response, which we also found to be among the planets with the highest deviation in derived densities. For planets with orbital periods beyond 2.5 days we measured variations of no more than 2%, well below current measurement errors (\(\sigma_{\rho}/\rho>2.7\%\) for \(p>2.5\) days). We find also that for most planets, even those with shorter orbital periods, measured uncertainties are currently too large to be affected by such deviations; however, we identify a sample of planets whose uncertainties may be as much as four times smaller than the potential change caused by shape distortions.

| Planet Name | Period (days) | \(\Delta_{\rho}/\sigma_{\rho}\) | Reference |
| --- | --- | --- | --- |
| WASP-19 b | 0.8 | 3.8 | Hebb et al. (2009) |
| HAT-P-7 b | 2.2 | 3.6 | Pál et al. (2008) |
| WASP-12 b | 1.1 | 3.0 | Hebb et al. (2009) |
| WASP-121 b | 1.3 | 1.9 | Bourrier et al. (2020) |
| WASP-4 b | 1.3 | 1.4 | Bouma et al. (2019) |
| WASP-103 b | 0.9 | 1.4 | Gillon et al. (2014) |
| Kepler-10 b | 0.8 | 1.4 | Esteves et al. (2015) |
| K2-141 b | 0.3 | 1.1 | Malavolta et al. (2018) |

Table 2: List of planets with density uncertainties less than the potential deviation due to tidal effects.

Figure 7: The factor by which the uncertainty on a planet's density needs to be improved such that it is equal to the change in density due to tidal deformations. The black dotted line highlights a decrease by a factor of 3 in the uncertainty on spherical planet density. The orange region highlights planets whose density uncertainty is currently less than the difference between the spherical and tidal density values.

One such planet, WASP-103 b, was recently found to show tentative tidal deformations using multiple transit observations (Barros et al., 2022), where it is reported that the volumetric radius of a fit derived using an ellipsoidal planet model is 5-6% larger than the radius derived from a spherical planet model. This further strengthens the notion that for such planets susceptible to tidal deformation, any attempts to characterise their interior composition based on a density derived using spherical planet models are likely to be under-estimating their errors, and that there is an accuracy limit set by a lack of knowledge of their true shape. For other short period planets, however, we calculate that an overall improvement by a factor of three in density error would cause \(\sim 25\) planets to have density errors comparable to tidal distortions, and for \(\sim 50\) planets tidal distortions would amount to at least 50% of the measured density uncertainty.

With this in mind, we find that radius values in a range of up to a 5% deviation could in fact correspond to planets with the same density. This implies that composition curves are not just one-to-one functions but rather correspond to a family of mass-radius relationships, where there is a degeneracy induced by a lack of knowledge of the shape of a planet. This highlights a fundamental limit in the precision of characterising the composition of an exoplanet when disregarding tidal variations, which will become more severe as measurement errors decrease.
Finally, we break down the uncertainty on a planet's density further into the contributions of five underlying factors, three of which are directly observable and two of which are model-derived. This breakdown highlights the fact that it is the directly observable quantities (specifically RV amplitude and transit depth) which are in most cases responsible for the bulk of the error in a planet's density (in some cases contributing almost the entirety of the error budget). We also find that the median contribution of the largest piece of the uncertainty budget is 59%, implying that for most planets there is a single key parameter contributing the bulk of the uncertainty. Thus upcoming extreme precision RV measurements, as well as high SNR transit observations such as those from JWST and PLATO, imply that biases due to tidally-induced shape deformations will become a significant and unavoidable bottleneck when attempting to measure the density of a planet to a high level of accuracy as the error in these key contributing factors is reduced.

## 6 Acknowledgements

DB acknowledges support from an FRQNT Doctoral Research Scholarship.
2306.12972
Designing Individualized Policy and Technology Interventions to Improve Gig Work Conditions
The gig economy is characterized by short-term contract work completed by independent workers who are paid to perform "gigs", and who have control over when, whether and how they conduct work. Gig economy platforms (e.g., Uber, Lyft, Instacart) offer workers increased job opportunities, lower barriers to entry, and improved flexibility. However, growing evidence suggests that worker well-being and gig work conditions have become significant societal issues. In designing public-facing policies and technologies for improving gig work conditions, inherent tradeoffs exist between offering individual flexibility and attempting to meet all community needs. In platform-based gig work, contractors pursue the flexibility of short-term tasks, but policymakers resist segmenting the population when designing policies to support their work. As platforms offer an ever-increasing variety of services, we argue that policymakers and platform designers must provide more targeted and personalized policies, benefits, and protections for platform-based workers, so that they can lead more successful and sustainable gig work careers. We present in this paper relevant legal and scholarly evidence from the United States to support this position, and make recommendations for future innovations in policy and technology.
Jane Hsieh, Oluwatobi Adisa, Sachi Bafna, Haiyi Zhu
2023-06-22T15:32:01Z
http://arxiv.org/abs/2306.12972v1
# Designing Individualized Policy and Technology Interventions to Improve Gig Work Conditions

###### Abstract.

The gig economy is characterized by short-term contract work completed by independent workers who are paid to perform "gigs", and who have control over when, whether and how they conduct work. Gig economy platforms (e.g., Uber, Lyft, Instacart) offer workers increased job opportunities, lower barriers to entry, and improved flexibility. However, growing evidence suggests that worker well-being and gig work conditions have become significant societal issues. In designing public-facing policies and technologies for improving gig work conditions, inherent tradeoffs exist between offering individual flexibility and attempting to meet all community needs. In platform-based gig work, contractors pursue the flexibility of short-term tasks, but policymakers resist segmenting the population when designing policies to support their work. As platforms offer an ever-increasing variety of services, we argue that policymakers and platform designers must provide more targeted and personalized policies, benefits, and protections for platform-based workers, so that they can lead more successful and sustainable gig work careers. We present in this paper relevant legal and scholarly evidence from the United States to support this position, and make recommendations for future innovations in policy and technology.

2023 J. Hsieh, Oluwatobi Adisa, Sachi Bafna, and Haiyi Zhu (2023). Designing Individualized Policy and Technology Interventions to Improve Gig Work Conditions. In _Annual Symposium on Human-Computer Interaction for Work 2023 (CHIWORK 2023), June 13-16, 2023, Oldenburg, Germany_. ACM, New York, NY, USA, 9 pages. [https://doi.org/10.1145/3596671.3598576](https://doi.org/10.1145/3596671.3598576)
2308.04464
Analysis of Insect-Plant Interactions Affected by Mining Operations, A Graph Mining Approach
The decline in ecological connections signifies the potential extinction of species, which can be attributed to disruptions and alterations. The decrease in interconnections among species reflects their susceptibility to changes. For example, certain insects and plants that rely on exclusive interactions with a limited number of species, or even a specific species, face the risk of extinction if they lose these crucial connections. Currently, mining activities pose significant harm to natural ecosystems, resulting in various adverse environmental impacts. In this study, we utilized network science techniques to analyze the ecosystem in a graph-based structure, aiming to conserve the ecosystem affected by mining operations in the northern region of Scotland. The research encompasses identifying the most vital members of the network, establishing criteria for identifying communities within the network, comparing, and evaluating them, using models to predict secondary extinctions that occur when a species is removed from the network, and assessing the extent of network damage. Our study's novelty is utilizing network science approaches to investigate the biological data related to interactions between insects and plants.
Ali Bayat, Mohammad Heydari, Amir Albadvi
2023-08-08T02:53:16Z
http://arxiv.org/abs/2308.04464v3
# Analysis of Insect-Plant Interactions Affected by Mining Operations, A Graph Mining Approach

###### Abstract

The decline in ecological connections signifies the potential extinction of species, which can be attributed to disruptions and alterations. The decrease in interconnections among species reflects their susceptibility to changes. For example, certain insects and plants that rely on exclusive interactions with a limited number of species, or even a specific species, face the risk of extinction if they lose these crucial connections. Currently, mining activities pose significant harm to natural ecosystems, resulting in various negative environmental impacts. In this study, we utilized network science techniques to analyze the ecosystem in a graph-based structure, aiming to conserve the ecosystem affected by mining operations in the northern region of Scotland. The research encompasses identifying the most vital members of the network, establishing criteria for identifying communities within the network, comparing and evaluating them, using models to predict secondary extinctions that occur when a species is removed from the network, and assessing the extent of network damage. The novelty of our study is the utilization of network science approaches to investigate the biological data related to interactions between insects and plants.

Complex Networks; Prediction of Extinction; Diffusion of Extinction; Ecology; Bipartite Network

## I Introduction

This research aims to investigate the interplay between insects and plants from a network perspective, identify crucial individuals that play significant roles in this interaction, and explore how communication is influenced by external factors such as the presence of mines and mining activities. Mining activities have a long history and are driven by the persistent human demand for minerals and energy. However, they are often associated with severe environmental problems, leading to the destruction of plant and animal life, loss of biodiversity, and various short-term and long-term economic and social consequences. Consequently, there is now a greater emphasis on undertaking restoration efforts in mining-affected lands compared to the past, which involves planting vegetation and rejuvenating the ecosystem. Nonetheless, existing methods for evaluating these effects have limitations, and environmental restoration activities can sometimes inadvertently cause further transformations and significant changes in the environment. By closely examining environmental changes and the alteration of communication networks between plants and insects, it becomes possible to restore the environment more effectively in areas impacted by mining activities. Merely planting vegetation in mining-affected environments is insufficient for comprehensive and proper restoration, as the extinction of an insect species in an area can potentially lead to the extinction of a plant species. Restoration efforts should encompass a broader network of insects and their host plants. The substantial budgets allocated for revitalizing mining areas, if not utilized to foster sustainable changes, can perpetuate ongoing transformations over time, failing to facilitate true environmental restoration and potentially exacerbating the situation further. Hence, a critical question that deserves attention in the realm of mining activities is how to effectively transform the biological network and the animal and plant species impacted by mining to facilitate accurate environmental restoration.
## II Background

In the Estercuel locality in northeastern Spain (Iberian Peninsula), Santos et al. [1] examined plant-insect interactions from the late Early Cretaceous (latest Albian). These interactions involve two types of land-dwelling angiosperms and Klitzschophyllites, a basal eudicot species considered one of the earliest potential members of aquatic Ranunculales discovered thus far. The study by Sender et al. [2] examines specimens of the extinct seed fern Sagenopteris from the Lower Cretaceous in Alcaine village, Teruel Province, northeastern Spain. The research focuses on categorizing arthropod-induced plant damage types (DTs) for 75 specimens of this plant species. These leaflets are found in deposits associated with coastal fluvial and lacustrine environments connected to an Albian delta-estuary system. Santos et al. [3] presented new evidence of plant-insect interactions discovered in the Late Pennsylvanian period in the northern region of the Iberian Peninsula (Leon, Spain). Through their investigation of 216 fossil plant specimens, they have identified nine distinct Damage Types (DTs) indicative of these interactions. The interactions involve four Functional Feeding Groups (FFGs), including margin feeding (DT12 and DT13), hole feeding (DT09), galling (DT33, DT80, and DT116), and oviposition (DT67, DT100, and DT102), observed on Pteridophytes, Pteridospermatophytes, and Coniferophytes. Tamura et al. [4] made predictions and assessed plant-insect interactions using limited datasets and investigated the interaction between the crop plant rice (Oryza sativa) and two species of mirid bugs (Stenotus rubrovittatus and Trigonotylus caelestialium) by utilizing observational data. By employing adaptive network models, Maia et al. [5] examined the ability of plant-pollinator and plant-herbivore networks to withstand species loss. Their focus was on understanding the impact of key differences in natural history between these systems, such as the demographic outcomes of interactions and the extent of generalization, which influence the potential for rewiring. They investigated how these factors contribute to the resilience of ecological networks to extinctions. Additionally, they developed a standardized measure to assess the influence of network structure on resilience by simulating extinctions in theoretical networks with controlled structures. The study by Balmaki [6] introduced methodological approaches for assessing parameters of insect-pollen networks using pollen samples obtained from insect specimens housed in museums. These methods offer valuable insights into the spatial and temporal dynamics of pollen-insect interactions. They serve as a complementary tool alongside other techniques used in the study of pollination, such as observing pollinator networks and conducting flower enclosure experiments. The article includes illustrative data from butterfly pollen networks spanning the last century in the Great Basin Desert and Sierra Nevada Mountains in the United States. Lewinsohn [7] focused on the long-term research conducted on the interactions between Asteraceae plants and flowerhead-feeding insects in Brazil. Their research treated host species as independent entities to assess local and turnover components of herbivore diversity and expanded to investigate entire interactive communities of host plants and their associated herbivores across different localities and regions, leading to the exploration of new research avenues.
Feng [8] presented findings of plant-insect interactions within the flora, obtained through a comprehensive analysis of insect-induced damage on plant specimens. In total, they identified 8 distinct types of damage caused by insects, categorized into 5 functional feeding groups, across 11 plant species. Meineke [9] utilized machine learning techniques to examine historical insect-plant interactions recorded on digitized herbarium specimens. Martins [10] conducted a global literature review on ecological methods and indicators used for the recovery and monitoring of ecosystems following mining activities. Guan [11] conducted a bibliometric analysis on the evolution of the field of ecological restoration over the past thirty years. Also, Feng [12] reviewed the effects of surface coal mining and land reclamation on soil properties. Joll [13] employed a network analysis methodology to measure the frequency of insect interactions with the flowers of plant species found in the Great Lakes dune ecosystem. The study by Moudry [14] compares the effectiveness of using unmanned aerial vehicle (UAV) imagery and airborne Light Detection and Ranging (LiDAR) data during both leaf-off and leaf-on seasons for evaluating the terrain and vegetation structure of a post-mining site. The goal is to assess the potential of these technologies in monitoring hazards and gauging the success of restoration efforts. The findings provide insights into the prospects of using UAV imagery and LiDAR for effective monitoring and restoration evaluation of post-mining landscapes. Cagua [15] proposed the concept of structural controllability, which provides a means to measure the degree to which network topology can be utilized to achieve a desired state. Their approach enables the quantification of a species' control capacity, indicating its relative significance within the network. Additionally, it helps identify the specific species that are crucial in this context due to their highest potential for control. To demonstrate its application, they examine ten plant-pollinator communities, comparing those that have not been invaded with those that have. Their findings indicate that the controllability of a community is determined by the asymmetrical nature of its mutual dependencies, rather than its invasion status. The decline in ecological connections signifies the potential extinction of species. One factor contributing to this decline is disturbances and alterations in the environment. The number of connections a species possesses demonstrates its sensitivity to changes. For instance, certain insects and plants that rely on exclusive interactions with a limited number of species, or even a specific species, face a significant risk of extinction if these connections are severed. Evaluating the robustness of the network and characteristics such as the extent of species interconnections can help assess the network's vulnerability to the loss of a particular species. Additionally, it highlights the potential for cascade effects, wherein the loss of one connection leads to the loss of other connections within the network [16]. Complex networks can offer quicker predictions for certain disturbances, such as changes in vegetation [17]. Over the last two decades, public attention to preserving the quality of the environment has gradually increased, and this has affected the design and conduct of mining activities.
## III Research Methodology

Every living organism relies on its environment and other organisms for essential biological processes and resources. Group cohesion is vital for survival, and the study of these interdependencies falls within the realm of ecology. Different species in nature must possess a proper understanding of their environment and other organisms to fulfill their fundamental needs and adapt accordingly. Therefore, comprehending the environment, the interactions among organisms, and their impacts on each other holds great practical and scientific significance. This research aims to utilize network analysis tools to explore and understand these relationships. The primary objective of this study is to examine how changes in land use affect the network structure of insects and plants. Each case in this research will focus on specific goals, among them:

* Gaining knowledge about and exploring the interconnectedness of insects and plants
* Investigating the alterations that occur in the insect-plant network because of mining activities.
* Offering a fresh perspective on the preservation and restoration of ecosystems that undergo changes because of mining operations.

The data collection for this research is conducted through observational methods, employing simple random sampling and quantitative techniques. Inferential analysis will be employed to analyze the collected data. The implementation of this research has been carried out using the R programming language, while certain aspects have also been implemented using PHP programming to address the novelty of the covered topics. Also, the Gephi software has been employed for the visualization of graphs and networks. The research methodology will proceed as follows: Firstly, data on the relationships between insects and plants will be collected. Subsequently, a network representation of the data will be constructed, and a comprehensive examination and analysis will be conducted from a network perspective. Additionally, considering the specific characteristics of the available data, two distinct sites will be selected, one near the studied mine and the other situated at a considerable distance. These sites will be treated as separate networks, allowing for a comparative analysis of network factors to gain deeper insights into the relationships and identify crucial members from various viewpoints. The primary objectives of this study were as follows: 1) To assess the effects of mining activities on the interactions between insects and plants, 2) To identify ecologically significant species within the ecosystem and explore the possibility of categorizing them, 3) To determine which species, if removed from the ecosystem, would have the most detrimental impact on its current state, 4) To prioritize the preservation of specific species during and after mining operations, aiming to mitigate the ecological damage caused by mining activities, and 5) To detect concealed patterns within hidden communities in an insect-plant network using various well-known community detection algorithms.

## IV Data

In graph analytics research, having a comprehensive dataset is crucial for drawing meaningful and insightful conclusions. Regrettably, research on insect-plant interactions affected by mining operations using graph mining approaches is limited, making it imperative to focus on this area and gather additional datasets to address this knowledge gap.
The ecosystem encompasses the assemblage of living organisms within a given environment, along with all the elements and components of that environment. In essence, the ecosystem can be defined by the environment itself and its living organisms. Among the vital components of the ecosystem, the relationship between insects and plants, known as pollination, holds great significance. Pollination is a crucial process in both natural and agricultural ecosystems, playing a pivotal role in food production and sustaining human livelihood. Human dependency on natural ecosystems and agricultural systems underscores the importance of understanding and analyzing the bilateral relationship between insects and plants. In this research, an endeavor is made to employ graph mining methods to analyze this intricate relationship. To accomplish this, the available data pertaining to this relationship will be prepared and organized to establish a network representation. Prior to initiating the analysis, it is essential to establish precise definitions for the nodes and edges, or relationships, within the graph. In the specific problem discussed, there are two distinct groups: plants, consisting of ninety species, and insects. Communication is observed when insects interact with plants, whether for pollination or feeding. As these connections originate from insect-related nodes and extend towards plant-related nodes, a bipartite network naturally emerges, with plants forming one side and insects occupying the other side.

## V Graph Mining

In this section, graph mining and network visualization methods will be employed to enhance our understanding of the research objectives. The network under consideration is inherently a bipartite network that encompasses ninety distinct types of plants and insects. Hence, the relationships governing this network can be effectively represented through the utilization of a bipartite graph. One approach to identifying significant members of the network is by utilizing centrality measures, which are discussed in this section. Initially, the simplest centrality measure, known as degree centrality, will be computed. Degree centrality solely considers the number of direct neighbors each species has and can be calculated without modifying the bipartite graph of insects and plants. Once the degree centrality values are obtained, they are sorted in descending order, resulting in a ranking of species based on their number of connections. Species with high ranks are deemed important members of the ecosystem, as they interact with numerous other network members, and their presence in the network is integral to the communication of other species. However, the significance of species with very low degrees raises questions. The species at the lower end of the sorted list represent insects and plants that possess a limited number of connections. These species are particularly vulnerable and face the risk of endangerment if they lose their pollinators or host plants. Therefore, it is imperative to prioritize the conservation of species that interact with those with very low degree centrality. The accompanying figures depict the communication network of insects and plants in the two sites, Backhill and Dalhaike, respectively. Remarkably, 68.75% and 55.17% of the connections in these networks involve species with only one connection, indicating a high proportion of specialist ('private') species.
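To illustrate the kind of computation described here, the following is a hedged Python/networkx sketch (the study itself was implemented in R, and the species labels below are made up, standing in for the real survey records). It builds the bipartite insect-plant graph, ranks species by degree, and flags single-connection species.

```python
import networkx as nx

# Hypothetical visitation records: (insect, plant) pairs for one site.
edges = [
    ("insect_1", "plant_A"), ("insect_1", "plant_B"), ("insect_1", "plant_C"),
    ("insect_2", "plant_A"), ("insect_2", "plant_B"),
    ("insect_3", "plant_B"),
    ("insect_4", "plant_C"),
]

B = nx.Graph()
insects = {i for i, _ in edges}
plants = {p for _, p in edges}
B.add_nodes_from(insects, group="insect")
B.add_nodes_from(plants, group="plant")
B.add_edges_from(edges)

# Rank species by degree (number of interaction partners) and flag single-link species.
degree = dict(B.degree())
ranking = sorted(degree.items(), key=lambda kv: kv[1], reverse=True)
vulnerable = [species for species, d in degree.items() if d == 1]
print(ranking)
print(vulnerable)   # species relying on a single partner ('private' species)
```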
The tables below present the top ten species that play a crucial role in establishing network connections within the insect-plant networks of the Backhill and Dalhaike sites, respectively.

Figure 3: Bipartite network visualization for the Backhill site.

Figure 4: Bipartite network visualization for the Dalhaike site.

Figure 5: Insect and plant network of the Backhill site visualized by degree (larger nodes indicate greater degree centrality).

Figure 6: Insect and plant network of the Dalhaike site visualized by degree.

As previously mentioned, the bipartite graph representing the relationships between insects and plants can be transformed into two separate unipartite graphs: an insect-insect relationship is defined by the plants shared between two insects, and a plant-plant relationship is defined by the insects involved in pollinating both plants. After converting the bipartite networks of both sites into unipartite insect-insect and plant-plant networks, the eigenvector centrality and closeness centrality measures were computed. In the insect-insect network, two insects are linked when they share at least one plant. Consequently, an insect with a high eigenvector centrality is surrounded by other insects that visit numerous plants; the insects connected to it visit a diverse range of plants, suggesting a more generalist feeding behavior, while the insect itself may still serve as a crucial pollinator for specific plants. Plants that exhibit a strong association with this insect, as indicated by a high eigenvector value, are likely to be more dependent on it for pollination, and both the insect and its associated plants may be more vulnerable to damage if the insect population is disrupted. The same principle applies to the plant-plant network. The figures below depict the insect-insect and plant-plant unipartite graphs, showing the eigenvector centrality values for each site.

| Species (highest degree) | Deg Cent | Species (vulnerable) | Deg Cent |
| --- | --- | --- | --- |
| Crisium palustre | 8 | Episyrphus balteatus | 1 |
| Potentilla erecta | 7 | Rhingia campestris | 1 |
| Gailium saxatile | 7 | Sepsid_sp. | 1 |
| Aijuga reptans | 6 | Empis spA | 1 |
| Melanostoma scalare | 5 | Thricops semicinetreus Wiedermann | 1 |
| Bombus pacuroum | 3 | Aphantopus hyperantus | 1 |
| Hercostomus sp. | 2 | Bombus lucorum | 1 |
| Micromoth_spB | 2 | Platycheirus granditatsus | 1 |
| Phaonia incana Wiedermann | 2 | Mordelidae sp | 1 |
| Teucrium scorodonia | 2 | Phaonia basalis Zett. | 1 |

Table 2: Ten species with the highest degree centrality (left) and ten vulnerable species (right) of the Backhill site.
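The projection and centrality computations described above can be sketched as follows (an assumed Python/networkx workflow with toy species names; the authors performed the analysis in R).

```python
# Illustrative sketch: project the bipartite network onto insect-insect and
# plant-plant graphs, then compute eigenvector and closeness centrality.
import networkx as nx
from networkx.algorithms import bipartite

# Toy bipartite network; insect nodes on one side, plant nodes on the other.
B = nx.Graph()
insects = ["Melanostoma scalare", "Bombus lucorum", "Empis spA"]
plants = ["Potentilla erecta", "Galium saxatile", "Veronica sp"]
B.add_nodes_from(insects, part="insect")
B.add_nodes_from(plants, part="plant")
B.add_edges_from([
    ("Melanostoma scalare", "Potentilla erecta"),
    ("Melanostoma scalare", "Veronica sp"),
    ("Bombus lucorum", "Potentilla erecta"),
    ("Empis spA", "Galium saxatile"),
    ("Empis spA", "Veronica sp"),
])

# Two species are linked in a projection if they share at least one partner.
insect_net = bipartite.projected_graph(B, insects)
plant_net = bipartite.projected_graph(B, plants)

for name, net in [("insect-insect", insect_net), ("plant-plant", plant_net)]:
    ec = nx.eigenvector_centrality(net, max_iter=1000)
    cc = nx.closeness_centrality(net)
    top = max(ec, key=ec.get)
    print(f"{name}: top eigenvector centrality -> {top} ({ec[top]:.3f}), "
          f"closeness {cc[top]:.3f}")
```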
Figure 8: Construction of the insect-insect and plant-plant networks from the original insect-plant network by the projection method.

| Species (highest degree) | Deg Cent | Species (vulnerable) | Deg Cent |
| --- | --- | --- | --- |
| Ranunculus repens | 9 | Hercostomus sp. | 1 |
| Veronica sp | 9 | Neoascia podagrica | 1 |
| Melanostoma scalare | 5 | Anth spA | 1 |
| Gailium saxatile | 5 | Thricops semicineus Wiedermann | 1 |
| Trientalis europaea | 4 | Drymeia brumalis (Rondani) | 1 |
| Empidae spB | 3 | Aphantopus hyperantus | 1 |
| Empidae spC | 3 | Haeanotopa pluvialis | 1 |
| Potentilla erecta | 3 | Chloromyia Formosa | 1 |
| Ranunculus acris | 3 | Orthellia caesarion Mg. | 1 |
| Phaonia basalis Zett. | 2 | Tachyorus obtusus | 1 |

Table 3: Ten species with the highest degree centrality (left) and ten vulnerable species (right) of the Dalhaike site.

Figure 7: Degree distribution of the insect-plant network, Dalhaike site.

The tables provided below present the top five insects and top five plants, determined by eigenvector centrality, in the insect-insect and plant-plant unipartite graphs of the Backhill and Dalhaike sites, respectively. These species hold the highest eigenvector centrality values, indicating their significant influence within the respective networks. In the figures below, the insect-insect and plant-plant unipartite graphs of the two sites, Backhill and Dalhaike, are displayed according to the closeness centrality of each node. Using the value calculation algorithm, species can also be ranked by their association with private species in the network. The algorithm emphasizes that species linked to a larger number of private species possess a higher value, signifying their importance for preserving such private species. The results of applying the value calculation algorithm to the two sites, Backhill and Dalhaike, are presented in the tables below, which list the top ten species with the highest calculated values for each site. Additionally, the accompanying graphs provide a comparative depiction of the values across the different network types for the Backhill and Dalhaike sites.

## VI Community Detection

To evaluate the grouping of insects and plants at the Backhill site, five different community detection methods were utilized. The effectiveness of each method was assessed by calculating the modularity value, which represents the quality of the resulting groupings. The table below reports the number of communities identified and the corresponding modularity scores for each of the five methods.
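For illustration, standard implementations of several of these community detection methods are available in common graph libraries; the sketch below (Python/networkx, not the authors' R/PHP code) compares methods by modularity in the same spirit as the table. The metaheuristic ALO (Ant Lion Optimizer) and FEV methods used by the authors are not included here.

```python
# Illustrative sketch: compare community-detection methods by modularity.
import networkx as nx
from networkx.algorithms import community

def compare_methods(G: nx.Graph) -> dict:
    methods = {
        "Louvain": community.louvain_communities(G, seed=0),
        "Fast Greedy": community.greedy_modularity_communities(G),
        "Label Propagation": list(community.label_propagation_communities(G)),
    }
    return {
        name: (len(parts), round(community.modularity(G, parts), 3))
        for name, parts in methods.items()
    }

# Toy graph standing in for a site's projected network.
G = nx.karate_club_graph()
for name, (k, q) in compare_methods(G).items():
    print(f"{name}: {k} communities, modularity {q}")
```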
By comparing these values, the most suitable method can be determined. In the following, the communities obtained by each method for the Backhill site are displayed.

| | Louvain | Fast Greedy | Label Propagation | ALO | FEV |
| --- | --- | --- | --- | --- | --- |
| Modularity | 0.62 | 0.62 | 0.61 | 0.62 | 0.62 |
| Number of communities | 5 | 5 | 6 | 5 | 5 |

Table 10: Modularity score and the number of communities discovered by the different methods for the Backhill site.

Figure 14: Values computed by the value calculation algorithm for the different network types of the Backhill site.

| Species (insect-insect) | Closeness Centrality | Species (plant-plant) | Closeness Centrality |
| --- | --- | --- | --- |
| Melanostoma scalare | 0.84 | Veronica sp | 1 |
| Empidae spB | 0.8096 | Potentilla erecta | 0.8571 |
| Empidae spA | 0.75 | Ranunculus repens | 0.8571 |
| Micromoth_spB | 0.7 | Trientalis europaea | 0.8571 |
| Empidae spC | 0.6774 | Prunella vulgaris | 0.75 |

Table 7: Closeness centrality values of the Dalhaike site for the five nodes with the highest closeness centrality in the insect-insect (left) and plant-plant (right) networks.

Figure 13: Values computed by the value calculation algorithm for the different network types of the Backhill site.

Table 9: Ten species with the highest value obtained from the value calculation algorithm on the Dalhaike site.

The table below presents the number of communities and the modularity score for the Dalhaike site using the five methods. This evaluation allows a comparison of the effectiveness of each method in terms of the identified communities and the corresponding modularity scores. The accompanying figure shows the communities obtained by the various methods for the Dalhaike site, providing a visual representation of the groupings produced by each method. Different algorithms yield different modularity values and communities; apart from the ALO and FEV algorithms, which yield identical results, all other algorithms exhibit varying outcomes.

## VII Conclusion

The identification of key members or species in ecological networks is highly practical and significant. Centrality and node value indices are used to determine these important members from various perspectives. Species with high centrality, such as high degree or eigenvector centrality, play a crucial role in communication and in maintaining biodiversity within the network.
If a species with high centrality is lost, significant damage can result for the insects or plants associated with it. On the other hand, species with high closeness centrality, if destroyed, cause relatively less damage to the insects or plants connected to them. Species with a degree of one, which have only a single partner in the network, require special attention, as they depend entirely on that sole interaction for survival. Additionally, the proposed algorithm calculates a value score for each node, indicating its importance for biodiversity and for the conservation of endangered species. Environmental experts are recommended to employ these different methods to identify significant species within ecosystem networks; this approach aids in safeguarding environmental biodiversity and ensuring the stability of connections between species.

| | Louvain | Fast Greedy | Label Propagation | ALO | FEV |
| --- | --- | --- | --- | --- | --- |
| Modularity | | | | | |
| Number of communities | 7 | 6 | 3 | 5 | 5 |

Table 11: Modularity score and the number of communities discovered by the different methods for the Dalhaike site.

Figure 15: Community detection for the Backhill site based on the Louvain, Fast Greedy, Label Propagation, Ant Lion Optimizer, and FEV algorithms.

Figure 16: Community detection for the Dalhaike site based on the Louvain, Fast Greedy, Label Propagation, Ant Lion Optimizer, and FEV algorithms.

In the context of grouping insects and plants into communities, different methods are used to categorize them. Sometimes a more detailed and customized examination is required for specific groups within a collection, or the number of members is too large to handle individually and representative members are needed. For instance, insects may be grouped according to their feeding type, while plants may be grouped according to their pollination characteristics. Environmental experts are therefore advised to use community detection methods for grouping insects or plants, enabling focused investigations of specific categories. In this context, an example of an insect influential for biodiversity could be the X-type insects within Group 1, which are significant in terms of nutrition. The research is subject to certain limitations, particularly concerning the collected data. Despite efforts to maintain consistency across factors, inherent variability exists due to the natural context of the data. Additionally, the bipartite nature of the insect-plant interaction graph poses challenges for applying conventional analytical methods, so further study, verification, method modification, or the development of novel approaches is necessary to analyze the graph effectively. Furthermore, the unavailability of information regarding species' ability to survive without their partners led to the use of approximated data in the co-extinction model analysis; the accuracy of future research can be enhanced by incorporating more precise data. Another significant limitation is the scarcity of research sources within this interdisciplinary field, impeding a comprehensive exploration of the subject matter.
## VIII Future Works

For a more comprehensive analysis of changes, future studies should employ homogeneous data in which all factors are identical except for the variable under investigation. For the co-extinction models, it is crucial to obtain data demonstrating the resilience of species in the absence of their partners; however, acquiring such data may be challenging, particularly for rare or specialized species that may be especially vulnerable.
2310.01756
Improved Algorithms for Adversarial Bandits with Unbounded Losses
We consider the Adversarial Multi-Armed Bandits (MAB) problem with unbounded losses, where the algorithms have no prior knowledge on the sizes of the losses. We present UMAB-NN and UMAB-G, two algorithms for non-negative and general unbounded losses respectively. For non-negative unbounded losses, UMAB-NN achieves the first adaptive and scale-free regret bound without uniform exploration. Building on that, we further develop UMAB-G, which can learn from arbitrary unbounded losses. Our analysis reveals the asymmetry between positive and negative losses in the MAB problem and provides additional insights. We also accompany our theoretical findings with extensive empirical evaluations, showing that our algorithms consistently outperform all existing algorithms that handle unbounded losses.
Mingyu Chen, Xuezhou Zhang
2023-10-03T02:44:31Z
http://arxiv.org/abs/2310.01756v1
# Improved Algorithms for Adversarial Bandits with Unbounded Losses ###### Abstract We consider the Adversarial Multi-Armed Bandits (MAB) problem with unbounded losses, where the algorithms have no prior knowledge on the sizes of the losses. We present UMAB-NN and UMAB-G, two algorithms for non-negative and general unbounded losses respectively. For non-negative unbounded losses, UMAB-NN achieves the first adaptive and scale-free regret bound without uniform exploration. Building on that, we further develop UMAB-G, which can learn from arbitrary unbounded losses. Our analysis reveals the asymmetry between positive and negative losses in the MAB problem and provides additional insights. We also accompany our theoretical findings with extensive empirical evaluations, showing that our algorithms consistently outperform all existing algorithms that handle unbounded losses.

## 1 Introduction

Multi-armed bandit (MAB) presents a popular online learning framework for studying decision making under uncertainty [10, 11, 12], with a wide range of applications such as advertising [1], medical treatments [23], and recommendation systems [13]. In this paper we focus on the adversarial MAB (AMAB), where the losses are allowed to be generated adversarially by the environment [1]. Most prior works on AMAB assume that the losses are naturally bounded, e.g. \(\ell_{t}\in[0,1],\forall t\). With such knowledge, the algorithms can set their _learning rate_ (in a general sense) properly. For example, in its regret analysis, the EXP3 algorithm relies on the inequality \(\exp(x)\leq 1+x+(e-2)x^{2}\) to transform exponential terms into quadratic terms [1], which only holds true if the loss \(x\) can be upper bounded by \(1\). In many real-world applications, however, such a natural loss bound does not always exist. For example, in quantitative trading, the fluctuation of stock prices can differ wildly across time. In online marketplaces, prices can vary dramatically across products. If one must give a uniform bound \(M\) for the losses across all actions and time, such a bound will likely be loose. In such cases, existing algorithms will have a regret that scales with \(M\), which is suboptimal compared to a guarantee that depends on the actual size of the losses. Motivated by the above limitation of existing algorithms, we wish to design AMAB algorithms that require no prior knowledge on the scale of the losses and _adaptively_ achieve smaller regret when the losses are small in scale. In addition, instead of a regret bound that depends on the number of rounds and a (hidden) uniform bound of the losses, we wish to design _data-dependent_ algorithms whose regret scales with the actual loss sequence, which is beneficial when the loss sequence is sparse or when its scale varies across time [21; 1]. In other words, we would like to ask the following question: Can we design an algorithm that achieves **optimal** and **adaptive** regret guarantees **without** any prior knowledge on the losses? In the following, we present two algorithms, UMAB-NN and UMAB-G, for Non-Negative and General unbounded losses, respectively. Our main contributions can be summarized as follows. 1. We propose UMAB-NN, a **scale-free** AMAB algorithm that works for unbounded non-negative losses. The regret guarantee of UMAB-NN adapts to the infinity norm of the loss sequence while matching the worst-case lower bound of [1]. 2.
Building upon UMAB-NN, we then propose UMAB-G, which works for arbitrary unbounded losses that can be both positive and negative. We present two versions of the algorithm, distinguished by whether the exploration subroutine adapts to the observed losses. The non-adaptive version achieves an optimal worst-case regret guarantee and partially adapts to the non-negative part of the loss sequence, improving upon the previous results of [14; 15; 16]. The adaptive version achieves an improvement on the order of \(\mathcal{O}(\sqrt{n})\) compared to [15], where \(n\) is the number of actions. 3. Last but not least, we evaluate the performance of our algorithms on real-world datasets. The results show that our algorithms consistently outperform existing methods in a variety of tasks with distinct loss patterns. We also construct synthetic simulations to illustrate the impact of our exploration strategy and draw comparisons between the two versions of our algorithm.

## 2 Problem Setup and Related Works

We start with some notation. Let \([n]\) denote the set \(\{1,\ldots,n\}\) and \([T]\) denote the set \(\{1,\ldots,T\}\). Let \(\Delta_{n}\) be the probability simplex \(\{\mathbf{p}\in\mathbb{R}^{n}:\sum_{k\in[n]}p_{k}=1;p_{k}\geq 0,\forall k\in[n]\}\). Let \(\mathbf{1}_{n}\) and \(\mathbf{0}_{n}\) be the all-ones and all-zeros \(n\)-dimensional vectors, respectively. Let \(\mathbf{e}_{k}\) denote the one-hot vector with \(1\) on the \(k\)th entry.

\begin{table} \begin{tabular}{||c|c|c|c||} \hline **Algorithm** & **Unbounded** & **Adaptive** & **Regret** \\ \hline \hline [14] & No & Yes & \(\widetilde{O}\Big{(}\sqrt{\sum_{t=1}^{T}\|\ell_{t}\|_{2}^{2}}\Big{)}\) \\ [14] & Yes & No & \(\widetilde{O}\Big{(}\ell_{\infty}\sqrt{nT}\Big{)}\) \\ [15] Non-Adaptive & Yes & No & \(\widetilde{O}\Big{(}\ell_{\infty}\sqrt{nT}+\sqrt{n\sum_{t=1}^{T}\|\ell_{t}\|_{2}^{2}}\Big{)}\) \\ [15] Adaptive & Yes & Yes & \(\widetilde{O}\Big{(}\ell_{\infty}\sqrt{n\sum_{t=1}^{T}\|\ell_{t}\|_{1}}+\sqrt{n\sum_{t=1}^{T}\|\ell_{t}\|_{2}^{2}}\Big{)}\) \\ UMAB-G **Non-Adaptive** & Yes & No & \(\widetilde{O}\Big{(}\ell_{\infty}\sqrt{nT}+\sqrt{n\sum_{t=1}^{T}\|\ell_{t}\|_{\infty}^{2}}\Big{)}\) \\ UMAB-G **Adaptive** & Yes & Yes & \(\widetilde{O}\Big{(}\ell_{\infty}\sqrt{n\sum_{t=1}^{T}\|\ell_{t}\|_{\infty}}+\sqrt{n\sum_{t=1}^{T}\|\ell_{t}\|_{\infty}^{2}}\Big{)}\) \\ \hline \end{tabular} \end{table} Table 1: Comparison between our results and previous works.

For vectors \(\mathbf{p}_{t}\) and \(\ell_{t}\), we use \(p_{t,k}\) and \(\ell_{t,k}\) to represent the \(k\)th entry of \(\mathbf{p}_{t}\) and \(\ell_{t}\), respectively. The L1, L2, and L-infinity norms of \(\ell_{t}\) are denoted as \(\|\ell_{t}\|_{1}=\sum_{k\in[n]}|\ell_{t,k}|,\ \|\ell_{t}\|_{2}=\sqrt{\sum_{k\in[n]}\ell_{t,k}^{2}},\ \|\ell_{t}\|_{\infty}=\max_{k\in[n]}|\ell_{t,k}|\), respectively. We denote by \(\ell_{\infty}=\max_{t\in[T]}\|\ell_{t}\|_{\infty}\) the uniform norm bound of the losses. Moreover, we denote by \(\ell_{\infty}^{-}=\max_{t\in[T],k\in[n]}|\min(\ell_{t,k},0)|\) the magnitude of the most negative entry of the losses. Notice that \(\ell_{\infty}^{-}\leq\ell_{\infty}\), and \(\ell_{\infty}^{-}=0\) if the loss sequence is non-negative. Both \(\ell_{\infty}\) and \(\ell_{\infty}^{-}\) are unknown to the player throughout the game.

Adversarial Multi-armed Bandit: We consider the _oblivious adversarial_ setting.
In each round \(t=1,\ldots,T\), the player selects a distribution \(\mathbf{p}_{t}\) over \([n]\) and the adversary selects a loss vector \(\ell_{t}\in\mathbb{R}^{n}\) _simultaneously_. Then, the player samples action \(k_{t}\sim\mathbf{p}_{t}\) and observes loss \(\ell_{t,k_{t}}\). We measure the performance of an algorithm in terms of its _pseudo-regret_: \[\mathcal{R}_{T}:=\mathbb{E}\Big{[}\sum\nolimits_{t=1}^{T}\ell_{t,k_{t}}-\min_{k\in[n]}\sum\nolimits_{t=1}^{T}\ell_{t,k}\Big{]} \tag{1}\]

### Related Works

Scale-free algorithms: Scale-free algorithms are ones whose regret bound scales linearly with respect to \(\ell_{\infty}\), while requiring no knowledge of \(\ell_{\infty}\) a priori.2 Scale-free regret bounds were first studied in the full information setting, such as experts problems [11, 12, 13] and online convex optimization [10, 12, 14]. For experts problems, the AdaHedge algorithm from [1] achieves the first scale-free regret bound. For online convex optimization, past algorithms can be categorized into two generic algorithmic frameworks: Mirror Descent (MD) and Follow The Regularized Leader (FTRL). The scale-free regret from the MD family is achieved by AdaGrad proposed by [10]. However, the regret bound of [10] is only non-trivial when the Bregman divergence associated with the regularizer can be well bounded. Later, [1] proposed the AdaFTRL algorithm, which achieves the first scale-free regret bound in the FTRL family and generalizes the results of [10] to cases where the Bregman divergence associated with the regularizer is unbounded. For the AMAB problem, [15] extends the method of [10] and provides a scale-free regret bound of \(\widetilde{O}\Big{(}\ell_{\infty}\sqrt{nT}\Big{)}\), which is optimal (up to log terms) in the worst case. However, such worst-case regret bounds can be overly pessimistic in general cases: a single outlier loss \(\ell_{outlier}\) can result in an additional regret on the order of \(O(\|\ell_{outlier}\|_{\infty}\sqrt{nT})\). To address this, [1] presents scale-free bounds that adapt to the individual size of losses across time. Unfortunately, the worst-case guarantee of [1] is \(\widetilde{O}\Big{(}\ell_{\infty}n\sqrt{T}\Big{)}\), which scales linearly with the number of actions. Our paper closes this gap: our algorithms achieve an adaptive regret better than [1], as well as an optimal worst-case regret that matches [15]. Footnote 2: We note that an alternative and more strict interpretation of scale-free algorithms refers to ones that will not change the sequence of \(p_{t}\)'s when losses are multiplied by a positive constant. Adaptive algorithms: Adaptive algorithms are algorithms that dynamically adjust to the input data they encounter. Rather than scaling solely with \(T\), the regret of an adaptive algorithm adapts to a "measure of hardness" of the loss sequence. An adaptive algorithm performs better than the worst-case guarantee if the loss sequence is "good". In the last two decades, adaptive algorithms have been widely studied in the settings of expert problems and online convex optimization [11, 12, 13, 14, 15]. For the MAB setting, several works derive adaptive regret bounds based on different measures of hardness. For example, [1, 10, 16, 17] derive first-order regret bounds (a.k.a. _small-loss regret_), which depend on the cumulative loss \(\min_{k\in[n]}\sum_{t\in[T]}|\ell_{t,k}|\), but under the assumption that \(\ell_{t,k}\in[0,1],\forall t,k\).
[11, 12, 13] propose bounds that depend on the empirical variance of the losses, i.e., \(\sum_{t\in[T]}\|\ell_{t}\|_{2}^{2}\). Path-length bounds are also studied [12, 13, 14, 15], which depends on the fluctuation of loss sequence \(\sum_{t\in[T]}\|\ell_{t}-\ell_{t-1}\|_{1}\). We remark that _all_ results above require the assumption that losses are bounded within \([0,1]\), which we remove in this paper. ## 3 Algorithm and Analysis We now present our two algorithms UMAB-NN and UMAB-G. UMAB-NN works for the case where losses are Non-Negative, i.e., \(\ell_{t}\in\mathbb{R}_{+}^{n}\). Remarkably, UMAB-NN is a _strictly scale-free_ algorithm: the algorithm will not change its sequence of action distributions if the sequence of losses is multiplied by a positive constant, which immediately implies scale-free regret. Our second algorithm, UMAB-G, builds upon the first algorithm to allow potentially negative losses, i.e., \(\ell_{t}\in\mathbb{R}^{n}\). We provide two versions of the algorithm: UMAB-G with non-adaptive and adaptive exploration rates. For the non-adaptive version, our results achieve adaptability to the non-negative part of the loss, while ensuring the optimality for the worst case guarantee, which is new compared to previous works3. For the adaptive version, we improve the previous result [1] by \(\mathcal{O}(\sqrt{n})\). A summary of the comparisons to prior works can be found in Table 1. Footnote 3: We note that a recent work [14] proposes an algorithm that claims to achieve adaptive regret for general unbounded loss. However, there exists a critical issue within their proof and algorithm, resulting in their regret being actually unbounded. We have communicated and confirmed with the authors about the issue. More details are provided in Appendix A.2. Both the algorithms we propose are based on the Follow-the-Regularized-Leader (FTRL) framework. Let us first consider the full information case, the traditional adaptive FTRL framework uses a regularizer \(\Psi\) and time-varying learning rates \(\eta_{1},\ldots,\eta_{T+1}\), with certain regularity constraints (see, e.g., [15]). The update rule takes the form of \[\mathbf{p}_{1}=\arg\min_{\mathbf{p}\in\Delta_{n}}\frac{1}{\eta_{1}}\Psi( \mathbf{p}),\qquad\mathbf{p}_{t}=\arg\min_{\mathbf{p}\in\Delta_{n}}\Big{(} \sum_{s=1}^{t-1}\langle\ell_{s},\mathbf{p}\rangle+\frac{1}{\eta_{t}}\Psi( \mathbf{p})\Big{)}, \tag{2}\] where \(\ell_{s}\) is the observed loss at round \(s\) and \(\eta_{t}\) is the adaptive learning rate depending on the losses \(\ell_{1},\ldots,\ell_{t-1}\). In the bandit setting, we cannot observe the complete loss vector \(\ell_{t}\). Similar to prior works, we construct an unbiased loss estimator through the importance-weighted (IW) sampling method introduced by [1], i.e., construct \(\hat{\ell}_{t}\in\mathbb{R}^{n}\) such that \[\hat{\ell}_{t,k}=\frac{\mathbb{1}(k=k_{t})}{p_{t,k}}\ell_{t,k},\ \forall k\in[n],\] where \(\mathbb{1}(k=k_{t})\) denotes the indicator function. Notice that \[\mathbb{E}[\hat{\ell}_{t}]=\sum_{k=1}^{n}p_{t,k}\frac{\mathbf{e}_{k}}{p_{t,k} }\ell_{t}=\ell_{t}.\] Using \(\hat{\ell}_{t}\), we are able to reduce the bandit setting to the full information case. ### Non-negative loss Let's start with the setting where the loss sequence is non-negative but can be arbitrarily large, i.e., \(\ell_{t,k}\geq 0\) for every \(t\in[T]\) and \(k\in[n]\). Umab-nn (Algorithm 1) is a natural adaptation of the classic FTRL algorithm with log-barrier regularizer. 
The log-barrier regularizer is defined as \[\Psi(\mathbf{p}_{t})=\sum_{k=1}^{n}\Big{(}\log\Big{(}\frac{1}{p_{t,k}}\Big{)}- \log\Big{(}\frac{1}{n}\Big{)}\Big{)}.\] Notice that \(\Psi(\mathbf{p})\geq 0\) for all \(\mathbf{p}\in\Delta_{n}\). Such regularizers are commonly used for studying adaptive regret in the AMAB setting [22, 23, 24]. In each round, Umab-nn calculates an action distribution \(\mathbf{p}_{t}\) through the update rule, then plays action \(k_{t}\) sampled from \(\mathbf{p}_{t}\). After receiving loss \(\ell_{t,k}\), Umab-nn constructs the unbiased IW estimator \(\hat{\ell}_{t}\) and updates the learning rate \(\eta_{t}\). The novelty comes in our design of learning rate (line 5). Different from the learning rate in [25], we use \(\ell_{t,k_{t}}^{2}\) instead of \(\|\hat{\ell}_{t}\|_{2}^{2}\). This is because \(\|\hat{\ell}_{t}\|_{2}^{2}\) is of order \(1/p_{t,k_{t}}^{2}\). If one uses the one in [25] instead, i.e. \(\eta_{t+1}=O(\sqrt{n/\sum_{s=1}^{t}\|\hat{\ell}_{s}\|_{2}^{2}})\), the learning rate will be too small since \(1/p_{t,k_{t}}^{2}\) cannot be bounded. Based on this observation, Umab-nn adapts the learning rate to the sum of the square of the partial loss, i.e., \(\eta_{t+1}=O(\sqrt{n/\sum_{s=1}^{t}\ell_{s,k_{s}}^{2}})\), which can be well bounded by \(O(\ell_{\infty}\sqrt{n/T})\). We remark that Algorithm 1 is strictly scale-free. If all losses are multiplied by a constant \(c\), then in line 2, both terms on the right hand side will be multiplied by \(c\), resulting in the same \(p_{t}\) being picked by the algorithm. Our main result is the following regret bound for Algorithm 1. ``` Input: Log-barriers regularization \(\Psi\), \(\eta_{1}=\infty\) 1for\(t=1,\dots,T\)do 2 Compute the action distribution \(\mathbf{p}_{t}=\arg\min_{\mathbf{p}\in\Delta_{n}}\Big{(}\sum_{s=1}^{t-1}\langle \hat{\ell}_{s},\mathbf{p}\rangle+\frac{1}{\eta_{t}}\Psi(\mathbf{p})\Big{)}\) 3 Sample and play action \(k_{t}\sim\mathbf{p}_{t}\). Receive loss \(\ell_{t,k_{t}}\) 4 Construct IW estimator \(\hat{\ell}_{t}\) such that \(\hat{\ell}_{t,k}=\frac{1}{p_{t,k}}\ell_{t,k},\ \forall k\in[n]\) 5 Update learning rate \(\eta_{t+1}=2\sqrt{\frac{n}{\sum_{s=1}^{t}\ell_{s,k_{s}}^{2}}}\) ``` **Algorithm 1**Umab-nn: Unbounded AMAB for Non-Negative loss **Theorem 1**: _For any \(\ell_{1},\dots,\ell_{T}\in\mathbb{R}_{+}^{n}\), the expected regret of Algorithm 1 is upper bounded by_ \[\mathcal{R}_{T}\leq\tilde{\mathcal{O}}\Big{(}\sqrt{n\sum_{t=1}^{T}\|\ell_{t}\|_ {\infty}^{2}}\Big{)}\] Notice that Theorem 1 is adaptive to the infinity norm of the losses. Furthermore, the worst case regret is bounded by \(\tilde{\mathcal{O}}(\ell_{\infty}\sqrt{nT})\), which matches the lower bound established in [1]. We remark that Theorem 1 is the first result that achieves both optimal adaptive rate and optimal minimax rate for unbounded non-negative losses. Next, we briefly highlight the key steps in proving Theorem 1, which also provide intuition for our further improvement in the next section. Proof sketch of Theorem 1Since \(\hat{\ell}_{t}\) is an unbiased estimator of \(\ell_{t}\) for every \(t\in[T]\) and comparator \(\mathbf{p}^{\dagger}\in\Delta_{n}\), we have \[\mathbb{E}\Big{[}\sum_{t=1}^{T}\ell_{t,k_{t}}-\sum_{t=1}^{T}\langle \ell_{t},\mathbf{p}^{\dagger}\rangle\Big{]}=\mathbb{E}\Big{[}\sum_{t=1}^{T} \langle\hat{\ell}_{t},\mathbf{p}_{t}-\mathbf{p}^{\dagger}\rangle\Big{]}.\] It suffices to focus on bounding \(\sum_{t=1}^{T}\langle\hat{\ell}_{t},\mathbf{p}_{t}-\mathbf{p}^{\dagger}\rangle\). 
We start with the standard analysis of a FTRL-type algorithm. **Lemma 1**: _([1] Lemma 7.1) For any \(\hat{\ell}_{1},\ldots,\hat{\ell}_{T}\in\mathbb{R}^{n}\), using the update rule of (2) along with the non-increasing sequence of learning rates \(\eta_{1},\ldots,\eta_{T+1}\), there is_ \[\sum_{t=1}^{T}\langle\hat{\ell}_{t},\mathbf{p}_{t}-\mathbf{p}^{ \dagger}\rangle\leq\frac{\Psi(\mathbf{p}^{\dagger})}{\eta_{T+1}}+\sum_{t=1}^{ T}\Big{(}\langle\hat{\ell}_{t},\mathbf{p}_{t}-\mathbf{p}_{t+1}\rangle+F_{t}( \mathbf{p}_{t})-F_{t}(\mathbf{p}_{t+1})\Big{)}\] _for every comparator \(\mathbf{p}^{\dagger}\in\Delta_{n}\), where function \(F_{t}\) is defined as_ \[F_{t}(\mathbf{p})=\sum_{s=1}^{t-1}\langle\hat{\ell}_{s},\mathbf{ p}\rangle+\frac{1}{\eta_{t}}\Psi(\mathbf{p}).\] For the sake of completeness, the proof of Lemma 1 is provided in the appendix. Lemma 1 decomposes the regret into two terms. The first term depends on the regularizer and the comparator. Intuitively, \(\Psi(\mathbf{p}^{\dagger})\) will appear to be infinity if \(\mathbf{p}^{\dagger}\) is the best fixed action (some entries of \(\mathbf{p}^{\dagger}\) are zeros). The problem can be easily solved by comparing with some close neighbor of the best action [1], i.e., mixing a uniform distribution with the best fixed action. Therefore, it suffices to focus on the terms \(\langle\hat{\ell}_{t},\mathbf{p}_{t}-\mathbf{p}_{t+1}\rangle+F_{t}(\mathbf{p} _{t})-F_{t}(\mathbf{p}_{t+1})\). The following key lemma gives an upper bound using the notions of local norms. **Lemma 2**: _For any \(\hat{\ell}_{1},\ldots,\hat{\ell}_{T}\in\mathbb{R}^{n}\), using the update rule of (2), denote by \(\|\mathbf{x}\|_{\mathbf{A}}=\sqrt{\mathbf{x}^{\top}\mathbf{A}\mathbf{x}}\), there is_ \[\langle\hat{\ell}_{t},\mathbf{p}_{t}-\mathbf{p}_{t+1}\rangle+F_{t }(\mathbf{p}_{t})-F_{t}(\mathbf{p}_{t+1})\leq\frac{1}{2}\eta_{t}\|\hat{\ell}_{ t}\|_{(\nabla^{2}\Psi(\xi_{t}))^{-1}}^{2}, \tag{3}\] _where \(\xi_{t}\) is a point between \(\mathbf{p}_{t}\) and \(\mathbf{p}_{t+1}\). Moreover, it suffices to set \(\xi_{t}\) as \(\mathbf{p}_{t}\) when \(\hat{\ell}_{t}\in\mathbb{R}^{n}_{+}\)._ Note that (3) holds for general losses and will be useful in the next section. When \(\hat{\ell}_{t}\in\mathbb{R}^{n}_{+}\), we can further bound (3) by \(\min\Big{(}\frac{1}{2}\eta_{t}\ell_{t,k_{t}}^{2},|\ell_{t,k_{t}}|\Big{)}\), since \[\langle\hat{\ell}_{t},\mathbf{p}_{t}-\mathbf{p}_{t+1}\rangle+F_{t }(\mathbf{p}_{t})-F_{t}(\mathbf{p}_{t+1})\leq\langle\hat{\ell}_{t},\mathbf{p} _{t}\rangle=|\ell_{t,k_{t}}|, \tag{4}\] which implies \[\sum_{t=1}^{T}\langle\hat{\ell}_{t},\mathbf{p}_{t}-\mathbf{p}^{ \dagger}\rangle\leq\frac{\Psi(\mathbf{p}^{\dagger})}{\eta_{T+1}}+\sum_{t=1}^{T }\min\Big{(}\frac{1}{2}\eta_{t}\ell_{t,k_{t}}^{2},|\ell_{t,k_{t}}|\Big{)}. \tag{5}\] The right hand side of (5) takes a similar form as in scale-free online convex optimization [1], but the upper bound depends on \(\ell_{t,k_{t}}\) instead of \(\|\ell_{t}\|_{2}\). Using a learning rate as in Algorithm 1, the second term on the right hand side of (5) can be bounded by \(\mathcal{O}(\sqrt{n\sum_{t=1}^{T}\ell_{t,k_{t}}^{2}})\) based on [1], which suffices to complete the proof. ### General loss Next, we remove the non-negative assumption and study the general loss setting, i.e., \(\ell_{1},\ldots,\ell_{T}\in\mathbb{R}^{n}\). To begin with, we first explain why Algorithm 1 cannot work when the losses become negative. 
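Before examining that failure mode, the following is a minimal numerical sketch of the UMAB-NN update described above. It is illustrative only, not the authors' implementation; in particular, the inner FTRL minimization over the simplex is solved here with SciPy's SLSQP solver, which is an assumption of this sketch.

```python
# Minimal sketch of UMAB-NN (Algorithm 1): FTRL with log-barrier regularizer,
# importance-weighted loss estimates, and the adaptive learning rate.
import numpy as np
from scipy.optimize import minimize

def ftrl_step(cum_loss_est, inv_eta):
    """argmin_p <cum_loss_est, p> + inv_eta * sum_k (log(1/p_k) - log(1/n)) over the simplex."""
    n = cum_loss_est.size
    def obj(p):
        return cum_loss_est @ p + inv_eta * np.sum(np.log(1.0 / p) - np.log(1.0 / n))
    res = minimize(obj, x0=np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(1e-9, 1.0)] * n,
                   constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}])
    p = np.clip(res.x, 1e-9, None)
    return p / p.sum()

def umab_nn(losses, seed=0):
    """Run UMAB-NN on a (T, n) array of non-negative losses; return incurred losses."""
    rng = np.random.default_rng(seed)
    T, n = losses.shape
    cum_loss_est = np.zeros(n)   # sum of importance-weighted loss estimates
    sum_sq = 0.0                 # running sum of observed squared losses
    inv_eta = 0.0                # 1/eta_1 = 0, i.e. eta_1 = infinity
    incurred = np.zeros(T)
    for t in range(T):
        p = ftrl_step(cum_loss_est, inv_eta)
        k = rng.choice(n, p=p)
        incurred[t] = losses[t, k]
        cum_loss_est[k] += losses[t, k] / p[k]   # IW estimator, nonzero only at k_t
        sum_sq += losses[t, k] ** 2
        inv_eta = np.sqrt(sum_sq / n) / 2.0      # eta_{t+1} = 2 * sqrt(n / sum_sq)
    return incurred

# Tiny demo with random non-negative losses.
print(umab_nn(np.random.default_rng(1).random((200, 3))).mean())
```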
Recall that Lemma 2 requires bounding \(\langle\hat{\ell}_{t},\mathbf{p}_{t}-\mathbf{p}_{t+1}\rangle+F_{t}(\mathbf{p}_{t})-F_{t}(\mathbf{p}_{t+1})\) by \(\eta_{t}\|\hat{\ell}_{t}\|^{2}_{(\nabla^{2}\Psi(\xi_{t}))^{-1}}/2\) for general losses. However, notice that \[\|\hat{\ell}_{t}\|^{2}_{(\nabla^{2}\Psi(\xi_{t}))^{-1}}=\sum_{k=1}^{n}\frac{\hat{\ell}_{t,k}^{2}}{\nabla_{k,k}^{2}\Psi(\xi_{t})}=\sum_{k=1}^{n}\frac{\ell_{t,k}^{2}\mathbb{1}(k=k_{t})}{p_{t,k}^{2}}\xi_{t,k}^{2}=\frac{\ell_{t,k_{t}}^{2}}{p_{t,k_{t}}^{2}}\xi_{t,k_{t}}^{2}, \tag{6}\] where \(\xi_{t,k_{t}}\) is some value between \(p_{t,k_{t}}\) and \(p_{t+1,k_{t}}\). Since \(p_{t+1,k_{t}}\) might significantly exceed \(p_{t,k_{t}}\), the size of \(\xi_{t,k_{t}}/p_{t,k_{t}}\) cannot be controlled. In this case, \(\ell_{t,k_{t}}^{2}\xi_{t,k_{t}}^{2}/p_{t,k_{t}}^{2}\) is potentially of order \(O(1/p_{t,k_{t}}^{2})\), which is too large for the analysis. Additionally, \(-\langle\hat{\ell}_{t},\mathbf{p}_{t+1}\rangle\) could potentially be positive and cannot be well bounded for the same reason, which implies that (4) will not go through. Thus, inequality (5) no longer holds for general losses. These observations naturally suggest bounding the magnitude of \(p_{t+1,k_{t}}/p_{t,k_{t}}\). Unfortunately, without imposing additional restrictions on the losses, the update (2) alone cannot bound \(p_{t+1,k_{t}}/p_{t,k_{t}}\). For example, given arbitrary \(\mathbf{p}_{t}\), \(\eta_{t+1}\), and \(k_{t}\), we can always find a sufficiently small \(\ell_{t,k_{t}}<0\) that makes \(p_{t+1,k_{t}}\geq 1/2\) through (2). In this case, if \(p_{t,k_{t}}\) is close to zero, \(p_{t+1,k_{t}}/p_{t,k_{t}}\) can be extremely large. To address this issue, we propose UMAB-G, as illustrated in Algorithm 2. The key ideas of UMAB-G are: (1) using truncated losses to update the action distribution. Instead of directly taking \(\hat{\ell}_{t}\) as the input loss, we clip it by a threshold \(C_{t}\) that depends on the previously received losses \(\hat{\ell}_{1},\ldots,\hat{\ell}_{t-1}\). The truncation ensures that every input loss is "not too negative" for the action update, and thus the magnitude of \(p_{t+1,k_{t}}/p_{t,k_{t}}\) can be well bounded. (2) adding extra exploration to ensure that the probabilities \(p_{t,k}\) are not overly small. For unbounded AMAB with general losses, we need to ensure that each arm has a certain probability of being pulled, so that changes in the loss scale can be perceived in time to tune the learning rate. Instead of the commonly used scheme of mixing with a uniform distribution [14, 1], we develop a data-dependent mixing strategy (Algorithm 3) that substantially reduces the error caused by the extra exploration. Specifically, similar to [1], we consider two exploration rates, distinguished by whether the rate is adaptive. The main result for Algorithm 2 is as follows.
**Theorem 2**: _For any \(\ell_{1},\ldots,\ell_{T}\in\mathbb{R}^{n}\), with the non-adaptive and adaptive exploration rate, the expected regret of Algorithm 2 is upper bounded by_ \[\text{Non-Adaptive:}\qquad\mathcal{R}_{T} \leq\tilde{\mathcal{O}}\Big{(}\ell_{\infty}n^{2}+\sqrt{n\sum_{t=1 }^{T}\|\ell_{t}\|_{\infty}^{2}}+\ell_{\infty}^{-}\sqrt{nT}\Big{)}, \tag{7}\] \[\text{Adaptive:}\qquad\qquad\mathcal{R}_{T} \leq\tilde{\mathcal{O}}\Big{(}\ell_{\infty}n^{2}+\sqrt{n\sum_{t=1 }^{T}\|\ell_{t}\|_{\infty}^{2}}+\ell_{\infty}\sqrt{n\sum_{t=1}^{T}\|\ell_{t} \|_{\infty}}+\sqrt{n\sum_{t=1}^{T}\|\ell_{t}\|_{\infty}}\Big{)} \tag{8}\] Notice that the non-adaptive regret in Theorem 2 achieves "semi-adaptivity" to the loss sequence. If the loss sequence is non-negative, the right hand side of (7) is reduced to a form of the regret in Theorem 1. Moreover, the worst case bound of (7) is \(\tilde{\mathcal{O}}(\ell_{\infty}\sqrt{nT})\) for large \(T\), which is optimal up to log factors [1]. For the adaptive exploration rate, our result improves upon the previous result [1] and achieves optimal dependency on \(n\) and \(T\). ``` Input: Action distribution \(\mathbf{p}_{t}\). Exploration rate \(\rho_{t}\leq 1/2n^{2}\) Output: Extra exploration distribution \(\mathbf{p}^{\prime}_{t}\) 1 Define \(k^{\star}_{t}\in\arg\max_{k^{\prime}\in[n]}p_{t,k^{\prime}}\). Construct a vector \(\mathbf{c}_{t}\in\mathbb{R}^{n}\) such that for every \(k\in[n]\), there is \[c_{t,k}=\begin{cases}1,&\text{if }p_{t,k}<\rho_{t}\\ -\sum_{k^{\prime}\in[n]/\{k\}}c_{t,k^{\prime}}&\text{if }k=k^{\star}_{t}\\ 0,&\text{else}\end{cases}\] Construct the extra exploration distribution \(\mathbf{p}^{\prime}_{t}=\mathbf{p}_{t}+\rho_{t}\mathbf{c}_{t}\). ``` **Algorithm 3**Extra Exploration on Action Distribution Proof sketch of Theorem 2Recall that \(\hat{\ell}_{t}\) is the unbiased estimator and \(\hat{\ell}^{\prime}_{t}\) is the clipping biased estimator. By Algorithm 2 and the proof of Theorem 1, it suffices to bound the expectation of \(\sum_{t=1}^{T}\langle\hat{\ell}_{t},\mathbf{p}^{\prime}_{t}-\mathbf{p}^{ \dagger}\rangle\). We first decompose the regret into three terms as follows. \[\sum_{t=1}^{T}\langle\hat{\ell}_{t},\mathbf{p}^{\prime}_{t}-\mathbf{p}^{ \dagger}\rangle=\underbrace{\sum_{t=1}^{T}\langle\hat{\ell}^{\prime}_{t}, \mathbf{p}_{t}-\mathbf{p}^{\dagger}\rangle}_{\text{1}}+\underbrace{\sum_{t=1}^ {T}\langle\hat{\ell}^{\prime}_{t},\mathbf{p}^{\prime}_{t}-\mathbf{p}_{t} \rangle}_{\text{2}}+\underbrace{\sum_{t=1}^{T}\langle\hat{\ell}_{t}-\hat{\ell}^ {\prime}_{t},\mathbf{p}^{\prime}_{t}-\mathbf{p}^{\dagger}\rangle}_{\text{3}}.\] Here, term 1 is the regret of the corresponding FTRL algorithm with truncated loss \(\hat{\ell}^{\prime}_{1},\ldots,\hat{\ell}^{\prime}_{T}\). Term 2 measures the error incurred by extra exploration, i.e., using \(\mathbf{p}^{\prime}_{t}\) instead of \(\mathbf{p}_{t}\). Term 3 corresponds to the error of using the truncated loss \(\hat{\ell}^{\prime}_{t}\). In the rest of the proof, we bound these three terms respectively. 
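As an implementation aside before the term-by-term bounds, the extra-exploration step of Algorithm 3 is straightforward to realize. The sketch below is illustrative only (numpy, not the authors' code) and simply mirrors the construction of \(\mathbf{c}_{t}\) and \(\mathbf{p}^{\prime}_{t}\) stated above.

```python
# Illustrative sketch of the extra-exploration mixing (Algorithm 3).
import numpy as np

def extra_exploration(p: np.ndarray, rho: float) -> np.ndarray:
    """Lift entries of p below rho up by rho, compensating on the largest entry."""
    n = p.size
    assert rho <= 1.0 / (2 * n ** 2), "exploration rate must satisfy rho <= 1/(2n^2)"
    c = np.zeros(n)
    small = p < rho                    # arms whose probability is too small
    k_star = int(np.argmax(p))
    small[k_star] = False              # the largest entry never needs lifting
    c[small] = 1.0
    c[k_star] = -c.sum()               # keep p' a probability distribution
    return p + rho * c

p = np.array([0.96, 0.02, 0.015, 0.005])
print(extra_exploration(p, rho=1.0 / (2 * 4 ** 2)))
```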
**Bounding 1**: By Lemma 1 and Lemma 2, we have \[\sum_{t=1}^{T}\langle\hat{\ell}^{\prime}_{t},\mathbf{p}_{t}-\mathbf{p}^{ \dagger}\rangle\leq\frac{\Psi(\mathbf{p}^{\dagger})}{\eta_{T+1}}+\frac{1}{2} \sum_{t=1}^{T}\eta_{t}\|\hat{\ell}^{\prime}_{t}\|_{(\nabla^{2}\Psi(\xi_{t}))^{ -1}}^{2}=\frac{\Psi(\mathbf{p}^{\dagger})}{\eta_{T+1}}+\frac{1}{2}\sum_{t=1}^{ T}\eta_{t}{\ell^{\prime}}_{t,k_{t}}^{2}\frac{p_{t,k_{t}}^{2}}{{p^{\prime}}_{t,k_{t}}^{2 }}\frac{\xi_{t,k_{t}}^{2}}{{p^{\prime}}_{t,k_{t}}^{2}}.\] The key step is to bound the magnitude of \(p_{t,k_{t}}/{p^{\prime}}_{t,k_{t}}\) and \(p_{t+1,k_{t}}/p_{t,k_{t}}\) (since \(\xi_{t,k_{t}}\) is always between \(p_{t,k_{t}}\) and \(p_{t+1,k_{t}}\)) for \(\ell_{t,k_{t}}\leq 0\). This in turn is guaranteed by our design of loss truncation and extra exploration, which is illustrated in the following lemma. **Lemma 3**: _Given any action sequence \(k_{1},\ldots,k_{T}\), if \(\ell_{t,k_{t}}\leq 0\). there is \(p_{t,k_{t}}\leq 2p^{\prime}_{t,k_{t}}\) and \(p_{t+1,k_{t}}\leq 6p_{t,k_{t}}\) for every \(t\in[T]\)._ Lemma 3 ensures that both \(p_{t,k_{t}}/p^{\prime}_{t,k_{t}}\) and \(p_{t+1,k_{t}}/p_{t,k_{t}}\) can be bounded by constants. With these two ratio bounded, we can immediately reduce the right-hand-side to the form of (5). Using a similar proof as in Section 3.1, we can bound 1. **Bounding 2**: By the definition of \(\mathbf{p}^{\prime}_{t}\), we first note that \(\sum_{t=1}^{T}\langle\hat{\ell}^{\prime}_{t},\mathbf{p}^{\prime}_{t}-\mathbf{ p}_{t}\rangle=\sum_{t=1}^{T}\rho_{t}\langle\hat{\ell}^{\prime}_{t},\mathbf{c}_{t}\rangle\), where \(\rho_{t}\) is the exploration rate and \(\mathbf{c}_{t}\) is an offset on \(p_{t}\) to prevent some entries in action distribution from being too small. The key of our extra exploration algorithm is to upper bound \(\langle\hat{\ell}^{\prime}_{t},\mathbf{c}_{t}\rangle\) by \(\mathcal{O}(\ell_{\infty}\sqrt{nT})\), in contrast to \(\mathcal{O}(\ell_{\infty}n^{3/2}\sqrt{T})\) as in [1]. This reduces the variance of our exploration rate, leading to an improved regret. The details are provided in Lemma 4 as follows. **Lemma 4**: _With the non-adaptive and adaptive exploration rates as in Algorithm 3, we have_ \[\text{Non-Adaptive:}\qquad\mathbb{E}\Big{[}\sum_{t=1}^{T}\langle \hat{\ell}^{\prime}_{t},\mathbf{p}^{\prime}_{t}-\mathbf{p}_{t}\rangle\Big{]} \leq 2\sqrt{n\sum_{t=1}^{T}\|\ell_{t}\|_{\infty}^{2}},\] \[\text{Adaptive}\qquad\qquad\mathbb{E}\Big{[}\sum_{t=1}^{T}\langle \hat{\ell}^{\prime}_{t},\mathbf{p}^{\prime}_{t}-\mathbf{p}_{t}\rangle\Big{]} \leq 2n^{2}\ell_{\infty}+2\sqrt{1+4n\sum_{t=1}^{T}\|\ell_{t}\|_{ \infty}}+2\ell_{\infty}\sqrt{n\sum_{t=1}^{T}\|\ell_{t}\|_{\infty}}.\] **Bounding 3**: Notice that \[\sum_{t=1}^{T}\langle\hat{\ell}_{t}-\hat{\ell}^{\prime}_{t},\mathbf{p}^{\prime }_{t}-\mathbf{p}^{\dagger}\rangle\leq\sum_{t=1}^{T}\|\hat{\ell}_{t}-\hat{\ell} ^{\prime}_{t}\|_{1}\|\mathbf{p}^{\prime}_{t}-\mathbf{p}^{\dagger}\|_{\infty} \leq\sum_{t=1}^{T}\|\hat{\ell}_{t}-\hat{\ell}^{\prime}_{t}\|_{1}.\] The key idea of bounding 3 is to show that the number of distinct \((\hat{\ell}_{t},\hat{\ell}^{\prime}_{t})\) pairs and \(\|\hat{\ell}_{t}\|_{\infty}\) can be bounded by \(\mathcal{O}(\log\ell_{\infty})\) due to the double tricks, which is shown in Lemma 5. 
**Lemma 5**: _Given any action sequence \(k_{1},\ldots,k_{T}\), with the non-adaptive and adaptive exploration rates as in Algorithm 3, we have_ \[\text{Non-Adaptive:}\qquad\mathbb{E}\Big{[}\sum_{t=1}^{T}\langle\hat{\ell}_{t}-\hat{\ell}_{t}^{\prime},\mathbf{p}_{t}^{\prime}-\mathbf{p}^{\dagger}\rangle\Big{]}\leq\ell_{\infty}^{-}(2n^{2}+\sqrt{nT})\log_{2}(1+\ell_{\infty}),\] \[\text{Adaptive:}\qquad\qquad\mathbb{E}\Big{[}\sum_{t=1}^{T}\langle\hat{\ell}_{t}-\hat{\ell}_{t}^{\prime},\mathbf{p}_{t}^{\prime}-\mathbf{p}^{\dagger}\rangle\Big{]}\leq\ell_{\infty}^{-}\Big{(}2n^{2}+3\sqrt{n\sum_{t=1}^{T}\|\ell_{t}\|_{\infty}}\Big{)}\log_{2}(1+\ell_{\infty}).\] Summing the bounds for 1, 2, 3 gives Theorem 2.

## 4 Experiments

We now corroborate our theoretical improvements and test the performance of our algorithms UMAB-G (Algorithm 2 with non-adaptive exploration) and UMAB-G-A (Algorithm 2 with adaptive exploration). We compare to **all** existing scale-free/unbounded AMAB algorithms, including SF-MAB [13], SF-MAB-A [13], AHB [14], and banker-OMD [12]. The figures show the average performance and standard deviations across 500 trials. Applications to Stock Trading: In our first experiment, we consider an application to the stock market. Here we consider \(n=10\) stocks and \(T=1258\) rounds (daily prices over 5 years). For every stock, its loss is the normalized price difference, i.e., the difference between two consecutive days for 100 shares. Stock prices are generally chaotic and the fluctuation can vary greatly among stocks and across time. The regret trajectories of the different algorithms are illustrated in Figure 1(a). Note that the regret of UMAB-G and UMAB-G-A is significantly smaller than that of the other algorithms, in particular when the number of rounds is large. This is because: 1) Compared to [13], our algorithms tune the learning and exploration rates more carefully, resulting in a saving of an \(\mathcal{O}(\sqrt{n})\) term in theory and better empirical performance in practice. 2) Compared to [12], our exploration rate design ensures that the algorithms can perceive changes in the loss scale and adapt the learning rate in time. 3) Compared to [14], our exploration design leads to smaller regret than mixing with the uniform distribution. Figure 1: Real Data Experiments. Applications to Amazon Sales: We further construct an experiment using Amazon sales data. Similar to the above, we consider \(n=10\) Amazon stores and \(T=1258\) rounds (weekly sales over 2 years). We assume that in each round, each store randomly discloses the weekly sales of one of its departments. The loss is defined as the negative of the weekly sales. We generate 10 rounds of loss using one week's data. Notice that the loss considered in this setting is entirely negative. The simulation results are shown in Figure 1(b). As expected, our algorithms outperform all other competitors. Compared to the stock market example, the regret trajectories on the Amazon sales data are more stable for all the algorithms. This is because changes in Amazon store sales are more gradual than those in stocks: since all the algorithms we consider in the experiment are based on the FTRL/OMD framework, such a loss sequence induces a stable action distribution, resulting in smoother regret curves. Applications to Model Selection: In the last setting, we explore an application to the model selection problem.
We assume that we have access to \(n=10\) linear regression meta-algorithms (SGD with different learning rates). Similarly to the above, we set the number of rounds to \(T=1258\). In each round \(t\), the meta-algorithms output the training error based on a dataset of size \(t\). Notice that since the size of the dataset varies in each round, the optimal meta-algorithm will also change. In this scenario, the regret measures whether a model selection algorithm can promptly detect the change in the optimal meta-algorithm. Moreover, the prediction error can be very large when the dataset is small. The results are shown in Figure 1(c). Again, the regrets of our algorithms are strictly smaller than all baselines. Compared to the first two experiments, the regret trajectories are smoother because of the stochastic nature of the loss sequence as \(t\) increases. Impact of extra exploration: We demonstrate the importance of extra exploration for unbounded losses. Consider a problem with \(n=2\) arms and set \(T=1258\). We design the following loss sequence: \[\ell_{t}=\begin{cases}[0,-0.5]^{\top},&\text{if }1\leq t<100\\ [-10,0]^{\top},&\text{if }100\leq t<150\\ [-0.05,0]^{\top},&\text{if }150\leq t<1258\end{cases} \tag{9}\] Figure 2: Impact of Extra Exploration with Non-Adaptive/Adaptive Rates. The intuition is to deceive the algorithms into treating the second arm as the "superior option" in the initial rounds, which reduces how often they pull the first arm and thus hinders their ability to detect changes in the optimal arm. In particular, since the losses can be unbounded, failing to detect the changes is costly. The regret trajectories for this case are provided in Figure 2(a), where the comparison is between UMAB-G-A and our algorithm with no extra exploration. Note that the algorithm with extra exploration performs much better than the one without. This is consistent with the intuition of our design: extra exploration ensures that each arm has a probability of being pulled, so that the algorithm can always perceive changes in the losses and adjust its learning rate within relatively few rounds. Comparison between UMAB-G and UMAB-G-A: In the last part we investigate the difference between our algorithms with non-adaptive and adaptive exploration rates. Intuitively, the adaptive exploration rate is usually larger than the non-adaptive rate because it is of order \(O(1/\sqrt{t})\) instead of \(O(1/\sqrt{T})\) (assuming \(\ell_{\infty}\ll T\)). This makes adaptive exploration perform better in adversarial cases, e.g. as shown in Figure 2(b), where we use the same loss sequence as in (9). However, if the loss sequence is not adversarial, e.g. there exists one arm that is always better than the others, non-adaptive exploration will be better since it loses less to extra exploration. An example is illustrated in Figure 2(c), where we use stochastic losses with expectation \([1,0]^{\top}\). In summary, adaptive and non-adaptive exploration each have their own advantages under different loss sequences in practice.

## 5 Conclusion

We proposed the first algorithms that achieve optimal adaptive and non-adaptive regrets in the adversarial multi-armed bandit problem with unbounded losses. Real-data experiments validate our theoretical findings and demonstrate the superior performance of our algorithms compared to all existing algorithms for unbounded losses.
Future work includes extending our algorithmic tools to more challenging settings such as contextual bandits and reinforcement learning.
2307.02848
Revisiting Computer-Aided Tuberculosis Diagnosis
Tuberculosis (TB) is a major global health threat, causing millions of deaths annually. Although early diagnosis and treatment can greatly improve the chances of survival, it remains a major challenge, especially in developing countries. Recently, computer-aided tuberculosis diagnosis (CTD) using deep learning has shown promise, but progress is hindered by limited training data. To address this, we establish a large-scale dataset, namely the Tuberculosis X-ray (TBX11K) dataset, which contains 11,200 chest X-ray (CXR) images with corresponding bounding box annotations for TB areas. This dataset enables the training of sophisticated detectors for high-quality CTD. Furthermore, we propose a strong baseline, SymFormer, for simultaneous CXR image classification and TB infection area detection. SymFormer incorporates Symmetric Search Attention (SymAttention) to tackle the bilateral symmetry property of CXR images for learning discriminative features. Since CXR images may not strictly adhere to the bilateral symmetry property, we also propose Symmetric Positional Encoding (SPE) to facilitate SymAttention through feature recalibration. To promote future research on CTD, we build a benchmark by introducing evaluation metrics, evaluating baseline models reformed from existing detectors, and running an online challenge. Experiments show that SymFormer achieves state-of-the-art performance on the TBX11K dataset. The data, code, and models will be released at https://github.com/yun-liu/Tuberculosis.
Yun Liu, Yu-Huan Wu, Shi-Chen Zhang, Li Liu, Min Wu, Ming-Ming Cheng
2023-07-06T08:27:48Z
http://arxiv.org/abs/2307.02848v2
# Revisiting Computer-Aided Tuberculosis Diagnosis ###### Abstract Tuberculosis (TB) is a major global health threat, causing millions of deaths annually. Although early diagnosis and treatment can greatly improve the chances of survival, it remains a major challenge, especially in developing countries. Recently, computer-aided tuberculosis diagnosis (CTD) using deep learning has shown promise, but progress is hindered by limited training data. To address this, we establish a large-scale dataset, namely the Tuberculosis X-ray (TBX11K) dataset, which contains 11,200 chest X-ray (CXR) images with corresponding bounding box annotations for TB areas. This dataset enables the training of sophisticated detectors for high-quality CTD. Furthermore, we propose a strong baseline, SymFormer, for simultaneous CXR image classification and TB infection area detection. SymFormer incorporates Symmetric Search Attention (SymAttention) to tackle the _bilateral symmetry property_ of CXR images for learning discriminative features. Since CXR images may not strictly adhere to the bilateral symmetry property, we also propose Symmetric Positional Encoding (SPE) to facilitate SymAttention through feature recalibration. To promote future research on CTD, we build a benchmark by introducing evaluation metrics, evaluating baseline models reformed from existing detectors, and running an online challenge. Experiments show that SymFormer achieves state-of-the-art performance on the TBX11K dataset. The data, code, and models will be released.

Tuberculosis, tuberculosis diagnosis, tuberculosis detection, symmetric search attention, symmetric positional encoding

## 1 Introduction

Tuberculosis (TB), a pervasive infectious disease, has persistently ranked as the second leading cause of morbidity and mortality, typically following HIV, over the centuries [2, 3]. Despite the global COVID-19 outbreak in 2020, TB continues to afflict 10 million individuals and accounts for the death of 1.4 million people annually [4], rendering it the second most lethal infectious disease after COVID-19. Principally targeting the respiratory system, TB is caused by Mycobacterium tuberculosis and propagates through sneezing, severe coughing, or other means of disseminating infectious bacteria. Hence, TB typically occurs in the lungs through the respiratory tract. The vulnerability of immunocompromised individuals, including those with HIV and malnourished persons in developing countries, has exacerbated this issue. The mortality rate among TB patients remains exceedingly high in the absence of appropriate treatment. Nevertheless, early diagnosis of TB can significantly increase the recovery rate with the administration of corresponding antibiotics [5, 6, 7]. As TB propagates rapidly, early diagnosis also plays a crucial role in controlling the spread of infection [6]. The rise of multidrug-resistant TB underscores the urgent need for timely and accurate diagnostic methods to monitor the progress of clinical treatment [8]. However, TB diagnosis continues to pose a significant challenge [5, 6, 7, 9, 10, 11, 12]. Specifically, the _gold standard_ for TB diagnosis entails the microscopic examination of sputum samples and bacterial cultures to identify Mycobacterium tuberculosis [11, 12]. To ensure the safety of the examination process, a biosafety level-3 (BSL-3) laboratory is required for culturing Mycobacterium tuberculosis. This procedure can typically take several months [5, 11, 12].
Compounding the issue, many hospitals in developing countries and resource-constrained communities _lack the necessary infrastructure_ to establish BSL-3 facilities. On the other hand, X-ray imaging is the most prevalent and data-intensive screening method in current medical image examinations. Chest X-ray (CXR) can swiftly detect lung abnormalities caused by pulmonary TB, making it a widely-used tool for TB screening. The World Health Organization also recommends CXR as the initial step in TB screening [13]. Early diagnosis through CXR significantly aids in early TB detection, treatment, and prevention of the disease's spread [5, 14, 15, 10, 13]. However, even experienced radiologists may fail to identify TB infections in CXR images, as the human eye struggles to discern TB areas in CXR images due to its limited sensitivity to many details. Our human study reveals that experienced radiologists from top hospitals achieve _an accuracy of only 68.7%_ when compared with the gold standard. Thanks to the remarkable representation learning capabilities, deep learning has outperformed humans in various domains such as face recognition [16], image classification [17], object detection [18], and edge detection [19, 20]. It is reasonable to anticipate the application of deep learning's robust potential to TB diagnosis using CXR. Deep learning can automatically localize the precise TB infection site 24 hours a day, never getting tired like people. However, deep learning relies on extensive training data, which cannot be provided by existing TB datasets, as shown in Table I. Since it is challenging to collect large-scale TB CXR data due to the high cost and privacy considerations, existing TB datasets have only a few hundred CXR images. The scarcity of publicly available CXR data has _hindered_ the successful application of deep learning in improving computer-aided tuberculosis diagnosis (CTD) performance. In order to deploy the CTD system to assist TB patients worldwide, it is first necessary to address the issue of insufficient data. In this paper, we contribute a large-scale **Tuberculosis X-ray (TBX11K)** dataset to the community through long-term collaboration with major hospitals. This new TBX11K dataset surpasses previous CTD datasets in several aspects: i) Unlike previous public datasets [6, 21] containing only tens or hundreds of CXR images, TBX11K consists of 11,200 CXR images, approximately 17 times larger than the existing largest dataset, _i.e._, the Shenzhen dataset [21], making it feasible to train deep networks; ii) In contrast to image-level annotations in previous datasets, TBX11K employs bounding box annotations for TB infection areas, allowing future CTD methods to recognize TB manifestations and detect TB regions for assisting radiologists in definitive diagnoses; iii) TBX11K comprises four categories: healthy, sick but non-TB, active TB, and latent TB, as opposed to binary classification in previous datasets (_i.e._, TB or non-TB), enabling future CTD systems to adapt to more complex real-world scenarios and provide people with more detailed disease analyses. Each CXR image in the TBX11K dataset is tested using the gold standard (_i.e._, diagnostic microbiology) of TB diagnosis and annotated by experienced radiologists from major hospitals. The TBX11K dataset has been de-identified by data providers and exempted by relevant institutions, allowing it to be publicly available to promote future CTD research.
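The released annotation format is not reproduced here; as a rough illustration only, a TBX11K-style sample could be represented in memory as sketched below. The field names, file paths, and box coordinates are hypothetical, while the four image-level categories and the active/latent box types follow the description above.

```python
from dataclasses import dataclass, field
from typing import List

# Image-level categories described for TBX11K.
CATEGORIES = ("healthy", "sick_non_tb", "active_tb", "latent_tb")


@dataclass
class TBBox:
    """One annotated TB infection area in pixel coordinates (images are ~3000x3000)."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    tb_type: str  # "active_tb" or "latent_tb"


@dataclass
class CXRRecord:
    """A hypothetical in-memory record for one chest X-ray."""
    image_path: str
    image_label: str                                   # one of CATEGORIES
    boxes: List[TBBox] = field(default_factory=list)   # empty for non-TB images


# Example: an active-TB case with a single annotated infection area (values made up).
record = CXRRecord(
    image_path="imgs/tb/tb0001.png",
    image_label="active_tb",
    boxes=[TBBox(820, 640, 1510, 1380, "active_tb")],
)
assert record.image_label in CATEGORIES
```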
Based on our TBX11K dataset, we propose a simple yet effective framework for CTD, termed as **SymFormer**. Inspired by the inherent _bilateral symmetry property_ observed in CXR images, SymFormer leverages this property to enhance the interpretation of CXR images. The bilateral symmetry property denotes the similarity or identical appearance of the left and right sides of the chest, indicating a symmetric pattern. This property proves valuable in improving the interpretation of CXR images. For instance, if there is a mass or consolidation present on one side of the chest but not the other, it could indicate a problem in that area. To tackle this property, SymFormer incorporates the novel **Symmetric Search Attention (SymAttention)** for learning discriminative features from CXR images. Since CXR images may not strictly be bilaterally symmetric, we also propose the **Symmetric Positional Encoding (SPE)** to facilitate SymAttention through feature recalibration. SymFormer conducts simultaneous CXR image classification and TB infection area detection by adding a classification head onto the TB infection area detector with a two-stage training diagram. To promote future research on CTD, we establish a benchmark on our TBX11K dataset. Specifically, we adapt the evaluation metrics for image classification and object detection to CTD, which would standardize the evaluation of CTD. We also launch an online challenge using the test data of TBX11K by keeping the ground truth of the test data private, which would make future comparisons on CTD fair. Besides, we construct several strong baseline models for CTD by reforming existing popular object detectors. Extensive comparisons demonstrate the superiority of SymFormer over these baselines. Compared with the preliminary conference version [1], we make plentiful extensions by proposing a novel SymFormer framework for CTD and validating its effectiveness with extensive experiments. In summary, the contributions of this paper are three-fold: * We establish a large-scale CTD dataset, TBX11K, which is much larger, better annotated, and more realistic than previous TB datasets, enabling the training of deep neural networks for simultaneous multi-class CXR image classification and TB infection area detection rather than only binary CXR classification in previous TB datasets. * We propose a simple yet effective framework for CTD, namely SymFormer, consisting of the novel Symmetric Search Attention (SymAttention) and Symmetric Positional Encoding (SPE) to leverage the _bilateral symmetry property_ of CXR images for significantly improving CTD over baseline models. * We build a CTD benchmark on our TBX11K dataset by introducing the evaluation metrics, evaluating several baselines reformed from existing object detectors, and running an online challenge, which is expected to set a good start for future research. ## 2 Related Work In this section, we first revisit previous TB datasets, followed by a review of the existing research on CTD. Since our proposed CTD method SymFormer uses self-attention of vision transformers, we also discuss the recent progress of vision transformers in medical imaging. ### _Tuberculosis Datasets_ Since TB data are very private and it is difficult to diagnose TB with the golden standard, the publicly available TB datasets are very limited. We provide a summary for the publicly available TB datasets in Table I.

\begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline Datasets & Pub. Year & \#Classes & Annotations & \#Samples \\ \hline MC [21] & 2014 & 2 & Image-level & 138 \\ Shenzhen [21] & 2014 & 2 & Image-level & 662 \\ DA [6] & 2014 & 2 & Image-level & 156 \\ DB [6] & 2014 & 2 & Image-level & 150 \\ \hline TBX11K (Ours) & - & 4 & Bounding box & 11,200 \\ \hline \hline \end{tabular} \end{table} TABLE I: **Summary of publicly available TB datasets.** The size of our dataset is about 17\(\times\) larger than that of the previous largest dataset. Besides, our dataset annotates TB infection areas with bounding boxes, instead of only image-level labels.

Jaeger _et al._[21] established two CXR datasets for TB diagnosis. The Montgomery County chest X-ray set (MC) [21] was collected through cooperation with the Department of Health and Human Services, Montgomery County, Maryland, USA. The MC dataset consists of 138 CXR images, 80 of which are healthy cases and 58 are cases with manifestations of TB. The Shenzhen chest X-ray set (Shenzhen) [21] was collected through cooperation with Shenzhen No. 3 People's Hospital, Guangdong Medical College, Shenzhen, China. The Shenzhen dataset is composed of 326 normal cases and 336 cases with manifestations of TB, leading to 662 CXR images in total. Chauhan _et al._[6] built two datasets, namely DA and DB, which were obtained from two different X-ray machines at the National Institute of Tuberculosis and Respiratory Diseases, New Delhi. DA is composed of the training set (52 TB and 52 non-TB CXR images) and the independent test set (26 TB and 26 non-TB CXR images). DB contains 100 training CXR images (50 TB and 50 non-TB) and 50 test CXR images (25 TB and 25 non-TB). Note that all these four datasets are annotated with image-level labels for binary CXR image classification. These datasets are too small to train deep neural networks, so recent research on CTD has been hindered although deep learning has achieved numerous success stories in the computer vision community. On the other hand, the existing datasets only have image-level annotations, and thus we cannot train TB detectors with previous data. To help radiologists make accurate diagnoses, we are expected to detect the TB infection areas, not only an image-level classification. Therefore, the lack of TB data has prevented deep learning from bringing success to practical CTD systems that have the potential to save millions of TB patients every year. In this paper, we build a large-scale dataset with bounding box annotations for training deep neural networks for simultaneous CXR image classification and TB infection area detection. The presentation of this new dataset is expected to benefit future research on CTD and promote more practical CTD systems. ### _Computer-aided Tuberculosis Diagnosis_ Owing to the lack of data, traditional CTD methods cannot train deep neural networks. Thus, traditional methods mainly use hand-crafted features and train binary classifiers for CXR image classification. Jaeger _et al._[5] first segmented the lung region using a graph cut segmentation method [22]. Then, they extracted hand-crafted texture and shape features from the lung region. Finally, they applied a binary classifier, _i.e._, support vector machine (SVM), to classify the CXR image as normal or abnormal. Candemir _et al._[10] adopted image retrieval-based patient-specific adaptive lung models to a nonrigid registration-driven robust lung segmentation method, which would be helpful for traditional lung feature extraction [5].
Chauhan _et al._[6] implemented a MATLAB toolbox, TB-Xpredict, which adopted Gist [23] and PHOG [24] features for the discrimination between TB and non-TB CXR images without requiring segmentation [25, 26]. Karargyris _et al._[27] extracted shape features to describe the overall geometrical characteristics of lungs and texture features to represent image characteristics. Instead of using hand-crafted features, Lopes _et al._[9] adopted the frozen convolutional neural networks pre-trained on ImageNet [28] as the feature extractors for CXR images. Then, they train SVM to classify the extracted deep features. Hwang _et al._[7] trained an AlexNet [29] for binary classification (TB and non-TB) using a private dataset. Other private datasets are also used in [30] for image classification networks. However, our proposed large-scale dataset, _i.e._, TBX11K, has been made publicly available to promote research in this field. With our new dataset, we propose a transformer-based CTD method, SymFormer, for simultaneous CXR image classification and TB infection area detection, which serves as a strong baseline for future research on CTD by achieving state-of-the-art performance. ### _Vision Transformers in Medical Imaging_ Transformer [31] is initially introduced in natural language processing (NLP), and it has a good ability to capture long-range dependencies. Pioneering works on adapting transformers to vision tasks, such as ViT [32], DeiT [33], and P2T [34], showed that transformer networks can surpass the widely-used convolutional neural networks. Therefore, vision transformers attract increasing attention from the computer vision community, including medical imaging. Various efforts have been made to incorporate vision transformers into medical image segmentation [35, 36, 37, 38, 39, 40, 41, 42] and medical image classification [43, 44, 45, 46, 47, 48, 49, 50]. However, the adoption of transformer-based techniques for medical image detection lags behind that of segmentation and classification. Most studies utilizing vision transformers for medical image detection are primarily built on the detection transformer (DETR) framework [51]. The pioneering work in this field is COTR [52], comprising a convolutional neural network for feature extraction, hybrid convolution-in-transformer layers for feature encoding, transformer decoder layers for object querying, and a feed-forward network for polyp detection. Mathai _et al._[53] employed DETR [51] to detect lymph nodes in T2 MRI scans, which can be used to evaluate lymphoproliferative diseases. Li _et al._[54] proposed a Slice Attention Transformer (SATr) block to model the long-range dependency among different computed tomography (CT) slices, which can be plugged into convolution-based models for universal lesion detection. Please refer to recent survey papers [55, 56, 57] for a more comprehensive review of vision transformers in medical imaging. In this paper, we propose SymFormer for CTD using CXR images. SymFormer conducts simultaneous CXR image classification and TB infection area detection. It leverages SymAttention to tackle the _bilateral symmetry property_ of CXR images, which is further promoted by SPE. With SymAttention and SPE, SymFormer exhibits much better performance than recent popular object detector baselines, suggesting its superiority in CTD. ## 3 TBX11K Dataset Deep neural networks are highly dependent on large amounts of training data, while existing public TB datasets are not large-scale as shown in Table I. 
To address this issue, we establish a comprehensive and large-scale dataset called TBX11K, which enables the training of deep networks for CTD. In this section, we first describe how we collect and annotate the CXR data in SS3.1. Next, we present the results of a human study conducted by experienced radiologists in SS3.2. Finally, we discuss potential research topics that can be explored using our TBX11K dataset in SS3.3. ### _Data Collection and Annotation_ To collect and annotate the data, we adhere to four primary steps: i) establishing a taxonomy, ii) collecting CXR data, iii) professional data annotation, and iv) dataset splitting. We will introduce each of these steps in detail below. #### 3.1.1 Taxonomy Establishment The current TB datasets only consist of two categories: TB and non-TB, where non-TB refers to healthy cases. However, in practice, abnormalities in CXR images that indicate TB, atelectasis, cardiomegaly, effusion, infiltration, mass, nodule, _etc._, have similar abnormal patterns such as blurry and irregular lesions, which differ significantly from healthy CXR that have almost clear patterns. Therefore, relying solely on healthy CXR as the negative category leads to biases that can cause large false positives in the model's prediction for clinical scenarios where there are many sick but non-TB patients. To address this issue and promote the practical application of CTD, we propose a new category, sick but non-TB, in our dataset. Furthermore, differentiating between active TB and latent TB is crucial in providing patients with proper treatment. Active TB results from Mycobacterium TB infection or reactivation of latent TB, while individuals with latent TB are neither sick nor contagious. Therefore, we have divided TB into two categories of active TB and latent TB in our dataset. In light of the above analysis, the proposed TBX11K dataset includes four categories: healthy, sick but non-TB, active TB, and latent TB. #### 3.1.2 Data Collection The collection of TB CXR data presents two main challenges: i) The high privacy of CXR data, particularly TB CXR data, making it almost impossible for individuals to access the raw data without risking breaking the law; ii) The scarcity of definitively tested TB CXR images, due to the complex and lengthy process of examining Mycobacterium TB using the golden standard [11, 12], despite the millions of TB patients worldwide. To address these challenges, we collaborate with top hospitals in China to gather the CXR data. Our resulting TBX11K dataset comprises 11,200 CXR images, including 5,000 healthy cases, 5,000 sick but non-TB cases, and 1,200 TB cases. Each CXR image corresponds to a unique individual. Of the 1,200 TB CXR images, 924 are active TB cases, 212 are latent TB cases, 54 contain both active and latent TB, and 10 are uncertain cases whose TB types cannot currently be recognized. We include 5,000 sick but non-TB cases to cover a broad range of radiograph diseases that can appear in clinical scenarios. All CXR images are in a resolution of approximately \(3000\times 3000\), and each CXR image is accompanied by the corresponding gender and age information to provide comprehensive clinical information for TB diagnosis. The data providers have de-identified the data, and relevant government institutions have exempted the dataset, making it publicly available legally. 
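As a quick sanity check on the composition reported above, the counts can be tallied directly; the snippet below only restates the figures given in the text and verifies that they are self-consistent.

```python
# Category counts reported for TBX11K (11,200 CXR images in total).
composition = {"healthy": 5000, "sick_non_tb": 5000, "tb": 1200}
tb_breakdown = {"active": 924, "latent": 212, "active_and_latent": 54, "uncertain": 10}

assert sum(composition.values()) == 11200
assert sum(tb_breakdown.values()) == composition["tb"]  # 924 + 212 + 54 + 10 = 1200
```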
#### 3.1.3 Professional Data Annotation Our dataset comprises CXR images that have undergone rigorous testing using the golden standard, which provides image-level labels. However, while this approach enables us to categorize a CXR image as indicative of TB if the sputum of the corresponding patient shows manifestations of the disease, it does not reveal the specific location or extent of the TB in the CXR image. The ability to detect these TB infection areas is crucial to enable radiologists to make informed decisions. Currently, relying solely on image-level predictions makes it difficult for the human eye to identify TB infection areas, as evidenced by the low accuracy of radiologists during clinical examinations (see SS3.2). By simultaneously providing image classification and TB localization results, CTD systems have the potential to enhance the accuracy and efficiency of radiologists in making informed decisions. To achieve our goal, our TBX11K dataset includes bounding box annotations for TB infection areas in CXR images. To the best of our knowledge, this is the first dataset designed for TB infection area detection. These annotations are carried out by experienced radiologists from top hospitals. Specifically, each TB CXR image in the dataset is first labeled by a radiologist with 5-10 years of experience in TB diagnosis. Subsequently, another radiologist with over 10 years of experience in TB diagnosis reviews the box annotations. The radiologists do not just label bounding boxes for TB areas but also identify the type of TB (active or latent) for each box. To ensure consistency, the labeled TB types are double-checked against the image-level labels produced by the golden standard. In the event of a mismatch, the CXR image is placed in the unlabeled data for re-annotation, and the annotators do not know which CXR image was previously labeled incorrectly. If a CXR image is labeled incorrectly twice, we inform the annotators of the gold standard for that CXR image and request that they discuss how to re-annotate it. This double-checked process ensures that the annotated bounding boxes are highly reliable for TB infection area detection. Additionally, non-TB CXR images are only labeled with image-level labels produced by the golden standard. Examples of the TBX11K dataset are shown in Figure 4, and the distribution of TB bounding box areas is displayed in Figure 1, indicating that most TB bounding boxes are in the range of \((384^{2},960^{2}]\).

Fig. 1: **Distribution of the areas of TB bounding boxes in the TBX11K dataset. Each bin represents a specific range of bounding box areas. The left and right values of each bin correspond to its area range, and the height of the bin represents the number of TB bounding boxes within that range. It should be noted that the CXR images in TBX11K have a resolution of about \(3000\times 3000\).**

#### 3.1.4 Dataset Splitting We have partitioned the data into three subsets: training, validation, and test, following the split detailed in Table II. The ground truths for both the training and validation sets have been made public, whereas the ground truths for the test set remain confidential. This is because we have launched an online challenge using the test data on our website.
To ensure a more representative dataset, we have considered four distinct TB cases: i) CXR images with active TB only, ii) CXR images with latent TB only, iii) CXR images with both active and latent TB, and iv) CXR images with uncertain TB type that cannot be recognized under current medical conditions. For each TB case, we have maintained a ratio of \(3:1:2\) for the number of TB CXR images in the training, validation, and test sets. It is worth noting that the uncertain TB CXR images have been assigned to the test set, enabling researchers to evaluate class-agnostic TB detection using these 10 uncertain CXR images. We recommend that researchers train their models on the training set, tune hyper-parameters on the validation set, and report the model's performance on the test set after retraining using the union of the training and validation sets. This approach follows scientific experiment settings and is expected to yield reliable results. ### _Human Study by Radiologists_ The human study involving radiologists is a critical component in understanding the role of CTD in clinical settings. We begin by randomly selecting 400 CXR images from the test set of the new TBX11K dataset, which includes 140 healthy CXR images, 140 sick but non-TB CXR images, and 120 CXR images with TB. Of the 120 CXR images with TB, 63 show active TB, 41 show latent TB, 15 show both active and latent TB, and 1 shows uncertain TB. Next, we invite an experienced radiologist from a major hospital with over 10 years of work experience to label the CXR images according to four image-level categories: healthy, sick but non-TB, active TB, and latent TB. If a CXR image displays both active and latent TB manifestations, the radiologist assigns both labels. It is important to note that this radiologist is different from those who labeled the original dataset. The radiologist achieves an accuracy of only 68.7% when compared to the ground truth produced by the golden standard. If we ignore the differentiation between active and latent TB, the accuracy improves to 84.8%, but distinguishing between the types of TB is crucial for effective clinical treatment. This low performance highlights one of the major challenges in TB diagnosis, treatment, and prevention. Unlike natural color images, CXR images are grayscale and often have fuzzy and blurry patterns, making accurate recognition challenging. Unfortunately, diagnosing TB with the golden standard can take several months in a BSL-3 laboratory [11, 12], which is not feasible in many parts of the world. The challenge in TB diagnosis leads to TB becoming the second most common infectious disease worldwide after HIV. However, we will show in our upcoming study that deep-learning-based CTD models trained on the proposed TBX11K dataset can significantly outperform even experienced radiologists, offering hope for improved TB diagnosis and treatment. ### _Potential Research Topics_ Moving forward, we discuss some potential research topics related to CTD using our newly developed TBX11K dataset. **Simultaneous classification and detection.** Our TBX\(11\)K dataset opens up new possibilities for conducting research on CTD, including CXR image classification and TB infection area detection. Our test set includes a broad range of health and non-TB sick data, enabling the simulation of clinical data distribution for evaluating CTD systems. 
We believe that the development of simultaneous CXR image classification and TB infection area detection systems would be a challenging and fascinating research topic, with potential applications for assisting radiologists in TB diagnosis. Deploying such systems could ultimately improve the accuracy and efficiency of TB diagnosis and treatment. **Imbalanced data distribution.** In addition to the challenge of simultaneous detection and image classification, our TBX11K dataset also presents an imbalanced data distribution across different categories. However, we believe that this data imbalance is reflective of real-world clinical scenarios. When patients undergo chest examinations, they may be experiencing discomfort or illness, increasing the likelihood of getting sick, and our dataset captures this reality with only 44.6% of takers being healthy. TB is just one of many possible chest diseases, and our dataset reflects this reality with only 10.7% of takers being infected with TB, while 44.6% are sick but non-TB. Latent TB can result from two scenarios: exposure to active TB and conversion from active TB after treatment. Most cases of latent TB are caused by exposure to active TB. However, individuals with latent TB are not sick or contagious and are unlikely to seek medical attention, resulting in a higher number of active TB cases in our dataset than latent TB cases. This data imbalance presents a challenge for future CTD methods, which must be designed to overcome this problem in practice. For example, methods for training models on the imbalanced TBX11K training set will need to be developed to improve the accuracy of TB diagnosis. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline & Classes & Train & Val & Test & Total \\ \hline \multirow{2}{*}{Non-TB} & Healthy & 3,000 & 800 & 1,200 & 5,000 \\ & Sick \& Non-TB & 3,000 & 800 & 1,200 & 5,000 \\ \hline \multirow{4}{*}{TB} & Active TB & 473 & 157 & 294 & 924 \\ & Latent TB & 104 & 36 & 72 & 212 \\ & Active \& Latent TB & 23 & 7 & 24 & 54 \\ & Uncertain TB & 0 & 0 & 10 & 10 \\ \hline \multirow{2}{*}{} & Total & 6,600 & 1,800 & 2,800 & 11,200 \\ \cline{2-6} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \end{tabular} \end{table} TABLE II: **Split for the TBX11K dataset. “Active & \(\&\) Latent TB” refers to CXR images with both active and latent TB; “Active TB” refers to CXR images with only active TB; “Latent TB” refers to CXR images with only latent TB; “Uncertain TB” refers to TB CXR images where the type of TB infection cannot be recognized using current medical conditions.** **Incremental learning with private data.** Incremental learning is a machine learning technique that involves updating a model's parameters with new data as it becomes available, without requiring the model to be retrained from scratch. Given the high privacy concerns surrounding TB CXR data, researchers may possess private data that cannot be released. In such cases, it may be beneficial to use a model pre-trained on the TBX11K dataset as the base model. Researchers can then leverage incremental learning to fine-tune the pre-trained model using their private data, thereby enhancing the model's capacity for accurate CTD. Hence, investigating the potential of incremental learning for CTD using the newly developed TBX11K dataset would also be a crucial research direction. ## 4 Our SymFormer Framework In this section, we first present an overview of our SymFormer framework in SS4.1. 
Then, we describe our **Symmetric Abnormity Search (SAS)** method in SS4.2. SAS consists of two components: **Symmetric Positional Encoding (SPE)** in SS4.2.1, and **Symmetric Search Attention (SymAttention)** in SS4.2.2. Next, we introduce the TB diagnosis heads for SymFormer in SS4.3. Finally, we present the two-stage training diagram for simultaneous CXR image classification and TB infection area detection in SS4.4. ### _Overview_ We illustrate the overall pipeline of SymFormer in Fig. 2. SymFormer comprises three parts: feature extraction, symmetric abnormity search, and TB diagnosis heads. We will elaborate on each part below. **Feature extraction.** For the sake of convenience, we take ResNets [59] as an example backbone network for feature extraction due to its generality acknowledged by the community. When given a CXR image as input, the backbone network outputs features in four stages, which are scaled down by factors of \(1/4\), \(1/8\), \(1/16\), and \(1/32\), respectively, in comparison to the input size. As the sizes and shapes of TB infection areas vary widely, it is crucial to capture multi-scale features from the backbone network. In order to achieve this, a feature pyramid network (FPN) [58] is applied upon the backbone network, which generates a feature pyramid, _i.e._, feature maps at different scales. We denote the feature pyramid as \(\mathbf{F}=\{\mathbf{F}_{1},\mathbf{F}_{2},\mathbf{F}_{3},\mathbf{F}_{4}\}\) _w.r.t._ \(\mathbf{F}_{i}\in\mathbb{R}^{C\times\frac{H}{2^{i+1}}\times\frac{W}{2^{i+1}}}\) \((i\in\{1,2,3,4\})\), in which \(C\) is the feature dimension and \(H\) and \(W\) are the height and width of the input CXR image, respectively. The feature pyramid is effective at enabling TB infection detection across different feature levels. **Symmetric abnormity search.** The SAS module serves to enhance the extracted feature pyramid \(\mathbf{F}\). To achieve this, an SAS module is incorporated after each side output of FPN [58] to process each feature map \(\mathbf{F}_{i}\) in the feature pyramid \(\mathbf{F}\). The enhanced feature pyramid is expressed as \(\hat{\mathbf{F}}=\{\hat{\mathbf{F}}_{1},\hat{\mathbf{F}}_{2},\hat{\mathbf{F}}_{3},\hat{\mathbf{F}}_{4}\}\) _w.r.t._ \(\hat{\mathbf{F}}_{i}\in\mathbb{R}^{C\times\frac{H}{2^{i+1}}\times\frac{W}{2^{i+1}}}\) \((i\in\{1,2,3,4\})\). The SAS modules at various side outputs share the same weights to reduce the number of network parameters. According to the _bilateral symmetry property_, the bilaterally symmetric regions in a normal CXR image should look similar or identical. The SAS module leverages this insight by searching for symmetric positions in each position of the feature map to determine if it is normal. The SAS module consists of three components: SPE, SymAttention, and a feed-forward network. While the CXR image may not be strictly symmetric, the SPE is designed to recalibrate the features, which then benefits the SymAttention for symmetric-search-based feature enhancement. **TB diagnosis heads.** We connect two types of TB diagnosis heads to the feature pyramid \(\hat{\mathbf{F}}\), which is enhanced by the SAS module, for performing TB infection area detection and CXR image classification, respectively. Each feature map in the feature pyramid \(\hat{\mathbf{F}}\) is fed into the detection head, and each detected bounding box is expected to cover a TB infection area.
However, there is a risk of introducing false positives for non-TB CXR images during TB infection area detection, which leads to unnecessary costs for radiologists to check these false positives for clinical diagnosis. To address this issue, we feed the feature map \(\hat{\mathbf{F}}_{4}\) at the top level of the enhanced feature pyramid into a classification head to determine whether a CXR image contains TB or not. If a CXR image is classified as TB, radiologists can further examine the detected TB infection areas for a more accurate and detailed clinical diagnosis. If a CXR image is classified as non-TB, the detected areas need not be checked further by radiologists. ### _Symmetric Abnormity Search_ Bilateral symmetry is a property of CXR images where the structures on the left and right sides of the chest appear similar or identical. In other words, if a line is drawn down the center of the CXR image, the structures on either side of the line should be approximately the same size and shape. This property plays a crucial role in the interpretation of CXR images since it enables radiologists and clinicians to identify asymmetries or abnormalities in the lung fields. For example, the presence of a mass or consolidation on one side of the lung but not the other could indicate a problem in that area. However, it is worth noting that perfect bilateral symmetry is not always present in normal CXR images, depending on the patient's pose and position relative to the X-ray machine when the CXR image is taken. Fig. 2: **Illustration of the proposed SymFormer framework.** FPN [58] is applied to generate the feature pyramid. Our proposed method, SAS, leverages the bilateral symmetry property to enhance the feature representations of CXR images. As mentioned above, the lungs in CXR images may not be strictly symmetric. To account for this, SAS first incorporates SPE for feature recalibration. This recalibrated feature map is then used by SymAttention to search the symmetric adjacent area of each spatial position in the feature map, where the symmetric adjacent area refers to the adjacent area of the bilaterally symmetric position for a given position. SymAttention aggregates features in the symmetric adjacent area in an adaptive way through attention. The adjacent area is also determined in a learning way. By forcing each spatial position to look at the symmetric adjacent area, as suggested by the bilateral symmetry property, we can learn discriminative features for the CXR image for CTD. #### 4.2.1 Symmetric Positional Encoding To incorporate positional information into self-attention computations for a feature map, we must add positional encoding to the feature map. There are two types of positional encoding: absolute positional encoding and relative positional encoding [31, 32]. Our method, called SPE, is based on absolute positional encoding, and our experiments indicate that relative positional encoding is inferior to our SPE, as shown in SS6.4. The widely-used absolute positional encoding [31, 32] employs sine and cosine functions of different frequencies: \[\mathbf{P}[pos,2j]=\sin(pos/10000^{\frac{2j}{C}}),\qquad\mathbf{P}[pos,2j+1]=\cos(pos/10000^{\frac{2j}{C}}), \tag{1}\] where \(pos\) denotes the spatial position and \(j\) indexes the feature dimension. For each input feature map \(\mathbf{F}_{i}\) from the feature pyramid \(\mathbf{F}\), we use Eq. 1 to calculate the corresponding positional encoding \(\mathbf{P}_{i}\).
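For concreteness, a minimal PyTorch sketch of the encoding in Eq. 1 is given below. It enumerates the \(H\times W\) spatial positions of a \(C\times H\times W\) feature map as a single position index and then reshapes the result; this flattening is one simple reading of Eq. 1 and an assumption on our part, not necessarily the exact implementation used by the authors.

```python
import torch


def sinusoidal_encoding(num_positions: int, channels: int) -> torch.Tensor:
    """Eq. 1: P[pos, 2j] = sin(pos / 10000^(2j/C)), P[pos, 2j+1] = cos(pos / 10000^(2j/C))."""
    pe = torch.zeros(num_positions, channels)
    position = torch.arange(num_positions, dtype=torch.float32).unsqueeze(1)   # (pos, 1)
    div = torch.pow(10000.0, torch.arange(0, channels, 2, dtype=torch.float32) / channels)
    pe[:, 0::2] = torch.sin(position / div)
    pe[:, 1::2] = torch.cos(position / div)
    return pe


# Enumerate the H*W spatial positions and reshape, so that P_i can later be
# added elementwise to a (C, H, W) feature map F_i.
C, H, W = 256, 64, 64
P_i = sinusoidal_encoding(H * W, C).t().reshape(C, H, W)
```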
\(\mathbf{P}_{i}\) has the same shape as \(\mathbf{F}_{i}\) so that \(\mathbf{P}_{i}\) and \(\mathbf{F}_{i}\) can be summed. As mentioned earlier, CXR images may not strictly adhere to the bilateral symmetry property as they can have slight rotations and translations. The proposed SPE is designed to tackle this issue by feature recalibration. SPE first splits the positional encoding \(\mathbf{P}_{i}\) into two sides, _i.e._, \(\mathbf{P}_{i}^{\textit{left}}\) and \(\mathbf{P}_{i}^{\textit{right}}\), by drawing a line down the center of \(\mathbf{P}_{i}\). Then, we transfer \(\mathbf{P}_{i}^{\textit{right}}\) to the left side using spatial transformer networks (STN) [60] and horizontal flipping. Finally, we concatenate the transformed left-side positional encoding and \(\mathbf{P}_{i}^{\textit{right}}\) along the \(x\) dimension to form the output \(\mathbf{P}_{i}^{\textit{sum}}\). This process can be formulated as follows: \[\mathbf{P}_{i}^{\textit{trans}}=\mathrm{Flip}_{x}(\mathrm{STN}(\mathbf{P}_{i}^{\textit{right}};\Theta)),\qquad\mathbf{P}_{i}^{\textit{sum}}=\mathrm{Concat}_{x}(\mathbf{P}_{i}^{\textit{trans}},\mathbf{P}_{i}^{\textit{right}}), \tag{2}\] in which \(\Theta\) denotes the weights of STN, \(\mathrm{Flip}_{x}\) represents horizontal flipping, and \(\mathrm{Concat}_{x}\) stands for concatenation along the \(x\) dimension. In Eq. 2, \(\mathbf{P}_{i}^{\textit{right}}\) can be replaced with \(\mathbf{P}_{i}^{\textit{left}}\) by swapping the order of the inputs of \(\mathrm{Concat}_{x}\). However, our experiments in SS6.4 show that \(\mathbf{P}_{i}^{\textit{right}}\) performs slightly better than \(\mathbf{P}_{i}^{\textit{left}}\). For each input \(\mathbf{F}_{i}\) (\(i\in\{1,2,3,4\}\)), we compute the corresponding \(\mathbf{P}_{i}^{\textit{sum}}\) using Eq. 2. Using the SPE \(\mathbf{P}_{i}^{\textit{sum}}\), we recalibrate the input feature map through \[\mathbf{F}_{i}^{\textit{recall}}=\mathbf{F}_{i}+\mathbf{P}_{i}^{\textit{sum}}. \tag{3}\] The output \(\mathbf{F}_{i}^{\textit{recall}}\) will facilitate the calculation of the subsequent SymAttention. **Micro designs of STN.** The spatial transformation of STN [60] in Eq. 2 is conditional on the positional encoding itself. STN [60] feeds the one-side positional encoding into a small network to predict the affine matrix that is used for the affine transformation. The small network includes two alternating max-pooling and Conv-ReLU layers. Then, a flattening operation is carried out on the spatial dimension, followed by a multilayer perceptron (MLP) to predict the affine matrix. We initialize the MLP to ensure that the affine transformation with the initial affine matrix is equivalent to an identity mapping. #### 4.2.2 Symmetric Search Attention Self-attention has gained popularity in various fields due to its ability to learn relationships among elements within a sequence or image [31, 32]. In medical image analysis, self-attention has been applied to identify relevant features in images and enhance disease detection. However, classical self-attention performs global relationship modeling by calculating attention weights for each reference location, which fuses features from all locations. This approach may not be optimal for CTD with CXR images. Specifically, natural images can be captured in various scenarios and contain various objects and elements, so global relationship modeling is beneficial for the understanding of the entire scene.
However, CXR images only depict the human chest in a single scenario, and the difference among various CXR images is often limited to the presence of elusive abnormity regions. Therefore, global relationship modeling may be _redundant_ for CXR images, limiting the capacity of self-attention to learn relevant relationships for enhancing feature representation. This is because it is challenging for a neural network to automatically identify a few relevant locations out of thousands of redundant locations. For instance, in our experiments, we observe that the DETR detection framework [51] cannot converge when used to discriminate indistinguishable TB features in CTD. To tackle this challenge, we propose SymAttention, which leverages the bilateral symmetry property to aid self-attention in identifying relevant locations in CXR images. As previously mentioned, radiologists can diagnose TB by comparing the bilaterally symmetric locations of the two sides of the lungs. Consequently, the relevant locations for each reference location in CXR images are the bilaterally symmetric locations. Inspired by this, SymAttention searches for features in a symmetrical pattern across the left and right lungs, allowing each reference location to attend only to the locations around the bilaterally symmetric location of the reference location. Given the feature map \(\mathbf{F}_{i}^{\textit{recall}}\), we first select a small set of key sampling locations, following Deformable DETR [61]. Let \(K\) denote the number of selected locations, and \(M\) denote the number of heads in the self-attention calculation. The coordinate shifts of the selected locations can be learned by \[\Delta\mathbf{p}_{i}^{x}=\mathbf{W}_{x}^{\text{pos}}\mathbf{F}_{i}^{\textit{recall}},\qquad\Delta\mathbf{p}_{i}^{y}=\mathbf{W}_{y}^{\text{pos}}\mathbf{F}_{i}^{\textit{recall}}, \tag{4}\] in which \(\mathbf{W}_{x}^{\text{pos}},\mathbf{W}_{y}^{\text{pos}}\in\mathbb{R}^{(M\times K)\times C}\) are trainable parameter matrices. The attention \(\mathbf{A}_{i}\) and value \(\mathbf{F}_{i}^{v}\) are simply calculated using \[\begin{split}\mathbf{A}_{i}&=\mathrm{Softmax}(\mathrm{Reshape}(\mathbf{W}^{\text{att}}\mathbf{F}_{i}^{\textit{recall}})),\\ \mathbf{F}_{i}^{v}&=\mathbf{W}^{\text{value}}\mathbf{F}_{i}^{\textit{recall}},\end{split} \tag{5}\] where \(\mathbf{W}^{\text{att}}\in\mathbb{R}^{(M\times K)\times C}\), \(\mathbf{W}^{\text{value}}\in\mathbb{R}^{C\times C}\) are trainable parameter matrices and the softmax function is performed along the dimension of \(K\). Then, we reshape \(\mathbf{F}_{i}^{v}\) like \[\mathbf{F}_{i}^{v}\in\mathbb{R}^{C\times\frac{H}{2^{i+1}}\times\frac{W}{2^{i+1}}}\rightarrow\mathbf{F}_{i}^{v}\in\mathbb{R}^{M\times\frac{C}{M}\times\frac{H}{2^{i+1}}\times\frac{W}{2^{i+1}}}. \tag{6}\] Next, SymAttention can be formulated as \[\begin{split}\mathbf{F}_{i}^{\text{att}}=\mathrm{Concat}_{m=1}^{M}(\sum_{k=1}^{K}(&\mathbf{A}_{i}[m,k]\cdot\mathbf{F}_{i}^{v}[m,:,\mathbf{p}_{i}^{y}+\Delta\mathbf{p}_{i}^{y}[m,k],\\ &\frac{W}{2^{i+1}}-(\mathbf{p}_{i}^{x}+\Delta\mathbf{p}_{i}^{x}[m,k])+1])),\end{split} \tag{7}\] in which \(\mathrm{Concat}_{m=1}^{M}\) means to concatenate all the results generated by setting \(m\) from \(1\) to \(M\). The last spatial index in Eq. 7, \(\frac{W}{2^{i+1}}-(\mathbf{p}_{i}^{x}+\Delta\mathbf{p}_{i}^{x}[m,k])+1\), projects the sampled locations onto the bilaterally symmetric locations by taking the vertically centering line as the line of symmetry, which is the core of the proposed SymAttention.
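To make the symmetric sampling of Eq. 4-7 more tangible, the sketch below mirrors the sampled \(x\)-coordinates about the vertical centre line (the 0-indexed analogue of the index term in Eq. 7) and uses bilinear `grid_sample` for the fractional sampling locations. The residual connection and MLP that follow in the next step are omitted, the loops over heads and points are written for readability rather than efficiency, and layer choices (1\(\times\)1 convolutions as the linear projections) are our assumptions; this is an illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SymAttentionSketch(nn.Module):
    """Simplified core of Symmetric Search Attention (Eq. 4-7)."""

    def __init__(self, channels=256, heads=8, points=4):
        super().__init__()
        assert channels % heads == 0
        self.M, self.K = heads, points
        self.offset_x = nn.Conv2d(channels, heads * points, 1)  # W_x^pos
        self.offset_y = nn.Conv2d(channels, heads * points, 1)  # W_y^pos
        self.attn = nn.Conv2d(channels, heads * points, 1)      # W^att
        self.value = nn.Conv2d(channels, channels, 1)           # W^value

    def forward(self, f_recal):                                  # recalibrated map, (B, C, H, W)
        B, C, H, W = f_recal.shape
        M, K = self.M, self.K
        dx = self.offset_x(f_recal).view(B, M, K, H, W)          # Eq. 4: learned x-shifts
        dy = self.offset_y(f_recal).view(B, M, K, H, W)          # Eq. 4: learned y-shifts
        attn = self.attn(f_recal).view(B, M, K, H, W).softmax(dim=2)   # Eq. 5: softmax over K
        v = self.value(f_recal).view(B, M, C // M, H, W)          # Eq. 5-6: values per head

        # Reference grid of integer (x, y) coordinates for every spatial position.
        ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
        xs = xs.to(f_recal)
        ys = ys.to(f_recal)

        out = []
        for m in range(M):
            acc = 0.0
            for k in range(K):
                # Mirror the sampled x-coordinate about the vertical centre line (Eq. 7).
                sx = (W - 1) - (xs + dx[:, m, k])                 # (B, H, W)
                sy = ys + dy[:, m, k]
                # Normalise to [-1, 1] for grid_sample (x first, then y).
                grid = torch.stack([2 * sx / (W - 1) - 1,
                                    2 * sy / (H - 1) - 1], dim=-1)
                sampled = F.grid_sample(v[:, m], grid, align_corners=True)
                acc = acc + attn[:, m, k:k + 1] * sampled         # attention-weighted sum over K
            out.append(acc)
        return torch.cat(out, dim=1)                              # concat heads -> (B, C, H, W)
```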
Finally, to ease optimization, a residual connection is connected, followed by an MLP: \[\hat{\mathbf{F}}_{i}^{\text{att}}=\mathbf{W}^{\text{proj}}\mathbf{F}_{i}^{ \text{att}}+\mathbf{F}_{i},\qquad\hat{\mathbf{F}}_{i}=\mathrm{MLP}(\hat{ \mathbf{F}}_{i}^{\text{att}})+\hat{\mathbf{F}}_{i}^{\text{att}}, \tag{8}\] where we have \(\mathbf{W}^{\text{proj}}\in\mathbb{R}^{C\times C}\) and \(\hat{\mathbf{F}}_{i}\) is the enhanced output as in SS4.1. In Eq. 4 - Eq. 8, each reference location attends to a small set of key sampling locations around the bilaterally symmetric location of the reference location, rather than just the symmetric location. The key sampling locations are automatically set in a learning way. This ensures the receptive field when comparing the appearance of the left and right sides of the lungs. In our experiments, we set \(M=8\) and \(K=4\). Suppose \(N=\frac{H}{2^{i+1}}\times\frac{W}{2^{i+1}}\), and the computational complexity can be expressed as \(\mathcal{O}(NC^{2})\). Thus, SymAttention is very efficient and flexible for application to the feature pyramid \(\mathbf{F}\). ### _TB Diagnosis Heads_ In SS4.1, we mention that there are two TB diagnosis heads: the TB infection area detection head and the CXR image classification head. In this section, we introduce them in detail. The detection head is based on RetinaNet [62], a well-known one-stage object detector, consisting of two branches for bounding box classification and location regression. In contrast to object detection for natural images, where each bounding box covers an object, each bounding box in our system is designed to cover a TB infection area. The detection head learns to detect TB with _two categories_: active TB and latent TB. During clinical TB screening, most CXR cases do not have TB infections, making it easy for the detection head to introduce false positives. To tackle this challenge, we add a CXR image classification head to conduct simultaneous CXR image classification and TB infection area detection. We discard the detected TB areas if a CXR image is classified as non-TB. For simplicity, we stack several convolutions with pooling operations for the classification head. There are five sequential convolution layers, each with 512 output channels and ReLU activation. We then adopt global average pooling to obtain a global feature vector, followed by a fully connected layer with 3 output neurons for classification into _three categories_: healthy, sick but non-TB, and TB. ### _Two-stage Training Diagram_ Our SymFormer framework consists of two heads designed for CXR image classification and TB infection area detection, respectively. In clinical settings, the number of non-TB cases significantly outweighs the number of TB cases. Directly training the infection area detection head with non-TB cases would result in an excessive number of pure background supervisions. Therefore, simultaneous training of the classification and detection heads is suboptimal. Additionally, CXR images solely depict structures and organs in the chest, unlike natural images that have complex and diverse backgrounds. If we first train the backbone network and the classification head, the backbone network for feature extraction would become overfitted and would not generalize well to infection area detection. Furthermore, image classification mainly focuses on global features, while infection area detection requires fine-grained features for TB area localization. As a result, training image classification first is also suboptimal. 
Our proposed approach entails training the backbone network and the infection area detection head initially using only TB CXR images. Then, we employ all CXR images to train the classification head by freezing the backbone network and the detection head. This training strategy benefits from more specific bounding box annotations provided by the detection annotations, which mitigates the risk of overfitting. The fine-grained features learned through the infection area detection can also be easily transferred to CXR image classification. ## 5 Experimental Setup In this section, we first elaborate on the implementation details for the proposed SymFormer in SS5.1. Subsequently, we introduce several baseline models for CTD in SS5.2 and discuss the evaluation metrics used for CTD in SS5.3. ### _Implementation Details_ Our implementation of SymFormer is done using PyTorch [63] and the open-source mmdetection framework [64]. The training of the first stage uses TB CXR images in the TBX11K trainval (train + val) set, while the training of the second stage not only uses all TBX11K trainval CXR images but also the random half of the MC [21] and Shenzhen [21] datasets as well as the training sets of the DA [6] and DB [6] datasets. The other half of the MC [21] and Shenzhen [21] datasets as well as the test sets of the TBX11K, DA [6] and DB [6] datasets are used to evaluate the performance of CXR image classification. We set the number of FPN feature channels, denoted as \(C\), to 256, consistent with RetinaNet [62]. Other settings also follow those in RetinaNet. For both the training of the first and second stages, we use a batch size of 16 and train for 24 epochs. We employ the SGD optimizer with a momentum of 0.9 and a weight decay of 0.0001. The initial learning rate is set to 0.001, and we reduce it by a factor of 10 at the 16th and 22nd epochs. To augment the data, we use random flipping. We resize both the CXR images used for training and testing to \(512\times 512\). All experiments are conducted using 4 RTX 2080Ti GPUs. ### _Baseline Models_ As discussed in SS4.1, incorporating an image classification head can significantly reduce the false positives of detection in clinical TB screening. However, existing object detectors do not consider background images and often disregard images without bounding-box objects [65, 66, 67, 68, 62]. Using these detectors directly for CTD leads to numerous false positives due to the large number of non-TB CXR images in clinical practice. To address this issue, we introduce a classification head to enable simultaneous CXR image classification and TB infection area detection, where the CXR image classification results are used to filter out the false positives of detection. To achieve this, we reformulate several well-known object detectors, including SSD [65], RetinaNet [62], Faster R-CNN [67], FCOS [66], and Deformable DETR [61] for simultaneous CXR image classification and TB infection area detection. Specifically, we add the same image classification head as used in our SymFormer framework to these object detectors, after the final layer of their backbone networks, _i.e., conv5\(3\) for VGG16 [69] and _res5c_ for ResNet-50 [59]. The image classification head learns to classify CXR images into three categories: healthy, sick but non-TB, and TB, while the TB detection head learns to detect TB with two categories: active TB and latent TB. The training of existing detectors follows the two-stage training diagram described in SS4.4. 
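To summarize the recipe above in code, the sketch below wires the two-stage training diagram of SS4.4 to the optimizer settings of SS5.1, and includes the simple classification head of SS4.3. The `backbone`, `detection_head`, dataloaders, and `loss_fn` are placeholders, so this is a schematic of the schedule rather than the released mmdetection-based training script.

```python
import torch
import torch.nn as nn


def make_classification_head(in_channels=256, width=512, num_classes=3):
    """Five Conv-ReLU layers (512 channels), global average pooling, and a 3-way FC layer
    (healthy / sick-but-non-TB / TB). in_channels is 256 for the FPN feature, or 2048 if
    attached to a ResNet-50 res5c output as in the baseline detectors."""
    layers = []
    for i in range(5):
        layers += [nn.Conv2d(in_channels if i == 0 else width, width, 3, padding=1),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers, nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(width, num_classes))


def run_stage(modules, loader, loss_fn, epochs=24, lr=1e-3, milestones=(16, 22)):
    """Shared schedule: SGD (momentum 0.9, weight decay 1e-4), lr x0.1 at epochs 16 and 22."""
    params = [p for m in modules for p in m.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(params, lr=lr, momentum=0.9, weight_decay=1e-4)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, list(milestones), gamma=0.1)
    for _ in range(epochs):
        for batch in loader:              # placeholder dataloader (512x512 inputs, batch 16)
            loss = loss_fn(modules, batch)  # placeholder detection or classification loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()


# Stage 1: train the backbone and detection head using only TB CXR images.
# run_stage([backbone, detection_head], tb_only_loader, detection_loss)
#
# Stage 2: freeze the stage-1 weights and train only the classification head on all CXR images.
# for p in list(backbone.parameters()) + list(detection_head.parameters()):
#     p.requires_grad_(False)
# run_stage([classification_head], all_images_loader, classification_loss)
```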
### _Evaluation Metrics_ **CXR image classification.** We continue by introducing the evaluation metrics for the CTD task. In CXR image classification, the goal is to classify each CXR image into one of three categories: healthy, sick but non-TB, and TB. To assess the classification results, we utilize the following six evaluation metrics: * Accuracy: This metric measures the percentage of CXR images that are correctly classified across all three categories. * Area Under Curve (AUC): The AUC computes the area under the Receiver Operating Characteristic (ROC) curve. The ROC curve plots the true positive rate against the false positive rate for the TB class. * Sensitivity: Sensitivity quantifies the percentage of TB cases that are accurately identified as TB. It represents the recall for the TB class. * Specificity: Specificity determines the percentage of non-TB cases that are correctly identified as non-TB, encompassing both the healthy and sick but non-TB classes. It represents the recall for the non-TB class. * Average Precision (AP): AP calculates the precision for each class and takes the average across all classes. It provides an overall measure of precision. * Average Recall (AR): AR computes the recall for each class and averages the values across all classes. It provides an overall measure of recall. These six metrics enable the evaluation of the CXR image classification quality from various perspectives. **TB infection area detection.** For the evaluation of TB detection, we utilize the average precision of the bounding box metric (AP\({}^{\text{bb}}\)) proposed by the COCO benchmark [70]. AP\({}^{\text{bb}}\) is widely used as the primary detection metric in the vision community [66, 67, 34, 71]. The default AP\({}^{\text{bb}}\) is computed by averaging over IoU (intersection-over-union) thresholds ranging from 0.5 to 0.95 with a step size of 0.05. Additionally, we report AP\({}^{\text{bb}}_{50}\), which represents AP\({}^{\text{bb}}\) at an IoU threshold of 0.5. To provide insights into the detection performance for different types of TB, we present evaluation results separately for active TB and latent TB, excluding uncertain TB CXR images. We also report category-agnostic TB detection results, where the TB categories are disregarded, to describe the detection of all TB areas. In this case, uncertain TB CXR images are included. Furthermore, we introduce two evaluation modes: i) utilizing all CXR images in the TBX11K test set, and ii) considering only TB CXR images in the TBX11K test set. By employing these metrics, we can comprehensively analyze the performance of CTD systems from various useful perspectives. ## 6 Experimental Results In this section, we present the results for CXR image classification in SS6.1, followed by the results for TB infection area detection in SS6.2. Subsequently, we visualize detection results and the learned deep features in SS6.3. Lastly, we conduct ablation studies in SS6.4 to gain a better understanding of the proposed SymFormer. ### _CXR Image Classification_ We summarize the evaluation results for CXR image classification in Table III. All methods adopt pretraining models from ImageNet [28] for initialization. We report the results of the proposed SymFormer integrated with RetinaNet [62] and Deformable DETR [61] as the base methods. As can be observed, incorporating SymFormer into RetinaNet [62] and Deformable DETR [61] leads to significant performance improvements for RetinaNet and Deformable DETR, respectively. 
SymFormer with Deformable DETR [61] achieves the best performance across all metrics except for sensitivity, where Faster R-CNN [67] achieves the highest sensitivity rate of 91.2%. However, Faster R-CNN performs considerably worse than the proposed SymFormer in terms of other metrics. SymFormer with Deformable DETR achieves a specificity of 97.0%, indicating that 3 out of 100 non-TB CXR images will be misclassified as TB. The default model we employ is SymFormer with RetinaNet, which exhibits slightly lower performance than SymFormer with Deformable DETR but outperforms the latter by a significant margin in object detection, as demonstrated in SS6.2. Furthermore, in terms of accuracy, all methods greatly outperform radiologists who achieve an accuracy of 84.8% as in SS3.2. This emphasizes the promising potential of deep-learning-based CTD as a research field. Continued progress in this direction holds the promise of facilitating practical CTD systems that can assist millions of TB patients. ### _TB Infection Area Detection_ We proceed by presenting the results for TB infection area detection. As discussed in SS5.3, we report the performance for both the entire TBX11K test set and a subset consisting only of TB CXR images. Evaluating the performance using only TB CXR images allows for precise detection analysis since non-TB CXR images do not contain target TB infection areas. Conversely, evaluating using all CXR images incorporates the influence of false positives in non-TB CXR images. To ensure accurate evaluation using all CXR images, we _discard_ all predicted boxes in CXR images that are classified as non-TB by the CXR image classification head. However, it is important to note that this filtering process is not applicable when evaluating using only TB CXR images. The results for TB infection area detection are presented in Table IV. It is evident that both SymFormer with Deformable DETR and SymFormer with RetinaNet demonstrate significant improvements over their respective base methods, Deformable DETR [61] and RetinaNet [62]. Interestingly, SymFormer with RetinaNet outperforms SymFormer with Deformable DETR by a considerable margin, indicating that SymFormer is better suited for integration with the RetinaNet framework. As a result, we select SymFormer with RetinaNet as our default model for CTD. It is worth noting that all methods struggle with accurately detecting latent TB areas. However, the evaluation results for category-agnostic TB are better than those for active TB, indicating that many latent TB targets are correctly located but mistakenly classified as active TB. We attribute this to the limited number of latent TB CXR images in the TBX11K dataset, where only 212 CXR images depict latent TB compared to 924 CXR images depicting active TB. Therefore, future research should address this data imbalance issue and focus on improving the detection of latent TB areas. Furthermore, we observe that the performance in terms of AP\({}_{50}^{\text{bb}}\) is generally superior to that of AP\({}^{\text{bb}}\). This suggests that while detection models are capable of identifying the target regions, their localization accuracy is often not very precise. We argue that locating TB bounding box regions differs significantly from locating regions of natural objects. Even experienced radiologists find it challenging to precisely pinpoint TB regions. 
Consequently, AP\({}_{50}^{\text{bb}}\) is more crucial than AP\({}^{\text{bb}}\) since predicted boxes with an IoU of 0.5 with target TB areas are sufficient to assist radiologists in identifying TB infection areas. In Fig. 3, we present the precision-recall (PR) curves for the detection error analyses, focusing on category-agnostic TB detection. It is evident that all methods exhibit substantial improvements when transitioning from an IoU threshold of 0.75 to 0.5. This indicates that the performance of all methods is particularly challenged at higher IoU thresholds due to their limited object localization capabilities. Comparing the results obtained using all CXR images with those using only TB CXR images, we observe that the region labeled as "FN" (false negatives) is larger when evaluating \begin{table} \begin{tabular}{l|c|c c|c c|c c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Test Data} & \multirow{2}{*}{Backbones} & \multicolumn{2}{c|}{Category-agnostic TB} & \multicolumn{2}{c|}{Active TB} & \multicolumn{2}{c}{Latent TB} \\ \cline{5-8} & & & AP\({}_{50}^{\text{bb}}\) & AP\({}_{50}^{\text{bb}}\) & AP\({}_{50}^{\text{bb}}\) & AP\({}_{50}^{\text{bb}}\) & AP\({}_{50}^{\text{bb}}\) & AP\({}_{50}^{\text{bb}}\) & AP\({}^{\text{bb}}\) \\ \hline SSD [65] & & VGGNet-16 & 52.3 & 22.6 & 50.5 & 22.8 & 8.1 & 3.2 \\ RetinaNet [62] & & ResNet-50 w / FPN & 52.1 & 22.2 & 45.4 & 19.6 & 6.2 & 2.4 \\ Faster R-CNN [67] & & ResNet-50 w / FPN & 57.3 & 22.7 & 53.3 & 21.9 & 9.6 & 2.9 \\ FCOS [66] & ALL & ResNet-50 w / FPN & 46.6 & 18.9 & 40.3 & 16.8 & 6.2 & 2.1 \\ Deformable DETR [61] & & ResNet-50 w / FPN & 50.4 & 20.8 & 48.2 & 20.1 & 5.0 & 1.7 \\ SymFormer w/ Deformable DETR & & ResNet-50 w / FPN & 55.6 & 22.0 & 51.6 & 21.6 & 7.0 & 2.4 \\ SymFormer w/ RetinaNet & & ResNet-50 w / FPN & **67.4** & **28.8** & **62.0** & **26.4** & **12.2** & **3.7** \\ \hline SSD [65] & & VGGNet-16 & 68.3 & 28.7 & 63.7 & 28.0 & 10.7 & 4.0 \\ RetinaNet [62] & & ResNet-50 w / FPN & 69.4 & 28.3 & 61.5 & 25.3 & 10.2 & 4.1 \\ Faster R-CNN [67] & & ResNet-50 w / FPN & 63.4 & 24.6 & 58.7 & 23.7 & 9.6 & 2.8 \\ FCOS [66] & Only TB & ResNet-50 w / FPN & 56.3 & 22.5 & 47.9 & 19.8 & 7.4 & 2.4 \\ Deformable DETR [61] & & ResNet-50 w / FPN & 58.3 & 23.6 & 55.3 & 22.8 & 6.8 & 2.2 \\ SymFormer w/ Deformable DETR & & ResNet-50 w / FPN & 60.5 & 23.8 & 56.3 & 23.4 & 8.0 & 2.7 \\ SymFormer w/ RetinaNet & & ResNet-50 w / FPN & **73.6** & **30.9** & **67.9** & **28.4** & **14.9** & **4.4** \\ \hline \hline \end{tabular} \end{table} TABLE IV: TB infection area detection results on our TBX11K test set. The “Test Data” column specifies whether the evaluation was performed using all CXR images in the test set or only TB CXR images in the test set. The “Backbone” column indicates the specific backbone network used. \begin{table} \begin{tabular}{l|c|c|c|c|c|c|c} \hline \hline Methods & Backbones & Accuracy & AUC (TB) & Sensitivity & Specificity & Ave. Prec. (AP) & Ave. Rec. 
(AR) \\ \hline SSD [65] & VGGNet-16 & 84.7 & 93.0 & 78.1 & 89.4 & 82.1 & 83.8 \\ RetinaNet [62] & ResNet-50 w / FPN & 87.4 & 91.8 & 81.6 & 89.8 & 84.8 & 86.8 \\ Faster R-CNN [67] & ResNet-50 w / FPN & 89.7 & 93.6 & **91.2** & 89.9 & 87.7 & 90.5 \\ FCOS [66] & ResNet-50 w / FPN & 88.9 & 92.4 & 87.3 & 89.9 & 86.6 & 89.2 \\ Deformable DETR [61] & ResNet-50 w / FPN & 91.2 & 97.8 & 87.8 & 94.2 & 89.4 & 90.7 \\ \hline SymFormer w/ Deformable DETR & ResNet-50 w / FPN & **93.3** & **98.2** & 87.3 & **97.0** & **92.3** & **92.4** \\ SymFormer w/ RetinaNet & ResNet-50 w / FPN & 92.8 & 97.8 & 87.0 & 96.4 & 91.2 & 92.0 \\ \hline \hline \end{tabular} \end{table} TABLE III: CXR image classification results on our TBX11K test data. The “Backbone” column indicates the specific backbone network used. using all CXR images. This suggests that the filtering process based on image classification disregards many correctly detected TB areas, despite its effectiveness in improving overall detection performance. Importantly, the "FN" region for SymFormer is significantly smaller than that of other methods, highlighting its superior ability to detect fewer false negatives. Regardless of whether all CXR images or only TB CXR images are utilized, SymFormer consistently exhibits higher PR curves for IoU thresholds of 0.75, 0.5, and 0.1. By considering the results of both image classification and TB infection area detection, we can confidently conclude that the proposed SymFormer achieves state-of-the-art performance and serves as a strong baseline for future research in the field of CTD. ### _Visualization_ To gain insights into the learning process of deep neural networks on CXR images, we visualize the feature map of the RetinaNet [62] backbone at a scale of \(1/32\). To achieve this, we employ principal component analysis (PCA) to reduce the channels of the feature map to a single channel. The resulting single-channel map is then converted into a heat map for visualization purposes. The visualization of the learned features, along with the corresponding detection results, are presented in Fig. 4. Upon analysis, we observe that the visualization of healthy cases exhibits irregular feature patterns, indicating the absence of significant abnormalities. In contrast, the visualization of sick but non-TB cases displayed some discernible highlights, potentially representing the presence of lesions. For TB cases, the highlights in the visualization map align well with the annotated TB infection areas, thereby indicating the effectiveness of the proposed SymFormer in learning deep features for TB area detection. ### _Ablation Study_ In this part, we carry out ablation studies to investigate the effectiveness of the proposed modules. Specifically, we train the models using the training set of our TBX11K dataset and evaluate them on the validation set. The results are presented in Table V. The baseline model is RetinaNet Fig. 4: **Visualization of the learned deep features from CXR images using SymFormer w/ RetinaNet. We randomly select CXR images from the TBX11K test set, and for each class mentioned in Table II, we provide one example. In each example, the infection areas of active TB, latent TB, and uncertain TB are indicated by boxes colored in green, red, and blue, respectively. The ground-truth boxes are displayed with thick lines, while the detected boxes are shown with thin lines.** Fig. 3: **Error analyses of category-agnostic TB area detection using baseline models and SymFormer w/ RetinaNet. 
The first row is evaluated using all CXR images, while the second row only uses TB CXR images. C50/C75: PR curves under IoU thresholds of 0.5/0.75. Loc: the PR curve under the IoU threshold of 0.1. BG: removing background false positives. FN: removing other errors caused by undetected targets (false negatives). SymFormer largely outperforms other methods in all metrics, _e.g._, obtaining a remarkable 99% BG score when only using TB CXR images.** [62], which corresponds to the first model in Table V and does not incorporate any attention or positional encoding. The term "vanilla attention" refers to the deformable attention employed in Deformable DETR [61]. We utilize well-established implementations for both absolute positional encoding [31, 32] (as described in Eq. 1) and relative positional encoding [72]. As specified in Eq. 2, the default version of SPE transfers the right side of the positional encoding to the left side. Here, we also evaluate the performance when transferring the left side to the right side. Based on the discussions in SS6.2, the \(\mathrm{AP}_{50}^{\mathrm{bb}}\) metric is deemed sufficient for measuring the effectiveness of a model in assisting radiologists with identifying TB infection areas. As evident from Table V, relative positional encoding achieves inferior performance compared to absolute positional encoding, leading us to construct our SPE using absolute positional encoding. Besides, the addition of absolute positional encoding and any form of attention to RetinaNet [62] yields significant improvements in detection performance. Furthermore, across all types of positional encoding, our proposed SymAttention consistently outperforms deformable attention, showcasing its superiority in learning distinctive representations for CTD. Notably, even without STN, the proposed SPE consistently achieves superior performance compared to both absolute positional encoding and relative positional encoding. The inclusion of STN further enhances the performance of SPE, confirming its effectiveness. Therefore, our investigation into symmetric abnormality search in CTD has yielded successful results. In addition, we can observe that, for the symmetry of SPE, the transfer of positional encoding from right to left, as opposed to left to right, slightly outperforms. Thus, we transfer the positional encoding from right to left by default. ## 7 Conclusion Early diagnosis plays a crucial role in effectively treating and preventing tuberculosis (TB), a prevalent infectious disease worldwide. However, TB diagnosis remains a significant challenge, particularly in resource-constrained communities and developing countries. The conventional gold standard test for TB necessitates a BSL-3 laboratory and is a time-consuming process, taking several months to provide definitive results, making it impractical in many settings. Deep learning has shown promising advancements in various domains, prompting researchers to explore its potential in computer-aided TB diagnosis (CTD). Nonetheless, the lack of annotated data has hindered the progress of deep learning in this field. To address this limitation, we introduce TBX11K, a large-scale TB dataset with bounding box annotations. This dataset not only facilitates the training of deep neural networks for CTD but also serves as the first dataset specifically designed for TB detection. In addition to the dataset, we propose a simple yet effective framework called SymFormer for simultaneous CXR image classification and TB infection area detection. 
Leveraging the _bilateral symmetry property_ inherent in CXR images, SymFormer incorporates Symmetric Search Attention (SymAttention) to extract distinctive feature representations. Recognizing that CXR images may not exhibit strict symmetry, we introduce Symmetric Positional Encoding (SPE) to enhance the performance of SymAttention through feature recalibration. Furthermore, to provide a benchmark for CTD research, we introduce evaluation metrics, assess baseline models adapted from existing object detectors, and launch an online challenge. Our experiments demonstrate that SymFormer achieves state-of-the-art performance on the TBX11K dataset, positioning it as a strong baseline for future research endeavors. The introduction of the TBX11K dataset, the SymFormer method, and the CTD benchmark in this study are expected to significantly advance research in the field of CTD, ultimately contributing to improved detection and management of TB worldwide.
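As a pointer for reproducing the kind of feature-map visualization used in the experiments above (PCA-reducing the backbone feature map to a single channel and rendering it as a heat map), the following is a rough sketch under assumed array shapes; it is our own illustration and not the released SymFormer code.

```python
# Illustrative sketch: project a (C, H, W) backbone feature map onto its first
# principal component and display the result as a heat map. Shapes are assumed.
import numpy as np
import matplotlib.pyplot as plt

def feature_heatmap(feat):
    c, h, w = feat.shape
    x = feat.reshape(c, h * w).T              # one row per spatial location
    x = x - x.mean(axis=0, keepdims=True)     # center before PCA
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    pc1 = (x @ vt[0]).reshape(h, w)           # first principal component map
    return (pc1 - pc1.min()) / (np.ptp(pc1) + 1e-8)

feat = np.random.rand(256, 16, 16)            # stand-in for the 1/32-scale features
plt.imshow(feature_heatmap(feat), cmap="jet")
plt.axis("off")
plt.show()
```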
2310.10883
CAT(0) and cubulated Shephard groups
Shephard groups are common generalizations of Coxeter groups, Artin groups, and graph products of cyclic groups. Their definition is similar to that of a Coxeter group, but generators may have arbitrary order rather than strictly order 2. We extend a well-known result that Coxeter groups are $\mathrm{CAT}(0)$ to a class of Shephard groups that have "enough" finite parabolic subgroups. We also show that in this setting, if the associated Coxeter group is type (FC), then the Shephard group acts properly and cocompactly on a $\mathrm{CAT}(0)$ cube complex. As part of our proof of the former result, we introduce a new criterion for a complex made of $A_3$ simplices to be $\mathrm{CAT}(1)$.
Katherine Goldman
2023-10-16T23:32:25Z
http://arxiv.org/abs/2310.10883v1
# Cat(0) and Cubulated Shephard Groups ###### Abstract. Shephard groups are common generalizations of Coxeter groups, Artin groups, and graph products of cyclic groups. Their definition is similar to that of a Coxeter group, but generators may have arbitrary order rather than strictly order 2. We extend a well known result that Coxeter groups are CAT(0) to a class of Shephard groups that have "enough" finite parabolic subgroups. We also show that in this setting, if the associated Coxeter group is type (FC), then the Shephard group acts properly and cocompactly on a CAT(0) cube complex. As part of our proof of the former result, we introduce a new criteria for a complex made of \(A_{3}\) simplices to be CAT(1). ## 1. Introduction The classical notion of a _Shephard group_ first arose in G. C. Shephard's thesis [10]. In it, Shephard introduces the notion of a regular complex polytope and studies the symmetry groups of such an object, which have been termed Shephard groups after his work. It turns out that these symmetry groups are complex reflection groups (finite subgroups of \(GL_{n}(\mathbb{C})\) generated by complex linear reflections) and have a "Coxeter-like" presentation. This group presentation can be encoded in a "Coxeter-like" diagram, with certain restrictions on labels and shape of the diagram. In this paper, we study those groups defined by presentations identical to those of the classical Shephard groups, but without these same restrictions on the diagrams. Their precise definition is as follows. Let \(\Gamma\) be a simplicial graph with vertex set \(I\) and the following information. For each vertex \(i\in I\), we assign a number \(p_{i}\in\mathbb{Z}_{\geq 2}\cup\{\infty\}\). For each edge \(\{i,j\}\) of \(\Gamma\), we assign a number \(m_{ij}=m_{ji}\in\mathbb{Z}_{\geq 3}\cup\{\infty\}\). If \(\{i,j\}\) is not an edge, define \(m_{ij}=2\). If \(m_{ij}\) is odd, then we require that \(p_{i}=p_{j}\). Sometimes if \(p_{i}=2\) or \(m_{ij}=3\) we omit the label. We say this data defines an _extended Coxeter diagram_. Associated to such a diagram \(\Gamma\), we define the _Shephard group_\(G_{\Gamma}\) on generators \(S=\{\,s_{i}:i\in I\,\}\) with the following presentation: \[G_{\Gamma}=\left\langle S\ \middle|\begin{array}{c}\operatorname{prod}(s_{i},s_ {j};m_{ij})=\operatorname{prod}(s_{j},s_{i};m_{ij})\\ s_{i}^{p_{i}}=1\end{array}\right\rangle, \tag{1.1}\] where \(\operatorname{prod}(a,b;m)\) denotes \((ab)^{m/2}\) if \(m\) is even and \((ab)^{(m-1)/2}a\) if \(m\) is odd. (When \(p_{i}=\infty\) or \(m_{ij}=\infty\), the corresponding relation is omitted.) This is why we require \(p_{i}=p_{j}\) when \(m_{ij}\) is odd; \(s_{i}\) and \(s_{j}\) are conjugate, and thus if \(p_{i}\neq p_{j}\) the order of \(s_{i}\) and \(s_{j}\) would not necessarily be \(p_{i}\) or \(p_{j}\), respectively. Note that if \(\Gamma\) is disconnected, then \(G_{\Gamma}\) splits as a direct product over the connected components of \(\Gamma\). We point out some special cases to show the power of this general definition. 1. If all \(p_{i}=2\), then \(G_{\Gamma}\) is a Coxeter group. 2. If all \(p_{i}=\infty\), then \(G_{\Gamma}\) is an Artin group. 3. If all \(m_{ij}=2\) or \(\infty\) (in which case we may call \(G_{\Gamma}\) "right-angled"), then \(G_{\Gamma}\) is a graph product of cyclic groups. This is, however, not simply an empty generalization to compile the above groups into one succinct class. Many interesting infinite Shephard groups (and their quotients) appear in various places in mathematics. 
For example, they arise naturally in algebraic geometry as objects closely related to moduli spaces of arrangements in \(\mathbb{CP}^{2}\)[12]. We detail further examples in the following. **Example 1.1**.: In certain cases, Shephard groups have (typically proper) quotients to infinite affine and hyperbolic complex reflection groups; a particularly interesting class of examples comes from the Shephard groups \(G_{\Gamma}\) with diagram \(\Gamma\) given in Figure 1. When \(p=3\), \(G_{\Gamma}\) has a quotient to one of Popov's _affine complex reflection groups_[20]. It is demonstrated in [12] that this quotient (refered to as Refl(\(\widetilde{G_{4}}\)) in said article) possesses a quite elegant geometry: the complement of its reflection hyperplanes deformation retracts to a non-positively curved \(2\)-dimensional complex built out of \((3,3,3)\) triangles, whose link is a Mobius-Kantor graph. This link is isomorphic (and isometric) to the link of a complex we define for arbitrary Shephard groups--but this \((3,3,3)\) triangle complex for Refl(\(\widetilde{G_{4}}\)) is _not_ the same as our complex for \(G_{\Gamma}\). (It may be interesting to further study the exact relationship between these two complexes.) Further, when \(p=3,4\) or \(5\), \(G_{\Gamma}\) has quotients to the hyperbolic complex reflection groups upon which Mostow's famous examples of complex hyperbolic polyhedra in \(\mathbb{CH}^{2}\) and non-arithmetic lattices of \(\operatorname{PU}(2,1)\) are based [14]. Another interesting pair of examples comes from the following diagram, which we call \(B_{n}(p,q)\): This family of Shephard groups finds use in the dual Garside theory of Artin groups. The dual Garside theory has shown to be quite effective in making progress on the \(K(\pi,1)\) conjecture (e.g., in the recent proof of the conjecture for the affine-type Artin groups [10]). **Example 1.2**.: In [1], it is shown that the Artin group of type \(D_{n}\) embeds into the Shephard group of type \(B_{n}(2,\infty)\) as a finite-index subgroup. This is used to classify the so-called _Mikado braids_ (certain well-behaved words) of the \(D_{n}\) Artin group, and specifically, to show that the set of simple elements under its dual Garside structure are all Mikado braids. Figure 1. A class of infinite Shephard groups **Example 1.3**.: In [10], the Shephard group of type \(B_{n}(\infty,2)\) is called the "middle group" \(\textsc{Mid}(B_{n})\), and is defined explicitly as a certain semidirect product of translations and reflections in \(\mathbb{R}^{n}\). This group plays a key technical role in the proofs of the facts that, for a general affine-type Artin group, it is isomorphic to its dual Artin group, and this dual Artin group embeds in a Garside group. (It is these results that form part of the foundation of [11].) Much of our work relies on the close relationship of Shephard groups and Coxeter groups, and so we make the following definition. For any extended Coxeter diagram, we let \(W_{\Gamma}\) and \(A_{\Gamma}\) denote the Coxeter group and Artin group, resp., defined by the underlying Coxeter diagram of \(\Gamma\) (that is, the Coxeter diagram obtained from \(\Gamma\) by ignoring the vertex labels). 
We recall that these groups have the following presentations: \[A_{\Gamma} =\langle\ S\ |\ \mathrm{prod}(s_{i},s_{j};m_{ij})=\mathrm{prod}(s_{j},s_{i};m_{ij})\ \rangle\,,\] \[W_{\Gamma} =\langle\ S\ \big{|}\ (s_{i}s_{j})^{m_{ij}}=1,s_{i}^{2}=1\ \rangle\,.\] If the diagram \(\Gamma\) has all \(p_{i}=2\) (so \(G_{\Gamma}=W_{\Gamma}\)), we will sometimes call \(\Gamma\) "Coxeter" (without the "extended") and otherwise call \(\Gamma\) "non-Coxeter". But we emphasize that we declare that \(W_{\Gamma}\) is a Coxeter group regardless of the vertex labeling of \(\Gamma\). ### Description of results We now give a broad overview of the results of this paper. We heavily use the fact that \(G_{\Gamma}\) is finite for certain diagrams \(\Gamma\). If \(G_{\Gamma}\) is a finite group, we sometimes call the diagram \(\Gamma\) itself "finite". It is an interesting fact that there are finite (abstract) Shephard groups \(G_{\Gamma}\) which are not themselves Coxeter groups. See Table 4 for examples. The diagrams in this table come from the well-known classification of finite complex reflection groups. However, it is a priori unclear if there is some diagram \(\Gamma\), say, which is branched or a cycle, so that \(G_{\Gamma}\) is a finite group under this abstract presentation, but does not correspond to any complex reflection group. We confirm in Theorem 2.7 that this cannot happen; that is, the only finite abstract Shephard groups \(G_{\Gamma}\) are those arising from the complex reflection groups (specifically, those in Tables 3 and 4). We are able to classify the finite abstract Shephard groups, but the infinite Shephard groups remain rather mysterious. It is unclear if there is a unified topological interpretation of the Shephard groups, compared to the relationship between Coxeter groups and real reflection groups, and that between Artin groups and hyperplane complements. One of the main objectives of this article is to propose a geometric model for the infinite Shephard groups which, in many cases, can play the role of the Davis complex for a given Coxeter group. Similarly to the Davis complex, this model is built out of the finite "parabolic" subgroups of the Shephard group. To make this more precise, we make the following definitions. For an extended Coxeter diagram \(\Gamma\), a _subdiagram_ is a **full** subgraph \(\Gamma^{\prime}\) of \(\Gamma\) inheriting the vertex and edge labels of \(\Gamma\). (Recall that a subgraph is full if whenever two vertices of the subgraph are joined by an edge in the original graph, they remain joined by an edge in the subgraph.) If \(\Gamma^{\prime}\) is a subdiagram of \(\Gamma\), then \(G_{\Gamma^{\prime}}\) is also a Shephard group. The group \(G_{\Gamma^{\prime}}\) surjects onto the subgroup of \(G_{\Gamma}\) generated by the vertex set of \(\Gamma^{\prime}\) viewed within \(\Gamma\). To state this more precisely, and momentarily avoid the question of whether this is an isomorphism or not, we introduce the following notation: let \(S=\{\,s_{i}:i\in I\,\}\) denote the generating set of \(G_{\Gamma}\), and let \(J\subseteq I\) with \(T=\{\,s_{j}:j\in J\,\}\subseteq S\). Then define \(G_{T}=\langle T\rangle\) to be the subgroup of \(G_{\Gamma}\) generated by \(T\), and define \(G_{\Gamma(T)}\) to be the Shephard group defined by \(\Gamma(T)\), the full subgraph of \(\Gamma\) on vertex set \(J\). We now describe the main complex on which the Shephard groups act. Consider the following posets.
\[\mathcal{S}^{f}_{\Gamma}=\mathcal{S}^{f}=\{\,T\subseteq S:W_{\Gamma(T)}\text{ is finite}\,\},\] \[\mathcal{S}^{fs}_{\Gamma}=\mathcal{S}^{fs}=\{\,T\subseteq S:G_{\Gamma(T)}\text{ is finite}\,\}.\] The set \(\mathcal{S}^{f}\) plays a major role in the study of Coxeter groups as the basis of the fundamental domain of the Davis complex. Its direct analogue in a Shephard group is \(\mathcal{S}^{fs}\). We continue the analogy with Coxeter groups to define a complex (in Definition 3.3), which we denote \(\Theta=\Theta_{\Gamma}\), on which \(G_{\Gamma}\) acts properly and cocompactly. It would be too much to hope that this complex behaves nicely for all Shephard groups; after all, \(\Gamma\) could have all vertex labels \(\infty\), in which case \(\mathcal{S}^{fs}\) would be empty. But with certain restrictions on the number and type of finite subgroups, we can show that \(\Theta\) has a very nice geometry, and is, in fact, \(\mathrm{CAT}(0)\). Our first step in this direction is the following. **Theorem A**.: _Suppose \(\Gamma\) is an extended Coxeter diagram with \(\mathcal{S}^{f}_{\Gamma}=\mathcal{S}^{fs}_{\Gamma}\) such that \(\Gamma\) has no subdiagram of type "\(A_{4}(3)\)" (see Table 4). Then \(\Theta_{\Gamma}\) is \(\mathrm{CAT}(0)\), and hence \(G_{\Gamma}\) is \(\mathrm{CAT}(0)\) (i.e., acts properly and cocompactly on a \(\mathrm{CAT}(0)\) space by isometries)._ Some examples of extended Coxeter diagrams satisfying the hypothesis of Theorem A can be found in Table 1. Among the other nice properties of \(\mathrm{CAT}(0)\) groups, notable consequences of Theorem A for the applicable \(G_{\Gamma}\) are: **Corollary**.: Suppose \(\Gamma\) is an extended Coxeter diagram with \(\mathcal{S}^{f}_{\Gamma}=\mathcal{S}^{fs}_{\Gamma}\) which has no subdiagram of type \(A_{4}(3)\). Then the word problem and conjugacy problem for \(G_{\Gamma}\) are solvable, and \(G_{\Gamma}\) has quadratic Dehn function. Along the way, we extract the following combinatorial criterion of Charney introduced in [10] (based on [1]) to show that a specific 2-dimensional piecewise spherical complex is \(\mathrm{CAT}(1)\). See Definition 5.1 and Theorem 5.2 for more detailed descriptions. **Theorem B**.: _Suppose \(\Psi\) is a marked \(A_{3}\) simplicial complex. If \(\Psi\) is locally \(\mathrm{CAT}(1)\) with the link of the \(\pi/2\) vertices complete bipartite, and if the 4-cycles and 6-cycles in the 1-skeleton of \(\Psi\) seen in Figure 2 can be "filled in" to the subcomplexes in Figure 3, then \(\Psi\) is \(\mathrm{CAT}(1)\)._ Our second result is comparable to [1, Thm. 4.3.5]. There are certain Shephard groups where \(\Theta_{\Gamma}\) has a natural cubical structure: **Theorem C**.: _Suppose \(\Gamma\) is an extended Coxeter diagram so that the underlying Coxeter diagram is type (FC) and \(\mathcal{S}^{f}_{\Gamma}=\mathcal{S}^{fs}_{\Gamma}\). Then \(\Theta_{\Gamma}\) is a \(\mathrm{CAT}(0)\) cube complex, and hence \(G_{\Gamma}\) is (cocompactly) cubulated._ Recall that a Coxeter diagram is type (FC) if it satisfies the following condition: * A subdiagram \(\Gamma^{\prime}\) of \(\Gamma\) generates a finite Coxeter group if and only if \(\Gamma^{\prime}\) has no edge labeled \(\infty\). Thus an equivalent hypothesis for Theorem C is "\(\Gamma\) is an extended Coxeter diagram such that a subdiagram \(\Gamma^{\prime}\) of \(\Gamma\) generates a finite Shephard group \(G_{\Gamma^{\prime}}\) if and only if \(\Gamma^{\prime}\) has no edge labeled \(\infty\)."
We observe that Theorem C includes the right-angled Shephard groups (with finite vertex labels) as a special case. More examples can be found in Table 2. The inspiration for the definition of \(\Theta_{\Gamma}\) comes from [1], and the overarching themes are quite similar: after detailing the construction of the complexes for a given Shephard group and the metrics thereupon, we verify that links in these complexes are CAT(1). The metrics are defined identically to the Coxeter and Artin group cases--however, our technique for showing that these metrics are CAT(0) diverge greatly from [1], heavily utilizing the connection between the finite Shephard groups and regular complex polytopes. ### Organization of paper In Section 2, we establish the classification of finite Shephard groups and recall how they relate to the regular complex polytopes. In Section 3, we describe the complexes for Shephard groups which generalize the Coxeter and Deligne complexes of [1]. Section 4 is dedicated to proving Theorem C, which only uses the geometry and combinatorics of the regular complex polytopes. Sections 5 and 6 are dedicated to proving Theorem A; Section 5 focuses solely on proving Theorem B, while Section 6 completes the proof of Theorem A. As an aside, Appendix A compiles the various facts about the complex polytope for the "\(A_{3}(3)\)" Shephard group which are used in the proof of Theorem A. ## 2. Finite Shephard groups In this section, we collect relevant background information regarding the finite Shephard groups, in particular, their classification and connection to regular complex polytopes. ### The classification of finite Shephard groups As discussed in the introduction, the study of Shephard groups has tended to focus on their use as finite complex reflection groups. Of course, it is well known (and one of their defining features) that the (non-Coxeter) Shephard groups arising as complex reflection groups all have a presentation as in (1.1) with diagram found in Table 4. However, if one considers an abstract group with presentation (1.1) and arbitrary diagram \(\Gamma\), it is not clear if the converse holds; that is, if \(G_{\Gamma}\) is finite, then \(\Gamma\) appears in Tables 3 or 4. It turns out that this is in fact true, although not entirely trivial to show. To the author's knowledge, there is no published full proof of this fact (as previous work seems to only rely on the fact that the diagrams in Table 4 form a subset of the finite Shephard groups), so, we include a brief section on the proof that the diagrams in Tables 3 and 4 form the entire class of finite Shephard groups. The following definition is adapted from [10], who adapted it from the Hermitian forms typically used in the study of classical Shephard groups. **Definition 2.1**.: Let \(\Gamma\) be a Shephard group on vertex set \(I\) with all \(p_{i}\) finite. For an edge spanned by \(i,j\in I\), \(i\neq j\), define \[\alpha_{ij}=-\left(\frac{\cos(\pi/p_{i}-\pi/p_{j})+\cos(2\pi/m_{ij})}{2\sin( \pi/p_{i})\sin(\pi/p_{j})}\right)^{1/2}.\] We let \(\alpha_{ii}=1\). If \(i\) and \(j\) are not joined by an edge, then \(\alpha_{ij}=0\). We define a Hermitian form \(H=H_{\Gamma}=\langle\cdot,\cdot\rangle\) on \(\mathbb{C}^{I}\) with basis \(\{e_{i}:i\in I\,\}\) by \[\langle e_{i},e_{j}\rangle=\alpha_{ij}.\] The form \(H\) viewed as a matrix (which we still call \(H\) by abuse of notation) is analogous to the cosine matrix for a usual Coxeter group, and reduces to the cosine matrix if all \(p_{i}=2\). 
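As a quick check of this last claim (our own verification, not text from the paper): specializing Definition 2.1 to \(p_{i}=p_{j}=2\) gives, for any edge \(\{i,j\}\), \[\alpha_{ij}=-\left(\frac{\cos(\pi/2-\pi/2)+\cos(2\pi/m_{ij})}{2\sin(\pi/2)\sin(\pi/2)}\right)^{1/2}=-\left(\frac{1+\cos(2\pi/m_{ij})}{2}\right)^{1/2}=-\left|\cos(\pi/m_{ij})\right|=-\cos(\pi/m_{ij}),\] since \(\cos(\pi/m_{ij})\geq 0\) for \(m_{ij}\geq 2\); this is exactly the \((i,j)\) entry of the cosine matrix of \(W_{\Gamma}\), and non-adjacent vertices give \(0=-\cos(\pi/2)\) in both conventions.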
We can then define a representation \(\rho=\rho_{\Gamma}\) of \(G_{\Gamma}\) by sending the generator \(s_{i}\) to the complex reflection \[R_{i}(z)=z+(\zeta_{p_{i}}-1)\langle z,e_{i}\rangle e_{i},\] where \(\zeta_{p_{i}}=\exp(2\pi\sqrt{-1}/p_{i})\) is the usual primitive \(p_{i}\)-th root of unity. (This reduces to the complexification of the usual "canonical representation" of a Coxeter group if \(\Gamma\) is Coxeter.) Our main tool is the following observation: **Proposition 2.2**.: _If the Shephard group \(G_{\Gamma}\) is finite, then the Hermitian form \(H_{\Gamma}\) is positive definite._ The proof is identical to the corresponding statement for Coxeter groups. We provide the proof here for completeness. First, we need a lemma. **Lemma 2.3**.: (cf. [1, Ch. V, SS4.6 Thm. 7]) _Suppose \(\Gamma\) is connected. Let \(H^{0}=H_{\Gamma}^{0}\) denote the kernel of \(H=H_{\Gamma}\), i.e.,_ \[H^{0}=\{\,x\in\mathbb{C}^{I}:H(x,y)=0\text{ for each }y\in\mathbb{C}^{I}\,\}.\] _Then \(G_{\Gamma}\) acts trivially on \(H^{0}\) and every proper \(G_{\Gamma}\)-invariant subspace is contained in \(H^{0}\)._ Proof.: By the definition of the maps \(R_{i}\), if \(z\in H^{0}\) then \(R_{i}(z)=z\) for all \(i\), and hence \(H^{0}\) is fixed by \(G_{\Gamma}\). Let \(E\) be a proper subspace of \(\mathbb{C}^{I}\) fixed by \(G_{\Gamma}\). Suppose some \(e_{i}\) is contained in \(E\). Let \(j\) be a vertex of \(\Gamma\) joined to \(i\) by an edge. By direct computation we see that the coefficient of \(e_{j}\) in \(R_{j}(e_{i})\) is nonzero, and hence \(e_{j}\) is also contained in \(E\). Since \(\Gamma\) is connected, it follows that every basis vector is contained in \(E\), contradicting our assumption that \(E\) is proper. Thus \(E\) contains no basis vector. Consider the action of some \(R_{i}\) on \(\mathbb{C}^{I}\). By basic linear algebra, \(\mathbb{C}^{I}\) decomposes as the \(+1\) and \(\zeta_{p_{i}}\) eigenspaces of \(R_{i}\), which are \(e_{i}^{\perp}\) (the subspace orthogonal to \(e_{i}\) under \(H\)) and \(\mathbb{C}e_{i}\), respectively. Since \(e_{i}\not\in E\) by assumption, we must have \(E\subseteq e_{i}^{\perp}\). Thus \(E\subseteq\bigcap e_{i}^{\perp}=H^{0}\). We can now complete the proof of Proposition 2.2. Proof (of Proposition 2.2).: Without loss of generality, we may assume \(\Gamma\) is connected (otherwise \(H_{\Gamma}\) is positive definite if and only if the Hermitian form for each component is positive definite). First we note that by classical representation theory1, \(\rho\) is semisimple (i.e., if a subspace of \(\mathbb{C}^{I}\) is \(G\)-invariant under \(\rho\) then it's a direct summand of \(\mathbb{C}^{I}\)) and fixes some inner product (i.e., a positive definite Hermitian form). Since this representation is semisimple, Lemma 2.3 implies \(H\) is non-degenerate; otherwise, we would have that \(H^{0}\neq 0\) is a proper non-trivial subspace fixed by \(G\), but by the Lemma, there can be no complementary \(G\)-invariant subspace of \(H^{0}\), contradicting the fact that \(\rho\) is semisimple. Thus \(H\) is nondegenerate, i.e., has \(H^{0}=0\), and thus by Lemma 2.3, we know that the representation \(\rho\) is irreducible. Since \(\rho\) is irreducible, it follows from Schur's lemma that \(H\) is a scalar multiple of the aforementioned invariant inner product. Since we've required \(H(e_{i},e_{i})=1\) for all \(i\), it follows that \(H\) is positive definite. 
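To make Definition 2.1 and Proposition 2.2 concrete, here is a small numerical sketch (our own illustration; the encoding of diagrams and the helper name are assumptions, not from the paper). It assembles \(H_{\Gamma}\) for the finite diagram \(B_{2}(4,2)=4[4]2\) and for the branched diagram \(D_{4}(3)\) treated in Lemma 2.6 below, and inspects the spectrum.

```python
# Illustrative sketch of the Hermitian form H_Gamma from Definition 2.1.
# p[i] is the vertex label; m[i][j] is the edge label (2 when i, j are not
# joined by an edge; diagonal entries of m are ignored).
import numpy as np

def hermitian_form(p, m):
    n = len(p)
    H = np.eye(n)
    for i in range(n):
        for j in range(n):
            if i != j and m[i][j] != 2:
                num = np.cos(np.pi/p[i] - np.pi/p[j]) + np.cos(2*np.pi/m[i][j])
                den = 2*np.sin(np.pi/p[i])*np.sin(np.pi/p[j])
                H[i, j] = -np.sqrt(num/den)
    return H

# B_2(4,2) = 4[4]2 is a finite Shephard group, so H should be positive definite.
H_fin = hermitian_form(p=[4, 2], m=[[0, 4], [4, 0]])
print(np.linalg.eigvalsh(H_fin))   # both eigenvalues positive

# D_4(3): a central vertex joined to three leaves by edges labelled 3, all p_i = 3.
m_D4 = [[0, 3, 3, 3], [3, 0, 2, 2], [3, 2, 0, 2], [3, 2, 2, 0]]
H_D4 = hermitian_form(p=[3, 3, 3, 3], m=m_D4)
print(np.linalg.det(H_D4))         # ~0, matching the determinant in Lemma 2.6 at p = 3
```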
Our main use for this proposition is via its contrapositive, namely that if \(H_{\Gamma}\) is not positive definite, then \(G_{\Gamma}\) is infinite. The following is the other key technical fact. **Lemma 2.4**.: _Let \(p,q\geq 2\) and \(m\geq 3\) be integers with \(p=q\) whenever \(m\) is odd. Define_ \[c =-\cos(\pi/m),\] \[\alpha =-\left(\frac{\cos(\pi/p-\pi/q)+\cos(2\pi/m)}{2\sin(\pi/p)\sin( \pi/q)}\right)^{1/2}.\] _Then \(\alpha\leq c\)._ Proof.: We'll consider two cases. First, suppose \(\alpha\leq-1\). Since cosine is bounded above by \(1\), this means \[\alpha\leq-1\leq-\cos(\pi/m)=c.\] Now suppose \(\alpha>-1\). Since \(\alpha\leq 0\), this implies \(\alpha^{2}<1\). By a straightforward trigonometric exercise, this happens if and only if \(1/p+1/q+2/m>1\). There are only finitely many such triples \((p,m,q)\) when \(m\neq 4\) (since \(p=q\) when \(m=3\)). These cases are easily checked by direct computation. When \(m=4\), this relation holds only when either \(p=q=3\), or \(q=2\) and \(p\geq 2\) (or vice versa). If \(p=q=3\) then the computation is immediate. If \(q=2\) and \(p\geq 2\), we see that \[\alpha =-\left(\frac{\cos(\pi/p-\pi/2)+\cos(2\pi/4)}{2\sin(\pi/p)\sin( \pi/2)}\right)^{1/2}\] \[=-\left(\frac{\sin(\pi/p)}{2\sin(\pi/p)}\right)^{1/2}\] \[=-2^{-1/2}\] \[=-\cos(\pi/4)=c.\qed\] **Proposition 2.5**.: _Let \(\Gamma\) be an extended Coxeter diagram. If the Coxeter group \(W_{\Gamma}\) is not finite, then neither is \(G_{\Gamma}\). (In other words, \(\mathcal{S}^{fs}\subseteq\mathcal{S}^{f}\).)_ Proof.: If \(W_{\Gamma}\) is not finite, then the cosine matrix \(C=C_{\Gamma}=(c_{ij})\), where \(c_{ij}=-\cos(\pi/m_{ij})\), \(c_{ii}=1\) is not positive definite [1, SS4.8 Thm. 2]. This means there exists a non-zero vector \(x\in\mathbb{C}^{n}\) such that \[x^{*}Cx\leq 0.\] Expanding the product gives us \[\sum_{i}c_{ij}\overline{x_{i}}x_{j}\leq 0.\] By Lemma 2.4, in combination with the requirements that \(\alpha_{ij}=0\) when \(i\) and \(j\) don't span an edge of \(\Gamma\) and \(\alpha_{ii}=1\), we know that \(\alpha_{ij}\leq c_{ij}\) for all \(i,j\). Therefore, \[x^{*}Hx=\sum\alpha_{ij}\overline{x_{i}}x_{j}\leq\sum c_{ij}\overline{x_{i}}x_{ j}\leq 0.\] Hence \(H=H_{\Gamma}\) is not positive definite, so \(G_{\Gamma}\) is not finite. This narrows down our search to adding vertex labels to the diagrams in Table 3. The following Lemma rules out the branched Coxeter diagrams. **Lemma 2.6**.: _Let \(D_{4}(p)\) denote the following extended Coxeter diagram:_ _If \(p>2\), then the corresponding Hermitian form \(H_{D_{4}(p)}\) is not positive definite. In particular, any Shephard group with a subdiagram of this form is not finite._ Proof.: Let \(H=H_{D_{4}(p)}\). Suppose the vertex set of \(D_{4}(p)\) is \(\{1,2,3,4\}\), with \(1\) being the "central" (valence 3) vertex. We compute for \(j\neq 1\) that \[\alpha_{1j}=\alpha_{j1} =-\left(\frac{\cos(\pi/p-\pi/p)+\cos(2\pi/3)}{2\sin(\pi/p)\sin( \pi/p)}\right)^{1/2}\] \[=-\left(\frac{1+\cos(2\pi/3)}{2\sin^{2}(\pi/p)}\right)^{1/2}\] \[=-\left(\frac{1-\frac{1}{2}}{2\sin^{2}(\pi/p)}\right)^{1/2}\] \[=-\left(\frac{1}{4\sin^{2}(\pi/p)}\right)^{1/2}\] \[=\frac{1}{2\sin(\pi/p)}.\] By the definition of \(H\), for all \(i\), we have \(\alpha_{ii}=1\), and if \(i\neq j\) and neither \(i\) nor \(j\) are \(1\), then \(\alpha_{ij}=0\). 
Thus the matrix representing \(H\) is \[\begin{pmatrix}1&(2\sin(\pi/p))^{-1}&(2\sin(\pi/p))^{-1}&(2\sin(\pi/p))^{-1}\\ (2\sin(\pi/p))^{-1}&1&0&0\\ (2\sin(\pi/p))^{-1}&0&1&0\\ (2\sin(\pi/p))^{-1}&0&0&1\end{pmatrix}.\] The determinant of this matrix is \[1-\frac{3}{4\sin^{2}(\pi/p)}\] Notice that this function is decreasing in \(p\). When \(p=3\), this determinant is \(0\). Hence it is non-positive for all \(p\geq 3\), and thus \(H\) is not positive definite when \(p\geq 3\). It follows that any Shephard group with a \(D_{4}(p)\) subdiagram has Hermitian form which is not positive definite, so the group is not finite. **Theorem 2.7**.: _A connected extended Coxeter diagram \(\Gamma\) produces a finite Shephard group \(G_{\Gamma}\) if and only if it appears in Tables 3 or 4._ Proof.: The result for Coxeter diagrams is well known [1]. So, suppose \(\Gamma\) is non-Coxeter (i.e., has at least one vertex with label \(>2\)). If \(\Gamma\) is a straight line, then by [13, SS13], \(G_{\Gamma}\) is finite if and only if it appears in Table 4. If \(\Gamma\) is not a straight line, then, by Proposition 2.5, it must have underlying Coxeter diagram of type \(D_{n}\), \(n\geq 4\), or \(E_{m}\), \(m=6,7,8\). Since the label of vertices connected by an odd-labeled edge must agree, we have that every vertex label of \(\Gamma\) is \(p\) for some \(p\geq 2\). Since we've assumed \(\Gamma\) is non-Coxeter, we know \(p\geq 3\). But then \(\Gamma\) must contain a subdiagram of the form \(D_{4}(p)\) with \(p\geq 3\); by Lemma 2.6, this forces the group to be infinite. \begin{table} \begin{tabular}{l|c c} & Shephard-Todd & Coxeter \\ \hline \(B_{n}(p,2)\) & \(G(p,1,n)\) & \(p[4]2[3]...[3]2\) \\ \(B_{3}(2,3)\) & \(G_{26}\) & \(2[4]3[3]3\) \\ \(A_{3}(3)\) & \(G_{25}\) & \(3[3]3[3]3\) \\ \(A_{4}(3)\) & \(G_{32}\) & \(3[3]3[3]3[3]3\) \\ \(I_{2}(p,m,q)\) & various & \(p[m]q\) \\ \end{tabular} \end{table} Table 4. The non-Coxeter finite extended Coxeter diagrams \begin{table} \begin{tabular}{l|c c} & Shephard-Todd & Coxeter \\ \hline \(B_{n}(p,2)\) & \(G(p,1,n)\) & \(p[4]2[3]...[3]2\) \\ \(B_{3}(2,3)\) & \(G_{26}\) & \(2[4]3[3]3\) \\ \(A_{3}(3)\) & \(G_{25}\) & \(3[3]3[3]3\) \\ \(A_{4}(3)\) & \(G_{32}\) & \(3[3]3[3]3[3]3\) \\ \(I_{2}(p,m,q)\) & various & \(p[m]q\) \\ \end{tabular} \end{table} Table 5. Comparison of notation ### Connection to complex polytopes We end this section by noting the deep relationship between the so-called regular complex polytopes and the (finite) Shephard groups, as this forms the basis for most of our arguments. First, we recall the definitions. In many ways they are identical to (reasonable) definitions of regular polytopes in \(\mathbb{R}^{d}\), with the only substantial and necessary differences made to accomodate the fact that complex "lines" have real dimension \(2\). A standard reference for this material is [10]. We introduce common terminology. Let \(\mathcal{P}\) be any collection of affine subspaces of \(\mathbb{C}^{d}\) (which may include the "empty subspace" \(\varnothing\) and the whole space). An _\(n\)-face_ of \(\mathcal{P}\) is an \(n\)-dimensional element of \(\mathcal{P}\) (by convention, \(\dim(\varnothing)=-1\)). Sometimes we call \(0\)-faces the vertices of \(\mathcal{P}\) and \(1\)-faces the edges of \(\mathcal{P}\). We say two faces are incident if they are distinct and one contains the other as a subspace. 
If \(F_{1}\subseteq F_{2}\) are incident faces and \(\dim(F_{1})\leq\dim(F_{2})-2\), then the collection of all \(F^{\prime}\) with \(F_{1}\subseteq F^{\prime}\subseteq F_{2}\) is called the _medial figure_ of \(F_{1}\) and \(F_{2}\). A symmetry of \(\mathcal{P}\) is a unitary map of \(\mathbb{C}^{d}\) which fixes the union of the elements of \(\mathcal{P}\) as a set. The collection of all such symmetries is the _symmetry group_ of \(\mathcal{P}\). Note that a symmetry necessarily preserves incidence. **Definition 2.8**.: We call a collection \(\mathcal{P}\) of affine subspaces of \(\mathbb{C}^{d}\) a _regular complex (\(n\)-)polytope_ if it satisfies the following axioms. 1. \(\varnothing\in\mathcal{P}\). 2. There is a unique \(n\)-face of \(\mathcal{P}\) and all elements of \(\mathcal{P}\) are contained in this \(n\)-face. 3. Every medial figure contains at least two faces of each (appropriate) dimension. 4. Every medial figure of an \(i\)-face with a \(j\)-face, where \(i\leq j-3\), is connected. 5. Its symmetry group acts simply transitively on the set of maximal chains (linearly ordered sets) of faces. A standard result is the following. **Theorem 2.9**.: _If \(\mathcal{P}\) is a regular complex polytope, then its symmetry group is a finite Shephard group with diagram a straight line (meaning the valence of exactly two of the vertices is \(1\) and all other vertices have valence \(2\)). Conversely, if \(\Gamma\) is a straight-line extended Coxeter diagram then \(G_{\Gamma}\) is the symmetry group of a regular complex polytope._ This is detailed in [10, §12.1]. This is not a \(1\)-\(1\) correspondence, as there are "starry" and "non-starry" polytopes that can be constructed for a given Shephard group. From here on, when we say "regular complex polytope" we always mean the non-starry polytope. With this restriction, the above does give a \(1\)-\(1\) correspondence (modulo taking dual polytopes). We highlight specifically how a Shephard group acts on its corresponding polytope. Let \(\Gamma\) be a connected extended Coxeter diagram which is linear and has vertices named \(1,\ldots,n\) from left to right (or right to left) with \(\{s_{i}:i=1,\ldots,n\,\}\) the generators of \(G_{\Gamma}\). We fix an arbitrary maximal chain \(F_{-1}<F_{0}<F_{1}<\cdots<F_{n}<F_{n+1}\) (where \(F_{-1}=\varnothing\) and \(F_{n+1}\) is the top-dimensional face of \(\mathcal{P}\)) sometimes denoted by \(\mathcal{F}_{0}\) and called the _base chain_2. A generator \(s_{i}\) acts by fixing the elements \(F_{j}\) for \(j\neq i\), and cyclically permuting the \(i\)-faces which are incident to each \(F_{j}\), \(j\neq i\). Moreover, the stabilizer of the face \(F_{i}\) is the subgroup of \(G\) generated by \(\widehat{s_{i}}\coloneqq S\setminus\{s_{i}\}\). The following clarifies the properties of this action further. **Lemma 2.10**.: _[_12_, Lemma 12]_ _Let \(F_{i_{1}}\subseteq F_{i_{2}}\subseteq\cdots\subseteq F_{i_{k}}\) be a subchain of \(\mathcal{F}_{0}\), with \(T=\{s_{i_{1}},\ldots,s_{i_{k}}\}\). The stabilizer of this chain is \(G_{\widehat{T}}\), where \(\widehat{T}=S\setminus T\)._ We will call a \(G\)-translate of such a subchain a "chain of type \(\widehat{T}\)", since the stabilizer is a conjugate of the subgroup of \(G\) generated by \(\widehat{T}\). The following clarifies that these stabilizers are themselves Shephard groups generated by subdiagrams of \(\Gamma\). **Proposition 2.11**.: _Let \(\Gamma\) be a finite extended Coxeter diagram on vertex set \(I\) with \(S=\{\,s_{i}:i\in I\,\}\).
Let \(T\subseteq S\). Then \(G_{T}\cong G_{\Gamma(T)}\) via the natural map discussed in the introduction._ Proof.: We give a rough outline of the argument which is implicit throughout [12]. If \(\Gamma\) is Coxeter the result is well known, so suppose it is one of the diagrams in Table 4. Let \(\Gamma^{\prime}\) be a connected subdiagram. Name the vertices of \(\Gamma\) left to right as \(1,\ldots,n\), and let \(T=\{s_{i},s_{i+1},\ldots,s_{j}\}\) for some \(i\leq j\). Let \(\mathcal{P}_{i,j}\) denote the medial figure of \(F_{i-1}\) and \(F_{j+1}\). A medial figure of a regular complex polytope from an \((i-1)\)-face to a \((j+1)\)-face is itself a regular complex polytope. Moreover, the type3 of this medial figure is the subdiagram of \(\Gamma\) from \(i\) to \(j\), which in our case is \(\Gamma(T)\). This implies that the symmetry group of \(\mathcal{P}_{i,j}\) is \(G_{\Gamma(T)}\). But by the description of the action of \(G_{\Gamma}\) given above, we see that \(\mathcal{P}_{i,j}\) is the orbit of the base chain \(\mathcal{F}_{0}\) under \(G_{T}\) and, moreover, that this action is simply transitive. Thus \(G_{T}\) is also the symmetry group of \(\mathcal{P}_{i,j}\), and since the action of the generators of \(G_{T}\) and \(G_{\Gamma(T)}\) is identical, it can then be easily seen that \(G_{T}\cong G_{\Gamma(T)}\). Footnote 3: We’ve glossed over this, but it is possible to combinatorially describe a polytope via an extended Coxeter diagram without reference to symmetry group, and this is what we mean by “type”. The case where \(T\) is an arbitrary subset of \(S\) follows immediately from seeing that \(G_{\Gamma(T)}\) is a direct product of groups with connected diagram, and \(G_{T}\) is an (internal) direct product of subsets of \(S\) of the form considered above. Thus in the case when \(G_{\Gamma}\) is finite, we freely identify \(G_{T}\) and \(G_{\Gamma(T)}\) and will refer to either as a (standard) parabolic subgroup. The following standard construction was used implicitly in the previous proof, and it will be useful to have the precise statement later. **Definition 2.12**.: Let \(\mathcal{P}_{1},\mathcal{P}_{2}\) be regular complex polytopes in \(\mathbb{C}^{d_{1}}\) and \(\mathbb{C}^{d_{2}}\), respectively. Then the product polytope \(\mathcal{P}=\mathcal{P}_{1}\times\mathcal{P}_{2}\) is the collection of all subspaces \(f_{1}\times f_{2}\subseteq\mathbb{C}^{d_{1}}\times\mathbb{C}^{d_{2}}\cong \mathbb{C}^{d_{1}d_{2}}\) with \(f_{i}\in\mathcal{P}_{i}\), ordered by inclusion. Note that \(e_{1}\times e_{2}\subseteq f_{1}\times f_{2}\) if and only if both \(e_{i}\subseteq f_{i}\). Repeated products are defined in the obvious way. We emphasize that for any subspace \(V\subseteq\mathbb{C}^{n}\), we always have \(\varnothing\times V=V\times\varnothing=\varnothing\). In particular, if \(\mathcal{P}_{>\varnothing}\) denotes the non-empty faces of \(\mathcal{P}\), then \(\mathcal{P}_{>\varnothing}\times\mathcal{Q}_{>\varnothing}=(\mathcal{P} \times\mathcal{Q})_{>\varnothing}\). ## 3. Complexes for Shephard groups Now we describe two complexes for a given Shephard group based on familiar definitions for Coxeter groups and Artin groups. First, we introduce common notation: if \(\Psi\) is a cell complex, we denote the closed (resp. open) star of a vertex \(v\) in \(\Psi\) by \(\operatorname{St}(v)=\operatorname{St}(v,\Psi)\) (resp. 
\(\operatorname{st}(v)=\operatorname{st}(v,\Psi)\)), and the boundary of a star is \(\partial\operatorname{St}(v)=\partial\operatorname{St}(v,\Psi)=\operatorname{ St}(v)\setminus\operatorname{st}(v)\). The link of \(v\) in \(\Psi\) is denoted \(lk(v)=lk(v,\Psi)\). ### A complex for finite Shephard groups In this section, let \(\Gamma\) be an extended Coxeter diagram such that \(G=G_{\Gamma}\) is finite. In [10], an analogue of the Coxeter complex is defined for Shephard groups, there called the _Milnor fiber complex_. This complex is one of the main objects of study of this paper. For non-Coxeter finite Shephard groups, it acts as a generalization of the Coxeter complex for reasons we explain shortly. We briefly recall the definition before restating it in terms of complexes of groups. (The original definition is given in terms of invariant polynomials, but is equivalent to the following definition in terms of polytopes by [10, Thm. 5.1].) Suppose \(\Gamma\) is a straight line diagram (recall: \(\Gamma\) is connected and all vertices are valence \(2\) except for two unique vertices of valence \(1\)). Let \(\mathcal{P}\) be the (unique non-starry) regular complex \(n\)-polytope with \(G_{\Gamma}\) as its symmetry group. Denote by \(\mathcal{P}^{\prime}_{p}\) the derived complex of the subposet \(\mathcal{P}_{p}\) of \(\mathcal{P}\) consisting of the _proper_ faces of \(\mathcal{P}\); that is, the set of chains (linearly ordered subsets) of \(\mathcal{P}\) excluding \(\varnothing\) and \(\mathbb{C}^{n}\) (the unique \(n\)-dimensional face of \(\mathcal{P}\)). This is an abstract simplicial complex, and thus has a geometric realization, which we will call the _Milnor fiber complex_ or _extended Coxeter complex_\(\widehat{\Theta}=\widehat{\Theta}_{\Gamma}=\widehat{\Theta}(\Gamma)\). Note that if \(\Gamma\) is a (straight line) Coxeter diagram then this is isomorphic to the usual Coxeter complex. The \(n\)-simplices of \(\widehat{\Theta}\) will be denoted \[[f_{0}<f_{1}<\cdots<f_{n}]\] where \(f_{0}<f_{1}<\cdots<f_{n}\) is a chain of faces of \(\mathcal{P}\). To rephrase the definition of the derived complex, the vertices of \(\widehat{\Theta}\) correspond to (proper) faces of \(\mathcal{P}\), and a collection of vertices span a simplex if and only if they are linearly nested (which in turn happens if and only if they are pairwise nested, implying \(\widehat{\Theta}\) is a flag complex). Thus \(\widehat{\Theta}\) inherits an action of \(G\) from its action on \(\mathcal{P}\). There is a "base chamber" \(\mathcal{C}_{0}\) of \(\widehat{\Theta}\) corresponding to the base chain \(\mathcal{F}_{0}\). If \(F_{i_{1}}<F_{i_{2}}<\cdots<F_{i_{k}}\) is a subchain of \(\mathcal{F}_{0}\), with \(T=\{s_{i_{1}},\ldots,s_{i_{k}}\}\), then we call a \(G\)-translate of \[[F_{i_{1}}<F_{i_{2}}<\cdots<F_{i_{k}}]\] a _simplex of type \(\widehat{T}\)_. Since \(G\) acts simply transitively on the set of chains of \(\mathcal{P}\) by assumption, \(\mathcal{C}_{0}\) is a strict fundamental domain for the action of \(G\) on \(\widehat{\Theta}\). By Lemma 2.10, the stabilizer of a simplex of type \(\widehat{T}\) contained in the base chamber is \(G_{\widehat{T}}\). This implies that the stabilizer of a simplex of type \(\widehat{T}\) is a conjugate of \(G_{\widehat{T}}\). We can rephrase the above in the language of complexes of groups as follows. **Definition 3.1**.: Let \(\Gamma\) be any finite extended Coxeter diagram. 
Define \(\Delta=\Delta_{\Gamma}=\Delta_{S}\) to be a simplex whose vertices are labeled by the generators \(S\) of \(G_{\Gamma}\) (equiv., the vertices of \(\Gamma\)). For \(T\subseteq S\), let \(\sigma_{T}\) denote the face of \(\Delta_{S}\) spanned by the elements of \(T\). We define a complex of groups \(\widehat{\mathcal{G}}=\widehat{\mathcal{G}}(G_{\Gamma},\Delta_{\Gamma})\) by declaring the local group at the face \(\sigma_{T}\) to be the group \(G_{\widehat{T}}\). (Recall \(\widehat{T}\coloneqq S\setminus T\).) The edge maps are the standard maps induced by the inclusion of generating sets. This is a simple complex of groups, and hence its fundamental group is given by the direct limit of the system of edge maps [1, Def. II.12.12]. Clearly by our definitions, in this setting the fundamental group of \(\widehat{\mathcal{G}}\) is \(G_{\Gamma}\). By our discussion above, if \(\Gamma\) is a straight line, this complex is developable with development \(\widehat{\Theta}=\widehat{\Theta}_{\Gamma}\). If \(\Gamma\) is a Coxeter diagram, then this definition coincides with the usual definition of the Coxeter complex of \(W_{\Gamma}\), and hence is also developable. Thus \(\widehat{\mathcal{G}}\) is developable for any finite Shephard group. Sometimes when \(\Gamma\) is a (non-extended) Coxeter diagram, we write \(\widehat{\Sigma}(\Gamma)=\widehat{\Sigma}_{\Gamma}=\widehat{\Theta}_{\Gamma}\) and call \(\widehat{\Sigma}_{\Gamma}\) the _Coxeter complex_ of \(\Gamma\). It is well known that if \(\Gamma\) is a Coxeter diagram, then \(\widehat{\Sigma}_{\Gamma}\) possesses a natural metric under which it is isometric to a sphere. We discuss metrics on \(\widehat{\Theta}_{\Gamma}\) for non-Coxeter \(\Gamma\) in a later section. Before concluding, we present an observation which will be of use later. **Lemma 3.2**.: _Let \(\Gamma\) be a finite extended Coxeter diagram and \(\delta\) a simplex of type \(\widehat{T}\). Then_ \[lk(\delta,\widehat{\Theta})\cong\widehat{\Theta}(\Gamma(\widehat{T}))= \widehat{\Theta}(\Gamma\setminus\Gamma(T)),\] _where for a subdiagram \(\Gamma^{\prime}\) of \(\Gamma\), we let \(\Gamma\setminus\Gamma^{\prime}\) denote the subdiagram of \(\Gamma\) on vertex set \(\operatorname{Vert}(\Gamma)\setminus\operatorname{Vert}(\Gamma^{\prime})\)._ Proof.: First consider a vertex \(v\) of type \(\hat{s}\). The vertex set of \(lk(v,\widehat{\Theta})\) is the collection of all vertices of \(\widehat{\Theta}\) which are joined to \(v\) by an edge, and incidence in the link is inherited from incidence in \(\widehat{\Theta}\). These vertices are the faces of \(\mathcal{P}\) which are incident to the face \(v\). Thus there are two types of faces, those containing \(v\) and those contained by \(v\). If a face contains \(v\), then it's in the medial polytope between \(v\) and \(\mathbb{C}^{d}\) which forms one component of \(\widehat{\Theta}(\Gamma(\hat{s}))\). If a face is contained in \(v\), then it's in the medial polytope between \(\varnothing\) and \(v\), which forms the other component of \(\widehat{\Theta}(\Gamma(\hat{s}))\). Thus \(lk(v,\widehat{\Theta})\cong\widehat{\Theta}(\Gamma(\widehat{s}))\). Now let \(\delta\) be any simplex of type \(\widehat{T}\). The link of \(\delta\) is \[lk(\delta,\widehat{\Theta})=\bigcap_{\begin{subarray}{c}v\in\delta\\ v\text{ a vertex}\end{subarray}}lk(v,\widehat{\Theta}),\] with intersection interpreted on the level of posets. 
A vertex \(v^{\prime}\) of \(\widehat{\Theta}\) is in each \(lk(v,\widehat{\Theta})\) if and only if it is a face in each \(\widehat{\Theta}(\Gamma(\widehat{s}))\), \(s\in T\), under the identification made in the previous paragraph. It is easy to see that this holds if and only if \[v^{\prime} \in\widehat{\Theta}\left(\bigcap_{s\in T}\Gamma(\widehat{s})\right)\] \[=\widehat{\Theta}\left(\Gamma\left(\bigcap_{s\in T}\widehat{s} \right)\right)\] \[=\widehat{\Theta}(\Gamma(\widehat{T})).\qed\] ### A complex for arbitrary Shephard groups Throughout this section, let \(\Gamma\) be any extended Coxeter diagram on vertex set \(I\) with \(S=\{s_{i}:i\in I\,\}\) the standard generators for \(W=W_{\Gamma}\) and \(G=G_{\Gamma}\). The following definition is based on the definition of the "modified Coxeter/Deligne complexes" of [1]. **Definition 3.3**.: Let \(K=K_{\Gamma}=|(\mathcal{S}_{\Gamma}^{fs})^{\prime}|\), where \((\mathcal{S}_{\Gamma}^{fs})^{\prime}\) denotes the derived complex of \(\mathcal{S}_{\Gamma}^{fs}\) and \(|(\mathcal{S}_{\Gamma}^{fs})^{\prime}|\) is its geometric realization. An \(n\)-simplex of \(K\) is denoted \[[T_{0}<T_{1}<\cdots<T_{n}]\] for a chain \(T_{0}<T_{1}<\cdots<T_{n}\) with each \(T_{i}\in\mathcal{S}^{fs}\). In particular, the vertices are indexed by elements \(T\in\mathcal{S}^{fs}\); we let \(v_{T}=[T]\) denote the vertex of \(K\) coming from \(T\). We define a complex of groups \(\mathcal{G}=\mathcal{G}(G_{\Gamma},K_{\Gamma})\) over \(K\) by declaring the local group at \(v_{T}\) to be \(G_{\Gamma(T)}\) and the edge maps to be the natural maps coming from the inclusion of generators. As with the complex for the finite Shephard groups, this is also a simple complex of groups. Hence by [1, Def. II.12.12], the fundamental group of \(\mathcal{G}\) is the direct limit over the edge maps, and when \(\mathcal{S}^{f}=\mathcal{S}^{fs}\), this is clearly \(G_{\Gamma}\). If \(\Gamma\) is a Coxeter diagram, this complex of groups is developable [10], with development typically denoted \(\Sigma=\Sigma_{\Gamma}\), called the _modified Coxeter complex_ or the _Davis complex_. More generally, if \(\mathcal{G}\) is developable, we will denote its development \(\Theta=\Theta(\Gamma)=\Theta_{\Gamma}\) and may appropriately refer to it as the _modified extended Coxeter complex_, _modified Milnor fiber complex_, or _extended Davis complex_. A priori, it is not known in general if \(\mathcal{G}\) is developable for an arbitrary extended Coxeter diagram. The rest of the paper is dedicated to studying developability and, more specifically, non-negative curvature of \(\mathcal{G}\), utilizing the following (paraphrased) lemma. **Lemma 3.4**.: _[_1_, Thm. II.12.28]_ _If \(\mathcal{H}\) is a (simple) complex of groups over a simply connected domain and the local development at each vertex is locally \(\mathrm{CAT}(0)\), then \(\mathcal{H}\) is developable and has locally \(\mathrm{CAT}(0)\) development._ It is clear that \(K\) is simply connected (\([\varnothing]\) is a cone point). So in order to make use of this, we need a metric on \(K\), which we will discuss shortly. First, we describe a cell structure on \(K\) which is coarser than the given simplicial structure. The following definitions again are based on [10]. 
**Definition 3.5**.: For \(T\in\mathcal{S}^{fs}\), let \[\mathcal{S}_{\geq T}^{fs}=\{\,T^{\prime}\in\mathcal{S}^{fs}:T^{\prime}\supseteq T\,\},\qquad\mathcal{S}_{>T}^{fs}=\{\,T^{\prime}\in\mathcal{S}^{fs}:T^{\prime}\supsetneqq T\,\},\] \[\mathcal{S}_{\leq T}^{fs}=\{\,T^{\prime}\in\mathcal{S}^{fs}:T^{\prime}\subseteq T\,\},\qquad\mathcal{S}_{<T}^{fs}=\{\,T^{\prime}\in\mathcal{S}^{fs}:T^{\prime}\subsetneqq T\,\},\] \[F_{T}=|(\mathcal{S}_{\geq T}^{fs})^{\prime}|,\qquad F_{T}^{\circ}=|(\mathcal{S}_{\geq T}^{fs})^{\prime}|\setminus|(\mathcal{S}_{>T}^{fs})^{\prime}|,\] \[F_{T}^{*}=|(\mathcal{S}_{\leq T}^{fs})^{\prime}|,\qquad(F_{T}^{*})^{\circ}=|(\mathcal{S}_{\leq T}^{fs})^{\prime}|\setminus|(\mathcal{S}_{<T}^{fs})^{\prime}|.\] We sometimes call \(F_{T}\) (\(F_{T}^{\circ}\)) a _face_ (resp. _open face_) of \(K\) and \(F_{T}^{*}\) (\((F_{T}^{*})^{\circ}\)) a _dual face_ (resp. _open dual face_) of \(K\). Notice that \(F_{T}^{*}\) is combinatorially a cube: the faces of the cube are of the form \(F_{T_{1}}\cap F_{T_{2}}^{*}\) where \(T_{1}\subseteq T_{2}\subseteq T\). So \(K\) itself has a cubical structure where the cubical faces are \(F_{T_{1}}\cap F_{T_{2}}^{*}\) for \(T_{1}\subseteq T_{2}\in\mathcal{S}^{fs}\). The following is a standard exercise. **Proposition 3.6**.: _With this cell structure, \(F_{T}^{*}\) is isomorphic to the cone on the barycentric subdivision \(\Delta_{T}^{\prime}\) of \(\Delta_{T}\) with cone point \(v_{T}\). The isomorphism is given by sending a vertex \(v_{T^{\prime}}\) to the barycenter of \(\sigma_{T\setminus T^{\prime}}\) in \(\Delta_{T}\)._ From here on, we identify \(lk(v_{T},F_{T}^{*})\) and \(\Delta_{T}\) in this way. With this connection between \(K\) and \(\Delta_{T}\) for \(T\in\mathcal{S}^{fs}\), we define the following metrics. **Definition 3.7**.: 1. (The cubical metric) Since the faces of \(K\) are combinatorial cubes, metrize them to be standard Euclidean unit cubes. Under this metric, \(\Delta_{T}\) is a spherical simplex with edge lengths all equal to \(\pi/2\). 2. (The Moussong metric) The definition of this metric is more involved and we refer to [10, §4.4] for details. In summary, a cell \(F_{T_{1}}\cap F_{T_{2}}^{*}\) is metrized to be a _Coxeter block_, which is (the closure of) a connected component of the Coxeter zonotope associated to the finite Coxeter group \(W_{T_{2}}\) minus its reflection hyperplanes. In this metric, if \(T\in\mathcal{S}^{fs}\), then \(\Delta_{T}\) is a spherical simplex where the length of the edge between the vertices corresponding to \(s_{i},s_{j}\) is \(\pi-\pi/m_{ij}\). We conclude by briefly recalling the definition of the local development as it applies here (our notation will differ slightly from [1]). Let \(v_{T}\) be a vertex of \(K\), with \(T\in\mathcal{S}^{fs}\). The _upper star_ \(St^{T}\) of \(v_{T}\) in \(\mathcal{G}\) is the (full) subcomplex of \(K\) spanned by the vertices \(v_{T^{\prime}}\) with \(T^{\prime}\supseteq T\). The _lower link_ \(Lk_{T}\) of \(v_{T}\) in \(\mathcal{G}\) is the development of the subcomplex of groups \(\widehat{\mathcal{G}}(K_{<T})\) of \(\mathcal{G}(K)\), where \(K_{<T}\) denotes the subcomplex spanned by vertices \(v_{T^{\prime}}\) with \(T^{\prime}\subsetneq T\). Both of these objects are simplicial complexes which inherit the metric placed on \(K\).
Then the _local development at_\(v_{T}\) is (combinatorially) the join \[D(T)=St^{T}*Lk_{T}.\] Its metric naturally comes from the metric on \(K\). The link of \(v_{T}\) in the local development is \[lk(v_{T},D(T))=Lk^{T}*Lk_{T},\] where \(Lk^{T}\) is the _upper link_, meaning the (full) subcomplex of \(K\) spanned by the vertices \(v_{T^{\prime}}\) with \(T^{\prime}\supsetneq T\). We may also sometimes refer to this complex as \(K_{>T}\). Note that \(K_{>T}\) is isomorphic to \(lk(v_{T},F_{T})\) and \(K_{<T}\) is isomorphic to \(lk(v_{T},F_{T}^{*})\). We use the previous proposition to identify \(K_{<T}\) with \(\Delta_{T}\). With this identification, the complex of groups \(\widehat{\mathcal{G}}(K_{<T})\) is isomorphic to \(\widehat{\mathcal{G}}(G_{T},\Delta_{T})\) as defined before and thus \(Lk_{T}\) is isomorphic to \(\widehat{\Theta}_{\Gamma(T)}\). It is straightforward to check that the metrics placed on \(K\) above agree with the claimed metrics on \(\Delta_{T}\). This is summarized in the following **Proposition 3.8**.: _If \(\Gamma\) is any extended Coxeter diagram, then in either metric, the link of a vertex \(v_{T}\) in the local development of \(\mathcal{G}(G_{\Gamma},K_{\Gamma})\) is isometric to the spherical join_ \[lk(v_{T},F_{T})*\widehat{\Theta}_{\Gamma(T)}.\] Showing the local development is nonpositively curved amounts to showing that these links are CAT(1). Since a spherical join is CAT(1) if and only if both components are [1, Cor. II.3.15], this reduces to showing that \(lk(v_{T},F_{T})\) and \(\widehat{\Theta}_{\Gamma(T)}\) are CAT(1) when \(T\in\mathcal{S}^{fs}\). We also have the following useful corollary for dealing with disconnected diagrams. **Corollary 3.9**.: _If \(\Gamma_{1}\), \(\Gamma_{2}\) are extended Coxeter diagrams and \(\Gamma=\Gamma_{1}\sqcup\Gamma_{2}\) is their disjoint union, then \(\mathcal{G}(G_{\Gamma})\cong\mathcal{G}(G_{\Gamma_{1}})\times\mathcal{G}(G_{ \Gamma_{2}})\). In particular,_ 1. _If_ \(\mathcal{G}(G_{\Gamma})\) _is developable, then under either metric,_ \(\Theta(\Gamma)\) _is isometric to_ \(\Theta(\Gamma_{1})\times\Theta(\Gamma_{2})\)_, and_ 2. _If_ \(\Gamma_{1}\) _is a_ \(\Gamma_{2}\)_-invariant, then_ \(\Theta(\Gamma_{1})\) _is isometric to_ \(\Theta(\Gamma_{1})\times\Theta(\Gamma_{2})\)_._ Proof.: We first show that \(\Theta(\Gamma_{1})\) is isometric to \(\Theta(\Gamma_{1})\times\Theta(\Gamma_{2})\). 
Proof.: By a product \(\mathcal{G}(G_{1},K_{1})\times\mathcal{G}(G_{2},K_{2})\) of (simple) complexes of groups, we mean the complex of groups over \(K_{1}\times K_{2}\) with vertex groups \(G_{v_{1}}^{(1)}\times G_{v_{2}}^{(2)}\) for \(v_{i}\in\operatorname{Vert}(K_{i})\) and vertex group \(G_{v_{i}}^{(i)}\) of \(\mathcal{G}(G_{i},K_{i})\), and edge maps \(\psi_{jk}^{(1)}\times\psi_{\ell n}^{(2)}\) with \(\psi_{jk}^{(i)}:G_{k}^{(i)}\to G_{j}^{(i)}\) the edge maps of \(\mathcal{G}(G_{i},K_{i})\).
It is clear from the definitions that if \(\Gamma=\Gamma_{1}\sqcup\Gamma_{2}\) then \(G_{\Gamma}\cong G_{\Gamma_{1}}\times G_{\Gamma_{2}}\) and \(K_{\Gamma}\cong K_{\Gamma_{1}}\times K_{\Gamma_{2}}\). Thus we easily see that \(\mathcal{G}(G_{\Gamma})\cong\mathcal{G}(G_{\Gamma_{1}})\times\mathcal{G}(G_{ \Gamma_{2}})\). The statement about developability follows readily from this decomposition of the complexes of groups. Now suppose \(G_{\Gamma}\) is finite and \(\Gamma=\Gamma_{1}\sqcup\Gamma_{2}\). In particular, \(\mathcal{G}(G_{\Gamma})\) is developable since \(G_{\Gamma}\) is finite. Let \(S\) be the standard generating set of \(G_{\Gamma}\). By our definitions, \(K_{>S}\) is empty; hence \(\widehat{\Theta}_{\Gamma}\cong K_{>S}*\widehat{\Theta}_{\Gamma}\). By Proposition 3.8, this is link of \(v_{S}\) in the local development of \(\mathcal{G}(G_{\Gamma})\). But since the complex of groups is developable, this is simply \(lk(v_{S},\Theta_{\Gamma})\). Since \(\Theta(\Gamma)\cong\Theta(\Gamma_{1})\times\Theta(\Gamma_{2})\) by our previous result, \[\widehat{\Theta}_{\Gamma} \cong lk(v_{S},\Theta_{\Gamma})\] \[\cong lk(v_{S},\Theta_{\Gamma_{1}}\times\Theta_{\Gamma_{2}})\] \[\cong lk(v_{S},\Theta_{\Gamma_{1}})*lk(v_{S},\Theta_{\Gamma_{2}})\] \[\cong\widehat{\Theta}_{\Gamma_{1}}*\widehat{\Theta}_{\Gamma_{2}}.\qed\] ## 4. The cubical metric In this section, we complete the proof of Theorem C. Assume that \(\mathcal{S}^{fs}=\mathcal{S}^{f}\) and the underlying Coxeter diagram of \(\Gamma\) satisfies (FC). Endow \(\mathcal{G}=\mathcal{G}(G_{\Gamma},K_{\Gamma})\) with the cubical metric. As discussed at the end of the previous section, we must show that \(lk(v_{T},F_{T})\) and \(\widehat{\Theta}=\widehat{\Theta}_{\Gamma(T)}\) are CAT(1) when \(T\in\mathcal{S}^{fs}\). The cubical metric places a length of \(\pi/2\) on each edge of \(\widehat{\Theta}\). We apply Gromov's link condition, which says that a piecewise spherical simplicial complex with edge lengths all \(\pi/2\) is CAT(1) if and only if it's a flag complex (see [1, Thm. II.5.18] or [10]). Suppose \(v_{1}=[f_{1}],\ldots,v_{n}=[f_{n}]\) are vertices of \(\widehat{\Theta}\) which are pairwise connected by edges. This means each \(f_{i}\) is a face of the polytope corresponding to \(G_{\Gamma(T)}\), and the \(f_{i}\) are pairwise nested. The only way a collection of affine subspaces of \(\mathbb{C}^{n}\) can be pairwise nested is if they are linearly nested, resulting in a simplex \([f_{1}<\cdots<f_{n}]\) (up to change of indices) spanned by the vertices \(v_{i}\). Hence \(\widehat{\Theta}\) is flag, and therefore CAT(1) under the cubical metric. We now turn to \(lk(v_{T},F_{T})\). As with \(\widehat{\Theta}\), the cubical metric places a length of \(\pi/2\) on each edge, so we must show that this is a flag complex. We can view \(\mathcal{S}^{fs}_{>\varnothing}\) as an abstract simplicial complex with vertex set \(S\), with a set \(T\) spanning a simplex if and only if \(W_{\Gamma(T)}\) is finite. Since the underlying Coxeter diagram of \(\Gamma\) satisfies (FC), this is a flag complex. By the same argument in [1, Lem. 4.3.3], we have that \(lk(v_{T},F_{T})\cong lk(T,\mathcal{S}^{fs}_{>\varnothing})\). Since the link of a flag complex is still flag, and our previous remarks show that \(\mathcal{S}^{fs}_{>\varnothing}\) is flag, it follows that \(lk(v_{T},F_{T})\) is a flag complex, and hence CAT(1). Therefore \(\mathcal{G}\) is nonpositively curved, and hence developable with development \(\Theta=\Theta_{\Gamma}\). 
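The flag condition used in the argument above is easy to make concrete. The following small helper is not taken from the paper (the names `is_flag`, `vertices`, and `simplices` are ours); it is a minimal sketch of the check that every set of pairwise adjacent vertices spans a simplex, illustrated on the boundary of a triangle, the basic non-flag example.

```python
# Generic flagness check for a finite simplicial complex (illustrative only).
from itertools import combinations

def is_flag(vertices, simplices):
    """simplices: a collection of frozensets, closed under taking subsets."""
    faces = {frozenset(s) for s in simplices}
    adjacent = lambda u, v: frozenset({u, v}) in faces
    for k in range(3, len(vertices) + 1):
        for cand in combinations(vertices, k):
            # pairwise adjacent but not spanning a simplex => not flag
            if all(adjacent(u, v) for u, v in combinations(cand, 2)) \
                    and frozenset(cand) not in faces:
                return False
    return True

# The boundary of a triangle (an "empty" 3-cycle) is not flag; filling it is.
tri = [frozenset(s) for s in [{0}, {1}, {2}, {0, 1}, {1, 2}, {0, 2}]]
assert not is_flag([0, 1, 2], tri)
assert is_flag([0, 1, 2], tri + [frozenset({0, 1, 2})])
```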
As discussed before, \(\Theta\) has the cell structure and metric of a cube complex. Since \(\mathcal{G}\) is a simple complex of groups over a simply connected domain, we know that \(\Theta\) is simply connected and hence is a CAT(0) cube complex. Notice that \(G_{\Gamma}\) acts cocompactly on \(\Theta\) with fundamental domain \(K\). The stabilizer of a cell of \(K\) is the finite group \(G_{\Gamma(T)}\) for some \(T\in\mathcal{S}^{fs}\), implying the stabilizer of an arbitrary cell is a conjugate of this group, and hence finite. Thus \(G\) also acts properly on \(\Theta\), and therefore is cocompactly cubulated. ## 5. A CAT(1) criterion Before proving Theorem A, we present a criterion for a simplicial complex made of \(A_{3}\)-simplices to be CAT(1). The idea behind this criteria comes from [1] (which itself is based on [1]), where it is used implicitly to show that the Moussong metric for the Deligne complex for the 4-strand braid group is CAT(0). **Definition 5.1**.: Let \(\Delta\) be a spherical 2-simplex with vertices labeled \(\hat{a}\), \(\hat{b}\), \(\hat{c}\) so that the angle at \(\hat{a}\) and \(\hat{c}\) is \(\pi/3\) and the angle at \(\hat{b}\) is \(\pi/2\). For \(g=a,b,c\), we label the edge opposite \(\hat{g}\) as \(g\). We call \(\Delta\) the _marked \(A_{3}\) simplex_. We call a homogeneous4 2-dimensional simplicial complex \(\Psi\) a _marked \(A_{3}\) simplicial complex_ if every top dimensional simplex \(\sigma\) is endowed with a choice of isomorphism \(m_{\sigma}:\sigma\to\Delta\) so that if \(x\in\sigma\cap\sigma^{\prime}\), then \(m_{\sigma}(x)=m_{\sigma^{\prime}}(x)\). The maps \(m_{\sigma}\) are the _markings_ of \(\Psi\). Sometimes we will call a top dimensional simplex a _chamber_. We call a vertex \(v\) of \(\Psi\) type \(\hat{a}\), \(\hat{b}\), or \(\hat{c}\) if for some (hence any) chamber \(\sigma\) containing \(v\), the vertex \(m_{\sigma}(v)\) of \(\Delta\) is labeled \(\hat{a}\), \(\hat{b}\), \(\hat{c}\), respectively. Similarly, we call an edge \(e\) type \(a,b,c\) if \(m_{\sigma}(e)\) is the edge of \(\Delta\) labeled \(a,b,c\), resp. Every marked \(A_{3}\) simplicial complex has a metric, which we call the _canonical metric_, obtained by pulling back the metric of \(\Delta\) along the markings, so that each chamber is a simplex of shape \(A_{3}\). Footnote 4: Recall that an \(n\)-dimensional homogeneous simplicial complex is one where every simplex is contained in some \(n\)-simplex. If \(\Psi\) is a marked \(A_{3}\) simplicial complex, we say it satisfies Charney's combinatorial CAT(1) criteria (CCCC) if 1. The link of a vertex of type \(\hat{a}\) or \(\hat{c}\) has girth5 at least 6, Footnote 5: The complex \(\Psi\) is 2-dimensional, so the links of vertices are (simplicial) graphs. We often use the language of graph theory when referring to these links. 2. The link of a vertex of type \(\hat{b}\) is a complete bipartite graph which contains an embedded 4-cycle, 3. Any edge path in Figure 2 is contained in a corresponding subcomplex of \(\Psi\) shown in Figure 3. (A shaded triangle represents a 2-simplex.) Our main theorem for this section is the following. **Theorem 5.2**.: _If \(\Psi\) is a marked \(A_{3}\) simplicial complex which satisfies CCCC, then \(\Psi\) is CAT(1) under its canonical metric._ Before proving this, we need to establish some facts about the geometry of \(\Psi\) under its canonical metric which follow from the above criteria. 
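Although only the angles of \(\Delta\) are used directly, it is convenient to record its side lengths as well. This short computation is not part of the original argument; it follows from Definition 5.1 by the spherical law of cosines for angles:
\[\cos d(\hat{a},\hat{b})=\cos d(\hat{b},\hat{c})=\frac{\cos(\pi/3)+\cos(\pi/3)\cos(\pi/2)}{\sin(\pi/3)\sin(\pi/2)}=\frac{1}{\sqrt{3}},\qquad \cos d(\hat{a},\hat{c})=\frac{\cos(\pi/2)+\cos^{2}(\pi/3)}{\sin^{2}(\pi/3)}=\frac{1}{3}.\]
These are the values \(\arccos(1/\sqrt{3})\) and \(\arccos(1/3)\) that reappear in the proof of Lemma 5.7 below.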
One of the main consequences (and inspirations) of our definition of a marked \(A_{3}\) simplicial complex is that we may _develop_ certain geodesics from \(\Psi\) onto the \(A_{3}\) Coxeter complex \(\widehat{\Sigma}=\widehat{\Sigma}(A_{3})\). Let \(W=W_{A_{3}}\) be the Coxeter group with diagram \(A_{3}\) and generators \(a,b,c\), i.e., \[W=\langle\,a,b,c\mid a^{2}=b^{2}=c^{2}=(ab)^{3}=(bc)^{3}=(ca)^{2}=1\,\rangle.\] The vertices of \(\widehat{\Sigma}\) are cosets of the form \(wW_{\{a,b,c\}\setminus\{g\}}\), where \(g\in\{a,b,c\}\). If we define \(\hat{g}\coloneqq\{a,b,c\}\setminus\{g\}\), then the vertices are cosets of \(W_{\hat{g}}\) for \(g\in\{a,b,c\}\) and may sensibly be called _type_ \(\hat{g}\) (i.e., type \(\hat{a}\), \(\hat{b}\), or \(\hat{c}\)). Each chamber of \(\widehat{\Sigma}\) is a simplex of type \(A_{3}\), such that the angle of every simplex at a vertex of type \(\hat{a}\) or \(\hat{c}\) is \(\pi/3\), and the angle of every simplex at a vertex of type \(\hat{b}\) is \(\pi/2\); hence the definition of the marked \(A_{3}\) simplex \(\Delta\) and its metric.

Suppose \(\gamma\) is a local geodesic of \(\Psi\) which does not intersect any vertices. Then the sequence of chambers which intersect \(\gamma\) can be mapped down to a sequence of adjacent chambers of \(\widehat{\Sigma}\) so that the markings of \(\widehat{\Sigma}\) agree with the markings of \(\Psi\). In particular, the image of \(\gamma\) is a local geodesic in \(\widehat{\Sigma}\), called the _development of \(\gamma\) (onto \(\widehat{\Sigma}\))_.

Condition (2) allows us to develop a local geodesic which intersects vertices of type \(\hat{b}\) in the following way. First we note that since \(\Psi\) is marked, (2) implies that the link of a vertex of type \(\hat{b}\) is the join of a set of at least two vertices of type \(\hat{a}\) with a set of at least two vertices of type \(\hat{c}\). If \(\gamma\) is a local geodesic which passes through a vertex \(v\) of type \(\hat{b}\), then it intersects the \(\varepsilon\)-sphere of \(v\) at two points which are distance at least \(\pi\) apart in the spherical metric on the \(\varepsilon\)-sphere. This sphere is isometric to the link of \(v\), so the points on the sphere correspond to points in the link which are distance no less than \(\pi\) apart. Since the link is complete bipartite, they are exactly distance \(\pi\) apart and are contained in a loop of edge length \(4\) (since such a loop exists). This loop corresponds to \(4\) simplices of \(\Psi\) which form a quadrilateral as in Figure 4. These simplices can then be mapped down to \(\widehat{\Sigma}\), and the image of \(\gamma\) will still be locally geodesic at the image of \(v\). In fact, any local geodesic which passes through the (open) star of a vertex of type \(\hat{b}\) is contained in a quadrilateral of this form. (If it does not pass through the vertex, this follows from the usual developing process, plus condition (2).)

Figure 2. The short closed edge paths

Figure 3. Filling the short closed edge paths

This leads us to the following

**Lemma 5.3**.: _Suppose \(v\) is a vertex of type \(\hat{b}\) and \(\gamma:[a_{1},a_{2}]\to\operatorname{St}(v)\) is a geodesic segment with endpoints \(v_{i}=\gamma(a_{i})\) on the interior of an edge \(e_{i}\) of \(\partial\operatorname{St}(v)\)._
_Then either \(e_{1}\) and \(e_{2}\) are adjacent (in which case we say \(\gamma\) "cuts a corner" of \(\operatorname{St}(v)\)), or there is another edge \(e_{0}\) adjacent to both \(e_{1}\) and \(e_{2}\) (in which case we say \(\gamma\) "traverses" \(\operatorname{St}(v)\))._

This follows easily from examining the aforementioned quadrilateral in the Coxeter complex. See Figure 5.

Figure 4. Developing a geodesic through a vertex of type \(\hat{b}\)

The following can be seen in a similar way by developing to the Coxeter complex.

**Lemma 5.4**.: _Let \(\gamma\) be a local geodesic of \(\Psi\) avoiding vertices of type \(\hat{a}\) and \(\hat{c}\). If \(\gamma\) traverses (resp. cuts a corner of) some star of a vertex of type \(\hat{b}\), then it traverses (resp. cuts a corner of) every star of a vertex of type \(\hat{b}\) whose interior intersects \(\gamma\)._

The following is one of the main lemmas of this section. It allows us to place a lower bound on the length of geodesics which avoid \(\hat{a}\) and \(\hat{c}\) vertices, as well as allows us to fill in other short loops.

**Lemma 5.5**.: _The (closed) stars of two distinct vertices of type \(\hat{b}\) intersect trivially, in exactly one vertex, or in exactly one edge._

Proof.: Let \(v_{1},v_{2}\) be distinct vertices of type \(\hat{b}\). We first show that \(\operatorname{St}(v_{1})\cap\operatorname{St}(v_{2})\) contains at most one vertex of type \(\hat{a}\) and at most one vertex of type \(\hat{c}\). Let \(w_{1},w_{2}\in\operatorname{St}(v_{1})\cap\operatorname{St}(v_{2})\) be vertices of type \(\hat{a}\). This results in the leftmost edge path in Figure 6. By condition (2), there are vertices \(u_{i}\in\operatorname{St}(v_{i})\) of type \(\hat{c}\) so that \(u_{i}\) is adjacent to \(w_{1},w_{2}\) for \(i=1,2\). This results in the middle complex of Figure 6. The loop consisting of the \(w_{i}\) and \(u_{i}\) is a loop of type (i) (in Figure 2) and thus there is a vertex \(v\) of type \(\hat{b}\) so that the middle complex of Figure 6 can be augmented to the rightmost complex. If \(v\neq v_{1}\) and \(w_{1}\neq w_{2}\), then the vertices \(v_{1},w_{1},v,w_{2}\) give rise to an embedded loop of length \(4\) in the link of \(u_{1}\). However, condition (1) prevents this from happening, so we must have either \(v=v_{1}\) or \(w_{1}=w_{2}\). The same reasoning shows that either \(v=v_{2}\) or \(w_{1}=w_{2}\). If \(w_{1}\neq w_{2}\), we would have \(v_{1}=v=v_{2}\), contradicting the assumption that \(v_{1}\neq v_{2}\). So, we must have \(w_{1}=w_{2}\) and thus \(\operatorname{St}(v_{1})\cap\operatorname{St}(v_{2})\) contains at most one vertex of type \(\hat{a}\). An analogous argument shows that it contains at most one vertex of type \(\hat{c}\).

Now suppose \(w,u\in\operatorname{St}(v_{1})\cap\operatorname{St}(v_{2})\) are vertices of type \(\hat{a}\) and \(\hat{c}\), respectively. As noted before, the definition of a marked \(A_{3}\) simplicial complex and condition (2) imply that the link of a vertex of type \(\hat{b}\) is a join of a set of vertices of type \(\hat{a}\) with a set of vertices of type \(\hat{c}\). This means there is an edge, which is necessarily unique, between \(w\) and \(u\), and this edge is contained in \(\operatorname{St}(v_{1})\cap\operatorname{St}(v_{2})\).
The result follows from noticing that if \(e_{1},e_{2}\) are two distinct edges, then their union contains at least three vertices (since \(\Psi\) is simplicial), and thus \(\operatorname{St}(v_{1})\cap\operatorname{St}(v_{2})\) cannot contain each of these vertices by our above work, meaning at least one of the edges is not contained in this intersection.

As a first consequence, we have

**Lemma 5.6**.: _Any edge path appearing in Figure 7 is contained in a subcomplex of \(\Psi\) found in Figure 8._

Proof.: We will deal with the left loop of Figure 7; the argument for the other loop is identical. By condition (2), we know that this loop is a subcomplex (indicated in bold) of the leftmost complex in Figure 9, since two \(\hat{c}\) vertices in a star of a \(\hat{b}\) vertex must be connected by an \(\hat{a}\) vertex (which is also in this star). But now the vertices of type \(\hat{a}\) and \(\hat{c}\) form a loop of type (i) (Figure 2), and in particular are contained in the star of a common \(\hat{b}\) vertex. If this vertex were distinct from the one already included in our original loop, this would contradict Lemma 5.5. So the complex on the left of Figure 9 can be completed to the complex on the right, giving the desired result.

Figure 6. Filling a loop coming from the intersection of two stars

Figure 7. Two more short closed edge paths

We now show that the loops considered so far are the only possible short loops that can arise.

**Lemma 5.7**.: _Suppose \(p\) is a simple closed edge path of \(\Psi\). Then \(\ell(p)<2\pi\) (where \(\ell\) is the length function) if and only if \(p\) is one of the paths of type (i), (ii), or (iii) in Figures 2 or 7._

Proof.: An edge of \(p\) from a vertex of type \(\hat{a}\) (or \(\hat{c}\)) to \(\hat{b}\) is immediately followed by an edge from a vertex of type \(\hat{b}\) to \(\hat{a}\) (resp. \(\hat{c}\)). This is a consequence of condition (2) and the fact that the length of edges of the link is \(\pi/2\). Moreover, as a consequence of the definition of a marked \(A_{3}\) simplicial complex, vertices of type \(\hat{a}\) or \(\hat{c}\) cannot be joined by an edge to a vertex of the same type. Thus \(p\) consists of edges between \(\hat{a}\) vertices and \(\hat{c}\) vertices, the length of which is \(\alpha\coloneqq\arccos(1/3)\), and pairs of edges from \(\hat{a}\) to \(\hat{b}\) to \(\hat{a}\), or from \(\hat{c}\) to \(\hat{b}\) to \(\hat{c}\), the total length of which is \(2\beta\), where \(\beta\coloneqq\arccos(1/\sqrt{3})\). We conclude by an argument completely identical to that found in [1, 1] that the only possible edge paths of length less than \(2\pi\) are those in Figures 2 or 7 (which are those of Figure 7 of [1]), as desired.

We are now ready to prove the main theorem of the section. The argument is completely analogous to that of [1, 2]. We give the argument here to clarify that it works in this more general setting.

Proof (of Theorem 5.2).: First we note that the links of vertices are CAT(1). For a vertex \(v\) of type \(\hat{a}\) or \(\hat{c}\), the canonical metric places a length of \(\pi/3\) on the edges of \(lk(v,\Psi)\). Condition (1) guarantees the (combinatorial) girth of this link is \(6\), and hence the length of any nontrivial closed loop in \(lk(v,\Psi)\) is at least \(6\pi/3=2\pi\).

Figure 8. Filling the edge paths

Figure 9. Completing the loop

If \(v\) is a vertex of type \(\hat{b}\), then the canonical metric places a length of \(\pi/2\) on the edges of \(lk(v,\Psi)\).
Since this link is bipartite, it has girth at least \(4\), so the length of any nontrivial closed loop in \(lk(v,\Psi)\) is at least \(4\pi/2=2\pi\). Hence the vertex links are \(\operatorname{CAT}(1)\). It remains to show that if \(\gamma\) is a closed geodesic of \(\Psi\), then \(\ell(\gamma)\geq 2\pi\). So, suppose \(\gamma\) is a closed geodesic. There are three cases to consider. First assume \(\gamma\) is an edge path. If \(\ell(\gamma)<2\pi\), then Lemma 5.7 implies \(\gamma\) is one of the paths of type (i), (ii), or (iii) (Figures 2 and 7). Condition (3) (resp. Lemma 5.6) guarantees that the paths of type (i) and (ii) (resp. (iii)) are not locally geodesic at any vertex of type \(\hat{a}\) or \(\hat{c}\), and thus we must have \(\ell(\gamma)\geq 2\pi\). Now assume \(\gamma\) intersects the interior of at least one \(2\)-cell and does not intersect any vertices of type \(\hat{a}\) or \(\hat{c}\). Let \(v_{1},\ldots,v_{n}\) denote the distinct vertices of type \(\hat{b}\) such that \(\gamma\cap st(v_{i})\neq\varnothing\). Since such a star is \(\operatorname{CAT}(1)\) and has diameter \(<\pi\), we know \(n\neq 1\). By Lemmas 5.3 and 5.5, \(\gamma\) cannot close up after intersecting only two such stars, so \(n\neq 2\). By Lemma 5.4, there are two cases now to consider; either \(\gamma\) traverses every \(\operatorname{St}(v_{i})\) or cuts a corner of every \(\operatorname{St}(v_{i})\). Suppose first that \(\gamma\) traverses these stars. We claim that \(n\neq 3\). To the contrary, if \(n=3\) then we have a subcomplex of \(\Psi\) seen in Figure 10, with \(\gamma\) indicated by a dashed line and the edges marked with arrows identified. This results in an edge path in the union of the \(\operatorname{St}(v_{i})\) which forms a path of type (i) above but is not contained in any of the \(\operatorname{St}(v_{i})\), indicated in bold in Figure 10. By condition (3), it follows that this path is contained in some \(\operatorname{St}(v^{\prime})\) for a vertex \(v^{\prime}\) of type \(\hat{b}\) distinct from each \(v_{i}\). But then \(\operatorname{St}(v^{\prime})\) would intersect \(\operatorname{St}(v_{3})\) in more than one edge, contradicting Lemma 5.5. By developing onto \(\widehat{\Sigma}\), any geodesic which traverses \(4\) quadrilaterals of the type in Figure 4 has length no less than \(2\pi\). Now suppose \(\gamma\) cuts corners. We claim \(n\geq 6\), resulting in a complex of the form in Figure 11, with \(\gamma\) represented by a dashed line. If \(\gamma\) closes up after cutting less than \(6\) corners, then two of the three circled vertices in Figure 11 would be identified, which cannot happen (either because \(\Psi\) is simplicial and each simplex in the diagram is distinct, or because of Lemma 5.5). Hence we must have \(n\geq 6\). By developing, a geodesic which cuts at least \(6\) corners has length \(\geq 2\pi\). Last, assume \(\gamma\) intersects at least one vertex of type \(\hat{a}\) or \(\hat{c}\) but is not an edge path. Decompose \(\gamma\) into the concatination of segments \(\gamma=\gamma_{0}\gamma_{1}\ldots\gamma_{n}\) with no vertices of type \(\hat{a}\) or \(\hat{c}\) in the interior of the \(\gamma_{i}\). Note that at least one of the \(\gamma_{i}\) contains no edges of \(\Psi\). By developing we see that any such \(\gamma_{i}\) is half a great circle from a vertex of type \(\hat{a}\) or \(\hat{c}\) to a vertex of type \(\hat{c}\) or \(\hat{a}\), respectively, so the length of \(\gamma_{i}\) is \(\pi\)--see Figure 12. 
Thus if there are two segments of this type, \(\ell(\gamma)\geq 2\pi\), so suppose there is exactly one segment \(\gamma_{i}\) which does not contain an edge of \(\Psi\). We may assume \(\gamma_{0}\) is this segment. Notice in particular that since the endpoints of \(\gamma_{0}\) always have different type, \(\gamma_{0}\) cannot be closed, or in other words \(n>0\).

Figure 10. The case \(n=3\)

Figure 11. A geodesic cutting corners

Figure 12. A geodesic segment \(\gamma_{0}\) in \(\Psi\) which is not an edge path

Let \(\overline{\gamma}_{0}\) be the image of \(\gamma_{0}\) under the development to \(\widehat{\Sigma}\). Rotate \(\overline{\gamma}_{0}\) within \(\widehat{\Sigma}\) relative to its endpoints until it becomes an edge path with vertices of type \(\hat{a}\) or \(\hat{c}\) in its interior. Then lift this rotation to \(\gamma_{0}^{\prime}\) in \(\Psi\) so that its endpoints agree with the original endpoints of \(\gamma_{0}\) (see Figure 13). This new path is clearly locally geodesic in its interior and has the same length as \(\gamma_{0}\). Let \(\gamma^{\prime}=\gamma_{0}^{\prime}\gamma_{1}\dots\gamma_{n}\). Note that \(\ell(\gamma^{\prime})=\ell(\gamma)\). We claim that \(\gamma^{\prime}\) is locally geodesic. To do this, we only need to check the endpoints of \(\gamma_{0}^{\prime}\). Since \(\gamma_{0}\) is the unique non-edge path in the decomposition of \(\gamma\), we know that \(\gamma_{1}\) and \(\gamma_{n}\) are both edge paths in \(\Psi\). (The case \(n=1\) is acceptable.) Let \(v_{1}\) be the vertex between \(\gamma_{0}\) and \(\gamma_{1}\), and let \(v_{n}\) be the vertex between \(\gamma_{n}\) and \(\gamma_{0}\) (if \(n=1\), let these vertices be the distinct endpoints of \(\gamma_{0}\)). Consider \(lk(v_{j},\Psi)\) for \(j=1\) and \(j=n\). The intersection of the \(\varepsilon\)-sphere of \(v_{j}\) with \(\gamma_{0}\) and \(\gamma_{j}\) gives points in \(lk(v_{j},\Psi)\) which are distance at least \(\pi\) apart. Since \(\gamma_{j}\) is an edge path, it gives a vertex \(v_{j}^{\prime}\) of \(lk(v_{j},\Psi)\), and since \(\gamma_{0}\) is not an edge path it gives a point \(p_{j}\) in the interior of an edge \(e_{j}\) of \(lk(v_{j},\Psi)\). Since \(p_{j}\) is not a vertex and the edges of the link have length \(\pi/3\), we must have that \(d(v_{j}^{\prime},p_{j})>\pi\) in the link, and moreover, both vertices of \(e_{j}\) are distance \(\geq\pi\) from \(v_{j}^{\prime}\). But now the rotation \(\gamma_{0}^{\prime}\) gives points \(p_{j}^{\prime}\) in \(lk(v_{j},\Psi)\) which are vertices of the edge \(e_{j}\), and hence have distance at least \(\pi\) from \(v_{j}^{\prime}\). This implies that \(\gamma^{\prime}\) is locally geodesic at \(v_{j}\) for \(j=1\) and \(j=n\). Hence \(\gamma^{\prime}\) is a closed local geodesic in \(\Psi\). But \(\gamma^{\prime}\) is also an edge path, so by our previous remarks, we must have \(\ell(\gamma)=\ell(\gamma^{\prime})\geq 2\pi\).

This exhausts all possibilities for \(\gamma\), so we see that any local geodesic loop must have length \(\geq 2\pi\), implying \(\Psi\) is CAT(1).

## 6. The Moussong metric

In this section, we complete the proof of Theorem A. Our key lemma is the following, which we spend the section proving.
**Lemma 6.1**.: _If \(G_{\Gamma}\) is finite with \(\Gamma\) connected and not type \(A_{4}(3)\), then \(\widehat{\Theta}_{\Gamma}\) is \(\mathrm{CAT}(1)\) under the Moussong metric._

We note that the case where \(\Gamma\) is Coxeter is well known, since \(\widehat{\Theta}_{\Gamma}\) is the Coxeter complex, which, under the Moussong metric, is isometric to a sphere with its usual round metric. It remains to show the Lemma for non-Coxeter \(\Gamma\). Our methods for proving the Lemma are case-specific, and as of yet we are unable to treat the \(A_{4}(3)\) Shephard group. We hope to find a unified presentation in the future, hopefully one that includes \(A_{4}(3)\). We note here that, assuming the lemma, the proof of Theorem A follows quickly:

Proof of Theorem A (assuming Lemma 6.1).: Let \(\Gamma\) be an extended Coxeter diagram with no subdiagram of the form \(A_{4}(3)\) and with \(\mathcal{S}^{f}=\mathcal{S}^{fs}\). By Proposition 3.8, the link of a vertex \(v_{T}\) for \(T\in\mathcal{S}^{f}\) in the local development decomposes as \(lk(v_{T},F_{T})*\widehat{\Theta}_{\Gamma(T)}\). Suppose \(\Gamma(T)\) is connected. Since \(T\) generates a finite Shephard group and \(\Gamma(T)\) is not \(A_{4}(3)\), Lemma 6.1 implies \(\widehat{\Theta}_{\Gamma(T)}\) is CAT(1). If \(\Gamma(T)\) is the disjoint union \(\Gamma_{1}\sqcup\ldots\sqcup\Gamma_{n}\) of connected subdiagrams of \(\Gamma\), then by Corollary 3.9, we know that \(\widehat{\Theta}_{\Gamma(T)}\) is isometric to the spherical join \(\widehat{\Theta}_{\Gamma_{1}}*\ldots*\widehat{\Theta}_{\Gamma_{n}}\). By assumption, no \(\Gamma_{i}\) is \(A_{4}(3)\), so each \(\widehat{\Theta}_{\Gamma_{i}}\) is CAT(1), and hence so is their spherical join \(\widehat{\Theta}_{\Gamma(T)}\).

Let \(K_{0}=|(\mathcal{S}^{f}_{>\varnothing})^{\prime}|\). We identify \(K_{0}\) with a subspace of \(K\); namely, it is easy to verify that \(lk(v_{\varnothing},F_{\varnothing})\cong K_{0}\) as simplicial complexes. We put the subspace metric on \(K_{0}\) coming from this identification. It is also straightforward to verify that \(lk(v_{T},K_{0})\cong lk(v_{T},F_{T})\). Thus by [1, Lem. 4.4.1], \(lk(v_{T},F_{T})\) is CAT(1). Therefore the orthogonal join of \(\widehat{\Theta}_{\Gamma(T)}\) and \(lk(v_{T},F_{T})\) is CAT(1). Since all vertices in the local development are a translate of some \(v_{T}\), this means the local developments are nonpositively curved. Since \(\mathcal{G}\) is a simple complex of groups over a simply connected fundamental domain, it follows that \(\mathcal{G}\) has CAT(0) development \(\Theta\). By the definition of \(\mathcal{G}\), \(G\) acts properly (all stabilizers are conjugates of the finite parabolics of \(G\)) and cocompactly (the fundamental domain is the compact space \(K\)) on \(\Theta\), and hence \(G\) is CAT(0).

### The 2-generator Shephard groups

We begin with the 2-generator, or dihedral, Shephard groups \(\Gamma=I_{2}(p,m,q)\). Let \(\mathcal{P}\) be the regular complex polygon associated to \(\Gamma\). By [10], the "girth" of \(\mathcal{P}\) is \(m\). In said article, the polygon (which is notated "\(p\{m\}q\)") is represented as a "hypergraph", and "girth of the polygon" means the combinatorial girth of this hypergraph, i.e., the minimal number of edges in a nontrivial closed loop. Since our complex \(\widehat{\Theta}\) is the incidence graph of this polytope (and hence the corresponding hypergraph), it follows that the (combinatorial) girth of \(\widehat{\Theta}\) is \(2m\).
Since the length of an edge is \(\pi/m\), it follows that any non-trivial loop in \(\widehat{\Theta}\) has length at least \(2\pi\), and hence \(\widehat{\Theta}\) is \(\mathrm{CAT}(1)\).

**Example 6.2**.: Consider \(\Gamma=I_{2}(p,4,2)\) for any \(p\geq 2\). The associated complex \(\widehat{\Theta}_{\Gamma}\) is isomorphic to the barycentric subdivision of \(K_{p,p}\). See [11, §4.8] for more details. Depending on which non-starry regular complex polytope for \(\Gamma\) we prefer (since it is not self-dual), the vertices of \(K_{p,p}\) come from the vertices (or edges) of the complex polytope for \(\Gamma\), and the barycenters of edges of \(K_{p,p}\) come from the edges (resp. vertices) of the polytope. Some diagrams for low values of \(p\) are given in Figure 14. Black dots are the vertices (or edges) of the polytope for \(\Gamma\), and white dots are edges (resp. vertices) of the polytope. There is an edge between two dots if the respective faces are nested.

**Example 6.3**.: Let \(\Gamma=I_{2}(3,3,3)\). Then \(\widehat{\Theta}_{\Gamma}\), shown in Figure 15, is the incidence graph of the _Möbius–Kantor configuration_ \(8_{3}\). See [11, §4.8] for more details. The figure given below for \(\widehat{\Theta}_{\Gamma}\) is from [12, Fig. 3]. Here, the black dots represent vertices of this configuration, and the white dots are the lines (or vice versa, since the polytope is self-dual).

### The groups \(B_{n}(p,2)\)

The Shephard groups \(G=G_{\Gamma}\) for \(\Gamma=B_{n}(p,2)\) are unique among the other finite Shephard groups, as they make up the only infinite family of finite non-Coxeter Shephard groups. However, this family becomes easy to deal with because of the following two lemmas.

**Lemma 6.4**.: _[12, §12.2, p. 118] Let \(\gamma_{n}^{p}\) denote the (unique non-starry) regular complex polytope with symmetry group \(G_{\Gamma}\) for \(\Gamma=B_{n}(p,2)\) (sometimes \(\gamma_{n}^{p}\) is called a generalized \(n\)-cube). Then,_ \[\gamma_{n}^{p}=\prod^{n}\gamma_{1}^{p}, \tag{6.1}\] _where \(\prod^{n}\gamma_{1}^{p}\) denotes an \(n\)-fold direct product of \(\gamma_{1}^{p}\)._

**Lemma 6.5**.: _The barycentric subdivision of the \(n\)-fold spherical join of a point has top dimensional cells isometric to the simplex of shape \(B_{n}\)._

Proof.: Let \(Q=[0,1]^{n}\) be the \(n\)-cube with its standard cellulation. The vertex link \(lk(0,Q)\) is an "all-right simplex" (i.e., a spherical simplex with all edge lengths equal to \(\pi/2\)). This is the \(n\)-fold spherical join of one point (see [1, Def. I.5.13] for the definition of spherical join). Let \(Q^{\prime}\) be the barycentric subdivision of \(Q\) (see [1, Def. I.7.42] for the definition of the barycentric subdivision of metrized cells). Then \(lk(0,Q^{\prime})\) is (simplicially) isometric to \(lk(0,Q)^{\prime}\), the latter of which is the barycentric subdivision of the all-right simplex, and hence the barycentric subdivision of the \(n\)-fold spherical join of a point. Let \(v\) be the vertex of \(Q^{\prime}\) coming from the top dimensional face of \(Q\). The symmetry group of \(Q\) is the type \(B_{n}\) Coxeter group and thus \(lk(v,Q^{\prime})\) is isometric to the Coxeter complex for the \(B_{n}\) Coxeter group. In particular, the top dimensional cells of \(lk(v,Q^{\prime})\) are simplices of shape \(B_{n}\). It follows easily from the definition of the barycentric subdivision that the top dimensional cells of \(lk(0,Q^{\prime})\) are isometric to the top dimensional cells of \(lk(v,Q^{\prime})\); the result follows.
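To make Lemma 6.4 concrete in the smallest interesting case, here is a short illustrative script (our own; the names are hypothetical, and the cone, product, and join of posets are the standard operations recalled in the proof of Proposition 6.6 below). For \(n=2\), \(p=3\) it recovers the picture of Example 6.2: the proper faces of \(\gamma_{2}^{3}\) are \(9\) vertices and \(6\) edges, each edge containing \(3\) vertices, and the resulting vertex–edge incidence graph is the barycentric subdivision of \(K_{3,3}\).

```python
# Illustration (not from the paper) of gamma_2^3 = gamma_1^3 x gamma_1^3.
from itertools import product as cartesian

TOP = "C"                  # stands for the top face of gamma_1^3 (a copy of C)
pts = [0, 1, 2]            # the three vertices of gamma_1^3
cone = pts + [TOP]         # proper faces of gamma_1^3 together with the cone point

# c(P) x c(Q) minus the pair of cone points is the join P * Q.
proper = [f for f in cartesian(cone, cone) if f != (TOP, TOP)]
verts = [f for f in proper if TOP not in f]        # 0-faces of gamma_2^3
edges = [f for f in proper if TOP in f]            # 1-faces of gamma_2^3
assert (len(verts), len(edges)) == (9, 6)

below = lambda v, e: all(v[i] == e[i] or e[i] == TOP for i in (0, 1))
# every 1-face contains exactly 3 vertices ...
assert all(sum(below(v, e) for v in verts) == 3 for e in edges)
# ... and each pair of 1-faces from the two families ("rows"/"columns") meets
# in exactly one vertex, so the incidence graph is the subdivision of K_{3,3}.
rows = [e for e in edges if e[1] == TOP]
cols = [e for e in edges if e[0] == TOP]
assert all(sum(below(v, r) and below(v, c) for v in verts) == 1
           for r in rows for c in cols)
print("gamma_2^3: 9 vertices, 6 edges; incidence graph = subdivision of K_{3,3}")
```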
From these, we can show

**Proposition 6.6**.: _For all \(n\) and \(p\), \(\widehat{\Theta}(B_{n}(p,2))\) is isometric to the barycentric subdivision of the \(n\)-fold spherical join of a set of \(p\) points, and hence is \(\mathrm{CAT}(1)\)._

Proof.: First, we recall basic constructions relating to posets. The (upper) cone \(c\mathcal{P}\) on a poset \(\mathcal{P}\) is the poset obtained by adding an element \(1_{\mathcal{P}}\) (called the _cone point_) to \(\mathcal{P}\) which we declare maximal in the order on \(\mathcal{P}\). The derived complex \(\mathcal{P}^{\prime}\) is the set of all linearly ordered subsets (or "chains") of \(\mathcal{P}\), ordered by inclusion. The derived complex \(\mathcal{P}^{\prime}\) is always an abstract simplicial complex, and we denote its geometric realization by \(|\mathcal{P}^{\prime}|\). The product \(\mathcal{P}\times\mathcal{Q}\) is the poset whose underlying set is the usual set-theoretic product \(\mathcal{P}\times\mathcal{Q}\) with order \((p,q)\leq(p^{\prime},q^{\prime})\) if and only if \(p\leq p^{\prime}\) and \(q\leq q^{\prime}\). The join \(\mathcal{P}\ast\mathcal{Q}\) is the poset \((c\mathcal{P}\times c\mathcal{Q})\setminus\{(1_{\mathcal{P}},1_{\mathcal{Q}})\}\). We note that there is an order-preserving isomorphism \(c(\mathcal{P}\ast\mathcal{Q})\cong c\mathcal{P}\times c\mathcal{Q}\) which maps the cone point of \(c(\mathcal{P}\ast\mathcal{Q})\) to \((1_{\mathcal{P}},1_{\mathcal{Q}})\).

We now turn to the setting of \(B_{n}(p,2)\). Let \((\gamma_{n}^{p})_{prop}\) denote the poset of proper faces of \(\gamma_{n}^{p}\) (i.e., all faces except \(\varnothing\) and \(\mathbb{C}^{n}\)), and let \((\gamma_{n}^{p})_{>\varnothing}\) denote the set of non-empty faces of \(\gamma_{n}^{p}\) (but including \(\mathbb{C}^{n}\)). The poset \((\gamma_{n}^{p})_{>\varnothing}\) is the cone on \((\gamma_{n}^{p})_{prop}\) with cone point \(\mathbb{C}^{n}\). By the comment after Definition 2.12, there is a poset isomorphism \[\prod^{n}(\gamma_{1}^{p})_{>\varnothing}\cong\left(\prod^{n}\gamma_{1}^{p}\right)_{>\varnothing},\] where the product on the left hand side is the poset product, and that on the right is the polytope product. Thus by Lemma 6.4 we have \[\prod^{n}c(\gamma_{1}^{p})_{prop}=\prod^{n}(\gamma_{1}^{p})_{>\varnothing}\cong(\gamma_{n}^{p})_{>\varnothing}=c(\gamma_{n}^{p})_{prop}.\] And by our previous comments about products and joins of posets, we see that this gives the poset isomorphism \[c\left({\ast}^{n}(\gamma_{1}^{p})_{prop}\right)\cong c(\gamma_{n}^{p})_{prop},\] where \({\ast}^{n}\) denotes the \(n\)-fold join. In particular, the cone points are preserved in this map, so \[{\ast}^{n}(\gamma_{1}^{p})_{prop}\cong(\gamma_{n}^{p})_{prop}.\] Thus by taking geometric realizations, we have a simplicial homeomorphism \[\big|\big({\ast}^{n}(\gamma_{1}^{p})_{prop}\big)^{\prime}\big|\cong|(\gamma_{n}^{p})_{prop}^{\prime}|.\] By our definitions, \(\widehat{\Theta}\cong|(\gamma_{n}^{p})_{prop}^{\prime}|\). Moreover, the diagram \(B_{1}(p,2)\) is a single vertex labeled \(p\); its regular complex polytope \(\gamma_{1}^{p}\) is a copy of \(\mathbb{C}\) with \(p\) distinguished points as vertices, so the poset \((\gamma_{1}^{p})_{prop}\) is a set of \(p\) points, none of which are comparable.
Thus \({\ast}^{n}(\gamma_{1}^{p})_{prop}\) is the poset of cells of the \(n\)-fold spherical join of a set of \(p\) points, which we will call \(\Delta\), and \(\big({\ast}^{n}(\gamma_{1}^{p})_{prop}\big)^{\prime}\) is the poset of cells of the barycentric subdivision \(\Delta^{\prime}\) of \(\Delta\). In other words, there is a simplicial homeomorphism \[\Delta^{\prime}\cong\widehat{\Theta}(B_{n}(p,2)).\] The Moussong metric declares the top dimensional simplices of \(\widehat{\Theta}(B_{n}(p,2))\) to be simplices of type \(B_{n}\). Lemma 6.5 implies that the top dimensional cells of \(\Delta^{\prime}\) are also simplices of type \(B_{n}\); hence, this map is a simplicial isometry. Since \(\Delta\) is \(\operatorname{CAT}(1)\) (it is a spherical join of \(\operatorname{CAT}(1)\) spaces), then \(\Delta^{\prime}\) is \(\operatorname{CAT}(1)\) [1, Lem. I.7.48], and thus so is \(\widehat{\Theta}(B_{n}(p,2))\).

### The group \(A_{3}(3)\)

We now consider \(\widehat{\Theta}=\widehat{\Theta}(A_{3}(3))\). Throughout this section, let \(\mathcal{H}\) denote the regular complex polytope associated to \(A_{3}(3)\). Our arguments rely on the combinatorial structure of this polytope; see Appendix A for more details. Here we utilize Charney's combinatorial \(\operatorname{CAT}(1)\) criteria (CCCC) that we introduced in Section 5. Note that \(\widehat{\Theta}\) carries a natural marked \(A_{3}\) simplicial complex structure: we call a vertex of \(\widehat{\Theta}\) type \(\hat{a}\) if it is a vertex of \(\mathcal{H}\), type \(\hat{b}\) if it is an edge of \(\mathcal{H}\), and type \(\hat{c}\) if it is a face of \(\mathcal{H}\). We also note that the canonical metric for marked \(A_{3}\) simplicial complexes agrees with the Moussong metric on \(\widehat{\Theta}\); both place an angle of \(\pi/3\) at the vertices of type \(\hat{a}\) and \(\hat{c}\), and an angle of \(\pi/2\) at the vertices of type \(\hat{b}\).

**Proposition 6.7**.: _Under the above marking, \(\widehat{\Theta}(A_{3}(3))\) satisfies CCCC, and is therefore \(\operatorname{CAT}(1)\) under its canonical metric (and hence the Moussong metric)._

Proof.: The links of vertices of type \(\hat{a}\) and \(\hat{c}\) are isomorphic to \(\widehat{\Theta}(I_{2}(3,3,3))\), which has girth \(6\), and the link of a vertex of type \(\hat{b}\) is isomorphic to \(\widehat{\Theta}(I_{2}(3,2,3))\), which by Corollary 3.9 is isomorphic to \(K_{3,3}\). It remains to show that \(\widehat{\Theta}\) satisfies condition (3) of the criteria.

Path (i): Translating to \(\mathcal{H}\), this setup is equivalent to taking two faces of \(\mathcal{H}\) (corresponding to the vertices of type \(\hat{c}\)) which intersect in two distinct vertices of \(\mathcal{H}\) (the vertices of type \(\hat{a}\)). By Proposition A.2, these faces must intersect along an edge containing these two vertices. This edge in the polytope corresponds to a vertex of type \(\hat{b}\) which is joined to each vertex of \(\gamma\), and each of the triangles formed is filled as in Figure 3(i).

Path (ii): Path (ii.a) says we have three edges in the complex polytope (vertices of type \(\hat{b}\)) which pairwise intersect at three distinct vertices of the polytope (vertices of type \(\hat{a}\)). By Proposition A.1, these edges must be contained in some common face (a vertex of type \(\hat{c}\)), giving Figure 3(ii.a). The (ii.b) case follows from passing to the dual polytope.
Thus we conclude that \(\widehat{\Theta}(A_{3}(3))\) satisfies CCCC, and is therefore CAT(1). ### The group \(B_{3}(2,3)\) By [13, Sec. 12.4], we know \(G_{B_{3}(2,3)}\cong(\mathbb{Z}/2\mathbb{Z})\times G_{A_{3}(3)}\). This manifests on the level of complexes; \(\widehat{\Theta}(B_{3}(2,3))\) is a subdivision of \(\widehat{\Theta}(A_{3}(3))\). Specifically, doubling the fundamental domain for \(\widehat{\Theta}(B_{3}(2,3))\) along the face corresponding to the order-2 generator is isometric to the fundamental domain for \(\widehat{\Theta}(A_{3}(3))\). This is illustrated in Figure 16, where the fundamental domain for \(B_{3}(2,3)\) is outlined in bold, with the corresponding stabilizers labeling the edges. The vertices of \(\Delta_{A_{3}(3)}\) are labeled according to the convention used in the previous section. Since this subdivision respects the metric on \(\widehat{\Theta}(A_{3}(3))\), it follows that \(\widehat{\Theta}(B_{3}(2,3))\) is CAT(1). This exhausts the list of finite Shephard groups with connected diagram (excluding \(A_{4}(3)\)), thus completing the proof of Lemma 6.1, and by extension, Theorem A. ## Appendix A Combinatorics of the Hessian polyhedron In this section we provide explanations for many technical lemmas regarding the \(A_{3}(3)\) Shephard group and lay out explicit data utilized in the computations therein. Throughout, let \(\mathcal{H}\) denote the standard Hessian polyhedron, that is, the non-starry regular complex polytope with diagram \(A_{3}(3)\). We begin by recalling basic facts about the facets of \(\mathcal{H}\). The following can be found in [13, Ch. 12.3]. The 27 vertices of \(\mathcal{H}\) are given by \[(0,\omega^{i},-\omega^{j})\qquad(-\omega^{j},0,\omega^{i})\qquad(\omega^{i},- \omega^{j},0)\] where \(\omega=\exp(2\pi\sqrt{-1}/3)\) and \(i,j=1,2,3\). As in Coxeter, we find it convenient to use the shorthand \[0ij\qquad j0i\qquad ij0\] for \(i,j=1,2,3\) to represent the respective vertices. (For example, 013 is the vertex \((0,\omega^{1},-\omega^{3})\).) The generators of the \(A_{3}(3)\) Shephard group have a nice description as permutations of these vertices, which is listed in Table 6. \[a =(101\ 201\ 301)(102\ 202\ 302)(103\ 203\ 303)(110\ 210\ 310)(120\ 220\ 320)(130\ 230\ 330)\] \[b =(012\ 230\ 103)(013\ 102\ 320)(021\ 203\ 130)(023\ 310\ 201)(031\ 120\ 302)(032\ 301\ 210)\] \[c =(011\ 012\ 013)(021\ 022\ 023)(031\ 032\ 033)(101\ 102\ 103)(201\ 202\ 203)(301\ 302\ 303).\] There are 72 edges of \(\mathcal{H}\). Two vertices lay on a common edge if and only if their symbols agree in an even number of positions, meaning they agree in two positions or no position. For example, 013 and 023 share an edge, while 013 and 022 do not. Thus it is straightforward to verify if two given points share an edge in \(\mathcal{H}\) despite the (somewhat) large number of edges. In addition, since "edges" of a complex polytope are (affine) complex lines, and two points determine a unique line, two vertices share _at most_ one edge. We next describe the (2-)faces. Since \(\mathcal{H}\) is self-dual, it has 27 faces. One of the faces has vertices \[012,021,103,203,303,130,230,330,\] which is the orbit of 012 under the subgroup generated by \(a\) and \(b\). The other faces are the translates of this face by elements of the symmetry group. We provide the vertices of these faces here for posterity (including the face mentioned above) in Table 7. 
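For the reader who wants to verify these counts, here is a small script (our own, not from the paper; it uses the shorthand labeling above and the standard fact that each edge of \(\mathcal{H}\) contains exactly three vertices). It checks that the stated adjacency rule is consistent with \(27\) vertices and \(72\) three-vertex edges.

```python
# Sanity check of the vertex/edge combinatorics of the Hessian polyhedron.
from itertools import combinations

verts = (["0%d%d" % (i, j) for i in (1, 2, 3) for j in (1, 2, 3)]
         + ["%d0%d" % (j, i) for i in (1, 2, 3) for j in (1, 2, 3)]
         + ["%d%d0" % (i, j) for i in (1, 2, 3) for j in (1, 2, 3)])
assert len(set(verts)) == 27                      # the 27 vertices 0ij, j0i, ij0

def collinear(u, v):
    # the stated rule: symbols agree in an even number (0 or 2) of positions
    return sum(x == y for x, y in zip(u, v)) in (0, 2)

pairs = sum(collinear(u, v) for u, v in combinations(verts, 2))
# Each edge is a complex line through exactly 3 vertices, and two vertices lie
# on at most one common edge, so 72 edges should account for 72 * 3 = 216 such
# pairs; equivalently, every vertex lies on 8 edges and has 16 edge-neighbours.
assert pairs == 216
assert all(sum(collinear(u, v) for v in verts if v != u) == 16 for u in verts)
print("27 vertices, 216 collinear pairs = 72 edges x 3: consistent")
```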
We note that the list of vertices is sufficient to determine the entire face; an edge is included in a face if and only if each of its vertices is. That is to say that if three vertices in a face span an edge of \(\mathcal{H}\), then this edge is part of said face. We conclude with two further observations which can be derived from examining Table 7. First,

**Proposition A.1**.: _If \(E_{1}\), \(E_{2}\), and \(E_{3}\) are distinct edges of \(\mathcal{H}\) which pairwise intersect non-trivially but have trivial total intersection, then there is a face \(F\) of \(\mathcal{H}\) containing \(E_{1}\), \(E_{2}\), and \(E_{3}\)._

This essentially says there are no "empty triangles" in \(\mathcal{H}\). The following says that the intersection of faces is well behaved in a certain way.

**Proposition A.2**.: _If \(F_{1}\) and \(F_{2}\) are distinct faces of \(\mathcal{H}\), then \(F_{1}\cap F_{2}\) is either empty, a single vertex, or a single edge._

Again, both of these may be proven "by hand" (or more conveniently, by computer) using Table 7 and the previously stated characterization of the edges of \(\mathcal{H}\).